Basic HTTP file downloading and saving to disk in Python?


I’m new to Python and I’ve been going through the Q&A on this site, for an answer to my question. However, I’m a beginner and I find it difficult to understand some of the solutions. I need a very basic solution.

Could someone please explain a simple solution to ‘Downloading a file through http’ and ‘Saving it to disk, in Windows’, to me?

I’m not sure how to use shutil and os modules, either.

The file I want to download is under 500 MB and is a .gz archive file. If someone could also explain how to extract the archive and use the files in it, that would be great!

Here’s a partial solution that I put together from various answers:

import requests
import os
import shutil

global dump

def download_file():
    global dump
    url = "http://randomsite.com/file.gz"
    file = requests.get(url, stream=True)
    dump = file.raw

def save_file():
    global dump
    location = os.path.abspath("D:\folder\file.gz")
    with open("file.gz", 'wb') as location:
        shutil.copyfileobj(dump, location)
    del dump

Could someone point out errors (beginner level) and explain any easier methods to do this?

Thanks!
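For reference, a corrected sketch of the snippet above (the URL is the question’s placeholder, and the gzip step assumes a plain .gz file wrapping a single file, not a .tar.gz):

```python
import gzip
import shutil
import requests

def download_file(url, filename):
    # Stream the response so the whole ~500 MB file is never held in memory.
    with requests.get(url, stream=True) as response:
        response.raise_for_status()
        with open(filename, "wb") as out:
            shutil.copyfileobj(response.raw, out)

def extract_gz(gz_path, out_path):
    # A plain .gz archive wraps exactly one file; gzip.open
    # decompresses it transparently while copyfileobj writes it out.
    with gzip.open(gz_path, "rb") as src, open(out_path, "wb") as dst:
        shutil.copyfileobj(src, dst)

# Usage (URL is the question's placeholder):
# download_file("http://randomsite.com/file.gz", "file.gz")
# extract_gz("file.gz", "file")
```

Compared with the snippet above: the stream is written inside the same `with` block that owns it, the output file gets its own variable instead of reusing `location`, and no globals are needed.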


Answer 0


A clean way to download a file is:

import urllib

testfile = urllib.URLopener()
testfile.retrieve("http://randomsite.com/file.gz", "file.gz")

This downloads a file from a website and names it file.gz. This is one of my favorite solutions, from Downloading a picture via urllib and python.

This example uses the urllib library, and it will retrieve the file directly from a source.


Answer 1


As mentioned here:

import urllib
urllib.urlretrieve("http://randomsite.com/file.gz", "file.gz")

EDIT: If you still want to use requests, take a look at this question or this one.


Answer 2


I use wget.

It’s a simple and good library, if you want an example:

import wget

file_url = 'http://johndoe.com/download.zip'

file_name = wget.download(file_url)

The wget module supports both Python 2 and Python 3.


Answer 3


Four methods using wget, urllib and requests.

#!/usr/bin/python
# Python 2 code: StringIO and urllib.URLopener no longer exist in Python 3.
import requests
from StringIO import StringIO
from PIL import Image
import profile as profile
import urllib
import wget


url = 'https://tinypng.com/images/social/website.jpg'

def testRequest():
    image_name = 'test1.jpg'
    r = requests.get(url, stream=True)
    with open(image_name, 'wb') as f:
        for chunk in r.iter_content():
            f.write(chunk)

def testRequest2():
    image_name = 'test2.jpg'
    r = requests.get(url)
    i = Image.open(StringIO(r.content))
    i.save(image_name)

def testUrllib():
    image_name = 'test3.jpg'
    testfile = urllib.URLopener()
    testfile.retrieve(url, image_name)

def testwget():
    image_name = 'test4.jpg'
    wget.download(url, image_name)

if __name__ == '__main__':
    profile.run('testRequest()')
    profile.run('testRequest2()')
    profile.run('testUrllib()')
    profile.run('testwget()')

testRequest – 4469882 function calls (4469842 primitive calls) in 20.236 seconds

testRequest2 – 8580 function calls (8574 primitive calls) in 0.072 seconds

testUrllib – 3810 function calls (3775 primitive calls) in 0.036 seconds

testwget – 3489 function calls in 0.020 seconds
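Much of testRequest’s 20 seconds is per-chunk overhead: iter_content() without a chunk_size yields very small chunks, so each one costs several Python-level function calls. Copying in larger fixed-size blocks avoids this; a minimal sketch (save_stream is my name, not from the benchmark, and it accepts any binary file-like object such as r.raw from requests):

```python
def save_stream(fileobj, filename, chunk_size=64 * 1024):
    # Read and write in 64 KiB blocks rather than tiny chunks;
    # fewer iterations means far fewer calls in the profile.
    with open(filename, "wb") as out:
        while True:
            block = fileobj.read(chunk_size)
            if not block:
                break
            out.write(block)

# With requests: save_stream(requests.get(url, stream=True).raw, "test1.jpg")
```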


Answer 4


For Python 3+, URLopener has been removed. If you use it you will get an error like this:

url_opener = urllib.URLopener()
AttributeError: module 'urllib' has no attribute 'URLopener'

So, try:

import urllib.request 
urllib.request.urlretrieve(url, filename)
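One way to try the Python 3 call without touching the network is a data: URL, which urlopen (and therefore urlretrieve) handles; a minimal sketch:

```python
import urllib.request

# urlretrieve downloads the URL and writes it to the given filename;
# a data: URL embeds the payload, so no network is needed here.
path, headers = urllib.request.urlretrieve("data:,hello%20world", "out.txt")

with open(path) as f:
    print(f.read())  # → hello world
```

For a real download, swap the data: URL for your http(s) URL and a sensible filename.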

Answer 5


Exotic Windows Solution

import subprocess

subprocess.run("powershell Invoke-WebRequest {} -OutFile {}".format(your_url, filename), shell=True)
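If you go this route, passing the arguments as a list (and skipping shell=True) sidesteps shell quoting problems when the URL or filename contains spaces; a sketch (the helper name is mine, and it assumes neither value contains a single quote, which PowerShell treats as a literal-string delimiter):

```python
import subprocess

def powershell_download_cmd(url, out_file):
    # Build the argv list ourselves instead of formatting one shell
    # string; single quotes make PowerShell take the values literally.
    return ["powershell", "-Command",
            "Invoke-WebRequest '{}' -OutFile '{}'".format(url, out_file)]

# On Windows:
# subprocess.run(powershell_download_cmd(your_url, filename), check=True)
```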

Answer 6


I started down this path because ESXi’s wget is not compiled with SSL and I wanted to download an OVA from a vendor’s website directly onto the ESXi host which is on the other side of the world.

I had to disable the firewall (lazy) or enable outbound https by editing the rules (proper), then created this Python script:

import ssl
import shutil
import urllib.request

# Skip certificate verification; fine for a one-off pull, but it does
# disable the protection TLS normally gives you.
context = ssl._create_unverified_context()

dlurl = 'https://somesite/path/whatever'
with urllib.request.urlopen(dlurl, context=context) as response:
    with open("file.ova", 'wb') as tmp_file:
        shutil.copyfileobj(response, tmp_file)

ESXi’s libraries are kind of pared down, but the open-source weasel installer seemed to use urllib for https… so it inspired me to go down this path.


Answer 7


Another clean way to save the file (note the function is urllib.urlretrieve in Python 2, not urllib.retrieve, and no csv import is needed):

import urllib

urllib.urlretrieve("your url goes here", "output.csv")