Tag Archives: download

Basic HTTP file downloading and saving to disk in Python?

Question: Basic HTTP file downloading and saving to disk in Python?

I’m new to Python and I’ve been going through the Q&A on this site for an answer to my question. However, I’m a beginner and I find it difficult to understand some of the solutions. I need a very basic solution.

Could someone please explain a simple solution to ‘Downloading a file through http’ and ‘Saving it to disk, in Windows’, to me?

I’m not sure how to use shutil and os modules, either.

The file I want to download is under 500 MB and is a .gz archive file. If someone can also explain how to extract the archive and utilise the files in it, that would be great!

Here’s a partial solution that I wrote by combining various answers:

import requests
import os
import shutil

global dump

def download_file():
    global dump
    url = "http://randomsite.com/file.gz"
    file = requests.get(url, stream=True)
    dump = file.raw

def save_file():
    global dump
    location = os.path.abspath("D:\folder\file.gz")
    with open("file.gz", 'wb') as location:
        shutil.copyfileobj(dump, location)
    del dump

Could someone point out errors (beginner level) and explain any easier methods to do this?

Thanks!


Answer 0

A clean way to download a file is:

import urllib

testfile = urllib.URLopener()
testfile.retrieve("http://randomsite.com/file.gz", "file.gz")

This downloads a file from a website and names it file.gz. This is one of my favorite solutions, from Downloading a picture via urllib and python.

This example uses the urllib library, and it will directly retrieve the file from the source.
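
The question also asks how to extract the downloaded .gz archive, which none of the answers cover. A minimal standard-library sketch, assuming the archive wraps a single file (the paths are placeholders):

import gzip
import shutil

archive_path = "file.gz"    # the file downloaded above
extracted_path = "file"     # where to write the decompressed data

with gzip.open(archive_path, "rb") as f_in, open(extracted_path, "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)

# If the archive is really a .tar.gz, the tarfile module handles both steps:
# import tarfile
# with tarfile.open("file.tar.gz", "r:gz") as tar:
#     tar.extractall("extracted_folder")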


Answer 1

As mentioned here:

import urllib
urllib.urlretrieve("http://randomsite.com/file.gz", "file.gz")

EDIT: If you still want to use requests, take a look at this question or this one.
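
If you do want requests, a minimal streaming sketch along the lines of the linked questions (not quoted from them; the URL is the question's placeholder):

import requests

url = "http://randomsite.com/file.gz"

# stream=True keeps the whole file from being loaded into memory at once.
response = requests.get(url, stream=True)
response.raise_for_status()

with open("file.gz", "wb") as f:
    for chunk in response.iter_content(chunk_size=8192):
        f.write(chunk)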


Answer 2

I use wget.

It’s a simple and good library. If you want an example:

import wget

file_url = 'http://johndoe.com/download.zip'

file_name = wget.download(file_url)

The wget module supports both Python 2 and Python 3.


Answer 3

Four methods using wget, urllib and requests (note that this benchmark is Python 2 code; it relies on StringIO and urllib.URLopener).

#!/usr/bin/python
import requests
from StringIO import StringIO
from PIL import Image
import profile as profile
import urllib
import wget


url = 'https://tinypng.com/images/social/website.jpg'

def testRequest():
    image_name = 'test1.jpg'
    r = requests.get(url, stream=True)
    with open(image_name, 'wb') as f:
        for chunk in r.iter_content():
            f.write(chunk)

def testRequest2():
    image_name = 'test2.jpg'
    r = requests.get(url)
    i = Image.open(StringIO(r.content))
    i.save(image_name)

def testUrllib():
    image_name = 'test3.jpg'
    testfile = urllib.URLopener()
    testfile.retrieve(url, image_name)

def testwget():
    image_name = 'test4.jpg'
    wget.download(url, image_name)

if __name__ == '__main__':
    profile.run('testRequest()')
    profile.run('testRequest2()')
    profile.run('testUrllib()')
    profile.run('testwget()')

testRequest – 4469882 function calls (4469842 primitive calls) in 20.236 seconds

testRequest2 – 8580 function calls (8574 primitive calls) in 0.072 seconds

testUrllib – 3810 function calls (3775 primitive calls) in 0.036 seconds

testwget – 3489 function calls in 0.020 seconds
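
A note on the first result (my observation, not part of the original answer): iter_content() defaults to chunk_size=1, so testRequest writes the image one byte at a time, which is what produces the millions of function calls. An extra case with an explicit chunk size, reusing the url and imports from the script above, would look like this:

def testRequestChunked():
    image_name = 'test5.jpg'
    r = requests.get(url, stream=True)
    with open(image_name, 'wb') as f:
        # 64 KB chunks instead of the 1-byte default.
        for chunk in r.iter_content(chunk_size=64 * 1024):
            f.write(chunk)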


Answer 4

For Python 3+, URLopener is deprecated, and when used you will get an error like the one below:

url_opener = urllib.URLopener()
AttributeError: module 'urllib' has no attribute 'URLopener'

So, try:

import urllib.request 
urllib.request.urlretrieve(url, filename)
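
A quick usage sketch (the URL is the placeholder from the question); urlretrieve returns the local path together with the response headers:

import urllib.request

path, headers = urllib.request.urlretrieve("http://randomsite.com/file.gz", "file.gz")
print(path)                              # file.gz
print(headers.get("Content-Length"))     # size reported by the server, if any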

Answer 5

Exotic Windows Solution

import subprocess

subprocess.run("powershell Invoke-WebRequest {} -OutFile {}".format(your_url, filename), shell=True)
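
A slightly more defensive variant (my sketch, not part of the original answer): pass an argument list so that shell=True and manual string interpolation into the shell command are not needed.

import subprocess

# Same placeholders as above (your_url, filename). The Invoke-WebRequest
# command is handed to PowerShell as a single -Command string; values
# containing single quotes would still need escaping.
subprocess.run(
    ["powershell", "-Command",
     "Invoke-WebRequest '{}' -OutFile '{}'".format(your_url, filename)],
    check=True,
)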

Answer 6

I started down this path because ESXi’s wget is not compiled with SSL and I wanted to download an OVA from a vendor’s website directly onto the ESXi host which is on the other side of the world.

I had to disable the firewall (lazy) / enable https out by editing the rules (proper).

Then I created this Python script:

import ssl
import shutil
import tempfile
import urllib.request
context = ssl._create_unverified_context()

dlurl='https://somesite/path/whatever'
with urllib.request.urlopen(dlurl, context=context) as response:
    with open("file.ova", 'wb') as tmp_file:
        shutil.copyfileobj(response, tmp_file)

ESXi’s libraries are kind of pared down, but the open-source weasel installer seemed to use urllib for https… so it inspired me to go down this path.


Answer 7

Another clean way to save the file is this:

import urllib

# Python 2: urllib.urlretrieve; on Python 3 use urllib.request.urlretrieve instead.
urllib.urlretrieve("your url goes here", "output.csv")

Download large file in Python with requests

Question: Download large file in Python with requests

Requests is a really nice library. I’d like to use it for downloading big files (>1 GB). The problem is it’s not possible to keep the whole file in memory; I need to read it in chunks. And this is a problem with the following code:

import requests

def DownloadFile(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    f = open(local_filename, 'wb')
    for chunk in r.iter_content(chunk_size=512 * 1024): 
        if chunk: # filter out keep-alive new chunks
            f.write(chunk)
    f.close()
    return 

For some reason it doesn’t work this way: it still loads the response into memory before saving it to a file.

UPDATE

If you need a small client (Python 2.x/3.x) which can download big files from FTP, you can find it here. It supports multithreading and reconnects (it does monitor connections), and it also tunes socket parameters for the download task.


Answer 0

With the following streaming code, the Python memory usage is restricted regardless of the size of the downloaded file:

def download_file(url):
    local_filename = url.split('/')[-1]
    # NOTE the stream=True parameter below
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192): 
                # If you have chunk encoded response uncomment if
                # and set chunk_size parameter to None.
                #if chunk: 
                f.write(chunk)
    return local_filename

Note that the number of bytes returned by iter_content is not necessarily exactly chunk_size; it can vary from iteration to iteration.

See https://requests.readthedocs.io/en/latest/user/advanced/#body-content-workflow and https://requests.readthedocs.io/en/latest/api/#requests.Response.iter_content for further reference.
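
A usage sketch for the function above (the URL is a placeholder; the local filename is taken from the last path segment):

local_path = download_file("https://example.com/big_file.iso")
print(local_path)  # big_file.iso

Note that using requests.get(...) as a context manager requires a requests version recent enough to support it; on older versions, drop the with and close the response yourself.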


Answer 1

It’s much easier if you use Response.raw and shutil.copyfileobj():

import requests
import shutil

def download_file(url):
    local_filename = url.split('/')[-1]
    with requests.get(url, stream=True) as r:
        with open(local_filename, 'wb') as f:
            shutil.copyfileobj(r.raw, f)

    return local_filename

This streams the file to disk without using excessive memory, and the code is simple.
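
One caveat worth adding (my note, not part of the original answer): Response.raw exposes the raw stream, so if the server sends the body gzip- or deflate-compressed, the compressed bytes are what end up on disk. urllib3 can be told to decode while copying, e.g. in a hypothetical variant of the function above:

import requests
import shutil

def download_file_decoded(url):
    local_filename = url.split('/')[-1]
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        # Decode any gzip/deflate Content-Encoding while streaming to disk.
        r.raw.decode_content = True
        with open(local_filename, 'wb') as f:
            shutil.copyfileobj(r.raw, f)
    return local_filename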


Answer 2

Not exactly what OP was asking, but… it’s ridiculously easy to do that with urllib:

from urllib.request import urlretrieve
url = 'http://mirror.pnl.gov/releases/16.04.2/ubuntu-16.04.2-desktop-amd64.iso'
dst = 'ubuntu-16.04.2-desktop-amd64.iso'
urlretrieve(url, dst)

Or this way, if you want to save it to a temporary file:

from urllib.request import urlopen
from shutil import copyfileobj
from tempfile import NamedTemporaryFile
url = 'http://mirror.pnl.gov/releases/16.04.2/ubuntu-16.04.2-desktop-amd64.iso'
with urlopen(url) as fsrc, NamedTemporaryFile(delete=False) as fdst:
    copyfileobj(fsrc, fdst)

I watched the process:

watch 'ps -p 18647 -o pid,ppid,pmem,rsz,vsz,comm,args; ls -al *.iso'

And I saw the file growing, but memory usage stayed at 17 MB. Am I missing something?
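
If you want progress feedback for a download of this size, urlretrieve also accepts a reporthook callback; a small sketch reusing the same URL:

from urllib.request import urlretrieve

def report_progress(block_num, block_size, total_size):
    # urlretrieve calls this after each block with the block count so far,
    # the block size in bytes, and the total size (-1 if unknown).
    if total_size > 0:
        percent = min(block_num * block_size * 100 / total_size, 100)
        print('\rDownloaded {:.1f}%'.format(percent), end='')

url = 'http://mirror.pnl.gov/releases/16.04.2/ubuntu-16.04.2-desktop-amd64.iso'
dst = 'ubuntu-16.04.2-desktop-amd64.iso'
urlretrieve(url, dst, report_progress)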


Answer 3

Your chunk size could be too large; have you tried dropping that, maybe to 1024 bytes at a time? (Also, you could use with to tidy up the syntax.)

def DownloadFile(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024): 
            if chunk: # filter out keep-alive new chunks
                f.write(chunk)
    return 

Incidentally, how are you deducing that the response has been loaded into memory?

It sounds as if Python isn’t flushing the data to the file. Based on other SO questions, you could try f.flush() and os.fsync() (remember to import os) to force the file write and free memory:

    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024): 
            if chunk: # filter out keep-alive new chunks
                f.write(chunk)
                f.flush()
                os.fsync(f.fileno())