Question: Downloading a file from the web with Python 3

I am creating a program that will download a .jar (Java) file from a web server, by reading the URL that is specified in the .jad file of the same game/application. I'm using Python 3.2.1.

I've managed to extract the URL of the JAR file from the JAD file (every JAD file contains the URL to the JAR file), but, as you may imagine, the extracted value is of type str.

Here’s the relevant function:

def downloadFile(URL=None):
    import httplib2
    h = httplib2.Http(".cache")
    resp, content = h.request(URL, "GET")
    return content

downloadFile(URL_from_file)

However, I always get an error saying that the type in the function above has to be bytes, and not string. I've tried using URL.encode('utf-8'), and also bytes(URL, encoding='utf-8'), but I'd always get the same or a similar error.

So basically my question is: how do I download a file from a server when the URL is stored as a string?


Answer 0

If you want to get the contents of a web page into a variable, just read the response of urllib.request.urlopen:

import urllib.request
...
url = 'http://example.com/'
response = urllib.request.urlopen(url)
data = response.read()      # a `bytes` object
text = data.decode('utf-8') # a `str`; this step can't be used if data is binary
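
Hard-coding 'utf-8' is only a guess. If the server declares a charset in its Content-Type header, you can use that instead; here is a minimal sketch (the fallback to 'utf-8' is my assumption, not something the original answer specifies):

import urllib.request
...
url = 'http://example.com/'
with urllib.request.urlopen(url) as response:
    data = response.read()
    # response.headers is an email.message.Message, which can report the declared charset
    charset = response.headers.get_content_charset() or 'utf-8'
    text = data.decode(charset)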

The easiest way to download and save a file is to use the urllib.request.urlretrieve function:

import urllib.request
...
# Download the file from `url` and save it locally under `file_name`:
urllib.request.urlretrieve(url, file_name)

import urllib.request
...
# Download the file from `url`, save it in a temporary directory and get the
# path to it (e.g. '/tmp/tmpb48zma.txt') in the `file_name` variable:
file_name, headers = urllib.request.urlretrieve(url)

But keep in mind that urlretrieve is considered legacy and might become deprecated (not sure why, though).

So the most correct way to do this would be to use the urllib.request.urlopen function to return a file-like object that represents an HTTP response and copy it to a real file using shutil.copyfileobj.

import urllib.request
import shutil
...
# Download the file from `url` and save it locally under `file_name`:
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    shutil.copyfileobj(response, out_file)
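
shutil.copyfileobj streams the response in fixed-size chunks, so memory use stays flat even for large files. If you also want progress feedback, an equivalent manual loop looks like this (a sketch; the chunk size and progress output are my own choices):

import urllib.request
...
# Download the file from `url` in 64 KiB chunks and report progress:
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    copied = 0
    while True:
        chunk = response.read(64 * 1024)
        if not chunk:
            break
        out_file.write(chunk)
        copied += len(chunk)
        print('\r%d bytes downloaded' % copied, end='')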

If this seems too complicated, you may want to go simpler and store the whole download in a bytes object and then write it to a file. But this works well only for small files.

import urllib.request
...
# Download the file from `url` and save it locally under `file_name`:
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    data = response.read() # a `bytes` object
    out_file.write(data)

It is possible to extract .gz (and maybe other formats) compressed data on the fly, but such an operation probably requires the HTTP server to support random access to the file.

import urllib.request
import gzip
...
# Read the first 64 bytes of the file inside the .gz archive located at `url`
url = 'http://example.com/something.gz'
with urllib.request.urlopen(url) as response:
    with gzip.GzipFile(fileobj=response) as uncompressed:
        file_header = uncompressed.read(64) # a `bytes` object
        # Or do anything shown above using `uncompressed` instead of `response`.
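
On the random-access point: if the server honours HTTP range requests, you can also ask explicitly for just the first part of a remote file. A sketch (whether a given server supports Range is an assumption you would need to verify):

import urllib.request
...
url = 'http://example.com/something.gz'
# Request only the first 64 KiB of the file
req = urllib.request.Request(url, headers={'Range': 'bytes=0-65535'})
with urllib.request.urlopen(req) as response:
    partial = response.read()  # 206 Partial Content if supported; the full body otherwise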

Answer 1

I use the requests package whenever I want anything related to HTTP requests, because its API is very easy to start with.

First, install requests:

$ pip install requests

Then the code:

from requests import get  # to make GET request


def download(url, file_name):
    # open in binary mode
    with open(file_name, "wb") as file:
        # get request
        response = get(url)
        # write to file
        file.write(response.content)
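
Note that this version holds the whole response in memory before writing it out. For large files, requests can stream the body in chunks instead; a sketch of that variant (the chunk size is an arbitrary choice):

from requests import get

def download(url, file_name):
    # stream=True defers downloading the body until we iterate over it
    response = get(url, stream=True)
    response.raise_for_status()
    with open(file_name, "wb") as file:
        for chunk in response.iter_content(chunk_size=8192):
            file.write(chunk)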

Answer 2

I hope I understood the question right, which is: how do you download a file from a server when the URL is stored as a string?

I download files and save them locally using the code below:

import requests

url = 'https://www.python.org/static/img/python-logo.png'
file_name = r'D:\Python\dwnldPythonLogo.png'  # raw string so the backslashes are not treated as escapes
req = requests.get(url)
with open(file_name, 'wb') as file:  # binary mode; `with` closes the file automatically
    for chunk in req.iter_content(100000):
        file.write(chunk)

Answer 3

Here we can use urllib's legacy interface in Python 3:

The following functions and classes are ported from the Python 2 module urllib (as opposed to urllib2). They might become deprecated at some point in the future.

Example (two lines of code):

import urllib.request

url = 'https://www.python.org/static/img/python-logo.png'
urllib.request.urlretrieve(url, "logo.png")

Answer 4

You can use wget, a popular downloading shell tool, for that: https://pypi.python.org/pypi/wget. This is one of the simplest methods, since it does not require you to open the destination file yourself. Here is an example:

import wget
url = 'https://i1.wp.com/python3.codes/wp-content/uploads/2015/06/Python3-powered.png?fit=650%2C350'
wget.download(url, '/Users/scott/Downloads/cat4.png')  # the URL serves a PNG, so use a matching extension

Answer 5

Yes, requests is definitely a great package to use for anything related to HTTP requests. But we need to be careful with the encoding of the incoming data as well; below is an example that explains the difference.


from requests import get

# Case 1: the response is a byte array (e.g. an image)
url = 'some_image_url'

response = get(url)
with open('output', 'wb') as file:
    file.write(response.content)


# Case 2: the response is text
# If the response content is served as, say, iso-8859-1 while actually being
# something else, we may have to override the response encoding
url = 'some_page_url'

response = get(url)
# Override the encoding with an educated guess, as provided by chardet
response.encoding = response.apparent_encoding

with open('output', 'w', encoding='utf-8') as file:
    file.write(response.text)
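
If you don't know in advance whether a URL returns text or binary data, the Content-Type response header is one way to decide. A sketch (the 'text/' prefix check is a heuristic of mine, not a rule from the requests documentation):

from requests import get

url = 'some_url'  # placeholder
response = get(url)
content_type = response.headers.get('Content-Type', '')
if content_type.startswith('text/'):
    # textual content: normalize the encoding before writing it as UTF-8
    response.encoding = response.apparent_encoding
    with open('output', 'w', encoding='utf-8') as file:
        file.write(response.text)
else:
    # binary content: write the raw bytes untouched
    with open('output', 'wb') as file:
        file.write(response.content)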


Answer 6

Motivation

Sometimes we want to get the picture, but don't need to download it to a real file; that is, we want to download the data and keep it in memory.

For example, suppose I use machine learning to train a model that can recognize images containing numbers (bar codes). When I crawl websites that have those images, I can use the model to recognize them without saving the pictures to my disk drive. In that case, the method below can help you keep the downloaded data in memory.

Points

import requests
from io import BytesIO

response = requests.get(url)
with BytesIO() as io_obj:  # note: BytesIO must be instantiated, not used as a class
    for chunk in response.iter_content(chunk_size=4096):
        io_obj.write(chunk)

Basically, this is similar to @Ranvijay Kumar's answer.

An Example

import requests
from typing import NewType, TypeVar
from io import StringIO, BytesIO
import matplotlib.pyplot as plt
import imageio

URL = NewType('URL', str)
T_IO = TypeVar('T_IO', StringIO, BytesIO)


def download_and_keep_on_memory(url: URL, headers=None, timeout=None, **option) -> T_IO:
    chunk_size = option.get('chunk_size', 4096)  # default chunk size: 4 KiB
    max_size = 1024 ** 2 * option.get('max_size', -1)  # limit in MB; the default of -1 means no limit
    response = requests.get(url, headers=headers, timeout=timeout)
    if response.status_code != 200:
        raise requests.ConnectionError(f'{response.status_code}')

    instance_io = StringIO if isinstance(next(response.iter_content(chunk_size=1)), str) else BytesIO
    io_obj = instance_io()
    cur_size = 0
    for chunk in response.iter_content(chunk_size=chunk_size):
        cur_size += len(chunk)  # count the bytes actually received, not the nominal chunk size
        if 0 < max_size < cur_size:
            break
        io_obj.write(chunk)
    io_obj.seek(0)
    """ save it to real file.
    with open('temp.png', mode='wb') as out_f:
        out_f.write(io_obj.read())
    """
    return io_obj


def main():
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-TW,zh;q=0.9,en-US;q=0.8,en;q=0.7',
        'Cache-Control': 'max-age=0',
        'Connection': 'keep-alive',
        'Host': 'statics.591.com.tw',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.87 Safari/537.36'
    }
    io_img = download_and_keep_on_memory(URL('http://statics.591.com.tw/tools/showPhone.php?info_data=rLsGZe4U%2FbphHOimi2PT%2FhxTPqI&type=rLEFMu4XrrpgEw'),
                                         headers,  # You may need these headers; otherwise some websites return a 404 error.
                                         max_size=4)  # max loading < 4MB
    with io_img:
        plt.rc('axes.spines', top=False, bottom=False, left=False, right=False)
        plt.rc(('xtick', 'ytick'), color=(1, 1, 1, 0))  # same of plt.axis('off')
        plt.imshow(imageio.imread(io_img, as_gray=False, pilmode="RGB"))
        plt.show()


if __name__ == '__main__':
    main()


Answer 7

from urllib import request

def get(url):
    # Return the raw bytes of the resource at `url`
    with request.urlopen(url) as r:
        return r.read()


def download(url, file=None):
    # Default the local file name to the last path segment of the URL
    if not file:
        file = url.split('/')[-1]
    with open(file, 'wb') as f:
        f.write(get(url))
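
Usage is then a one-liner; for instance (the URL is just an example):

download('https://www.python.org/static/img/python-logo.png')  # saved as 'python-logo.png'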
