Proxies with Python 'Requests' module

Question: Proxies with Python 'Requests' module

Just a short, simple one about the excellent Requests module for Python.

I can’t seem to find in the documentation what the variable ‘proxies’ should contain. When I send it a dict with a standard “IP:PORT” value, it rejected it, asking for 2 values. So I guess (because this doesn’t seem to be covered in the docs) that the first value is the IP and the second the port?

The docs mention this only:

proxies – (optional) Dictionary mapping protocol to the URL of the proxy.

So I tried this… what should I be doing?

proxy = { ip: port}

and should I convert these to some type before putting them in the dict?

r = requests.get(url, headers=headers, proxies=proxy)

Answer 0

The proxies dict syntax is {"protocol": "scheme://ip:port", ...}. With it you can specify different (or the same) proxies for requests over the http, https, and ftp protocols:

http_proxy  = "http://10.10.1.10:3128"
https_proxy = "https://10.10.1.11:1080"
ftp_proxy   = "ftp://10.10.1.10:3128"

proxyDict = { 
              "http"  : http_proxy, 
              "https" : https_proxy, 
              "ftp"   : ftp_proxy
            }

r = requests.get(url, headers=headers, proxies=proxyDict)

Deduced from the requests documentation:

Parameters:
method – method for the new Request object.
url – URL for the new Request object.

proxies – (optional) Dictionary mapping protocol to the URL of the proxy.


On Linux you can also do this via the HTTP_PROXY, HTTPS_PROXY, and FTP_PROXY environment variables:

export HTTP_PROXY=10.10.1.10:3128
export HTTPS_PROXY=10.10.1.11:1080
export FTP_PROXY=10.10.1.10:3128

On Windows:

set http_proxy=10.10.1.10:3128
set https_proxy=10.10.1.11:1080
set ftp_proxy=10.10.1.10:3128

Thanks to Jay for pointing this out:
The syntax changed with requests 2.0.0.
You’ll need to add a scheme to the URL: https://2.python-requests.org/en/latest/user/advanced/#proxies
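
Side note: requests can also talk to SOCKS proxies once the optional socks extra is installed (pip install requests[socks]). A minimal sketch, assuming a hypothetical SOCKS5 proxy listening on 127.0.0.1:9050:

import requests

# Requires: pip install requests[socks]  (pulls in PySocks)
# 127.0.0.1:9050 is a made-up local SOCKS5 proxy (e.g. a Tor daemon).
proxies = {
    "http":  "socks5://127.0.0.1:9050",
    "https": "socks5://127.0.0.1:9050",
}

r = requests.get("http://example.org", proxies=proxies)
print(r.status_code)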


Answer 1

I have found that urllib has some really good code to pick up the system’s proxy settings, and they happen to be in the correct form to use directly. You can use it like this:

import urllib.request

...
r = requests.get('http://example.org', proxies=urllib.request.getproxies())

It works really well, and urllib knows how to pick up Mac OS X and Windows settings too.
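
For reference, urllib.request.getproxies() returns a dict in exactly the shape the proxies argument expects, built from the *_PROXY environment variables (plus the registry or system configuration on Windows/macOS). A quick way to inspect what it picked up:

import urllib.request

# Prints something like {'http': 'http://10.10.1.10:3128', ...},
# depending on your environment variables and OS proxy settings.
print(urllib.request.getproxies())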


Answer 2

You can refer to the proxy documentation here.

If you need to use a proxy, you can configure individual requests with the proxies argument to any request method:

import requests

proxies = {
  "http": "http://10.10.1.10:3128",
  "https": "https://10.10.1.10:1080",
}

requests.get("http://example.org", proxies=proxies)

To use HTTP Basic Auth with your proxy, use the http://user:password@host.com/ syntax:

proxies = {
    "http": "http://user:pass@10.10.1.10:3128/"
}
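
The same docs also describe scoping a proxy to one scheme and host by using scheme://hostname as the dictionary key. A minimal sketch with made-up addresses:

import requests

# Only traffic to http://10.20.1.128 goes through the proxy at
# 10.10.1.10:5323; requests to any other host are made directly.
proxies = {"http://10.20.1.128": "http://10.10.1.10:5323"}

r = requests.get("http://10.20.1.128/status", proxies=proxies)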

Answer 3

The accepted answer was a good start for me, but I kept getting the following error:

AssertionError: Not supported proxy scheme None

The fix was to specify the http:// scheme in the proxy URL, thus:

http_proxy  = "http://194.62.145.248:8080"
https_proxy  = "https://194.62.145.248:8080"
ftp_proxy   = "10.10.1.10:3128"

proxyDict = {
              "http"  : http_proxy,
              "https" : https_proxy,
              "ftp"   : ftp_proxy
            }

I’d be interested to know why the original works for some people but not for me.

Edit: I see the main answer is now updated to reflect this :)
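
If you collect proxy addresses from sources that sometimes omit the scheme, a tiny helper can normalize them before they reach requests. A minimal sketch (ensure_scheme is a name I made up for illustration):

def ensure_scheme(address, default="http"):
    """Prepend a scheme to a bare "ip:port" proxy address if it lacks one."""
    return address if "://" in address else default + "://" + address

proxies = {
    "http":  ensure_scheme("194.62.145.248:8080"),          # -> http://194.62.145.248:8080
    "https": ensure_scheme("https://194.62.145.248:8080"),  # already qualified, unchanged
}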


Answer 4

If you’d like to persist cookies and session data, you’d best do it like this:

import requests

proxies = {
    'http': 'http://user:pass@10.10.1.0:3128',
    'https': 'https://user:pass@10.10.1.0:3128',
}

# Create the session and set the proxies.
s = requests.Session()
s.proxies = proxies

# Make the HTTP request through the session.
r = s.get('http://www.showmemyip.com/')
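
One nicety of sessions: a proxies dict passed to an individual call is merged over the session-level one, and a key set to None is dropped from the merged result, so you can override or bypass the session proxy for a single request. A minimal sketch:

import requests

s = requests.Session()
s.proxies = {'http': 'http://user:pass@10.10.1.0:3128'}

# Per-request values win over session values; None removes the key,
# so this one request goes out without the http proxy.
r = s.get('http://www.showmemyip.com/', proxies={'http': None})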

Answer 5

8 years late. But I like:

import os
import requests

os.environ['HTTP_PROXY'] = os.environ['http_proxy'] = 'http://http-connect-proxy:3128/'
os.environ['HTTPS_PROXY'] = os.environ['https_proxy'] = 'http://http-connect-proxy:3128/'
os.environ['NO_PROXY'] = os.environ['no_proxy'] = '127.0.0.1,localhost,.local'

r = requests.get('https://example.com')  # , verify=False
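
The flip side: if the environment already defines proxy variables and you want one script to ignore them, a Session with trust_env set to False skips the environment (and .netrc) lookup entirely. A minimal sketch:

import requests

s = requests.Session()
s.trust_env = False  # ignore HTTP_PROXY/HTTPS_PROXY/NO_PROXY and .netrc

r = s.get('https://example.com')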

Answer 6

Here is my basic Python class for the requests module, with some proxy configs and a stopwatch!

import requests
import time

class BaseCheck():
    def __init__(self, url):
        self.http_proxy  = "http://user:pw@proxy:8080"
        self.https_proxy = "http://user:pw@proxy:8080"
        self.ftp_proxy   = "http://user:pw@proxy:8080"
        self.proxyDict = {
            "http"  : self.http_proxy,
            "https" : self.https_proxy,
            "ftp"   : self.ftp_proxy
        }
        self.url = url

        def makearr(tsteps):
            # Build a global dict of {step: {'start': 0, 'end': 0}} stopwatch slots.
            global stemps
            global steps
            stemps = {}
            for step in tsteps:
                stemps[step] = {'start': 0, 'end': 0}
            steps = tsteps
        makearr(['init', 'check'])

        def starttime(typ=""):
            # Stamp the start (or another named field) of every step.
            for stemp in stemps:
                if typ == "":
                    stemps[stemp]['start'] = time.time()
                else:
                    stemps[stemp][typ] = time.time()
        starttime()

    def __str__(self):
        return str(self.url)

    def getrequests(self):
        g = requests.get(self.url, proxies=self.proxyDict)
        print(g.status_code)
        print(g.content)
        print(self.url)
        stemps['init']['end'] = time.time()
        x = stemps['init']['end'] - stemps['init']['start']
        print(x)  # elapsed seconds for the 'init' step


test = BaseCheck(url='http://google.com')
test.getrequests()
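
A small aside on the stopwatch part: time.perf_counter() is the clock meant for interval timing (it is monotonic, unlike time.time(), which can jump if the system clock changes). A minimal sketch of the same measurement:

import time
import requests

start = time.perf_counter()
r = requests.get('http://google.com')  # add proxies=proxyDict as above if needed
elapsed = time.perf_counter() - start
print(r.status_code, 'in', round(elapsed, 3), 'seconds')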

Answer 7

I just made a proxy grabber that can also connect through the same grabbed proxy, without any input. Here it is:

#Import Modules

from termcolor import colored
from selenium import webdriver
from selenium.webdriver.common.by import By
import requests
import os
import time

#Proxy Grab

options = webdriver.ChromeOptions()
options.add_argument('--headless')
driver = webdriver.Chrome(options=options)
driver.get("https://www.sslproxies.org/")
tbody = driver.find_element(By.TAG_NAME, "tbody")
cell = tbody.find_elements(By.TAG_NAME, "tr")
for column in cell:
    column = column.text.split(" ")
    print(colored(column[0] + ":" + column[1], 'yellow'))
driver.quit()
print("")

os.system('clear')
os.system('cls')

#Proxy Connection

print(colored('Getting Proxies from grabber...', 'green'))
time.sleep(2)
os.system('clear')
os.system('cls')
# 'column' still holds the last scraped row, so that proxy is used below.
proxy = {"http": "http://" + column[0] + ":" + column[1]}
url = 'https://mobile.facebook.com/login'
r = requests.get(url, proxies=proxy)
print("")
print(colored('Connecting using proxy', 'green'))
print("")
sts = r.status_code
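
Free proxies scraped this way are often dead, so it may be worth probing each one with a short timeout before relying on it. A minimal sketch (is_alive is a helper name I made up):

import requests

def is_alive(ip_port, timeout=5):
    """Return True if an HTTP GET through the proxy succeeds within the timeout."""
    try:
        r = requests.get("http://example.org",
                         proxies={"http": "http://" + ip_port},
                         timeout=timeout)
        return r.ok
    except requests.RequestException:
        return False

# Keep only the proxies that actually respond.
working = [p for p in ["10.10.1.10:3128", "194.62.145.248:8080"] if is_alive(p)]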

Answer 8

It’s a bit late, but here is a wrapper class that simplifies scraping proxies and then making an HTTP POST or GET:

ProxyRequests

https://github.com/rootVIII/proxy_requests

Answer 9

I’m sharing some code that fetches proxies from the site https://free-proxy-list.net and stores them in a file compatible with tools like “Elite Proxy Switcher” (format IP:PORT):

## PROXY_UPDATER - get free proxies from https://free-proxy-list.net/

from lxml.html import fromstring
import requests

######################FIND PROXIES#########################################
def get_proxies():
    url = 'https://free-proxy-list.net/'
    response = requests.get(url)
    parser = fromstring(response.text)
    proxies = set()
    for i in parser.xpath('//tbody/tr')[:299]:   # 299 proxies max
        # Column 1 holds the IP, column 2 the port.
        proxy = ":".join([i.xpath('.//td[1]/text()')[0],
                          i.xpath('.//td[2]/text()')[0]])
        proxies.add(proxy)
    return proxies


######################write to file in format   IP:PORT######################
try:
    proxies = get_proxies()
    with open('proxy_list.txt', 'w') as f:
        for proxy in proxies:
            f.write(proxy + '\n')
    print("DONE")
except Exception:
    print("MAJOR ERROR")