HTTP Error 403 in Python 3 Web Scraping

Question: HTTP Error 403 in Python 3 Web Scraping

I was trying to scrape a website for practice, but I kept getting HTTP Error 403 (does it think I’m a bot)?

Here is my code:

#import requests
import urllib.request
from bs4 import BeautifulSoup
#from urllib import urlopen
import re

webpage = urllib.request.urlopen('http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1').read
findrows = re.compile('<tr class="- banding(?:On|Off)>(.*?)</tr>')
findlink = re.compile('<a href =">(.*)</a>')

row_array = re.findall(findrows, webpage)
links = re.findall(findlink, webpage)

print(len(row_array))

iterator = []

The error I get is:

 File "C:\Python33\lib\urllib\request.py", line 160, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Python33\lib\urllib\request.py", line 479, in open
    response = meth(req, response)
  File "C:\Python33\lib\urllib\request.py", line 591, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Python33\lib\urllib\request.py", line 517, in error
    return self._call_chain(*args)
  File "C:\Python33\lib\urllib\request.py", line 451, in _call_chain
    result = func(*args)
  File "C:\Python33\lib\urllib\request.py", line 599, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden

Answer 0

This is probably because of mod_security or some similar server security feature which blocks known spider/bot user agents (urllib uses something like python urllib/3.3.0, which is easily detected). Try setting a known browser user agent with:

from urllib.request import Request, urlopen

req = Request('http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1', headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req).read()

This works for me.

By the way, in your code you are missing the () after .read in the urlopen line, but I assume that is just a typo.

TIP: since this is an exercise, choose a different, non-restrictive site. Maybe they are blocking urllib for some reason…
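
The question’s code also has import requests commented out at the top; the same User-Agent fix can be written with the requests library instead. A minimal sketch, assuming requests is installed (the Mozilla/5.0 string is just an example of a browser-like agent):

import requests

url = 'http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1'
# Send a browser-like User-Agent so the request is not rejected as a bot
headers = {'User-Agent': 'Mozilla/5.0'}
response = requests.get(url, headers=headers, timeout=10)
response.raise_for_status()  # raises requests.HTTPError on 403 and other error codes
webpage = response.text      # decoded text; use response.content for raw bytes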


Answer 1

It’s definitely blocking you because of your use of urllib, based on the user agent. The same thing is happening to me with OfferUp. You can create a new class called AppURLopener which overrides the user agent with Mozilla.

import urllib.request

class AppURLopener(urllib.request.FancyURLopener):
    version = "Mozilla/5.0"

opener = AppURLopener()
response = opener.open('http://httpbin.org/user-agent')

Source
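
Note that urllib.request.FancyURLopener is deprecated in Python 3, so this approach may print a DeprecationWarning. A minimal sketch of the non-deprecated way to achieve the same thing with an opener (the Mozilla/5.0 string is just an example):

import urllib.request

# build_opener + addheaders sets the User-Agent for every request made through this opener
opener = urllib.request.build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0')]

response = opener.open('http://httpbin.org/user-agent')
print(response.read().decode('utf-8'))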


Answer 2

“This is probably because of mod_security or some similar server security feature which blocks known spider/bot user agents (urllib uses something like python urllib/3.3.0, it’s easily detected)” – as already mentioned by Stefano Sanfilippo.

from urllib.request import Request, urlopen
url="https://stackoverflow.com/search?q=html+error+403"
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})

web_byte = urlopen(req).read()

webpage = web_byte.decode('utf-8')

web_byte is a bytes object returned by the server, and the content type of the page is usually UTF-8, so you need to decode web_byte with the decode method.

This solved the whole problem I was having while trying to scrape a website using PyCharm.

P.S. -> I use Python 3.4
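
Instead of assuming UTF-8, you can also ask the response object for the charset declared in its Content-Type header and only fall back to UTF-8 when none is given. A small sketch along the lines of the snippet above:

from urllib.request import Request, urlopen

url = "https://stackoverflow.com/search?q=html+error+403"
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
response = urlopen(req)

# get_content_charset() reads the charset from the Content-Type response header
charset = response.headers.get_content_charset() or 'utf-8'
webpage = response.read().decode(charset)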


Answer 3

Since the page works in a browser but not when called from within a Python program, it seems that the web app serving that URL recognizes that the content is not being requested by a browser.

Demonstration:

curl --dump-header r.txt 'http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1'

...
<HTML><HEAD>
<TITLE>Access Denied</TITLE>
</HEAD><BODY>
<H1>Access Denied</H1>
You don't have permission to access ...
</HTML>

and the content of r.txt has the status line:

HTTP/1.1 403 Forbidden

Try sending a ‘User-Agent’ header that mimics a web browser.

NOTE: The page contains an Ajax call that creates the table you probably want to parse. You’ll need to check the JavaScript logic of the page, or simply use a browser debugger (like Firebug’s Net tab), to see which URL you need to call to get the table’s content.
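
To confirm that the User-Agent is what the server is checking, you can repeat the same request with curl’s -A/--user-agent option (the URL is quoted so the shell does not treat the & characters as background operators):

curl -A 'Mozilla/5.0' --dump-header r.txt 'http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1'

If the block really is based on the user agent, the status line in r.txt should change from 403 Forbidden to 200 OK.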


Answer 4

You can try this in two ways. The details are in this link.

1) Via pip

pip install --upgrade certifi

2) If that doesn’t work, try to run the Install Certificates.command that comes bundled with Python 3.* for Mac (go to your Python installation location and double-click the file):

open /Applications/Python\ 3.*/Install\ Certificates.command
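
If the underlying issue really is certificate verification rather than the 403 itself, another option is to pass urlopen an SSL context built from certifi’s CA bundle. A minimal sketch, assuming certifi is installed (the URL is a placeholder):

import ssl
import certifi
from urllib.request import Request, urlopen

# Build an SSL context that trusts the CA bundle shipped with certifi
context = ssl.create_default_context(cafile=certifi.where())

req = Request('https://example.com/', headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req, context=context).read()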


Answer 5

Based on the previous answer,

from urllib.request import Request, urlopen       
#specify url
url = 'https://xyz/xyz'
req = Request(url, headers={'User-Agent': 'XYZ/3.0'})
response = urlopen(req, timeout=20).read()

This worked for me by extending the timeout.
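
If the request can still fail (a 403 despite the header, or a slow server hitting the timeout), it may help to make those failures explicit instead of letting the traceback bubble up. A minimal sketch building on the snippet above (the URL and User-Agent are placeholders):

from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

url = 'https://xyz/xyz'
req = Request(url, headers={'User-Agent': 'XYZ/3.0'})

try:
    response = urlopen(req, timeout=20).read()
except HTTPError as e:
    # The server answered but refused the request (e.g. 403 Forbidden)
    print('HTTP error:', e.code, e.reason)
except URLError as e:
    # No usable answer at all: DNS failure, connection refused, timeout, ...
    print('Failed to reach the server:', e.reason)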


Answer 6

If you feel guilty about faking the user agent as Mozilla (see the comment on the top answer from Stefano), it can work with a non-urllib User-Agent as well. This worked for the sites I reference:

    import urllib.request as urlrequest

    # link is the URL you want to check
    req = urlrequest.Request(link, headers={'User-Agent': 'XYZ/3.0'})
    urlrequest.urlopen(req, timeout=10).read()

My application is to test validity by scraping the specific links that I refer to in my articles; it is not a generic scraper.


Answer 7

Based on previous answers, this has worked for me with Python 3.7:

from urllib.request import Request, urlopen

req = Request('Url_Link', headers={'User-Agent': 'XYZ/3.0'})
webpage = urlopen(req, timeout=10).read()

print(webpage)
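
Putting the pieces together for the original question, here is a sketch of the full fetch with a browser-like User-Agent plus BeautifulSoup parsing instead of regexes. It assumes beautifulsoup4 is installed; the tags selected below are generic examples rather than selectors taken from the actual CME page, and as an earlier answer notes, the table on that page is filled in by an Ajax call, so the static HTML may not contain the rows you want:

from urllib.request import Request, urlopen
from bs4 import BeautifulSoup

url = 'http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1'
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})

html = urlopen(req, timeout=10).read()
soup = BeautifulSoup(html, 'html.parser')

# Let BeautifulSoup handle the markup instead of regular expressions
rows = soup.find_all('tr')
links = [a['href'] for a in soup.find_all('a', href=True)]

print(len(rows), len(links))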