UnicodeEncodeError: 'charmap' codec can't encode characters

Question: UnicodeEncodeError: 'charmap' codec can't encode characters

I’m trying to scrape a website, but it gives me an error.

I’m using the following code:

import urllib.request
from bs4 import BeautifulSoup

get = urllib.request.urlopen("https://www.website.com/")
html = get.read()

soup = BeautifulSoup(html)

print(soup)

And I’m getting the following error:

File "C:\Python34\lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 70924-70950: character maps to <undefined>

What can I do to fix this?


Answer 0

I was getting the same UnicodeEncodeError when saving scraped web content to a file. To fix it I replaced this code:

with open(fname, "w") as f:
    f.write(html)

with this:

import io
with io.open(fname, "w", encoding="utf-8") as f:
    f.write(html)

Using io gives you backward compatibility with Python 2.

If you only need to support Python 3, you can use the built-in open function instead:

with open(fname, "w", encoding="utf-8") as f:
    f.write(html)
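
For context, here is a minimal sketch of how that fix slots into the question's scraping code; the URL, the file name, and writing str(soup) are illustrative assumptions rather than part of the original answer.

# Hypothetical end-to-end example: scrape a page and save it as UTF-8 text.
import io
import urllib.request
from bs4 import BeautifulSoup

html = urllib.request.urlopen("https://www.example.com/").read()  # placeholder URL
soup = BeautifulSoup(html, "html.parser")

fname = "page.html"  # placeholder file name
with io.open(fname, "w", encoding="utf-8") as f:
    f.write(str(soup))  # str(soup) is the decoded markup; UTF-8 can encode all of it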

Answer 1

I fixed it by adding .encode("utf-8") to soup.

That means that print(soup) becomes print(soup.encode("utf-8")).
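
A minimal sketch of what that change looks like in the question's code (the URL is a placeholder, not from the original answer):

import urllib.request
from bs4 import BeautifulSoup

html = urllib.request.urlopen("https://www.example.com/").read()  # placeholder URL
soup = BeautifulSoup(html, "html.parser")

# Encoding to bytes first means print() no longer has to encode the text
# with the console's cp1252 codec; it prints the UTF-8 bytes literal instead.
print(soup.encode("utf-8"))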


Answer 2

On Python 3.7 running on Windows 10, this worked for me (I am not sure whether it will work on other platforms and/or other versions of Python):

Replacing this line:

with open('filename', 'w') as f:

With this:

with open('filename', 'w', encoding='utf-8') as f:

The reason this works is that the file is now written with UTF-8, which can encode every Unicode character, instead of raising an error whenever it encounters a character that is not supported by the current default encoding.
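
A minimal runnable sketch of that idea; the file name and the sample text are placeholders, with the snowman character chosen as one that Windows' default cp1252 codec cannot encode:

# Placeholder content containing a character cp1252 cannot represent.
text = "café \u2603"  # U+2603 SNOWMAN

with open('filename', 'w', encoding='utf-8') as f:
    f.write(text)  # succeeds, because UTF-8 can encode every Unicode character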


Answer 3

While saving the response of a GET request, the same error was thrown on Python 3.7 on Windows 10. The encoding of the response received from the URL was UTF-8, so it is always recommended to check the encoding and pass the same encoding when writing the file, to avoid this kind of trivial issue, which can waste a lot of time in production.

import requests
resp = requests.get('https://en.wikipedia.org/wiki/NIFTY_50')
print(resp.encoding)
with open ('NiftyList.txt', 'w') as f:
    f.write(resp.text)

When I added encoding="utf-8" to the open command, it saved the file with the correct response:

with open ('NiftyList.txt', 'w', encoding="utf-8") as f:
    f.write(resp.text)
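
A small sketch of the "check the encoding and pass it along" idea the answer describes; falling back to UTF-8 when requests does not report an encoding is my own assumption, not part of the answer:

import requests

resp = requests.get('https://en.wikipedia.org/wiki/NIFTY_50')
enc = resp.encoding or 'utf-8'  # assumption: fall back to UTF-8 if none is reported
print(enc)

# Write the file with the same encoding the response reported.
with open('NiftyList.txt', 'w', encoding=enc) as f:
    f.write(resp.text)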

Answer 4

I faced the same encoding issue, which occurs when you try to print the content, read/write it, or open it. As others mentioned above, adding .encode("utf-8") will help if you are trying to print it:

soup.encode("utf-8")

If you are trying to open scraped data and write it into a file, then open the file with (……, encoding="utf-8"):

with open(filename_csv, 'w', newline='', encoding="utf-8") as csv_file:
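
A runnable sketch of that open() call in context; the file name, the sample rows, and the use of the csv module are illustrative assumptions:

import csv

filename_csv = "scraped.csv"  # placeholder file name
rows = [["name", "café"], ["symbol", "\u2603"]]  # sample rows with a non-cp1252 character

# newline='' is the csv module's recommended setting for open();
# encoding="utf-8" is what avoids the charmap error on Windows.
with open(filename_csv, 'w', newline='', encoding="utf-8") as csv_file:
    writer = csv.writer(csv_file)
    writer.writerows(rows)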


Answer 5

For those still getting this error, adding encode("utf-8") to soup will also fix this.

soup = BeautifulSoup(html_doc, 'html.parser').encode("utf-8")
print(soup)
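
For completeness, a small self-contained version of that snippet; html_doc here is made-up markup, not from the original answer:

from bs4 import BeautifulSoup

html_doc = "<html><body><p>café \u2603</p></body></html>"  # placeholder markup

# .encode("utf-8") converts the parsed document to bytes, so print() does not
# need to encode it with the console's charmap codec.
soup = BeautifulSoup(html_doc, 'html.parser').encode("utf-8")
print(soup)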