Question: How do I determine the encoding of text?
Answer 0
Correctly detecting the encoding in all cases is impossible.
(From the chardet FAQ:)
However, some encodings are optimized for specific languages, and languages are not random. Some character sequences pop up all the time, while other sequences make no sense. A person fluent in English who opens a newspaper and finds “txzqJv 2!dasd0a QqdKjvz” will instantly recognize that that isn’t English (even though it is composed entirely of English letters). By studying lots of “typical” text, a computer algorithm can simulate this kind of fluency and make an educated guess about a text’s language.
There is the chardet library that uses that kind of study to try to detect encoding. chardet is a port of the auto-detection code in Mozilla.
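For example, a minimal sketch of calling it (assuming chardet is installed; the filename is a placeholder):

import chardet

# chardet works on raw bytes, not decoded text
raw = open('unknown-file', 'rb').read()
result = chardet.detect(raw)
# result is a dict, e.g. {'encoding': 'utf-8', 'confidence': 0.99, 'language': ''}
print(result['encoding'], result['confidence'])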
You can also use UnicodeDammit. It will try the following methods, in order (see the sketch after this list):
- An encoding discovered in the document itself: for instance, in an XML declaration or (for HTML documents) an http-equiv META tag. If Beautiful Soup finds this kind of encoding within the document, it parses the document again from the beginning and gives the new encoding a try. The only exception is if you explicitly specified an encoding, and that encoding actually worked: then it will ignore any encoding it finds in the document.
- An encoding sniffed by looking at the first few bytes of the file. If an encoding is detected at this stage, it will be one of the UTF-* encodings, EBCDIC, or ASCII.
- An encoding sniffed by the chardet library, if you have it installed.
- UTF-8
- Windows-1252
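A minimal sketch of that fallback chain in action (UnicodeDammit ships with Beautiful Soup; the byte string is just an arbitrary example):

from bs4 import UnicodeDammit

# some raw bytes of unknown encoding (here: UTF-8-encoded text, as an example)
data = b'\xe4\xbd\xa0\xe5\xa5\xbd'
dammit = UnicodeDammit(data)
print(dammit.original_encoding)  # the encoding it settled on, e.g. 'utf-8'
print(dammit.unicode_markup)     # the decoded text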
Answer 1
Another option for working out the encoding is to use libmagic (which is the code behind the file command). There is a profusion of Python bindings available.
The Python bindings that live in the file source tree are available as the python-magic (or python3-magic) Debian package. It can determine the encoding of a file by doing:
import magic
blob = open('unknown-file', 'rb').read()
m = magic.open(magic.MAGIC_MIME_ENCODING)
m.load()
encoding = m.buffer(blob) # "utf-8" "us-ascii" etc
There is an identically named, but incompatible, python-magic pip package on pypi that also uses libmagic. It can also get the encoding, by doing:
import magic
blob = open('unknown-file', 'rb').read()
m = magic.Magic(mime_encoding=True)
encoding = m.from_buffer(blob)
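Since both packages import as magic, one hedged way for a script to cope with whichever binding happens to be installed is to probe the module at runtime (the hasattr check is my own workaround, not a documented API of either package):

import magic

blob = open('unknown-file', 'rb').read()
if hasattr(magic, 'open'):
    # file source-tree binding (the python3-magic Debian package)
    m = magic.open(magic.MAGIC_MIME_ENCODING)
    m.load()
    encoding = m.buffer(blob)
else:
    # pypi python-magic package
    encoding = magic.Magic(mime_encoding=True).from_buffer(blob)
print(encoding)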
Answer 2
Some encoding strategies; uncomment to taste:
#!/bin/bash
#
tmpfile=$1
echo '-- info about file ........'
file -i "$tmpfile"
enca -g "$tmpfile"
echo 'recoding ........'
#iconv -f iso-8859-2 -t utf-8 back_test.xml > "$tmpfile"
#enca -x utf-8 "$tmpfile"
#enca -g "$tmpfile"
recode CP1250..UTF-8 "$tmpfile"
You might like to check the encoding by opening and reading the file in a loop… but you might need to check the file size first (see the size-check sketch after the loop):
import codecs

encodings = ['utf-8', 'windows-1250', 'windows-1252']  # add more as needed
for e in encodings:
    try:
        fh = codecs.open('file.txt', 'r', encoding=e)
        fh.readlines()
        fh.seek(0)
    except UnicodeDecodeError:
        print('got unicode error with %s , trying different encoding' % e)
    else:
        print('opening the file with encoding: %s ' % e)
        break
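As for checking the file size first, a minimal sketch (the 50 MB cap is an arbitrary assumption; tune it to your memory budget, since readlines() loads the whole file):

import os

MAX_BYTES = 50 * 1024 * 1024  # arbitrary cap, an assumption
size = os.path.getsize('file.txt')
if size > MAX_BYTES:
    print('%d bytes is too large to trial-decode whole; sample it instead' % size)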
Answer 3
Here is an example of reading, and taking at face value, a chardet encoding prediction, reading n_lines from the file in the event it is large. chardet also gives you a confidence (i.e. a probability) for its encoding prediction (I haven't looked at how they come up with that), which is returned alongside the encoding by chardet.detect(), so you could work that in somehow if you like (a variant that keeps the confidence follows the function).
import chardet

def predict_encoding(file_path, n_lines=20):
    '''Predict a file's encoding using chardet'''
    # Open the file as binary data
    with open(file_path, 'rb') as f:
        # Join binary lines for specified number of lines
        rawdata = b''.join([f.readline() for _ in range(n_lines)])
    return chardet.detect(rawdata)['encoding']
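If you want the confidence as well, a variant that returns chardet's whole result dict instead of just the encoding (the function name is mine):

import chardet

def predict_encoding_with_confidence(file_path, n_lines=20):
    '''Return chardet's full result, e.g. {'encoding': 'utf-8', 'confidence': 0.99, ...}'''
    with open(file_path, 'rb') as f:
        rawdata = b''.join(f.readline() for _ in range(n_lines))
    return chardet.detect(rawdata)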
Answer 4
# Function: OpenRead(file)

# A text file can be encoded using:
#   (1) The default operating system code page, Or
#   (2) utf8 with a BOM header
#
# If a text file is encoded with utf8, and does not have a BOM header,
# the user can manually add a BOM header to the text file
# using a text editor such as notepad++, and rerun the python script,
# otherwise the file is read as a codepage file with the
# invalid codepage characters removed

import sys

if sys.version_info[0] != 3:
    print('Aborted: Python 3.x required')
    sys.exit(1)


def bomType(file):
    """
    returns file encoding string for open() function

    EXAMPLE:
        bom = bomType(file)
        open(file, encoding=bom, errors='ignore')
    """
    f = open(file, 'rb')
    b = f.read(4)
    f.close()

    if b[0:3] == b'\xef\xbb\xbf':
        # 'utf-8-sig' strips the BOM on read; plain 'utf8' would leave
        # a stray \ufeff at the start of the text
        return "utf-8-sig"

    # Check the 4-byte utf-32 BOMs before the 2-byte utf-16 BOMs, because
    # the utf-32 little-endian BOM begins with the utf-16 little-endian BOM
    if b[0:4] == b'\x00\x00\xfe\xff' or b[0:4] == b'\xff\xfe\x00\x00':
        return "utf32"

    # Python automatically detects endianness if a utf-16 BOM is present;
    # write endianness is generally determined by the endianness of the CPU
    if b[0:2] == b'\xfe\xff' or b[0:2] == b'\xff\xfe':
        return "utf16"

    # If no BOM is present, assume it's the codepage used by your
    # operating system; for the United States that's cp1252
    return "cp1252"


def OpenRead(file):
    bom = bomType(file)
    return open(file, 'r', encoding=bom, errors='ignore')


#######################
# Testing it
#######################
fout = open("myfile1.txt", "w", encoding="cp1252")
fout.write("* hi there (cp1252)")
fout.close()

# 'utf-8-sig' writes the BOM header that bomType() looks for;
# plain 'utf8' would not write one
fout = open("myfile2.txt", "w", encoding="utf-8-sig")
fout.write("\u2022 hi there (utf8)")
fout.close()

# this case is still treated like codepage cp1252
# (user responsible for making sure that all utf8 files
# have a BOM header)
fout = open("badboy.txt", "wb")
fout.write(b"hi there. barf(\x81\x8D\x90\x9D)")
fout.close()

# Read example file with BOM detection
fin = OpenRead("myfile1.txt")
L = fin.readline()
print(L)
fin.close()

# Read example file with BOM detection
fin = OpenRead("myfile2.txt")
L = fin.readline()
print(L)  # requires QtConsole to view; Cmd.exe is cp1252
fin.close()

# Read cp1252 with a few undefined chars without barfing
fin = OpenRead("badboy.txt")
L = fin.readline()
print(L)
fin.close()

# Check that the bad characters are still in the badboy codepage file
fin = open("badboy.txt", "rb")
fin.read(20)
fin.close()
Answer 5
Depending on your platform, I just opt to use the Linux shell file command. This works for me since I am using it in a script that exclusively runs on one of our Linux machines.
Obviously this isn't an ideal solution or answer, but it could be modified to fit your needs. In my case I just need to determine whether a file is UTF-8 or not (a variant using file's --mime-encoding flag follows the example).
import subprocess

def is_utf8(filename):
    file_cmd = ['file', filename]
    p = subprocess.Popen(file_cmd, stdout=subprocess.PIPE)
    cmd_output = p.stdout.readlines()
    # the first output line looks like b'test.txt: UTF-8 Unicode text\n';
    # take the file-type description after the 'filename: ' prefix
    x = cmd_output[0].decode().split(": ")[1]
    return x.startswith('UTF-8')
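If your file binary supports it (GNU file does; check yours), the --mime-encoding flag prints just the charset and saves you parsing the human-readable description:

import subprocess

def detect_encoding(path):
    # -b suppresses the leading 'filename:' prefix in the output
    out = subprocess.run(['file', '-b', '--mime-encoding', path],
                         stdout=subprocess.PIPE, check=True)
    return out.stdout.decode().strip()  # e.g. 'utf-8', 'us-ascii', 'binary'

print(detect_encoding('test.txt'))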
Answer 6
This might be helpful:
from bs4 import UnicodeDammit

with open('automate_data/billboard.csv', 'rb') as file:
    content = file.read()

suggestion = UnicodeDammit(content)
suggestion.original_encoding
# 'iso-8859-1'
Answer 7
It is, in principle, impossible to determine the encoding of a text file, in the general case. So no, there is no standard Python library to do that for you.
If you have more specific knowledge about the text file (e.g. that it is XML), there might be library functions; a hedged sketch for the XML case follows.
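For example, for XML you could peek at the declaration yourself; a minimal regex sketch (assuming the encoding is declared in the usual encoding="..." attribute):

import re

def xml_declared_encoding(path):
    # the XML declaration, if present, sits in the first bytes of the file
    with open(path, 'rb') as f:
        head = f.read(200)
    m = re.match(rb'<\?xml[^>]*encoding=["\']([A-Za-z0-9._-]+)["\']', head)
    return m.group(1).decode('ascii') if m else None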
Answer 8
If you know some of the content of the file, you can try to decode it with several encodings and see which one fits. In general there is no way, since a text file is just a text file, and those are stupid ;)
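A tiny sketch of that idea (the candidate encodings and the expected phrase are assumptions you would supply):

raw = open('unknown-file', 'rb').read()
for enc in ('utf-8', 'cp1252', 'latin-1'):
    try:
        text = raw.decode(enc)
    except UnicodeDecodeError:
        continue  # this encoding cannot even decode the bytes
    if 'some phrase you know is in the file' in text:
        print('looks like', enc)
        break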
Answer 9
This site has Python code for recognizing ASCII, encodings with BOMs, and UTF-8 without a BOM: https://unicodebook.readthedocs.io/guess_encoding.html. Read the file into a byte array (data): http://www.codecodex.com/wiki/Read_a_file_into_a_byte_array. Here’s an example. I’m on OS X.
#!/usr/bin/python
import sys


def isUTF8(data):
    try:
        decoded = data.decode('UTF-8')
    except UnicodeDecodeError:
        return False
    else:
        # UTF-8 must not encode surrogate code points (U+D800..U+DFFF)
        for ch in decoded:
            if 0xD800 <= ord(ch) <= 0xDFFF:
                return False
        return True


def get_bytes_from_file(filename):
    return open(filename, "rb").read()


filename = sys.argv[1]
data = get_bytes_from_file(filename)
result = isUTF8(data)
print(result)
PS /Users/js> ./isutf8.py hi.txt
True