Question: Writing to a UTF-8 file in Python
I'm really confused with the codecs.open function. When I do:
file = codecs.open("temp", "w", "utf-8")
file.write(codecs.BOM_UTF8)
file.close()
It gives me the error
UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 0: ordinal not in range(128)
If I do:
file = open("temp", "w")
file.write(codecs.BOM_UTF8)
file.close()
It works fine.
The question is: why does the first method fail? And how do I insert the BOM?
If the second method is the correct way of doing it, what's the point of using codecs.open(filename, "w", "utf-8")?
Answer 0
I believe the problem is that codecs.BOM_UTF8 is a byte string, not a Unicode string. I suspect the file handler is trying to guess what you really mean based on "I'm meant to be writing Unicode as UTF-8-encoded text, but you've given me a byte string!"
Try writing the Unicode string for the byte order mark (i.e. Unicode U+FEFF) directly, so that the file just encodes that as UTF-8:
import codecs
file = codecs.open("lol", "w", "utf-8")
file.write(u'\ufeff')
file.close()
(That seems to give the right answer – a file with bytes EF BB BF.)
EDIT: S. Lott’s suggestion of using “utf-8-sig” as the encoding is a better one than explicitly writing the BOM yourself, but I’ll leave this answer here as it explains what was going wrong before.
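For reference, here is a minimal sketch of that "utf-8-sig" approach (the file name and the text written are just placeholders): the codec emits the BOM itself on the first write, so you never touch codecs.BOM_UTF8 or u'\ufeff' directly.

import codecs

# "utf-8-sig" writes the UTF-8 BOM (EF BB BF) before the first piece of text
f = codecs.open("temp", "w", "utf-8-sig")  # placeholder file name
f.write(u"hello")                          # a Unicode string, not a byte string
f.close()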
Answer 2
@S-Lott gives the right procedure, but expanding on the Unicode issues, the Python interpreter can provide more insights.
Jon Skeet is right (unusual) about the codecs module – it contains byte strings:
>>> import codecs
>>> codecs.BOM
'\xff\xfe'
>>> codecs.BOM_UTF8
'\xef\xbb\xbf'
>>>
Picking another nit, the BOM has a standard Unicode name, and it can be entered as:
>>> bom= u"\N{ZERO WIDTH NO-BREAK SPACE}"
>>> bom
u'\ufeff'
It is also accessible via unicodedata:
>>> import unicodedata
>>> unicodedata.lookup('ZERO WIDTH NO-BREAK SPACE')
u'\ufeff'
>>>
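Tying this back to the original question, here is a small sketch (the file name is only an example) showing that writing the Unicode BOM character through a UTF-8 writer produces exactly the bytes of codecs.BOM_UTF8, and that reading the file back with "utf-8-sig" strips it again:

import codecs

bom = u"\N{ZERO WIDTH NO-BREAK SPACE}"      # u'\ufeff'

f = codecs.open("temp", "w", "utf-8")       # example file name
f.write(bom + u"hello")
f.close()

print open("temp", "rb").read()[:3] == codecs.BOM_UTF8   # True: EF BB BF

f = codecs.open("temp", "r", "utf-8-sig")   # the BOM is stripped on read
print f.read()                              # hello
f.close()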
Answer 3
I use the *nix file command to detect the encoding of a file with an unknown charset and then convert it to a UTF-8 file:
# -*- encoding: utf-8 -*-
# convert a file with an unknown encoding to UTF-8 (Python 2)
import codecs
import commands

file_location = "jumper.sub"
# ask the *nix `file` tool which encoding the file uses
file_encoding = commands.getoutput('file -b --mime-encoding %s' % file_location)

file_stream = codecs.open(file_location, 'r', file_encoding)
file_output = codecs.open(file_location + "b", 'w', 'utf-8')
for l in file_stream:
    file_output.write(l)
file_stream.close()
file_output.close()
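A side note: the commands module exists only in Python 2. Under Python 3 the same idea could look roughly like the sketch below (an untested sketch; the file names are just examples), using subprocess to call file and the built-in open with an encoding argument:

import subprocess

file_location = "jumper.sub"                 # example input file
# `file -b --mime-encoding` prints only the detected encoding name
detected = subprocess.check_output(
    ["file", "-b", "--mime-encoding", file_location]).decode().strip()

with open(file_location, "r", encoding=detected) as src, \
     open(file_location + ".utf8", "w", encoding="utf-8") as dst:
    for line in src:
        dst.write(line)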