Tag archive: encoding

How to determine the encoding of text?

Question: How to determine the encoding of text?

I received some text that is encoded, but I don’t know what charset was used. Is there a way to determine the encoding of a text file using Python? (The related question How can I detect the encoding/codepage of a text file deals with C#.)


Answer 0

Correctly detecting the encoding all of the time is impossible.

(From chardet FAQ:)

However, some encodings are optimized for specific languages, and languages are not random. Some character sequences pop up all the time, while other sequences make no sense. A person fluent in English who opens a newspaper and finds “txzqJv 2!dasd0a QqdKjvz” will instantly recognize that that isn’t English (even though it is composed entirely of English letters). By studying lots of “typical” text, a computer algorithm can simulate this kind of fluency and make an educated guess about a text’s language.

There is the chardet library that uses that study to try to detect encoding. chardet is a port of the auto-detection code in Mozilla.
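
For instance, a minimal detection call (a sketch; 'unknown-file' is a placeholder for your own file) looks like this:

import chardet

with open('unknown-file', 'rb') as f:
    raw = f.read()

result = chardet.detect(raw)
print(result['encoding'], result['confidence'])  # e.g. utf-8 0.99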

You can also use UnicodeDammit. It will try the following methods:

  • An encoding discovered in the document itself: for instance, in an XML declaration or (for HTML documents) an http-equiv META tag. If Beautiful Soup finds this kind of encoding within the document, it parses the document again from the beginning and gives the new encoding a try. The only exception is if you explicitly specified an encoding, and that encoding actually worked: then it will ignore any encoding it finds in the document.
  • An encoding sniffed by looking at the first few bytes of the file. If an encoding is detected at this stage, it will be one of the UTF-* encodings, EBCDIC, or ASCII.
  • An encoding sniffed by the chardet library, if you have it installed.
  • UTF-8
  • Windows-1252

Answer 1

Another option for working out the encoding is to use libmagic (which is the code behind the file command). There is a profusion of Python bindings available.

The python bindings that live in the file source tree are available as the python-magic (or python3-magic) debian package. It can determine the encoding of a file by doing:

import magic

blob = open('unknown-file', 'rb').read()
m = magic.open(magic.MAGIC_MIME_ENCODING)
m.load()
encoding = m.buffer(blob)  # "utf-8" "us-ascii" etc

There is an identically named, but incompatible, python-magic pip package on pypi that also uses libmagic. It can also get the encoding, by doing:

import magic

blob = open('unknown-file', 'rb').read()
m = magic.Magic(mime_encoding=True)
encoding = m.from_buffer(blob)

Answer 2

Some encoding strategies; please uncomment to taste:

#!/bin/bash
#
tmpfile=$1
echo '-- info about file file ........'
file -i $tmpfile
enca -g $tmpfile
echo 'recoding ........'
#iconv -f iso-8859-2 -t utf-8 back_test.xml > $tmpfile
#enca -x utf-8 $tmpfile
#enca -g $tmpfile
recode CP1250..UTF-8 $tmpfile

You might like to check the encoding by opening and reading the file in a loop… but you might need to check the file size first:

import codecs

encodings = ['utf-8', 'windows-1250', 'windows-1252']  # add more to taste
for e in encodings:
    try:
        fh = codecs.open('file.txt', 'r', encoding=e)
        fh.readlines()
        fh.seek(0)
    except UnicodeDecodeError:
        print('got unicode error with %s , trying different encoding' % e)
    else:
        print('opening the file with encoding:  %s ' % e)
        break

Answer 3

Here is an example of reading and taking at face value a chardet encoding prediction, reading n_lines from the file in the event it is large.

chardet also gives you a probability (i.e. confidence) for its encoding prediction (I haven’t looked at how they come up with that), which is returned along with the encoding from chardet.detect(), so you could work that in somehow if you like.

def predict_encoding(file_path, n_lines=20):
    '''Predict a file's encoding using chardet'''
    import chardet

    # Open the file as binary data
    with open(file_path, 'rb') as f:
        # Join binary lines for specified number of lines
        rawdata = b''.join([f.readline() for _ in range(n_lines)])

    return chardet.detect(rawdata)['encoding']

Answer 4

# Function: OpenRead(file)

# A text file can be encoded using:
#   (1) The default operating system code page, Or
#   (2) utf8 with a BOM header
#
#  If a text file is encoded with utf8, and does not have a BOM header,
#  the user can manually add a BOM header to the text file
#  using a text editor such as notepad++, and rerun the python script,
#  otherwise the file is read as a codepage file with the 
#  invalid codepage characters removed

import sys
if int(sys.version[0]) != 3:
    print('Aborted: Python 3.x required')
    sys.exit(1)

def bomType(file):
    """
    returns file encoding string for open() function

    EXAMPLE:
        bom = bomtype(file)
        open(file, encoding=bom, errors='ignore')
    """

    f = open(file, 'rb')
    b = f.read(4)
    f.close()

    if (b[0:3] == b'\xef\xbb\xbf'):
        return "utf8"

    # Check the 4-byte UTF-32 BOMs before the 2-byte UTF-16 BOMs, because
    # the UTF-32-LE BOM starts with the same two bytes as the UTF-16-LE BOM
    if ((b[0:4] == b'\xff\xfe\x00\x00')
              or (b[0:4] == b'\x00\x00\xfe\xff')):
        return "utf32"

    # Python automatically detects endianness if utf-16 bom is present
    # write endianness generally determined by endianness of CPU
    if ((b[0:2] == b'\xfe\xff') or (b[0:2] == b'\xff\xfe')):
        return "utf16"

    # If no BOM is provided, then assume it's the codepage
    #     used by your operating system
    return "cp1252"
    # For the United States its: cp1252


def OpenRead(file):
    bom = bomType(file)
    return open(file, 'r', encoding=bom, errors='ignore')


#######################
# Testing it
#######################
fout = open("myfile1.txt", "w", encoding="cp1252")
fout.write("* hi there (cp1252)")
fout.close()

fout = open("myfile2.txt", "w", encoding="utf8")
fout.write("\u2022 hi there (utf8)")
fout.close()

# this case is still treated like codepage cp1252
#   (User responsible for making sure that all utf8 files
#   have a BOM header)
fout = open("badboy.txt", "wb")
fout.write(b"hi there.  barf(\x81\x8D\x90\x9D)")
fout.close()

# Read Example file with Bom Detection
fin = OpenRead("myfile1.txt")
L = fin.readline()
print(L)
fin.close()

# Read Example file with Bom Detection
fin = OpenRead("myfile2.txt")
L = fin.readline()
print(L) # requires QtConsole to view, Cmd.exe is cp1252
fin.close()

# Read CP1252 with a few undefined chars without barfing
fin = OpenRead("badboy.txt")
L = fin.readline()
print(L)
fin.close()

# Check that bad characters are still in badboy codepage file
fin = open("badboy.txt", "rb")
fin.read(20)
fin.close()

Answer 5

Depending on your platform, I just opt to use the linux shell file command. This works for me since I am using it in a script that exclusively runs on one of our linux machines.

Obviously this isn’t an ideal solution or answer, but it could be modified to fit your needs. In my case I just need to determine whether a file is UTF-8 or not.

import subprocess

def file_is_utf8(path):  # wrapped in a function so the original 'return' is valid
    file_cmd = ['file', path]
    p = subprocess.Popen(file_cmd, stdout=subprocess.PIPE)
    cmd_output = p.stdout.readlines()
    # x will begin with the file type output as is observed using 'file' command
    x = cmd_output[0].decode().split(": ")[1]
    return x.startswith('UTF-8')

Answer 6

This might be helpful

from bs4 import UnicodeDammit
with open('automate_data/billboard.csv', 'rb') as file:
    content = file.read()

suggestion = UnicodeDammit(content)
print(suggestion.original_encoding)
# 'iso-8859-1'

Answer 7

It is, in principle, impossible to determine the encoding of a text file, in the general case. So no, there is no standard Python library to do that for you.

If you have more specific knowledge about the text file (e.g. that it is XML), there might be library functions.
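
For instance, a rough sketch of pulling a declared encoding out of an XML file (the helper name and the regex are mine, not a standard library function):

import re

def xml_declared_encoding(path):
    # An XML declaration, if present, must be at the very start of the file
    with open(path, 'rb') as f:
        first_line = f.readline()
    m = re.search(rb'encoding=["\']([A-Za-z0-9._-]+)["\']', first_line)
    return m.group(1).decode('ascii') if m else None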


Answer 8

If you know some of the content of the file, you can try to decode it with several encodings and see which one fits. In general there is no way, since a text file is a text file and those are stupid ;)
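
A minimal sketch of that idea (the file name and the expected substring are placeholders; note that an 8-bit codec like latin-1 never raises, so the content check does the real work):

raw = open('unknown-file', 'rb').read()
for enc in ['utf-8', 'cp1252', 'latin-1']:
    try:
        text = raw.decode(enc)
    except UnicodeDecodeError:
        continue
    if 'café' in text:  # content you know should be there
        print('probably %s' % enc)
        break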


Answer 9

This site has python code for recognizing ascii, encodings with BOMs, and utf8 with no BOM: https://unicodebook.readthedocs.io/guess_encoding.html. Read the file into a byte array (data): http://www.codecodex.com/wiki/Read_a_file_into_a_byte_array. Here’s an example. I’m on OS X.

#!/usr/bin/python                                                                                                  

import sys

def isUTF8(data):
    try:
        decoded = data.decode('UTF-8')
    except UnicodeDecodeError:
        return False
    else:
        for ch in decoded:
            if 0xD800 <= ord(ch) <= 0xDFFF:
                return False
        return True

def get_bytes_from_file(filename):
    return open(filename, "rb").read()

filename = sys.argv[1]
data = get_bytes_from_file(filename)
result = isUTF8(data)
print(result)


PS /Users/js> ./isutf8.py hi.txt                                                                                     
True

Let JSON object accept bytes or let urlopen output strings

Question: Let JSON object accept bytes or let urlopen output strings

With Python 3 I am requesting a json document from a URL.

response = urllib.request.urlopen(request)

The response object is a file-like object with read and readline methods. Normally a JSON object can be created with a file opened in text mode.

obj = json.load(fp)

What I would like to do is:

obj = json.load(response)

This however does not work as urlopen returns a file object in binary mode.

A work around is of course:

str_response = response.read().decode('utf-8')
obj = json.loads(str_response)

but this feels bad…

Is there a better way that I can transform a bytes file object to a string file object? Or am I missing any parameters for either urlopen or json.load to give an encoding?


Answer 0

HTTP sends bytes. If the resource in question is text, the character encoding is normally specified, either by the Content-Type HTTP header or by another mechanism (an RFC, HTML meta http-equiv,…).

urllib should know how to decode the bytes to a string, but it’s too naïve—it’s a horribly underpowered and un-Pythonic library.

Dive Into Python 3 provides an overview about the situation.

Your “work-around” is fine—although it feels wrong, it’s the correct way to do it.
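
For illustration, a sketch of acting on that Content-Type mechanism in Python 3 (the URL is a placeholder; RFC 8259 makes UTF-8 the default for JSON anyway):

import json
import urllib.request

response = urllib.request.urlopen('http://example.com/data.json')
charset = response.headers.get_content_charset() or 'utf-8'
obj = json.loads(response.read().decode(charset))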


Url decode UTF-8 in Python

Question: Url decode UTF-8 in Python

I have spent plenty of time on this, as I am a newbie in Python.
How could I ever decode such a URL:

example.com?title=%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0

to this one in python 2.7: example.com?title==правовая+защита

url=urllib.unquote(url.encode("utf8")) is returning something very ugly.

Still no solution, any help is appreciated.


Answer 0

The data is UTF-8 encoded bytes escaped with URL quoting, so you want to decode it with urllib.parse.unquote(), which handles decoding from percent-encoded data to UTF-8 bytes and then to text, transparently:

from urllib.parse import unquote

url = unquote(url)

Demo:

>>> from urllib.parse import unquote
>>> url = 'example.com?title=%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0'
>>> unquote(url)
'example.com?title=правовая+защита'

The Python 2 equivalent is urllib.unquote(), but this returns a bytestring, so you’d have to decode manually:

from urllib import unquote

url = unquote(url).decode('utf8')

Answer 1

If you are using Python 3, you can use urllib.parse

url = """example.com?title=%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0"""

import urllib.parse
urllib.parse.unquote(url)

gives:

'example.com?title=правовая+защита'

Replace non-ASCII characters with a single space

Question: Replace non-ASCII characters with a single space

I need to replace all non-ASCII (\x00-\x7F) characters with a space. I’m surprised that this is not dead-easy in Python, unless I’m missing something. The following function simply removes all non-ASCII characters:

def remove_non_ascii_1(text):

    return ''.join(i for i in text if ord(i)<128)

And this one replaces non-ASCII characters with the amount of spaces as per the amount of bytes in the character code point (i.e. the character is replaced with 3 spaces):

def remove_non_ascii_2(text):

    return re.sub(r'[^\x00-\x7F]',' ', text)

How can I replace all non-ASCII characters with a single space?

Of the myriad of similar SO questions, none address character replacement as opposed to stripping, and none additionally address all non-ASCII characters rather than one specific character.


Answer 0

Your ''.join() expression is filtering, removing anything non-ASCII; you could use a conditional expression instead:

return ''.join([i if ord(i) < 128 else ' ' for i in text])

This handles characters one by one and would still use one space per character replaced.

Your regular expression should just replace consecutive non-ASCII characters with a space:

re.sub(r'[^\x00-\x7F]+',' ', text)

Note the + there.


Answer 1

To get the most alike representation of your original string, I recommend the unidecode module:

from unidecode import unidecode
def remove_non_ascii(text):
    return unidecode(unicode(text, encoding = "utf-8"))

Then you can use it in a string:

remove_non_ascii("Ceñía")
Cenia

Answer 2

For character processing, use Unicode strings:

PythonWin 3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:57:17) [MSC v.1600 64 bit (AMD64)] on win32.
>>> s='ABC马克def'
>>> import re
>>> re.sub(r'[^\x00-\x7f]',r' ',s)   # Each char is a Unicode codepoint.
'ABC  def'
>>> b = s.encode('utf8')
>>> re.sub(rb'[^\x00-\x7f]',rb' ',b) # Each char is a 3-byte UTF-8 sequence.
b'ABC      def'

But note you will still have a problem if your string contains decomposed Unicode characters (separate character and combining accent marks, for example):

>>> s = 'mañana'
>>> len(s)
6
>>> import unicodedata as ud
>>> n=ud.normalize('NFD',s)
>>> n
'mañana'
>>> len(n)
7
>>> re.sub(r'[^\x00-\x7f]',r' ',s) # single codepoint
'ma ana'
>>> re.sub(r'[^\x00-\x7f]',r' ',n) # only combining mark replaced
'man ana'
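
One way to sidestep that (my sketch, not part of the original answer) is to recompose with NFC before filtering, so the base character and its combining accent count as one codepoint again:

>>> re.sub(r'[^\x00-\x7f]+',r' ',ud.normalize('NFC',n))
'ma ana'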

Answer 3

If the replacement character can be ‘?’ instead of a space, then I’d suggest result = text.encode('ascii', 'replace').decode():

"""Test the performance of different non-ASCII replacement methods."""


import re
from timeit import timeit


# 10_000 is typical in the project that I'm working on and most of the text
# is going to be non-ASCII.
text = 'Æ' * 10_000


print(timeit(
    """
result = ''.join([c if ord(c) < 128 else '?' for c in text])
    """,
    number=1000,
    globals=globals(),
))

print(timeit(
    """
result = text.encode('ascii', 'replace').decode()
    """,
    number=1000,
    globals=globals(),
))

Results:

0.7208260721400134
0.009975979187503592

Answer 4

What about this one?

def replace_trash(unicode_string):
    chars = []
    for char in unicode_string:
        try:
            char.encode("ascii")
        except UnicodeEncodeError:
            # means it's non-ASCII: replace it with a single space
            char = " "
        chars.append(char)
    return "".join(chars)

Answer 5

As a native and efficient approach, you don’t need to use ord or any loop over the characters. Just encode with ascii and ignore the errors.

The following will just remove the non-ascii characters:

new_string = old_string.encode('ascii',errors='ignore')

Now if you want to replace the deleted characters just do the following:

final_string = new_string + b' ' * (len(old_string) - len(new_string))
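
For example (my illustration of the answer’s approach; note that the replacement spaces end up at the end of the string, not in place):

old_string = u'abcñé'
new_string = old_string.encode('ascii', errors='ignore')
final_string = new_string + b' ' * (len(old_string) - len(new_string))
print(final_string)  # b'abc  '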

Answer 6

Potentially for a different question, but I’m providing my version of @Alvero’s answer (using unidecode). I want to do a “regular” strip on my strings, i.e. the beginning and end of my string for whitespace characters, and then replace only other whitespace characters with a “regular” space, i.e.

"Ceñíaㅤmañanaㅤㅤㅤㅤ"

to

"Ceñía mañana"

,

from unidecode import unidecode

def safely_stripped(s: str):
    return ' '.join(
        stripped for stripped in
        (bit.strip() for bit in
         ''.join((c if unidecode(c) else ' ') for c in s).strip().split())
        if stripped)

We first replace all non-unicode spaces with a regular space (and join it back again),

''.join((c if unidecode(c) else ' ') for c in s)

And then we split that again, with python’s normal split, and strip each “bit”,

(bit.strip() for bit in s.split())

And lastly join those back again, but only if the string passes an if test,

' '.join(stripped for stripped in s if stripped)

And with that, safely_stripped('ㅤㅤㅤㅤCeñíaㅤmañanaㅤㅤㅤㅤ') correctly returns 'Ceñía mañana'.


How to check if a string is unicode or ascii?

Question: How to check if a string is unicode or ascii?

What do I have to do in Python to figure out which encoding a string has?


Answer 0

In Python 3, all strings are sequences of Unicode characters. There is a bytes type that holds raw bytes.

In Python 2, a string may be of type str or of type unicode. You can tell which using code something like this:

def whatisthis(s):
    if isinstance(s, str):
        print "ordinary string"
    elif isinstance(s, unicode):
        print "unicode string"
    else:
        print "not a string"

This does not distinguish “Unicode or ASCII”; it only distinguishes Python types. A Unicode string may consist of purely characters in the ASCII range, and a bytestring may contain ASCII, encoded Unicode, or even non-textual data.
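
If what you actually want to know is whether a Python 3 string is pure ASCII, a quick check (my aside, not part of the answer) is:

s = 'abc'
print(s.isascii())                      # Python 3.7+
print(all(ord(ch) < 128 for ch in s))   # works on older versions too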


Answer 1

How to tell if an object is a unicode string or a byte string

You can use type or isinstance.

In Python 2:

>>> type(u'abc')  # Python 2 unicode string literal
<type 'unicode'>
>>> type('abc')   # Python 2 byte string literal
<type 'str'>

In Python 2, str is just a sequence of bytes. Python doesn’t know what its encoding is. The unicode type is the safer way to store text. If you want to understand this more, I recommend http://farmdev.com/talks/unicode/.

In Python 3:

>>> type('abc')   # Python 3 unicode string literal
<class 'str'>
>>> type(b'abc')  # Python 3 byte string literal
<class 'bytes'>

In Python 3, str is like Python 2’s unicode, and is used to store text. What was called str in Python 2 is called bytes in Python 3.


How to tell if a byte string is valid utf-8 or ascii

You can call decode. If it raises a UnicodeDecodeError exception, it wasn’t valid.

>>> u_umlaut = b'\xc3\x9c'   # UTF-8 representation of the letter 'Ü'
>>> u_umlaut.decode('utf-8')
u'\xdc'
>>> u_umlaut.decode('ascii')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)

Answer 2

In Python 3.x all strings are sequences of Unicode characters, so doing the isinstance check for str (which means a unicode string by default) should suffice.

isinstance(x, str)

With regards to Python 2.x, most people seem to be using an if statement that has two checks, one for str and one for unicode.

If you want to check if you have a ‘string-like’ object all with one statement though, you can do the following:

isinstance(x, basestring)

Answer 3

Unicode is not an encoding – to quote Kumar McMillan:

If ASCII, UTF-8, and other byte strings are “text” …

…then Unicode is “text-ness”;

it is the abstract form of text

Have a read of McMillan’s Unicode In Python, Completely Demystified talk from PyCon 2008, it explains things a lot better than most of the related answers on Stack Overflow.


Answer 4

If your code needs to be compatible with both Python 2 and Python 3, you can’t directly use things like isinstance(s,bytes) or isinstance(s,unicode) without wrapping them in either try/except or a python version test, because bytes is undefined in Python 2 and unicode is undefined in Python 3.

There are some ugly workarounds. An extremely ugly one is to compare the name of the type, instead of comparing the type itself. Here’s an example:

# convert bytes (python 3) or unicode (python 2) to str
if str(type(s)) == "<class 'bytes'>":
    # only possible in Python 3
    s = s.decode('ascii')  # or  s = str(s)[2:-1]
elif str(type(s)) == "<type 'unicode'>":
    # only possible in Python 2
    s = str(s)

An arguably slightly less ugly workaround is to check the Python version number, e.g.:

if sys.version_info >= (3,0,0):
    # for Python 3
    if isinstance(s, bytes):
        s = s.decode('ascii')  # or  s = str(s)[2:-1]
else:
    # for Python 2
    if isinstance(s, unicode):
        s = str(s)

Those are both unpythonic, and most of the time there’s probably a better way.
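
One such arguably better way (a sketch in the same spirit, not from the answer) is to compute the tuple of string types once, up front:

s = u'abc'

try:
    string_types = (str, unicode)  # Python 2; raises NameError on Python 3
except NameError:
    string_types = (str,)          # Python 3

print(isinstance(s, string_types))  # True on both versions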


Answer 5

use:

import six
if isinstance(obj, six.text_type)

inside the six library it is represented as:

if PY3:
    string_types = str,
else:
    string_types = basestring,

Answer 6

Note that on Python 3, it’s not really fair to say any of:

  • strs are UTFx for any x (eg. UTF8)

  • strs are Unicode

  • strs are ordered collections of Unicode characters

Python’s str type is (normally) a sequence of Unicode code points, some of which map to characters.


Even on Python 3, it’s not as simple to answer this question as you might imagine.

An obvious way to test for ASCII-compatible strings is by an attempted encode:

"Hello there!".encode("ascii")
#>>> b'Hello there!'

"Hello there... ☃!".encode("ascii")
#>>> Traceback (most recent call last):
#>>>   File "", line 4, in <module>
#>>> UnicodeEncodeError: 'ascii' codec can't encode character '\u2603' in position 15: ordinal not in range(128)

The error distinguishes the cases.

In Python 3, there are even some strings that contain invalid Unicode code points:

"Hello there!".encode("utf8")
#>>> b'Hello there!'

"\udcc3".encode("utf8")
#>>> Traceback (most recent call last):
#>>>   File "", line 19, in <module>
#>>> UnicodeEncodeError: 'utf-8' codec can't encode character '\udcc3' in position 0: surrogates not allowed

The same method to distinguish them is used.


Answer 7

This may help someone else, I started out testing for the string type of the variable s, but for my application, it made more sense to simply return s as utf-8. The process calling return_utf, then knows what it is dealing with and can handle the string appropriately. The code is not pristine, but I intend for it to be Python version agnostic without a version test or importing six. Please comment with improvements to the sample code below to help other people.

def return_utf(s):
    if isinstance(s, str):
        return s.encode('utf-8')
    if isinstance(s, (int, float, complex)):
        return str(s).encode('utf-8')
    try:
        return s.encode('utf-8')
    except TypeError:
        try:
            return str(s).encode('utf-8')
        except AttributeError:
            return s
    except AttributeError:
        return s
    return s # assume it was already utf-8

Answer 8

You could use the Universal Encoding Detector, but be aware that it will just give you its best guess, not the actual encoding, because it’s impossible to know the encoding of a string like “abc”, for example. You will need to get the encoding information elsewhere; e.g. the HTTP protocol uses the Content-Type header for that.


Answer 9

For py2/py3 compatibility simply use

import six
if isinstance(obj, six.text_type)


Answer 10

One simple approach is to check if unicode is a builtin function. If so, you’re in Python 2 and your string will be a string. To ensure everything is in unicode one can do:

try:
    import builtins                 # Python 3
except ImportError:
    import __builtin__ as builtins  # Python 2, where 'builtins' doesn't exist

i = 'cats'
if 'unicode' in dir(builtins):      # True in python 2, False in 3
    i = unicode(i)

How to percent-encode URL parameters in Python?

Question: How to percent-encode URL parameters in Python?

If I do

url = "http://example.com?p=" + urllib.quote(query)
  1. It doesn’t encode / to %2F (breaks OAuth normalization)
  2. It doesn’t handle Unicode (it throws an exception)

Is there a better library?


Answer 0

Python 2

From the docs:

urllib.quote(string[, safe])

Replace special characters in string using the %xx escape. Letters, digits, and the characters '_.-' are never quoted. By default, this function is intended for quoting the path section of the URL. The optional safe parameter specifies additional characters that should not be quoted; its default value is '/'.

That means passing '' for safe will solve your first issue:

>>> urllib.quote('/test')
'/test'
>>> urllib.quote('/test', safe='')
'%2Ftest'

About the second issue, there is a bug report about it here. Apparently it was fixed in Python 3. You can work around it by encoding as UTF-8, like this:

>>> query = urllib.quote(u"Müller".encode('utf8'))
>>> print urllib.unquote(query).decode('utf8')
Müller

By the way, have a look at urlencode.

Python 3

The same, except replace urllib.quote with urllib.parse.quote.


Answer 1

In Python 3, urllib.quote has been moved to urllib.parse.quote and it does handle unicode by default.

>>> from urllib.parse import quote
>>> quote('/test')
'/test'
>>> quote('/test', safe='')
'%2Ftest'
>>> quote('/El Niño/')
'/El%20Ni%C3%B1o/'

Answer 2

My answer is similar to Paolo’s answer.

I think the requests module is much better. It’s based on urllib3. You can try this:

>>> from requests.utils import quote
>>> quote('/test')
'/test'
>>> quote('/test', safe='')
'%2Ftest'

Answer 3

If you’re using django, you can use urlquote:

>>> from django.utils.http import urlquote
>>> urlquote(u"Müller")
u'M%C3%BCller'

Note that changes to Python since this answer was published mean that this is now a legacy wrapper. From the Django 2.1 source code for django.utils.http:

A legacy compatibility wrapper to Python's urllib.parse.quote() function.
(was used for unicode handling on Python 2)

Answer 4

It is better to use urlencode here. Not much difference for a single parameter, but IMHO it makes the code clearer. (It looks confusing to see a function named quote_plus, especially to those coming from other languages.)

In [21]: query='lskdfj/sdfkjdf/ksdfj skfj'

In [22]: val=34

In [23]: from urllib.parse import urlencode

In [24]: encoded = urlencode(dict(p=query,val=val))

In [25]: print(f"http://example.com?{encoded}")
http://example.com?p=lskdfj%2Fsdfkjdf%2Fksdfj+skfj&val=34

Docs

urlencode: https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlencode

quote_plus: https://docs.python.org/3/library/urllib.parse.html#urllib.parse.quote_plus


Setting the correct encoding when piping stdout in Python

Question: Setting the correct encoding when piping stdout in Python

When piping the output of a Python program, the Python interpreter gets confused about encoding and sets it to None. This means a program like this:

# -*- coding: utf-8 -*-
print u"åäö"

will work fine when run normally, but fail with:

UnicodeEncodeError: 'ascii' codec can't encode character u'\xa0' in position 0: ordinal not in range(128)

when used in a pipe sequence.

What is the best way to make this work when piping? Can I just tell it to use whatever encoding the shell/filesystem/whatever is using?

The suggestions I have seen thus far is to modify your site.py directly, or hardcoding the defaultencoding using this hack:

# -*- coding: utf-8 -*-
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
print u"åäö"

Is there a better way to make piping work?


Answer 0

Your code works when run in a script because Python encodes the output to whatever encoding your terminal application is using. If you are piping you must encode it yourself.

A rule of thumb is: Always use Unicode internally. Decode what you receive, and encode what you send.

# -*- coding: utf-8 -*-
print u"åäö".encode('utf-8')

Another didactic example is a Python program to convert between ISO-8859-1 and UTF-8, making everything uppercase in between.

import sys
for line in sys.stdin:
    # Decode what you receive:
    line = line.decode('iso8859-1')

    # Work with Unicode internally:
    line = line.upper()

    # Encode what you send:
    line = line.encode('utf-8')
    sys.stdout.write(line)

Setting the system default encoding is a bad idea, because some modules and libraries you use can rely on the fact it is ASCII. Don’t do it.


Answer 1

First, regarding this solution:

# -*- coding: utf-8 -*-
print u"åäö".encode('utf-8')

It’s not practical to explicitly print with a given encoding every time. That would be repetitive and error-prone.

A better solution is to change sys.stdout at the start of your program, to encode with a selected encoding. Here is one solution I found on Python: How is sys.stdout.encoding chosen?, in particular a comment by “toka”:

import sys
import codecs
sys.stdout = codecs.getwriter('utf8')(sys.stdout)
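
For reference (my aside, not part of the original answer), Python 3.7+ offers a supported equivalent:

import sys
sys.stdout.reconfigure(encoding='utf-8')  # Python 3.7+ only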

Answer 2

You may want to try changing the environment variable “PYTHONIOENCODING” to “utf_8”. I have written a page on my ordeal with this problem.

Tl;dr of the blog post:

import sys, locale, os
print(sys.stdout.encoding)
print(sys.stdout.isatty())
print(locale.getpreferredencoding())
print(sys.getfilesystemencoding())
print(os.environ["PYTHONIOENCODING"])
print(chr(246), chr(9786), chr(9787))

gives you

utf_8
False
ANSI_X3.4-1968
ascii
utf_8
ö ☺ ☻

Answer 3

export PYTHONIOENCODING=utf-8

does the job, but it can’t be set from Python itself…

What we can do is verify that it isn’t set and tell the user to set it before calling the script:

if __name__ == '__main__':
    if (sys.stdout.encoding is None):
        print >> sys.stderr, "please set python env PYTHONIOENCODING=UTF-8, example: export PYTHONIOENCODING=UTF-8, when write to stdout."
        exit(1)

Update to reply to the comment: the problem only exists when piping to stdout. I tested on Fedora 25 with Python 2.7.13:

python --version
Python 2.7.13

cat b.py

#!/usr/bin/env python
#-*- coding: utf-8 -*-
import sys

print sys.stdout.encoding

running ./b.py

UTF-8

running ./b.py | less

None

Answer 4

I had a similar issue last week. It was easy to fix in my IDE (PyCharm).

Here was my fix:

Starting from PyCharm menu bar: File -> Settings… -> Editor -> File Encodings, then set: “IDE Encoding”, “Project Encoding” and “Default encoding for properties files” ALL to UTF-8 and she now works like a charm.

Hope this helps!


Answer 5

An arguably sanitized version of Craig McQueen’s answer:

import sys, codecs
class EncodedOut:
    def __init__(self, enc):
        self.enc = enc
        self.stdout = sys.stdout
    def __enter__(self):
        if sys.stdout.encoding is None:
            w = codecs.getwriter(self.enc)
            sys.stdout = w(sys.stdout)
    def __exit__(self, exc_ty, exc_val, tb):
        sys.stdout = self.stdout

Usage:

with EncodedOut('utf-8'):
    print u'ÅÄÖåäö'

Answer 6

I could “automate” it with a call to:

def __fix_io_encoding(last_resort_default='UTF-8'):
  import sys
  if [x for x in (sys.stdin,sys.stdout,sys.stderr) if x.encoding is None] :
      import os
      defEnc = None
      if defEnc is None :
        try:
          import locale
          defEnc = locale.getpreferredencoding()
        except: pass
      if defEnc is None :
        try: defEnc = sys.getfilesystemencoding()
        except: pass
      if defEnc is None :
        try: defEnc = sys.stdin.encoding
        except: pass
      if defEnc is None :
        defEnc = last_resort_default
      os.environ['PYTHONIOENCODING'] = os.environ.get("PYTHONIOENCODING",defEnc)
      os.execvpe(sys.argv[0],sys.argv,os.environ)
__fix_io_encoding() ; del __fix_io_encoding

Yes, it’s possible to get an infinite loop here if this “setenv” fails.


Answer 7

I just thought I’d mention something here which I had to spent a long time experimenting with before I finally realised what was going on. This may be so obvious to everyone here that they haven’t bothered mentioning it. But it would’ve helped me if they had, so on that principle…!

NB: I am using Jython specifically, v 2.7, so just possibly this may not apply to CPython

NB2: the first two lines of my .py file here are:

# -*- coding: utf-8 -*-
from __future__ import print_function

The “%” (AKA “interpolation operator”) string construction mechanism causes ADDITIONAL problems too… If the default encoding of the “environment” is ASCII and you try to do something like

print( "bonjour, %s" % "fréd" )  # Call this "print A"

You will have no difficulty running in Eclipse… In a Windows CLI (DOS window) you will find that the encoding is code page 850 (my Windows 7 OS) or something similar, which can handle European accented characters at least, so it’ll work.

print( u"bonjour, %s" % "fréd" ) # Call this "print B"

will also work.

If, OTOH, you direct to a file from the CLI, the stdout encoding will be None, which will default to ASCII (on my OS anyway), which will not be able to handle either of the above prints… (dreaded encoding error).

So then you might think of redirecting your stdout by using

sys.stdout = codecs.getwriter('utf8')(sys.stdout)

and try running in the CLI piping to a file… Very oddly, print A above will work… But print B above will throw the encoding error! The following will however work OK:

print( u"bonjour, " + "fréd" ) # Call this "print C"

The conclusion I have come to (provisionally) is that if a string which is specified to be a Unicode string using the “u” prefix is submitted to the %-handling mechanism it appears to involve the use of the default environment encoding, regardless of whether you have set stdout to redirect!

How people deal with this is a matter of choice. I would welcome a Unicode expert to say why this happens, whether I’ve got it wrong in some way, what the preferred solution to this, whether it also applies to CPython, whether it happens in Python 3, etc., etc.


Answer 8

I ran into this problem in a legacy application, and it was difficult to identify where and what was printed. I helped myself with this hack:

# encoding_utf8.py
import builtins


def print_utf8(fn):
    # Decorator: wrap the built-in print so every argument is UTF-8 encoded first
    def print_fn(*args, **kwargs):
        return fn(str(*args).encode('utf-8'), **kwargs)
    return print_fn


builtins.print = print_utf8(print)

On top of my script, test.py:

import encoding_utf8
string = 'Axwell Λ Ingrosso'
print(string)

Note that this changes ALL calls to print to use an encoding, so your console will print this:

$ python test.py
b'Axwell \xce\x9b Ingrosso'

Answer 9

On Windows, I had this problem very often when running a Python code from an editor (like Sublime Text), but not if running it from command-line.

In this case, check your editor’s parameters. In the case of SublimeText, this Python.sublime-build solved it:

{
  "cmd": ["python", "-u", "$file"],
  "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
  "selector": "source.python",
  "encoding": "utf8",
  "env": {"PYTHONIOENCODING": "utf-8", "LANG": "en_US.UTF-8"}
}