Question: Converting bytes to a string

I’m using this code to get standard output from an external program:

>>> from subprocess import *
>>> command_stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0]

The communicate() method returns an array of bytes:

>>> command_stdout
b'total 0\n-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file2\n'

However, I’d like to work with the output as a normal Python string. So that I could print it like this:

>>> print(command_stdout)
-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file1
-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file2

I thought that’s what the binascii.b2a_qp() method is for, but when I tried it, I got the same byte array again:

>>> binascii.b2a_qp(command_stdout)
b'total 0\n-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file2\n'

How do I convert the bytes value back to string? I mean, using the “batteries” instead of doing it manually. And I’d like it to be OK with Python 3.


Answer 0

You need to decode the bytes object to produce a string:

>>> b"abcde"
b'abcde'

# utf-8 is used here because it is a very common encoding, but you
# need to use the encoding your data is actually in.
>>> b"abcde".decode("utf-8") 
'abcde'
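
Applied to the example from the question, that would look roughly like this (assuming the ls output is UTF-8 or plain ASCII, which you would have to confirm for your own data):

>>> command_text = command_stdout.decode("utf-8")
>>> print(command_text)
total 0
-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file1
-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file2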

Answer 1

You need to decode the byte string and turn it in to a character (Unicode) string.

On Python 2

encoding = 'utf-8'
'hello'.decode(encoding)

or

unicode('hello', encoding)

On Python 3

encoding = 'utf-8'
b'hello'.decode(encoding)

or

str(b'hello', encoding)

Answer 2

I think this way is easy:

>>> bytes_data = [112, 52, 52]
>>> "".join(map(chr, bytes_data))
'p44'
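
A side note, not part of the original answer: since these integers are plain byte values, the same result can be obtained by building a bytes object and decoding it, assuming the values are valid ASCII:

>>> bytes_data = [112, 52, 52]
>>> bytes(bytes_data).decode("ascii")
'p44'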

Answer 3

If you don’t know the encoding, then to read binary input into a string in a Python 3 and Python 2 compatible way, use the ancient MS-DOS CP437 encoding:

import sys

PY3K = sys.version_info >= (3, 0)

lines = []
# 'stream' stands for any iterable of bytes lines, e.g. a file opened in binary mode
for line in stream:
    if not PY3K:
        lines.append(line)
    else:
        lines.append(line.decode('cp437'))

Because the encoding is unknown, expect non-English symbols to translate to characters of cp437 (English characters are not translated, because they match in most single-byte encodings and in UTF-8).
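
As a small illustrative check (not from the original answer): a byte that is invalid as a UTF-8 start byte still decodes under cp437, and the mapping can be reversed later:

>>> b'\x80abc'.decode('cp437')
'Çabc'
>>> 'Çabc'.encode('cp437')
b'\x80abc'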

Decoding arbitrary binary input to UTF-8 is unsafe, because you may get this:

>>> b'\x00\x01\xffsd'.decode('utf-8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 2: invalid
start byte

The same applies to latin-1, which was popular (the default?) for Python 2. See the missing points in Codepage Layout: it is where Python chokes with the infamous ordinal not in range error.

UPDATE 20150604: There are rumors that Python 3 has the surrogateescape error strategy for encoding stuff into binary data without data loss and crashes, but it needs conversion tests, [binary] -> [str] -> [binary], to validate both performance and reliability.
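
A minimal sketch of such a round-trip test: undecodable bytes become lone surrogate code points and survive re-encoding:

>>> b'\x80abc'.decode('utf-8', 'surrogateescape')
'\udc80abc'
>>> '\udc80abc'.encode('utf-8', 'surrogateescape')
b'\x80abc'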

UPDATE 20170116: Thanks to a comment by Nearoo, there is also the possibility to slash-escape all unknown bytes with the backslashreplace error handler. That works only for Python 3, so even with this workaround you will still get inconsistent output from different Python versions:

import sys

PY3K = sys.version_info >= (3, 0)

lines = []
# 'stream' stands for any iterable of bytes lines, e.g. a file opened in binary mode
for line in stream:
    if not PY3K:
        lines.append(line)
    else:
        lines.append(line.decode('utf-8', 'backslashreplace'))

See Python’s Unicode Support for details.

UPDATE 20170119: I decided to implement a slash-escaping decode that works for both Python 2 and Python 3. It should be slower than the cp437 solution, but it should produce identical results on every Python version.

# --- preparation

import codecs

def slashescape(err):
    """codecs error handler. err is a UnicodeDecodeError instance. Returns
    a tuple with a replacement for the undecodable part of the input
    and the position where decoding should continue."""
    thebyte = err.object[err.start:err.end]
    # escape every offending byte as a zero-padded \xNN sequence (works on Python 2 and 3)
    repl = u''.join(u'\\x{:02x}'.format(b) for b in bytearray(thebyte))
    return (repl, err.end)

codecs.register_error('slashescape', slashescape)

# --- processing

stream = [b'\x80abc']

lines = []
for line in stream:
    lines.append(line.decode('utf-8', 'slashescape'))
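
For the sample stream above, this should produce:

>>> lines
['\\x80abc']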

Answer 4

In Python 3, the default encoding is "utf-8", so you can directly use:

b'hello'.decode()

which is equivalent to

b'hello'.decode(encoding="utf-8")

On the other hand, in Python 2, encoding defaults to the default string encoding. Thus, you should use:

b'hello'.decode(encoding)

where encoding is the encoding you want.

Note: support for keyword arguments was added in Python 2.7.


Answer 5

I think you actually want this:

>>> from subprocess import *
>>> command_stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0]
>>> command_text = command_stdout.decode(encoding='windows-1252')

Aaron’s answer was correct, except that you need to know which encoding to use. And I believe that Windows uses ‘windows-1252’. It will only matter if you have some unusual (non-ASCII) characters in your content, but then it will make a difference.

By the way, the fact that it does matter is the reason that Python moved to using two different types for binary and text data: it can’t convert magically between them, because it doesn’t know the encoding unless you tell it! The only way YOU would know is to read the Windows documentation (or read it here).


Answer 6

Set universal_newlines to True, i.e.

command_stdout = Popen(['ls', '-l'], stdout=PIPE, universal_newlines=True).communicate()[0]

Answer 7

While @Aaron Maenpaa’s answer just works, a user recently asked:

Is there any more simply way? ‘fhand.read().decode(“ASCII”)’ […] It’s so long!

You can use:

command_stdout.decode()

decode() has standard default arguments:

codecs.decode(obj, encoding='utf-8', errors='strict')


Answer 8

To interpret a byte sequence as a text, you have to know the corresponding character encoding:

unicode_text = bytestring.decode(character_encoding)

Example:

>>> b'\xc2\xb5'.decode('utf-8')
'µ'

The ls command may produce output that can’t be interpreted as text. File names on Unix may be any sequence of bytes except the slash b'/' and the zero byte b'\0':

>>> open(bytes(range(0x100)).translate(None, b'\0/'), 'w').close()

Trying to decode such byte soup using utf-8 encoding raises UnicodeDecodeError.

It can be worse. The decoding may fail silently and produce mojibake if you use the wrong, incompatible encoding:

>>> '—'.encode('utf-8').decode('cp1252')
'â€”'

The data is corrupted but your program remains unaware that a failure has occurred.

In general, what character encoding to use is not embedded in the byte sequence itself. You have to communicate this info out-of-band. Some outcomes are more likely than others, which is why the chardet module exists: it can guess the character encoding. A single Python script may use multiple character encodings in different places.
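
A rough sketch of such guessing, assuming the third-party chardet package is installed (its guess is a heuristic, not a guarantee):

import chardet

raw = b'caf\xe9 au lait'
guess = chardet.detect(raw)  # a dict like {'encoding': ..., 'confidence': ..., 'language': ...}
text = raw.decode(guess['encoding'] or 'utf-8', errors='replace')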


The ls output can be converted to a Python string using the os.fsdecode() function, which succeeds even for undecodable filenames (it uses sys.getfilesystemencoding() and the surrogateescape error handler on Unix):

import os
import subprocess

output = os.fsdecode(subprocess.check_output('ls'))

To get the original bytes, you could use os.fsencode().

If you pass the universal_newlines=True parameter, then subprocess uses locale.getpreferredencoding(False) to decode the bytes; for example, it can be cp1252 on Windows.

To decode the byte stream on the fly, io.TextIOWrapper() could be used.
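
A minimal sketch of such on-the-fly decoding around a subprocess pipe; the encoding and error handler here are assumptions, so pick whatever matches your data:

import io
import subprocess

proc = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
# wrap the binary pipe so that each line is decoded as it arrives
for line in io.TextIOWrapper(proc.stdout, encoding='utf-8', errors='replace'):
    print(line, end='')
proc.wait()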

Different commands may use different character encodings for their output; for example, the dir internal command (cmd) may use cp437. To decode its output, you could pass the encoding explicitly (Python 3.6+):

output = subprocess.check_output('dir', shell=True, encoding='cp437')

The filenames may differ from what os.listdir() returns (it uses the Windows Unicode API); for example, '\xb6' can be substituted with '\x14' because Python’s cp437 codec maps b'\x14' to the control character U+0014 instead of U+00B6 (¶). To support filenames with arbitrary Unicode characters, see Decode PowerShell output possibly containing non-ASCII Unicode characters into a Python string.


Answer 9

Since this question is actually asking about subprocess output, you have a more direct approach available since Popen accepts an encoding keyword (in Python 3.6+):

>>> from subprocess import Popen, PIPE
>>> text = Popen(['ls', '-l'], stdout=PIPE, encoding='utf-8').communicate()[0]
>>> type(text)
str
>>> print(text)
total 0
-rw-r--r-- 1 wim badger 0 May 31 12:45 some_file.txt

The general answer for other users is to decode bytes to text:

>>> b'abcde'.decode()
'abcde'

With no argument given, sys.getdefaultencoding() will be used. If your data is not in that encoding, then you must specify the encoding explicitly in the decode call:

>>> b'caf\xe9'.decode('cp1250')
'café'

Answer 10

If you get the following error when trying decode():

AttributeError: 'str' object has no attribute 'decode'

You can also specify the encoding type straight in a cast:

>>> my_byte_str
b'Hello World'

>>> str(my_byte_str, 'utf-8')
'Hello World'

Answer 11

When working with data from Windows systems (with \r\n line endings), my answer is

String = Bytes.decode("utf-8").replace("\r\n", "\n")

Why? Try this with a multiline Input.txt:

Bytes = open("Input.txt", "rb").read()
String = Bytes.decode("utf-8")
open("Output.txt", "w").write(String)

All your line endings will be doubled (to \r\r\n), leading to extra empty lines. Python’s text-read functions usually normalize line endings so that strings use only \n. If you receive binary data from a Windows system, Python does not have a chance to do that. Thus,

Bytes = open("Input.txt", "rb").read()
String = Bytes.decode("utf-8").replace("\r\n", "\n")
open("Output.txt", "w").write(String)

will replicate your original file.
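
An alternative sketch, not from the original answer: leave the decoded string untouched and instead disable newline translation when writing, which also replicates the original file:

Bytes = open("Input.txt", "rb").read()
String = Bytes.decode("utf-8")
open("Output.txt", "w", newline="").write(String)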


Answer 12

I made a function to clean a list

def cleanLists(self, lista):
    # strip surrounding whitespace, then drop newline and backspace characters
    lista = [x.strip() for x in lista]
    lista = [x.replace('\n', '') for x in lista]
    lista = [x.replace('\b', '') for x in lista]
    # round-trip each item through UTF-8; well-formed text comes back unchanged
    lista = [x.encode('utf8') for x in lista]
    lista = [x.decode('utf8') for x in lista]

    return lista

Answer 13

For Python 3, this is a safer and more Pythonic approach to convert from bytes to string:

def byte_to_str(bytes_or_str):
    if isinstance(bytes_or_str, bytes): # Check if it's in bytes
        print(bytes_or_str.decode('utf-8'))
    else:
        print("Object not of byte type")

byte_to_str(b'total 0\n-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file2\n')

Output:

total 0
-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file1
-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file2

Answer 14

From sys — System-specific parameters and functions:

To write or read binary data from/to the standard streams, use the underlying binary buffer. For example, to write bytes to stdout, use sys.stdout.buffer.write(b'abc').
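
A minimal sketch of using those buffers, passing bytes straight through without decoding at any point:

import sys

# read raw bytes from stdin and write them unchanged to stdout
data = sys.stdin.buffer.read()
sys.stdout.buffer.write(data)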


Answer 15

def toString(string):
    try:
        # bytes objects decode to text; a str has no decode() and raises AttributeError
        return string.decode("utf-8")
    except (UnicodeDecodeError, AttributeError):
        return string

b = b'97.080.500'
s = '97.080.500'
print(toString(b))
print(toString(s))

Answer 16

For your specific case of “run a shell command and get its output as text instead of bytes”, on Python 3.7+, you should use subprocess.run() and pass in text=True (as well as capture_output=True to capture the output):

command_result = subprocess.run(["ls", "-l"], capture_output=True, text=True)
command_result.stdout  # is a `str` containing your program's stdout

text used to be called universal_newlines, and was changed (well, aliased) in Python 3.7. If you want to support Python versions before 3.7, pass in universal_newlines=True instead of text=True.


Answer 17

If you want to convert any bytes, not just a string converted to bytes:

with open("bytesfile", "rb") as infile:
    str = base64.b85encode(imageFile.read())

with open("bytesfile", "rb") as infile:
    str2 = json.dumps(list(infile.read()))

This is not very efficient, however. It will turn a 2 MB picture into 9 MB.
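
If you later need the original bytes back, both representations can be reversed; a small follow-up sketch using the names from the snippet above:

original_bytes = base64.b85decode(encoded)    # undo the Base85 encoding
original_bytes2 = bytes(json.loads(as_json))  # rebuild the bytes from the JSON list of integers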


Answer 18

Try this:

bytes.fromhex('c3a9').decode('utf-8') 
