What is the difference between encode and decode?

Question: What is the difference between encode/decode?


I’ve never been sure that I understand the difference between str/unicode decode and encode.

I know that str().decode() is for when you have a string of bytes that you know has a certain character encoding; given that encoding name, it will return a unicode string.

I know that unicode().encode() converts unicode chars into a string of bytes according to a given encoding name.

But I don’t understand what str().encode() and unicode().decode() are for. Can anyone explain, and possibly also correct anything else I’ve gotten wrong above?

EDIT:

Several answers give info on what .encode does on a string, but no one seems to know what .decode does for unicode.


Answer 0


The decode method of unicode strings really doesn’t have any applications at all (unless you have some non-text data in a unicode string for some reason; see below). It is mainly there for historical reasons, I think. In Python 3 it is completely gone.

unicode().decode() will first perform an implicit encoding of s using the default (ASCII) codec. Verify this like so:

>>> s = u'ö'
>>> s.decode()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 0:
ordinal not in range(128)

>>> s.encode('ascii')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 0:
ordinal not in range(128)

The error messages are exactly the same.

For str().encode() it’s the other way around — it attempts an implicit decoding of s with the default encoding:

>>> s = 'ö'
>>> s.decode('utf-8')
u'\xf6'
>>> s.encode()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0:
ordinal not in range(128)

Used like this, str().encode() is also superfluous.

But there is another application of the latter method that is useful: there are encodings that have nothing to do with character sets, and thus can be applied to 8-bit strings in a meaningful way:

>>> s.encode('zip')
'x\x9c;\xbc\r\x00\x02>\x01z'

You are right, though: the ambiguous usage of “encoding” for both these applications is awkward. Again, with separate bytes and str types in Python 3, this is no longer an issue.
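
A minimal sketch of that last point (Python 3 syntax): the ambiguous methods are simply gone, so each direction has exactly one spelling.

```python
# Python 3: the str/bytes split removes the confusing methods entirely.
text = 'ö'                     # str (unicode text)
data = text.encode('utf-8')    # str -> bytes: the only direction encode goes
back = data.decode('utf-8')    # bytes -> str: the only direction decode goes

print(data)                    # b'\xc3\xb6'
print(back == text)            # True

# The Python 2 footguns no longer exist as methods:
print(hasattr(text, 'decode'))   # False: str has no .decode()
print(hasattr(data, 'encode'))   # False: bytes has no .encode()
```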


Answer 1


To represent a unicode string as a string of bytes is known as encoding. Use u'...'.encode(encoding).

Example:

    >>> u'æøå'.encode('utf8')
    '\xc3\xa6\xc3\xb8\xc3\xa5'
    >>> u'æøå'.encode('latin1')
    '\xe6\xf8\xe5'
    >>> u'æøå'.encode('ascii')
    UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2:
    ordinal not in range(128)

You typically encode a unicode string whenever you need to use it for IO, for instance transfer it over the network, or save it to a disk file.
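
A sketch of that IO case, written in Python 3 syntax (where str is the unicode type): encoding happens at the boundary where text leaves the program.

```python
import io

text = 'æøå'                       # unicode text inside the program

# A binary stream stands in for a socket or a file opened in
# binary mode; it accepts only bytes, so we encode first.
stream = io.BytesIO()
stream.write(text.encode('utf-8'))

print(stream.getvalue())           # b'\xc3\xa6\xc3\xb8\xc3\xa5'
```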

To convert a string of bytes to a unicode string is known as decoding. Use unicode('...', encoding) or '...'.decode(encoding).

Example:

   >>> u'æøå'
   u'\xe6\xf8\xe5' # the interpreter prints the unicode object like so
   >>> unicode('\xc3\xa6\xc3\xb8\xc3\xa5', 'utf-8')
   u'\xe6\xf8\xe5'
   >>> '\xc3\xa6\xc3\xb8\xc3\xa5'.decode('utf-8')
   u'\xe6\xf8\xe5'

You typically decode a string of bytes whenever you receive string data from the network or from a disk file.
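
The receiving side is the mirror image (again in Python 3 syntax): bytes arrive, and you decode them once, at the boundary, to get text.

```python
# Pretend these bytes just arrived from a socket or were read
# from a file opened in binary mode.
raw = b'\xc3\xa6\xc3\xb8\xc3\xa5'

text = raw.decode('utf-8')   # bytes -> unicode text
print(text)                  # æøå
print(len(text))             # 3 characters, even though there were 6 bytes
```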

I believe there are some changes in unicode handling in Python 3, so the above is probably not correct for Python 3.



Answer 2


anUnicode.encode('encoding') results in a string object and can be called on a unicode object.

aString.decode('encoding') results in a unicode object and can be called on a string encoded in the given encoding.


Some more explanations:

You can create some unicode object, which doesn’t have any encoding set. The way it is stored by Python in memory is none of your concern. You can search it, split it and call any string manipulating function you like.

But there comes a time when you’d like to print your unicode object to the console or into some text file. So you have to encode it (for example, in UTF-8): you call encode('utf-8') and you get a plain string of bytes (such as '\xc3\xb6'), which can be written out.

Then, again, you’d like to do the opposite: read a string encoded in UTF-8 and treat it as Unicode, so that \u360 would be one character, not 5. You then decode the string (with the selected encoding) and get a brand new object of the unicode type.

Just as a side note: you can pick some exotic encoding, like 'zip', 'base64' or 'rot13', and some of them will convert from string to string, but I believe the most common case is the one that involves UTF-8/UTF-16 and strings.
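
A short round-trip sketch of the above, in Python 3 syntax ('ö' and the codec name are just illustrative choices):

```python
text = 'ö'                      # a unicode object; how it is stored in memory is not our concern

encoded = text.encode('utf-8')  # for printing/saving: unicode -> bytes
print(encoded)                  # b'\xc3\xb6'

decoded = encoded.decode('utf-8')  # reading back: bytes -> unicode
print(decoded == text)             # True: one character again, not two bytes
```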


Answer 3


mybytestring.encode(somecodec) is meaningful for these values of somecodec:

  • base64
  • bz2
  • zlib
  • hex
  • quopri
  • rot13
  • string_escape
  • uu

I am not sure what decoding an already decoded unicode text is good for. Trying that with any encoding seems to always try to encode with the system’s default encoding first.
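
In Python 3 these bytes-to-bytes transforms are no longer reachable through str.encode or bytes.decode; a sketch using the codecs module instead (exact codec availability can vary slightly between versions):

```python
import codecs

# bytes -> bytes transforms
print(codecs.encode(b'hello', 'base64'))   # b'aGVsbG8=\n'
print(codecs.encode(b'\n', 'hex'))         # b'0a'

# rot13 is a text -> text transform in Python 3
print(codecs.encode('abc', 'rot13'))       # 'nop'

# round trip through zlib compression
blob = codecs.encode(b'x' * 100, 'zlib')
print(codecs.decode(blob, 'zlib') == b'x' * 100)   # True
```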


Answer 4


There are a few encodings that can be used to de-/encode from str to str or from unicode to unicode. For example base64, hex or even rot13. They are listed in the codecs module.

Edit:

The decode method on a unicode string can undo the corresponding encode operation:

In [1]: u'0a'.decode('hex')
Out[1]: '\n'

The returned type is str instead of unicode, which is unfortunate in my opinion. But when you are not doing a proper en-/decode between str and unicode, this looks like a mess anyway.
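
For reference, a sketch of the same trick in Python 3, where it has to go through the codecs module (and both sides are bytes), or through the dedicated bytes API:

```python
import codecs

# the codecs route: bytes -> bytes
print(codecs.decode(b'0a', 'hex'))   # b'\n'

# the dedicated bytes API does the same job
print(bytes.fromhex('0a'))           # b'\n'
print(b'\n'.hex())                   # '0a'
```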


Answer 5


The simple answer is that they are the exact opposite of each other.

The computer uses the very basic unit of the byte to store and process information; bytes are meaningless to human eyes.

For example, '\xe4\xb8\xad\xe6\x96\x87' is the representation of two Chinese characters. The computer can only treat (that is, print or store) it as Chinese characters when given a dictionary to look that Chinese word up in; in this case it is a “utf-8” dictionary, and it would fail to correctly show the intended Chinese word if you looked into a different or wrong dictionary (used a different decoding method).

In the above case, the process of the computer looking up the Chinese word is decode().

And the process of the computer writing the Chinese into memory is encode().

So encoded information is the raw bytes, and decoding requires the raw bytes plus the name of the dictionary to reference (but not the dictionary itself).
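
The example above can be checked directly (Python 3 syntax):

```python
raw = b'\xe4\xb8\xad\xe6\x96\x87'   # raw bytes, meaningless on their own

# Look them up in the right "dictionary": UTF-8
print(raw.decode('utf-8'))          # 中文 (two Chinese characters)

# The wrong dictionary produces the wrong characters (mojibake),
# not an error, because latin-1 maps every byte to *some* character.
print(raw.decode('latin-1'))        # six garbled characters, not 中文
```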