I'm still learning Python and I have a question:
In Python 2.6.x I usually declare the encoding in the file header like this (as in PEP 0263):
# -*- coding: utf-8 -*-
After that, my strings are written as usual:
a = "A normal string without declared Unicode"
But every time I look at Python project code, the encoding is not declared in the header. Instead, it is declared on every string, like this:
a = u"A string with declared Unicode"
What's the difference? What's the purpose of this? I know Python 2.6.x uses ASCII encoding by default, but that can be overridden by the header declaration, so what's the point of the per-string declaration?
Addendum: It seems I mixed up file encoding with string encoding. Thanks for explaining it :)
6 Answers
#1
144
Those are two different things, as others have mentioned.
When you specify # -*- coding: utf-8 -*-, you're telling Python the source file you've saved is utf-8. The default for Python 2 is ASCII (for Python 3 it's utf-8). This just affects how the interpreter reads the characters in the file.
In general, it's probably not the best idea to embed high unicode characters into your file no matter what the encoding is; you can use string unicode escapes, which work in either encoding.
When you declare a string with a u in front, like u'This is a string', it tells the Python compiler that the string is Unicode, not bytes. This is handled mostly transparently by the interpreter; the most obvious difference is that you can now embed unicode characters in the string (that is, u'\u2665' is now legal). You can use from __future__ import unicode_literals to make it the default.
This only applies to Python 2; in Python 3 the default is Unicode, and you need to specify a b in front (like b'These are bytes') to declare a sequence of bytes.
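A minimal sketch of the distinction (the heart character is the u'\u2665' example above; the raw bytes shown are its UTF-8 encoding):

```python
# -*- coding: utf-8 -*-
# The header above describes the *file's* encoding; the u prefix below
# describes the *string's* type (unicode text, not bytes).

heart = u'\u2665'       # a unicode string holding U+2665 (BLACK HEART SUIT)
raw = b'\xe2\x99\xa5'   # the same character as UTF-8-encoded bytes

# Decoding the bytes with the right codec recovers the unicode string,
# and encoding the unicode string produces the bytes back:
assert raw.decode('utf-8') == heart
assert heart.encode('utf-8') == raw
```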
#2
19
As others have said, # coding: specifies the encoding the source file is saved in. Here are some examples to illustrate this:
A file saved on disk as cp437 (my console encoding), but no encoding declared
b = 'über'
u = u'über'
print b,repr(b)
print u,repr(u)
Output:
File "C:\ex.py", line 1
SyntaxError: Non-ASCII character '\x81' in file C:\ex.py on line 1, but no
encoding declared; see http://www.python.org/peps/pep-0263.html for details
Output of the file with # coding: cp437 added:
über '\x81ber'
über u'\xfcber'
At first, Python didn't know the encoding and complained about the non-ASCII character. Once it knew the encoding, the byte string got the bytes that were actually on disk. For the Unicode string, Python read \x81, knew that in cp437 that was a ü, and decoded it into the Unicode codepoint for ü, which is U+00FC. When the byte string was printed, Python sent the hex value 81 to the console directly. When the Unicode string was printed, Python correctly detected my console encoding as cp437 and translated Unicode ü to the cp437 value for ü.
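The decode step described above can be reproduced directly (a sketch using the byte value from the cp437 example):

```python
# b'\x81ber' is what was actually on disk: \x81 is the cp437 encoding of ü.
data = b'\x81ber'
text = data.decode('cp437')
assert text == u'\xfcber'   # ü decodes to the Unicode codepoint U+00FC
```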
Here's what happens with a file declared and saved in UTF-8:
├╝ber '\xc3\xbcber'
über u'\xfcber'
In UTF-8, ü is encoded as the hex bytes C3 BC, so the byte string contains those bytes, but the Unicode string is identical to the first example. Python read the two bytes and decoded them correctly. Python printed the byte string incorrectly, because it sent the two UTF-8 bytes representing ü directly to my cp437 console.
Here the file is declared cp437, but saved in UTF-8:
├╝ber '\xc3\xbcber'
├╝ber u'\u251c\u255dber'
The byte string still got the bytes on disk (the UTF-8 hex bytes C3 BC), but interpreted them as two cp437 characters instead of a single UTF-8-encoded character. Those two characters were translated to Unicode code points, and everything prints incorrectly.
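That mis-decoding can be reproduced by feeding UTF-8 bytes to the cp437 codec (a sketch):

```python
data = u'\xfc'.encode('utf-8')     # ü as UTF-8: the two bytes C3 BC
wrong = data.decode('cp437')       # each byte decoded as its own cp437 char
assert wrong == u'\u251c\u255d'    # '├╝': box-drawing characters, not ü
right = data.decode('utf-8')       # the correct codec recovers ü
assert right == u'\xfc'
```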
#3
10
That doesn't set the format of the string; it sets the format of the file. Even with that header, "hello" is a byte string, not a Unicode string. To make it Unicode, you're going to have to use u"hello" everywhere. The header is just a hint of what format to use when reading the .py file.
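A quick way to see the point (a sketch; the Python 2 behaviour is noted in the comments, since Python 3 merged the two literal forms):

```python
# -*- coding: utf-8 -*-
# The header above only tells the parser how to decode this source file;
# it does not change the type of the literals below.
a = "hello"    # Python 2: byte string (str); Python 3: unicode str
b = u"hello"   # Python 2: unicode; Python 3: the same str type
print(type(a), type(b))
```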
#4
7
The header definition is to define the encoding of the code itself, not the resulting strings at runtime.
Putting a non-ASCII character like ۲ in a Python script without the utf-8 header definition will raise an error: http://www.freeimagehosting.net/uploads/1ed15124c4.jpg
#5
0
If you are using Python 2, add this: from __future__ import unicode_literals
#6
0
I made the following module, called unicoder, to be able to do the transformation on variables:
import sys
import os

def ustr(string):
    # Wrap the value in a u"..." literal and write it to a throwaway module.
    string = 'u"%s"' % string
    with open('_unicoder.py', 'w') as script:
        script.write('# -*- coding: utf-8 -*-\n')
        script.write('_ustr = %s' % string)
    # Importing the module makes Python parse the literal as unicode.
    import _unicoder
    value = _unicoder._ustr
    # Clean up the temporary module and its files.
    del _unicoder
    del sys.modules['_unicoder']
    os.remove('_unicoder.py')   # portable replacement for os.system('del ...')
    if os.path.exists('_unicoder.pyc'):
        os.remove('_unicoder.pyc')
    return value
Then in your program you could do the following:
# -*- coding: utf-8 -*-
from unicoder import ustr
txt = 'Hello, Unicode World'
txt = ustr(txt)
print type(txt) # <type 'unicode'>
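For comparison (this is not part of the answer above), the same conversion is normally done with the built-in decode method, with no temporary module:

```python
raw = b'Hello, Unicode World'   # a byte string
txt = raw.decode('utf-8')       # decode bytes -> unicode text
# Python 2 reports <type 'unicode'> here; Python 3 reports <class 'str'>.
print(type(txt))
```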