Writing a UTF-8 file in Python

Date: 2021-06-28 20:17:11

I'm really confused with the codecs.open function. When I do:

file = codecs.open("temp", "w", "utf-8")
file.write(codecs.BOM_UTF8)
file.close()

It gives me the error

UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 0: ordinal not in range(128)

If I do:

file = open("temp", "w")
file.write(codecs.BOM_UTF8)
file.close()

It works fine.

Question is why does the first method fail? And how do I insert the bom?

If the second method is the correct way of doing it, what the point of using codecs.open(filename, "w", "utf-8")?

4 Answers

#1 (217 votes)

I believe the problem is that codecs.BOM_UTF8 is a byte string, not a Unicode string. I suspect the file handler is trying to guess what you really mean based on "I'm meant to be writing Unicode as UTF-8-encoded text, but you've given me a byte string!" In Python 2, the stream writer first decodes a byte string with the default ASCII codec before re-encoding it, and 0xEF is not valid ASCII, hence the UnicodeDecodeError.

Try writing the Unicode string for the byte order mark (i.e. Unicode U+FEFF) directly, so that the file just encodes that as UTF-8:

import codecs

file = codecs.open("lol", "w", "utf-8")
file.write(u'\ufeff')
file.close()

(That seems to give the right answer - a file with bytes EF BB BF.)

EDIT: S. Lott's suggestion of using "utf-8-sig" as the encoding is a better one than explicitly writing the BOM yourself, but I'll leave this answer here as it explains what was going wrong before.
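As a cross-check, here is a minimal Python 3 sketch of the same idea (in Python 3 the built-in open() takes an encoding argument, and writing bytes to a text stream is a hard TypeError rather than a guess). Writing U+FEFF and then reading the raw bytes back shows the encoder producing EF BB BF:

```python
import codecs
import os
import tempfile

# Write the BOM as the Unicode character U+FEFF and let the UTF-8
# encoder produce the bytes.
path = os.path.join(tempfile.mkdtemp(), "temp")
with open(path, "w", encoding="utf-8") as f:
    f.write("\ufeff")   # text, not bytes
    f.write("hello")

# Read the raw bytes back to inspect what actually hit the disk.
with open(path, "rb") as f:
    raw = f.read()

print(raw[:3] == codecs.BOM_UTF8)  # True: the file starts with EF BB BF
```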

#2 (150 votes)

Read the following: http://docs.python.org/library/codecs.html#module-encodings.utf_8_sig

Do this

with codecs.open("test_output", "w", "utf-8-sig") as temp:
    temp.write("hi mom\n")
    temp.write(u"This has ♭")

The resulting file is UTF-8 with the expected BOM.
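For readers on Python 3, a rough equivalent: the built-in open() accepts the same "utf-8-sig" codec name, so codecs.open() is no longer needed, and reading the file back with "utf-8-sig" strips the BOM transparently:

```python
import os
import tempfile

# Python 3: built-in open() understands "utf-8-sig" directly.
path = os.path.join(tempfile.mkdtemp(), "test_output")
with open(path, "w", encoding="utf-8-sig") as temp:
    temp.write("hi mom\n")
    temp.write("This has \u266d")   # the flat sign from the original answer

# Reading with the same codec removes the BOM before the text reaches us.
with open(path, encoding="utf-8-sig") as temp:
    text = temp.read()

print(text.startswith("hi mom"))   # True: no stray \ufeff at the front
```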

#3 (11 votes)

@S-Lott gives the right procedure, but expanding on the Unicode issues, the Python interpreter can provide more insights.

Jon Skeet is right (unusual) about the codecs module - it contains byte strings:

>>> import codecs
>>> codecs.BOM
'\xff\xfe'
>>> codecs.BOM_UTF8
'\xef\xbb\xbf'
>>> 

Picking another nit, the BOM has a standard Unicode name, and it can be entered as:

>>> bom = u"\N{ZERO WIDTH NO-BREAK SPACE}"
>>> bom
u'\ufeff'

It is also accessible via unicodedata:

>>> import unicodedata
>>> unicodedata.lookup('ZERO WIDTH NO-BREAK SPACE')
u'\ufeff'
>>> 
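In Python 3 the same codecs constants are bytes objects, which makes the byte-string/str distinction from answer #1 explicit; a short sketch:

```python
import codecs
import unicodedata

# Python 3 reprs the BOM constants as bytes, not str.
print(codecs.BOM_UTF8)   # b'\xef\xbb\xbf'

# The named escape and the unicodedata lookup still agree on U+FEFF.
bom = "\N{ZERO WIDTH NO-BREAK SPACE}"
print(bom == unicodedata.lookup("ZERO WIDTH NO-BREAK SPACE") == "\ufeff")  # True
```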

#4 (5 votes)

I use the *nix file command to convert a file with an unknown charset to a UTF-8 file:

# -*- encoding: utf-8 -*-

# Convert a file with an unknown encoding to UTF-8.
# Python 2 only: the commands module was removed in Python 3.

import codecs
import commands

file_location = "jumper.sub"
# Ask the *nix `file` utility for the MIME encoding, e.g. "iso-8859-1".
file_encoding = commands.getoutput('file -b --mime-encoding %s' % file_location)

file_stream = codecs.open(file_location, 'r', file_encoding)
file_output = codecs.open(file_location + "b", 'w', 'utf-8')

# Decode each line with the detected encoding and re-encode it as UTF-8.
for l in file_stream:
    file_output.write(l)

file_stream.close()
file_output.close()
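A rough Python 3 equivalent of the approach above: the commands module is gone, so subprocess takes its place. The helper names below are my own, not from the original answer, and the example converts a known Latin-1 file so it runs even where the `file` utility is not installed:

```python
import os
import subprocess
import tempfile

def detect_encoding(path):
    """Ask the *nix `file` utility for the MIME encoding of `path`."""
    return subprocess.check_output(
        ["file", "-b", "--mime-encoding", path], text=True
    ).strip()

def convert_to_utf8(src_path, dst_path, src_encoding):
    """Decode src_path with src_encoding and rewrite it as UTF-8."""
    with open(src_path, "r", encoding=src_encoding) as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            dst.write(line)

# Example: convert a Latin-1 file to UTF-8. With the `file` utility
# available, the encoding could instead come from detect_encoding(src).
d = tempfile.mkdtemp()
src = os.path.join(d, "jumper.sub")
with open(src, "wb") as f:
    f.write("caf\u00e9\n".encode("latin-1"))

convert_to_utf8(src, src + "b", "latin-1")
with open(src + "b", "rb") as f:
    print(f.read() == "caf\u00e9\n".encode("utf-8"))  # True
```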
