How to convert a file to utf-8 in Python?

Time: 2021-06-28 20:16:53

I need to convert a bunch of files to utf-8 in Python, and I have trouble with the "converting the file" part.


I'd like to do the equivalent of:


iconv -t utf-8 $file > converted/$file # this is shell code
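For many files, the shell version might be a loop like the following sketch. Note that `-f iso-8859-1` is an assumed source encoding, and the `converted/` directory must exist:

```shell
mkdir -p converted
for f in *.txt; do
    [ -e "$f" ] || continue   # skip if no .txt files match
    iconv -f iso-8859-1 -t utf-8 "$f" > "converted/$f"
done
```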

Thanks!


5 solutions

#1 (46 votes)

You can use the codecs module, like this:


import codecs
BLOCKSIZE = 1048576 # or some other, desired size in bytes
with codecs.open(sourceFileName, "r", "your-source-encoding") as sourceFile:
    with codecs.open(targetFileName, "w", "utf-8") as targetFile:
        while True:
            contents = sourceFile.read(BLOCKSIZE)
            if not contents:
                break
            targetFile.write(contents)

EDIT: added BLOCKSIZE parameter to control file chunk size.


#2 (26 votes)

This worked for me in a small test:


# Python 2 only: uses the built-in unicode type and byte-oriented open()
sourceEncoding = "iso-8859-1"
targetEncoding = "utf-8"
source = open("source")
target = open("target", "w")

target.write(unicode(source.read(), sourceEncoding).encode(targetEncoding))

#3 (12 votes)

Thanks for the replies, it works!


And since the source files are in mixed formats, I added a list of source formats to be tried in sequence (sourceFormats), and on UnicodeDecodeError I try the next format:


from __future__ import with_statement  # only needed on Python 2.5

import os
import sys
import codecs
from chardet.universaldetector import UniversalDetector

targetFormat = 'utf-8'
outputDir = 'converted'
detector = UniversalDetector()

def get_encoding_type(current_file):
    detector.reset()
    with open(current_file, 'rb') as f:  # chardet wants raw bytes
        for line in f:
            detector.feed(line)
            if detector.done:
                break
    detector.close()
    return detector.result['encoding']

def convertFileBestGuess(fileName):
    sourceFormats = ['ascii', 'iso-8859-1']
    for format in sourceFormats:
        try:
            with codecs.open(fileName, 'rU', format) as sourceFile:
                writeConversion(sourceFile, fileName)
                print('Done.')
                return
        except UnicodeDecodeError:
            pass

def convertFileWithDetection(fileName):
    print("Converting '" + fileName + "'...")
    format = get_encoding_type(fileName)
    try:
        with codecs.open(fileName, 'rU', format) as sourceFile:
            writeConversion(sourceFile, fileName)
            print('Done.')
            return
    except UnicodeDecodeError:
        pass

    print("Error: failed to convert '" + fileName + "'.")


def writeConversion(sourceFile, fileName):
    with codecs.open(os.path.join(outputDir, fileName), 'w', targetFormat) as targetFile:
        for line in sourceFile:
            targetFile.write(line)

# Off topic: get the file list and call convertFileWithDetection on each file
# ...

(EDIT by Rudro Badhon: this incorporates both the original approach of trying multiple formats until no exception is raised, and an alternative approach that uses chardet.universaldetector.)

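The try-encodings-in-sequence idea can also be written as a small standalone helper on raw bytes, without touching the filesystem. The name and the default encoding list here are illustrative:

```python
def decode_best_guess(raw, encodings=("ascii", "utf-8", "iso-8859-1")):
    """Try candidate encodings in order; return (text, encoding)
    for the first one that decodes without error."""
    for enc in encodings:
        try:
            return raw.decode(enc), enc
        except UnicodeDecodeError:
            continue
    raise ValueError("none of the candidate encodings matched")
```

Because iso-8859-1 maps every byte to some character, putting it last makes it an always-succeeding fallback.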

#4 (1 vote)

This is a Python 3 function for converting any text file into one with UTF-8 encoding, without using unnecessary packages.


def correctSubtitleEncoding(filename, newFilename, encoding_from, encoding_to='UTF-8'):
    with open(filename, 'r', encoding=encoding_from) as fr:
        with open(newFilename, 'w', encoding=encoding_to) as fw:
            for line in fr:
                # Strip the trailing newline and force CRLF line endings
                # (assumes every line, including the last, ends with a newline).
                fw.write(line[:-1] + '\r\n')

You can use it easily in a loop to convert a list of files.


#5 (0 votes)

To guess the source encoding you can use the *nix `file` command.


Example:


$ file --mime jumper.xml

jumper.xml: application/xml; charset=utf-8
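If you want to consume that output from Python, a small illustrative parser for the `charset=` field might look like this (the function name is made up for this sketch):

```python
def parse_file_mime_charset(output):
    """Extract the charset from `file --mime` output, e.g.
    'jumper.xml: application/xml; charset=utf-8' -> 'utf-8'."""
    return output.rsplit("charset=", 1)[-1].strip()
```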
