First, a note: by "large file" I don't mean that the zip archive itself is big. The archive is only a few dozen MB, but it expands to several hundred MB after decompression. Along the way I ran into problems such as decompression failing outright, and reads that succeeded on small files but failed on large ones.
import zipfile

def unzip_to_txt_plus(zipfilename):
    zfile = zipfile.ZipFile(zipfilename, 'r')
    for filename in zfile.namelist():
        data = zfile.read(filename)
        # data = data.decode('gbk').encode('utf-8')  # strict decoding -- fails on invalid bytes
        data = data.decode('gbk', 'ignore').encode('utf-8')
        file = open(filename, 'w+b')
        file.write(data)
        file.close()
    zfile.close()

if __name__ == '__main__':
    zipfilename = "E:\\share\\python_excel\\zip_to_database\\20171025.zip"
    unzip_to_txt_plus(zipfilename)
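One thing worth pointing out: zfile.read() pulls each member fully into memory, which is fine for the few-hundred-MB files described here. For reference, zipfile can also stream a member and transcode it chunk by chunk via ZipFile.open() and io.TextIOWrapper. This is only a minimal sketch under the same GBK-to-UTF-8 assumption, not part of the original article, and the 64 KB chunk size is arbitrary:

import io
import zipfile

def unzip_streaming(zipfilename):
    with zipfile.ZipFile(zipfilename, 'r') as zfile:
        for filename in zfile.namelist():
            # Decode the compressed stream as GBK on the fly and write it back out
            # as UTF-8, without holding the whole decompressed member in memory.
            with zfile.open(filename) as src, open(filename, 'w', encoding='utf-8') as dst:
                reader = io.TextIOWrapper(src, encoding='gbk', errors='ignore')
                while True:
                    chunk = reader.read(64 * 1024)   # read up to 64 K characters at a time
                    if not chunk:
                        break
                    dst.write(chunk)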
Note the 'ignore' argument to decode() in unzip_to_txt_plus above: decoding is strict by default, so without this argument an error is raised.
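To see what 'ignore' changes, here is a tiny demo with a made-up byte string (the bytes are hypothetical, not taken from the article's data): 0xff is not valid GBK, so strict decoding raises UnicodeDecodeError, while 'ignore' simply drops the offending byte.

raw = b'\xd6\xd0\xce\xc4\xff'        # "中文" in GBK followed by a stray, invalid 0xff byte

try:
    raw.decode('gbk')                # errors='strict' by default
except UnicodeDecodeError as e:
    print('strict decode failed:', e)

print(raw.decode('gbk', 'ignore'))   # prints 中文 -- the bad byte is silently skipped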
Because that function has already re-encoded the files as UTF-8, reading them afterwards succeeds. Below is the code for reading the large file (the database-related parts are omitted).
# -*- coding: utf-8 -*-
import csv
import linecache
# import xlrd     # Excel handling omitted here
# import MySQLdb  # database handling omitted here

def txt_todatabase(filename, linenum):
    # with open(filename, "r", encoding="gbk") as csvfile:
    #     Read = csv.reader(csvfile)
    #     count = 0
    #     for i in Read:
    #         # print(i)
    #         count += 1
    #         # print('hello')
    #     print(count)
    count = linecache.getline(filename, linenum)
    print(count)
    # with open("new20171028.csv", "w", newline="") as datacsv:
    #     # dialect is the csv flavour to write; the default is "excel", and delimiter="\t" would set the field separator
    #     csvwriter = csv.writer(datacsv, dialect="excel")
    #     # write one row to the csv file, putting each list item into its own cell (use a loop to write multiple rows)
    #     csvwriter.writerow(["A", "B", "C", "D"])

def bigtxt_read(filename):
    with open(filename, 'r', encoding='utf-8') as data:
        count = 0
        while True:
            count += 1
            line = data.readline()
            if count == 1000000:
                print(line)
            if not line:
                break
        print(count)

if __name__ == '__main__':
    filename = '20171025.txt'
    txt_todatabase(filename, 1000000)
    bigtxt_read(filename)
After comparing them, the two approaches turn out to be about equally fast; two million rows of data pose no problem.
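The article does not show the comparison itself; a rough way to reproduce it is to time both approaches on the same file, along the lines of the sketch below (the file name and line number follow the example above):

import time
import linecache

def time_linecache(filename, linenum):
    start = time.time()
    line = linecache.getline(filename, linenum)
    print('linecache.getline:', time.time() - start, 'seconds')
    return line

def time_readline(filename, linenum):
    start = time.time()
    line = ''
    with open(filename, 'r', encoding='utf-8') as data:
        for count, line in enumerate(data, 1):
            if count == linenum:
                break
    print('line-by-line read:', time.time() - start, 'seconds')
    return line

if __name__ == '__main__':
    time_linecache('20171025.txt', 1000000)
    time_readline('20171025.txt', 1000000)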
That's all for this article. I hope it helps with your studies, and please continue to support 服务器之家.
Original article: http://blog.csdn.net/u012762054/article/details/78367372