I am currently trying to import the following large tab-delimited file into a dataframe-like structure within Python. Naturally I am using a pandas DataFrame, though I am open to other options.
This file is several GB in size, and it is not a standard TSV file: it is broken, i.e. the rows have different numbers of columns. One row may have 25 columns, another 21.
Here is an example of the data:
Col_01: 14 .... Col_20: 25 Col_21: 23432 Col_22: 639142
Col_01: 8 .... Col_20: 25 Col_22: 25134 Col_23: 243344
Col_01: 17 .... Col_21: 75 Col_23: 79876 Col_25: 634534 Col_22: 5 Col_24: 73453
Col_01: 19 .... Col_20: 25 Col_21: 32425 Col_23: 989423
Col_01: 12 .... Col_20: 25 Col_21: 23424 Col_22: 342421 Col_23: 7 Col_24: 13424 Col_25: 67
Col_01: 3 .... Col_20: 95 Col_21: 32121 Col_25: 111231
As you can see, some of these columns are not in the correct order...
Now, I think the correct way to import this file into a dataframe is to preprocess the data so that you can output a dataframe with NaN values, e.g.
Col_01 .... Col_20 Col_21 Col_22 Col_23 Col_24 Col_25
     8 ....     25    NaN  25134 243344    NaN    NaN
    17 ....    NaN     75      5  79876  73453 634534
    19 ....     25  32425    NaN 989423    NaN    NaN
    12 ....     25  23424 342421      7  13424     67
     3 ....     95  32121    NaN    NaN    NaN 111231
To make this even more complicated, this is a very large file, several GB in size.
Normally, I try to process the data in chunks, e.g.
import pandas as pd

for chunk in pd.read_table(FILE_PATH, header=None, sep='\t', chunksize=10**6):
    pass  # place chunks into a dataframe or HDF
However, I see no way to "preprocess" the data first in chunks and then use chunks to read the data into pandas.read_table(). How would you do this? What sort of preprocessing tools are available? Perhaps sed? awk?
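For concreteness, the brute-force version I can picture is a two-pass pure-Python script: normalize the broken rows into a rectangular TSV on disk, then do the usual chunked read. A sketch follows (assuming the full column set is Col_01 through Col_25, with parse_line as a hypothetical helper); my worry is whether this is sensible at several GB:

import csv
import pandas as pd

COLS = ['Col_%02d' % i for i in range(1, 26)]   # assumption: Col_01 .. Col_25

def parse_line(line):
    # split one raw row into a {column: value} dict
    pairs = (field.partition(':') for field in line.rstrip('\n').split('\t'))
    return {k.strip(): v.strip() for k, _, v in pairs}

# pass 1: rewrite the broken file as a rectangular tsv, blank field = missing
with open(FILE_PATH) as src, open('clean.tsv', 'w', newline='') as dst:
    writer = csv.DictWriter(dst, fieldnames=COLS, delimiter='\t',
                            restval='', extrasaction='ignore')
    writer.writeheader()
    for line in src:
        writer.writerow(parse_line(line))

# pass 2: the normal chunked read; blank fields come back as NaN
for chunk in pd.read_table('clean.tsv', sep='\t', chunksize=10**6):
    pass  # place chunks into a dataframe or HDF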
This is a challenging problem, due to the size of the data and the formatting that must be done before loading into a dataframe. Any help is appreciated.
3 Answers
#1
$ cat > pandas.awk
BEGIN {
    PROCINFO["sorted_in"]="@ind_str_asc"  # traversal order for for(i in a)
}
NR==1 {                          # the header cols are at the beginning of the data file
                                 # (to take them from another file instead, replace
                                 # NR==1 with NR==FNR; untested)
    split($0,a," ")              # mkheader a[1]=first_col ...
    for(i in a) {                # replace with a[first_col]="" ...
        a[a[i]]                  # index the header names themselves
        printf "%6s%s", a[i], OFS    # output the header
        delete a[i]              # remove a[1], a[2], ...
    }
    print ""                     # terminate the header line
    next                         # header done; skip to the first data record
}
{
    gsub(/: /,"=")               # replace key-value separator ": " with "="
    split($0,b,FS)               # split record by FS
    for(i in b) {
        split(b[i],c,"=")        # split key=value to c[1]=key, c[2]=value
        b[c[1]]=c[2]             # b[key]=value
    }
    for(i in a)                  # go thru headers in a[] and printf from b[]
        printf "%6s%s", (i in b?b[i]:"NaN"), OFS
    print ""
}
Data sample (pandas.txt):
Col_01 Col_20 Col_21 Col_22 Col_23 Col_25
Col_01: 14 Col_20: 25 Col_21: 23432 Col_22: 639142
Col_01: 8 Col_20: 25 Col_22: 25134 Col_23: 243344
Col_01: 17 Col_21: 75 Col_23: 79876 Col_25: 634534 Col_22: 5 Col_24: 73453
Col_01: 19 Col_20: 25 Col_21: 32425 Col_23: 989423
Col_01: 12 Col_20: 25 Col_21: 23424 Col_22: 342421 Col_23: 7 Col_24: 13424 Col_25: 67
Col_01: 3 Col_20: 95 Col_21: 32121 Col_25: 111231
$ awk -f pandas.awk pandas.txt
Col_01 Col_20 Col_21 Col_22 Col_23 Col_25
    14     25  23432 639142    NaN    NaN
     8     25    NaN  25134 243344    NaN
    17    NaN     75      5  79876 634534
    19     25  32425    NaN 989423    NaN
    12     25  23424 342421      7     67
     3     95  32121    NaN    NaN 111231
All the needed cols should be in the data file header. It's probably not a big job to collect the headers while processing: just keep the data in arrays and print at the end, maybe in version 3.
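For comparison, here is that collect-then-print idea sketched in Python instead of awk (a sketch only: it buffers every row in memory, and assumes values never contain ": "):

rows, headers = [], set()
with open('pandas.txt') as f:
    next(f)                                    # skip the sample file's header record
    for line in f:
        row = dict(kv.split(': ') for kv in line.rstrip('\n').split('\t'))
        headers |= row.keys()                  # collect the union of all keys seen
        rows.append(row)

cols = sorted(headers)                         # same order as @ind_str_asc
print('\t'.join(cols))
for row in rows:
    print('\t'.join(row.get(c, 'NaN') for c in cols))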
If you read the headers from a different file (cols.txt) than the data file (pandas.txt), execute the script (pandas.awk):
$ awk -f pandas.awk cols.txt pandas.txt
#2
Another version, which takes a separate column file as a parameter or uses the first record as the header. Run it any of these ways:
awk -f pandas2.awk pandas.txt # first record as header
awk -f pandas2.awk cols.txt pandas.txt # first record from cols.txt
awk -v cols="cols.txt" -f pandas2.awk pandas.txt # read cols from cols.txt
Or even:
awk -v cols="pandas.txt" -f pandas2.awk pandas.txt # separates keys from pandas.txt for header
Code:
$ cat > pandas2.awk
BEGIN {
    PROCINFO["sorted_in"]="@ind_str_asc"    # traversal order for for(i in a)
    if(cols) {                              # if -v cols="column_file.txt" or even "pandas.txt"
        while((getline line < cols) > 0) {  # read it in line by line
            gsub(/: [^ ]+/,"",line)         # remove values from "key: value"
            split(line,a)                   # split to temp array
            for(i in a)                     # collect keys to column array
                col[a[i]]
        }
        for(i in col)                       # output columns
            printf "%6s%s", i, OFS
        print ""
    }
}
NR==1 && cols=="" {          # if the header cols are at the beginning of the data file
                             # (if not, use -v cols="column_file.txt")
    split($0,a," +")         # split header record by runs of spaces
    for(i in a) {
        col[a[i]]            # set them in array col
        printf "%6s%s", a[i], OFS    # output the header
    }
    print ""
}
NR==1 {
    next                     # skip the header record
}
{
    gsub(/: /,"=")           # replace key-value separator ": " with "="
    split($0,b,FS)           # split record by separator FS
    for(i in b) {
        split(b[i],c,"=")    # split key=value to c[1]=key, c[2]=value
        b[c[1]]=c[2]         # b[key]=value
    }
    for(i in col)            # go thru headers in col[] and printf from b[]
        printf "%6s%s", (i in b?b[i]:"NaN"), OFS
    print ""
}
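Either script can also feed pandas directly through a pipe, so the normalized text never touches disk. A sketch of the glue on the Python side, reusing the question's chunked-read pattern (the \s+ separator is needed because the awk output is space-padded):

import subprocess
import pandas as pd

# stream the awk-normalized output straight into the chunked reader
proc = subprocess.Popen(['awk', '-f', 'pandas2.awk', 'pandas.txt'],
                        stdout=subprocess.PIPE, text=True)
for chunk in pd.read_table(proc.stdout, sep=r'\s+', chunksize=10**6):
    pass  # place chunks into a dataframe or HDF
proc.stdout.close()
proc.wait()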
#3
You can do this completely, and more cleanly, in pandas.
Suppose you have two independent data frames with only one overlapping column:
>>> df1
   A  B
0  1  2
>>> df2
   B  C
1  3  4
You can use pd.concat to concatenate them together:
>>> pd.concat([df1, df2])
     A  B    C
0    1  2  NaN
1  NaN  3    4
You can see NaN is created for row values that do not exist.
This can easily be applied to your example data without preprocessing at all:
import pandas as pd

df = pd.DataFrame()
with open(fn) as f_in:                 # fn is the path to your data file
    for i, line in enumerate(f_in):
        line_data = pd.DataFrame({k.strip(): v.strip()
                                  for k, _, v in (e.partition(':')
                                                  for e in line.split('\t'))},
                                 index=[i])
        df = pd.concat([df, line_data])
>>> df
  Col_01 Col_20 Col_21 Col_22 Col_23 Col_24 Col_25
0     14     25  23432 639142    NaN    NaN    NaN
1      8     25    NaN  25134 243344    NaN    NaN
2     17    NaN     75      5  79876  73453 634534
3     19     25  32425    NaN 989423    NaN    NaN
4     12     25  23424 342421      7  13424     67
5      3     95  32121    NaN    NaN    NaN 111231
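One caveat about the loop above: calling pd.concat once per line copies the accumulated frame on every iteration, which is quadratic and will crawl on a multi-GB file. A variant of the same idea (a sketch) parses the rows first and builds the frame once; missing keys still come out as NaN:

import pandas as pd

with open(fn) as f_in:
    records = [{k.strip(): v.strip()
                for k, _, v in (e.partition(':') for e in line.split('\t'))}
               for line in f_in]

df = pd.DataFrame(records)   # columns absent from a row are filled with NaN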
Alternatively, if your main issue is establishing the desired column order when columns are added across multiple chunks, just read all the column names first (not tested):
# based on the alphanumeric sort of the example names:
# [ALPHA]_[NUM]
headers = set()
with open(fn) as f:
    for line in f:
        for record in line.split('\t'):
            head, _, datum = record.partition(':')
            headers.add(head)

# sort as you wish:
cols = sorted(headers, key=lambda e: int(e.partition('_')[2]))
Pandas will use the order of the list for the column order if it is given at the initial creation of the DataFrame.
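For example, continuing with the hypothetical records list from the sketch above:

df = pd.DataFrame(records, columns=cols)   # columns appear in the order of cols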