HDF5 taking more space than CSV?

Posted: 2022-01-05 20:15:46

Consider the following example:

Prepare the data:

import string
import random
import numpy as np
import pandas as pd

matrix = np.random.random((100, 3000))
my_cols = [random.choice(string.ascii_uppercase) for x in range(matrix.shape[1])]
mydf = pd.DataFrame(matrix, columns=my_cols)
mydf['something'] = 'hello_world'

Set the highest compression possible for HDF5:

store = pd.HDFStore('myfile.h5', complevel=9, complib='bzip2')
store['mydf'] = mydf
store.close()

Also save to CSV:

mydf.to_csv('myfile.csv', sep=':')
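
To compare the two files on disk, a quick check (mine, not part of the original question) is:

import os

print(os.path.getsize('myfile.csv'))  # CSV size in bytes
print(os.path.getsize('myfile.h5'))   # HDF5 size in bytes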

The result is:

  • myfile.csv is 5.6 MB big
  • myfile.h5 is 11 MB big

The difference grows as the datasets get larger.

I have tried other compression methods and levels. Is this a bug? (I am using Pandas 0.11 and the latest stable versions of HDF5 and Python.)

1 Answer

#1 (34 votes)

Copy of my answer from the issue: https://github.com/pydata/pandas/issues/3651

Your sample is really too small. HDF5 has a fair amount of overhead with really small sizes (even 300k entries is on the smaller side). The following is with no compression on either side. Floats are really more efficiently represented in binary than as a text representation.
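
As a rough illustration of that last point (my own aside, not part of the original answer): a float64 always takes 8 bytes in binary, while its decimal text form in a CSV typically needs 17 or more characters, plus a delimiter.

import numpy as np

x = np.float64(0.9234567890123456)
print(x.nbytes)             # 8 bytes as a binary float64
print(len(repr(float(x))))  # roughly 18 characters as decimal text in a CSV cell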

In addition, HDF5 is row-based. You get MUCH better efficiency by having tables that are not too wide but fairly long. (Hence your example is not very efficient in HDF5 at all; in this case, store it transposed, as sketched below.)
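
A minimal sketch of that suggestion (my wording and file name, not from the original answer), reusing mydf from the question: split off the text column and store the numeric block transposed, so the stored frame is long and narrow instead of wide.

numeric = mydf.drop('something', axis=1)   # the 100 x 3000 block of floats

store = pd.HDFStore('myfile_transposed.h5', complevel=9, complib='bzip2')
store['mydf_T'] = numeric.T                # stored as 3000 rows x 100 columns
store['something'] = mydf[['something']]   # keep the text column separately
store.close()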

I routinely have tables that are 10M+ rows, and query times can be in the ms range. Even the example below is small. Having 10+ GB files is quite common (not to mention the astronomy folks, for whom 10 GB+ is a few seconds of data!).

-rw-rw-r--  1 jreback users 203200986 May 19 20:58 test.csv
-rw-rw-r--  1 jreback users  88007312 May 19 20:59 test.h5

In [1]: df = DataFrame(randn(1000000,10))

In [9]: df
Out[9]: 
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1000000 entries, 0 to 999999
Data columns (total 10 columns):
0    1000000  non-null values
1    1000000  non-null values
2    1000000  non-null values
3    1000000  non-null values
4    1000000  non-null values
5    1000000  non-null values
6    1000000  non-null values
7    1000000  non-null values
8    1000000  non-null values
9    1000000  non-null values
dtypes: float64(10)

In [5]: %timeit df.to_csv('test.csv',mode='w')
1 loops, best of 3: 12.7 s per loop

In [6]: %timeit df.to_hdf('test.h5','df',mode='w')
1 loops, best of 3: 825 ms per loop

In [7]: %timeit pd.read_csv('test.csv',index_col=0)
1 loops, best of 3: 2.35 s per loop

In [8]: %timeit pd.read_hdf('test.h5','df')
10 loops, best of 3: 38 ms per loop

I really wouldn't worry about the size (I suspect you are not, but are merely interested, which is fine). The point of HDF5 is that disk is cheap and CPU is cheap, but you can't have everything in memory at once, so we optimize by using chunking.
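
To make the chunking point concrete, here is a hedged sketch (mine, not part of the original answer) that rewrites the test.h5 frame above in the queryable 'table' format, then reads back only a slice and iterates over the file in fixed-size chunks:

df.to_hdf('test.h5', 'df', mode='w', format='table')

subset = pd.read_hdf('test.h5', 'df', where='index < 1000')   # read just a slice from disk

for chunk in pd.read_hdf('test.h5', 'df', chunksize=100000):
    pass  # process 100,000 rows at a time without loading the whole frame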
