Python Pandas: writing large DataFrames in chunks with to_sql

Time: 2021-08-11 21:41:58

I'm using Pandas' to_sql function to write to MySQL, which is timing out due to large frame size (1M rows, 20 columns).

http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html

Is there a more official way to chunk through the data and write rows in blocks? I've written my own code, which seems to work. I'd prefer an official solution though. Thanks!

import pandas as pd
import sqlalchemy

def write_to_db(engine, frame, table_name, chunk_size):
    start_index = 0
    end_index = chunk_size if chunk_size < len(frame) else len(frame)

    # Replace NaN with None so MySQL receives proper NULLs
    frame = frame.where(pd.notnull(frame), None)
    if_exists_param = 'replace'

    while start_index != end_index:
        print("Writing rows %s through %s" % (start_index, end_index))
        frame.iloc[start_index:end_index, :].to_sql(con=engine, name=table_name, if_exists=if_exists_param)
        if_exists_param = 'append'  # only the first chunk replaces the table

        start_index = min(start_index + chunk_size, len(frame))
        end_index = min(end_index + chunk_size, len(frame))

engine = sqlalchemy.create_engine('mysql://...')  # database details omitted
write_to_db(engine, frame, 'retail_pendingcustomers', 20000)

2 Answers

#1


14 votes

Update: this functionality has been merged into pandas master and will be released in 0.15 (probably end of September), thanks to @artemyk! See https://github.com/pydata/pandas/pull/8062

So starting from 0.15, you can specify the chunksize argument and, for example, simply do:

df.to_sql('table', engine, chunksize=20000)
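
For example, combining the chunksize argument with the table and connection from the question, the whole call might look like this (a minimal sketch; df is assumed to be the 1M-row DataFrame from the question, and the connection string is a placeholder):

import sqlalchemy

engine = sqlalchemy.create_engine('mysql://...')  # database details omitted

df.to_sql('retail_pendingcustomers', engine,
          if_exists='replace',  # drop and recreate the table before writing
          chunksize=20000)      # pandas batches the inserts internally, 20,000 rows at a time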

#2


0 votes

There is a beautiful, idiomatic function, chunks, provided in an answer to this question.

In your case, you can use this function like this:

def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l.iloc[i:i + n]

def write_to_db(engine, frame, table_name, chunk_size):
    for idx, chunk in enumerate(chunks(frame, chunk_size)):
        # The first chunk replaces the table; later chunks append to it
        if idx == 0:
            if_exists_param = 'replace'
        else:
            if_exists_param = 'append'
        chunk.to_sql(con=engine, name=table_name, if_exists=if_exists_param)

The only drawback is that it doesn't support slicing the second index in the iloc call.
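
If you do need a column subset, a simple workaround is to select the columns before chunking (a sketch; the column names here are hypothetical):

columns_to_write = ['col_a', 'col_b']  # hypothetical column names
write_to_db(engine, frame[columns_to_write], 'retail_pendingcustomers', 20000)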
