I would like to create a MySQL table with Pandas' to_sql function that has a primary key (it is usually good to have a primary key in a MySQL table), like so:
group_export.to_sql(con = db, name = config.table_group_export, if_exists = 'replace', flavor = 'mysql', index = False)
but this creates a table without any primary key (or even without any index).
The documentation mentions the 'index_label' parameter, which combined with the 'index' parameter can be used to create an index, but it doesn't mention any option for primary keys.
3 Answers
#1
10
Disclaimer: this answer is more experimental than practical, but maybe worth mentioning.
I found that the class pandas.io.sql.SQLTable has a named argument keys, and if you assign it the name of a field then that field becomes the primary key.
Unfortunately you can't just pass this argument through from the DataFrame.to_sql() function. To use it you should:
- create a pandas.io.sql.SQLDatabase instance:

engine = sa.create_engine('postgresql:///somedb')
pandas_sql = pd.io.sql.pandasSQL_builder(engine, schema=None, flavor=None)
- define a function analogous to pandas.io.sql.SQLDatabase.to_sql() but with an additional **kwargs argument that is passed on to the pandas.io.sql.SQLTable object created inside it (I've just copied the original to_sql() method and added **kwargs):
def to_sql_k(self, frame, name, if_exists='fail', index=True,
             index_label=None, schema=None, chunksize=None,
             dtype=None, **kwargs):
    if dtype is not None:
        from sqlalchemy.types import to_instance, TypeEngine
        for col, my_type in dtype.items():
            if not isinstance(to_instance(my_type), TypeEngine):
                raise ValueError('The type of %s is not a SQLAlchemy '
                                 'type ' % col)

    table = pd.io.sql.SQLTable(name, self, frame=frame, index=index,
                               if_exists=if_exists, index_label=index_label,
                               schema=schema, dtype=dtype, **kwargs)
    table.create()
    table.insert(chunksize)
- call this function with your SQLDatabase instance and the DataFrame you want to save:
to_sql_k(pandas_sql, df2save, 'tmp', index=True, index_label='id', keys='id', if_exists='replace')
And we get something like
CREATE TABLE public.tmp
(
id bigint NOT NULL DEFAULT nextval('tmp_id_seq'::regclass),
...
)
in the database.
PS: You can of course monkey-patch DataFrame, io.SQLDatabase and io.to_sql() to use this workaround conveniently.
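As a concrete illustration of that PS, here is a minimal sketch of how the monkey-patch could look. The method name to_sql_k is this answer's own invention, and pandas internals such as SQLTable and SQLDatabase are private and have changed across versions, so this may need adjusting for your pandas release:

```python
import pandas as pd


def to_sql_k(self, frame, name, if_exists='fail', index=True,
             index_label=None, schema=None, chunksize=None,
             dtype=None, **kwargs):
    # Same body as above: build the SQLTable ourselves so that extra
    # keyword arguments such as keys='id' reach its constructor.
    table = pd.io.sql.SQLTable(name, self, frame=frame, index=index,
                               if_exists=if_exists, index_label=index_label,
                               schema=schema, dtype=dtype, **kwargs)
    table.create()
    table.insert(chunksize)


# Attach it as a method so every SQLDatabase instance gains it.
pd.io.sql.SQLDatabase.to_sql_k = to_sql_k
```

After the patch, pandas_sql.to_sql_k(df2save, 'tmp', keys='id', if_exists='replace') can be called directly on the instance instead of passing it as the first argument.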
#2
20
Simply add the primary key after uploading the table with pandas.
group_export.to_sql(con=engine, name=example_table, if_exists='replace',
                    flavor='mysql', index=False)

with engine.connect() as con:
    con.execute('ALTER TABLE `example_table` ADD PRIMARY KEY (`ID_column`);')
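On newer stacks the same pattern needs two small changes: the flavor argument was removed from to_sql in later pandas versions, and SQLAlchemy 1.4+/2.x requires raw SQL strings to be wrapped in text(). Here is a self-contained sketch under those assumptions; it uses an in-memory SQLite database only so it runs anywhere, and since SQLite cannot ALTER TABLE ... ADD PRIMARY KEY, a unique index stands in for the MySQL statement above:

```python
import pandas as pd
import sqlalchemy as sa

engine = sa.create_engine('sqlite://')  # in-memory stand-in for MySQL
df = pd.DataFrame({'ID_column': [1, 2], 'val': ['a', 'b']})
df.to_sql('example_table', con=engine, if_exists='replace', index=False)

with engine.begin() as con:  # begin() commits the transaction on exit
    # On MySQL this line would instead be:
    # con.execute(sa.text('ALTER TABLE `example_table` ADD PRIMARY KEY (`ID_column`);'))
    con.execute(sa.text('CREATE UNIQUE INDEX ix_id ON example_table ("ID_column")'))

out = pd.read_sql('example_table', engine)
```

The table and column names here are illustrative; substitute your own as in the answer above.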
#3
0
Use automap_base from sqlalchemy.ext.automap (tableNamesDict is a dict with only the Pandas tables):
from sqlalchemy import MetaData
from sqlalchemy.ext.automap import automap_base

metadata = MetaData()
metadata.reflect(db.engine, only=tableNamesDict.values())
Base = automap_base(metadata=metadata)
Base.prepare()
This would have worked perfectly, except for one problem: automap requires the tables to have a primary key. OK, no problem, I'm sure Pandas to_sql has a way to indicate the primary key... nope. This is where it gets a little hacky:
for df in dfs.keys():
    cols = dfs[df].columns
    cols = [str(col) for col in cols if 'id' in col.lower()]
    schema = pd.io.sql.get_schema(dfs[df], df, con=db.engine, keys=cols)
    db.engine.execute('DROP TABLE ' + df + ';')
    db.engine.execute(schema)
    dfs[df].to_sql(df, con=db.engine, index=False, if_exists='append')
I iterate through the dict of DataFrames, get a list of the columns to use for the primary key (i.e. those containing id), use get_schema to create the empty tables, then append the DataFrame to the table.
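For reference, here is a self-contained sketch of what get_schema emits when keys= is given (the frame and column names are made up for illustration); the point is the PRIMARY KEY constraint in the generated DDL:

```python
import sqlite3

import pandas as pd

# Illustrative frame; 'user_id' plays the role of the id-like columns above.
df = pd.DataFrame({'user_id': [1, 2], 'name': ['a', 'b']})

# With a plain sqlite3 connection, get_schema emits SQLite-flavored DDL,
# including a CONSTRAINT ... PRIMARY KEY clause for the keys= columns.
schema = pd.io.sql.get_schema(df, 'user', keys=['user_id'],
                              con=sqlite3.connect(':memory:'))
print(schema)
```

The exact quoting and type names depend on the connection's dialect, but the keys= columns end up in a PRIMARY KEY constraint either way.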
Now that you have the models, you can explicitly name and use them (i.e. User = Base.classes.user) with session.query, or create a dict of all the classes with something like this:
alchemyClassDict = {}
for t in Base.classes.keys():
    alchemyClassDict[t] = Base.classes[t]
And query with:
res = db.session.query(alchemyClassDict['user']).first()