I have a Python file which uses SQLAlchemy to define all the tables in a given database, including all the applicable indexes and foreign key constraints. The file looks something like this:
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class FirstLevel(Base):
    __tablename__ = 'first_level'

    first_level_id = Column(Integer, index=True, nullable=False, primary_key=True, autoincrement=True)
    first_level_col1 = Column(String(100), index=True)
    first_level_col2 = Column(String(100))
    first_level_col3 = Column(String(100))


class SecondLevel(Base):
    __tablename__ = 'second_level'

    second_level_id = Column(Integer, index=True, nullable=False, primary_key=True, autoincrement=True)
    first_level_id = Column(None, ForeignKey(FirstLevel.first_level_id, onupdate='cascade', ondelete='cascade', deferrable=True), index=True, nullable=False)
    second_level_col1 = Column(String(100), index=True)
    second_level_col2 = Column(String(100))
    second_level_col3 = Column(String(100))


class ThirdLevel(Base):
    __tablename__ = 'third_level'

    third_level_id = Column(Integer, index=True, nullable=False, primary_key=True, autoincrement=True)
    first_level_id = Column(None, ForeignKey(FirstLevel.first_level_id, onupdate='cascade', ondelete='cascade', deferrable=True), index=True, nullable=False)
    second_level_id = Column(None, ForeignKey(SecondLevel.second_level_id, onupdate='cascade', ondelete='cascade', deferrable=True), index=True, nullable=False)
    third_level_col1 = Column(String(100), index=True)
    third_level_col2 = Column(String(100))
    third_level_col3 = Column(String(100))
...
I can use this file to create a new schema in the Postgres database by executing the following:
engine = create_engine('postgresql://username:password@path_to_database')
Base.metadata.create_all(engine)
The problem is that I have to load a huge amount of data into this newly created database, and that takes a very long time if I don't remove the indexes and foreign key constraints first. But removing them by hand and recreating them by hand after all the data has been inserted is a big hassle, and it takes away most of the convenience of using SQLAlchemy to create the database schema.
Is there a way to use SQLAlchemy to first create the tables in the database, load the data, and then use SQLAlchemy ORM again to create all the indexes and foreign key constraints?
1 Answer
You could do it with Alembic migration scripts.
- Create initial tables / drop existing indexes
- Load data
- Add indexes (a sketch of this step is shown below)
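A minimal sketch of what that last migration might look like, assuming Alembic is already set up against this database and using only the second_level table from the question; the revision identifiers, index names and constraint name are placeholders, and a real migration would cover every table:

from alembic import op

# Revision identifiers used by Alembic (placeholder values).
revision = 'add_indexes_and_fks'
down_revision = 'create_bare_tables'


def upgrade():
    # Recreate the secondary indexes that were left out for the bulk load.
    op.create_index('ix_second_level_first_level_id', 'second_level', ['first_level_id'])
    op.create_index('ix_second_level_second_level_col1', 'second_level', ['second_level_col1'])

    # Recreate the foreign key constraint from second_level back to first_level.
    op.create_foreign_key(
        'fk_second_level_first_level_id', 'second_level', 'first_level',
        ['first_level_id'], ['first_level_id'],
        onupdate='CASCADE', ondelete='CASCADE', deferrable=True,
    )


def downgrade():
    # Drop everything again so the schema can go back to its bare, bulk-load-friendly state.
    op.drop_constraint('fk_second_level_first_level_id', 'second_level', type_='foreignkey')
    op.drop_index('ix_second_level_second_level_col1', table_name='second_level')
    op.drop_index('ix_second_level_first_level_id', table_name='second_level')

Running the bulk load between the "create initial tables" migration and this one keeps the whole schema in version-controlled scripts, so nothing has to be dropped or recreated by hand.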