Firebird backup/restore is frustrating, is there a way to avoid it?

Time: 2022-01-30 00:09:10

I am using Firebird, but lately the database has been growing at a serious rate. A lot of DELETE statements are running, as well as UPDATEs and INSERTs, and the database file size grows really fast. After deleting tons of records the database size doesn't decrease, and even worse, I have the feeling that queries are actually getting a bit slower. To deal with this, a daily backup/restore process has been put in place, but because of the time it takes to complete, I have to say that using Firebird is really frustrating.

  • Any ideas on workarounds or a solution for this would be welcome.

  • I am also considering switching to Interbase, because I heard from a friend that it does not have this issue. Is that so?

3 Answers

#1 (9 votes)

We have a lot of huge Firebird databases in production but have never had an issue with database growth. Yes, every time a record is deleted or updated, an old version of it is kept in the file. But sooner or later the garbage collector will sweep it away. Once the two processes balance each other, the database file will only grow by the size of the new data and indices.

As a general precaution against excessive database growth, try to keep your transactions as short as possible. In our applications we use one READ ONLY transaction for reading all the data; this transaction stays open for the whole application lifetime. For every batch of INSERT/UPDATE/DELETE statements we use short, separate transactions.
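A minimal sketch of this pattern, assuming the Python fdb driver. The DSN, credentials and the orders table are placeholders, and the read-only TPB constant name should be checked against your driver version; treat it as an illustration, not production code.

    import datetime
    import fdb

    con = fdb.connect(dsn='localhost:/srv/data/mydb.fdb',
                      user='SYSDBA', password='masterkey')

    # One long-lived READ ONLY read-committed transaction used for all reads.
    # A read-only transaction does not hold back garbage collection the way
    # a long-running read-write transaction does.
    read_tr = con.trans(default_tpb=fdb.ISOLATION_LEVEL_READ_COMMITED_RO)
    read_cur = read_tr.cursor()
    read_cur.execute("SELECT COUNT(*) FROM orders")
    print(read_cur.fetchone())

    # Each batch of writes gets its own short transaction that is committed
    # (or rolled back) right away, so old record versions become garbage
    # that the server can collect.
    write_tr = con.trans()
    try:
        wcur = write_tr.cursor()
        wcur.execute("DELETE FROM orders WHERE created < ?",
                     (datetime.date(2021, 1, 1),))
        write_tr.commit()
    except Exception:
        write_tr.rollback()
        raise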

Slow database operations can also result from stale index statistics. Here you can find an example of how to recalculate statistics for all indices: http://www.firebirdfaq.org/faq167/
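As a rough sketch (again with the Python fdb driver and placeholder connection details), the recalculation from that FAQ boils down to running SET STATISTICS INDEX for every index listed in the RDB$INDICES system table:

    import fdb

    con = fdb.connect(dsn='localhost:/srv/data/mydb.fdb',
                      user='SYSDBA', password='masterkey')

    cur = con.cursor()
    # User indices only; drop the WHERE clause to include system indices too.
    cur.execute("SELECT TRIM(RDB$INDEX_NAME) FROM RDB$INDICES "
                "WHERE COALESCE(RDB$SYSTEM_FLAG, 0) = 0")
    for (index_name,) in cur.fetchall():
        # Index names come from the system table, so plain string formatting
        # is acceptable for this maintenance script.
        con.cursor().execute('SET STATISTICS INDEX "%s"' % index_name)
    con.commit()
    con.close()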

#2 (7 votes)

Check whether you have unfinished transactions in your applications. If a transaction is started but never committed or rolled back, the database has to keep its own record versions for every transaction after the oldest active one.

You can check the database statistics (with gstat or an external tool); they include the oldest transaction and the next transaction. If the difference between those numbers keeps growing, you have a stuck-transaction problem.
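For example, a small script along these lines can watch that gap. It assumes gstat is on the PATH and the database file is local; the threshold is arbitrary.

    import re
    import subprocess

    # Read the header page statistics; for a remote database you would add
    # -user/-password and use a host:path connection string instead.
    out = subprocess.run(["gstat", "-h", "/srv/data/mydb.fdb"],
                         capture_output=True, text=True, check=True).stdout

    oldest_active = int(re.search(r"Oldest active\s+(\d+)", out).group(1))
    next_tx = int(re.search(r"Next transaction\s+(\d+)", out).group(1))

    gap = next_tx - oldest_active
    print("oldest active:", oldest_active, "next:", next_tx, "gap:", gap)
    if gap > 10000:  # arbitrary threshold; tune it to your load
        print("Gap keeps growing? Look for a stuck (uncommitted) transaction.")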

There are also monitoring tools that check this situation; one I've used is Sinatica Monitor for Firebird.

Edit: Also, the database file never shrinks automatically. Parts of it are marked as unused (after a sweep operation) and will be reused. http://www.firebirdfaq.org/faq41/
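If you want to script the maintenance, something like the following keeps the two ideas apart: a sweep frees space for reuse inside the file, while only a backup/restore cycle actually shrinks it. gfix and gbak are the standard Firebird command-line tools; paths and credentials are placeholders.

    import subprocess

    DB = "/srv/data/mydb.fdb"
    AUTH = ["-user", "SYSDBA", "-password", "masterkey"]

    # Sweep removes garbage record versions and marks the space as free for
    # reuse inside the file; the file itself keeps its size.
    subprocess.run(["gfix", "-sweep", *AUTH, DB], check=True)

    # Only a backup/restore cycle rebuilds the database in a smaller file.
    subprocess.run(["gbak", "-b", *AUTH, DB, "/srv/backup/mydb.fbk"],
                   check=True)
    subprocess.run(["gbak", "-c", *AUTH, "/srv/backup/mydb.fbk",
                    "/srv/data/mydb_restored.fdb"], check=True)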

#3 (6 votes)

The space occupied by deleted records will be reused as soon as it is garbage-collected by Firebird. If GC is not happening (transaction problems?), the database will keep growing until GC can do its job.

There is also a problem when you do a massive delete in a table (e.g. millions of records): the next SELECT on that table will "trigger" garbage collection, and performance will drop until GC finishes. The only way to work around this is to do the massive deletes at a time when the server is not heavily used, and to run a sweep after that, making sure there are no stuck transactions.
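A rough sketch of that off-peak routine with the Python fdb driver; the table name, cutoff date and connection details are made up for the example.

    import datetime
    import fdb

    con = fdb.connect(dsn='localhost:/srv/data/mydb.fdb',
                      user='SYSDBA', password='masterkey')
    cur = con.cursor()

    # 1. The massive delete, committed so the old record versions become garbage.
    cur.execute("DELETE FROM audit_log WHERE logged_at < ?",
                (datetime.date(2021, 1, 1),))
    con.commit()

    # 2. A full scan of the table during the maintenance window, so cooperative
    #    garbage collection runs now instead of when the first user query hits
    #    the table in the morning.
    cur.execute("SELECT COUNT(*) FROM audit_log")
    print("rows left:", cur.fetchone()[0])
    con.commit()
    con.close()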

Also, keep in mind that if you use "standard" tables to hold temporary data (i.e. rows are inserted and deleted many times), you can end up with a corrupted database in some circumstances. I strongly suggest you start using the Global Temporary Tables feature.
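For reference, a Global Temporary Table that discards its rows at commit looks roughly like this (Firebird 2.1 and later; the table and columns are invented for the example, executed here with the fdb driver):

    import fdb

    con = fdb.connect(dsn='localhost:/srv/data/mydb.fdb',
                      user='SYSDBA', password='masterkey')
    cur = con.cursor()

    # Rows in a GTT live only for the current transaction (ON COMMIT DELETE
    # ROWS) or the current connection (ON COMMIT PRESERVE ROWS), so scratch
    # data never leaves garbage behind in permanent tables.
    cur.execute("""
        CREATE GLOBAL TEMPORARY TABLE gtt_work_items (
            id   INTEGER NOT NULL,
            note VARCHAR(200)
        ) ON COMMIT DELETE ROWS
    """)
    con.commit()

    cur.execute("INSERT INTO gtt_work_items (id, note) VALUES (?, ?)",
                (1, 'tmp'))
    con.commit()  # the row is gone after this commit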
