How to improve the performance of Trac

Time: 2021-11-30 05:35:08

I have noticed that my particular instance of Trac is not running quickly and has big lags. This is at the very onset of a project, so not much is in Trac (except for plugins and code loaded into SVN).


Setup Info: This is via a SELinux system hosted by WebFaction. It is behind Apache, and connections are over SSL. Currently the .htpasswd file is what I use to control access.


Are there any recommended ways to improve the performance of Trac?


4 solutions

#1


5  

It's hard to say without knowing more about your setup, but one easy win is to make sure that Trac is running in something like mod_python, which keeps the Python runtime in memory. Otherwise, every HTTP request will cause Python to run, import all the modules, and then finally handle the request. Using mod_python (or FastCGI, whichever you prefer) will eliminate that loading and skip straight to the good stuff.

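For reference, a minimal Apache configuration for running Trac under mod_python looks roughly like the following sketch. The environment path is a placeholder; adjust it to your own setup:

```apache
# Sketch of a mod_python setup for Trac.
# /home/user/trac/myproject is a placeholder environment path.
<Location /trac>
    SetHandler mod_python
    PythonInterpreter main_interpreter
    PythonHandler trac.web.modpython_frontend
    PythonOption TracEnv /home/user/trac/myproject
    PythonOption TracUriRoot /trac
</Location>
```

With this in place the Python interpreter and Trac's modules stay resident in the Apache worker, so per-request startup cost disappears.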

Also, as your Trac database grows and you get more people using the site, you'll probably outgrow the default SQLite database. At that point, you should think about migrating the database to PostgreSQL or MySQL, because they'll be able to handle concurrent requests much faster.

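The backend is selected by the `database` option in `trac.ini`; a sketch of the change (user, password, and database name are placeholders, and the existing data must still be migrated with a separate tool):

```ini
# trac.ini -- the default SQLite backend:
[trac]
database = sqlite:db/trac.db

# After migrating to PostgreSQL, it would look like:
# database = postgres://tracuser:secret@localhost/trac
```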

#2


3  

We've had the best luck with FastCGI. Another critical factor was to only use https for authentication but use http for all other traffic -- I was really surprised how much that made a difference.

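One way to sketch that split in Apache is with mod_rewrite; the URL path here is an assumption, so test it against your own setup:

```apache
# Force HTTPS only for the login URL; all other traffic stays on plain HTTP.
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^/trac/login$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```

Note the trade-off: after authentication, the session cookie still travels over plain HTTP, which trades some security for speed.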

#3


2  

I have noticed that if


select distinct name from wiki

takes more than 5 seconds (for example, due to a million rows in this table; this is a true story, as we had a script that filled it), browsing wiki pages becomes very slow and takes over 2*t*n, where t is the execution time of the quoted query (>5s, of course) and n is the number of TracWiki links present on the viewed page. This is because Trac has a (hardcoded) 5s cache expiry for this query, which it uses to decide what colour each wiki link should be. We re-hardcoded the value to 30s (we need that many pages, so every 30s someone has to wait 6-7s).

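The effect is easy to reproduce in isolation. A minimal sketch, using a simplified `wiki` table rather than Trac's full schema, that times the query in question:

```python
import sqlite3
import time

# Simplified stand-in for Trac's wiki table: one row per page revision,
# so DISTINCT must scan many rows to find the unique page names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wiki (name TEXT, version INTEGER, text TEXT)")
conn.executemany(
    "INSERT INTO wiki VALUES (?, ?, ?)",
    (("Page%d" % (i % 100), i, "body") for i in range(10000)),
)

# Time the query Trac runs to colour wiki links.
start = time.perf_counter()
names = {row[0] for row in conn.execute("SELECT DISTINCT name FROM wiki")}
elapsed = time.perf_counter() - start
print(len(names))  # 100 distinct page names
```

With a table of millions of revision rows, `elapsed` is what pushes past Trac's 5s cache window; an index on `name` helps the database satisfy the DISTINCT scan much faster.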

This may not be the cause of your problem, but it might be. Good luck speeding up your Trac instance.


#4


1  

Serving the chrome files statically with an Expires header could help too. See the end of this page.

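In Apache this can be sketched by aliasing Trac's static resources and adding mod_expires directives. The htdocs path is a placeholder, and the files must first be exported from the Trac environment (e.g. with `trac-admin`):

```apache
# Serve Trac's chrome (static) files directly, bypassing Python entirely.
# /usr/local/trac/htdocs is a placeholder for wherever the files were deployed.
Alias /trac/chrome/common /usr/local/trac/htdocs/common
<Directory "/usr/local/trac/htdocs/common">
    ExpiresActive On
    ExpiresDefault "access plus 1 week"
</Directory>
```

This removes the static-file requests from Trac's request pipeline and lets browsers cache them for a week.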
