Speeding up link times on Linux

Date: 2022-07-29 02:43:16

I am rebuilding WebKit (about 2 million lines of code) every ten minutes to see the effect of my changes, and linking WebKit on my machine requires processing the 600-700 MB of object files sitting on my hard disk. That takes around 1.5 minutes. I want to speed up this linking step.

Is there any way I can tell the OS to keep all the object files in RAM only (I have 4 GB of RAM)? Is there any other way to speed up the linking?

Any ideas or help are appreciated!

Here is the command, which takes 1.5 minutes:

http://pastebin.com/GtaggkSc

4 Answers

#1 (13 votes)

I solved this problem by using tmpfs and the gold linker.

1). tmpfs: mount the directory that contains all the object files as tmpfs.
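As a sketch, mounting the build directory as tmpfs might look like this (the path and size below are assumptions; adjust them to wherever your object files actually live, and note that with 4 GB of RAM a 1 GB tmpfs leaves headroom for the compiler and linker):

```shell
# Mount the build output directory as a RAM-backed tmpfs.
# Anything already in the directory is hidden until you unmount,
# and tmpfs contents are lost on reboot, so keep sources elsewhere.
sudo mount -t tmpfs -o size=1g tmpfs ~/WebKit/WebKitBuild

# To make the mount persistent, add a line like this to /etc/fstab:
# tmpfs  /home/user/WebKit/WebKitBuild  tmpfs  size=1g  0  0
```

After mounting, rebuild once so the object files are regenerated inside the tmpfs; subsequent links then read them from RAM instead of disk.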

2). gold linker: using the gold linker makes linking 5-6 times faster; combined with the tmpfs advantage, the speedup is 7-8 times over normal linking. Run the following command on Ubuntu and your default linker will be replaced with gold:

sudo apt-get install binutils-gold
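To check whether gold is actually being used, or to opt in per-build without replacing the system linker, something like the following should work (the object file names are placeholders; `-fuse-ld=gold` is supported by GCC 4.8+ and Clang):

```shell
# On older Ubuntu releases, installing binutils-gold diverts /usr/bin/ld,
# so this should report "GNU gold" rather than "GNU ld":
ld --version | head -n 1

# Alternatively, leave the default ld untouched and ask the compiler
# driver to select gold for this link only:
g++ main.o util.o -fuse-ld=gold -o webkit_binary
```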

You may run into some linking errors with the gold linker; the thread below is a good reference for those:

Replacing ld with gold - any experience?

#2 (2 votes)

Try using a ramdisk.

#3 (1 vote)

Truthfully, I'm not sure I understand the problem, but would something like ramfs be of use to you?

#4 (1 vote)

Get an SSD for your Linux machine. If write performance is still a problem, configure the output path to be on a RAM disk.

Have you measured how much of the 1.5 minutes is really I/O bound? WebKit being so large means you can run into memory cache thrashing. You should try to find out how many L1/L2 cache misses you have. I suspect that is the problem, and in that case your only hope is that someone on the GCC team looks into it.
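A rough way to do that measurement (the link command here is a placeholder for your actual link line, and `perf` is assumed to be installed, e.g. via the `linux-tools` package):

```shell
# Separate wall-clock time from CPU time for the link step. If "real"
# is much larger than "user" + "sys", the link is mostly waiting on I/O.
time g++ -o webkit_binary *.o

# Count cache misses during the link. A high miss-to-reference ratio
# points at cache thrashing rather than disk I/O.
perf stat -e cache-misses,cache-references g++ -o webkit_binary *.o
```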

By the way: Microsoft has the same problem with extremely long linker times.
