Core dump file not generated on segmentation fault

Date: 2020-12-02 22:27:31

I am trying to debug a segmentation fault in my C program using gdb. A core dump file is not automatically generated when I run my program, and I have to run the command

ulimit -c unlimited

for a core file to be generated on the next run.

Why is a core dump file not generated automatically, and why do I have to run the ulimit command every time to get a core file on the next run of my program?

The operating system I use is Ubuntu 10.10.

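The behavior described above can be reproduced directly in a shell. A minimal sketch (the initial value shown is a typical default, not guaranteed on every system):

```shell
# Show the current core-file size limit; "0" suppresses core dumps
ulimit -c

# Raise it for this shell session only; child processes inherit it
ulimit -c unlimited
ulimit -c            # now reports "unlimited"

# The change does not survive into a brand-new shell, which is why
# it appears to need re-running before every run of the program
```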
4 Answers

#1


18  

You need to place the command

ulimit -c unlimited

in your environment settings.

If you are using bash as your shell, place the above command in ~/.bashrc.

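A sketch of persisting the setting this way; a throwaway temporary file stands in for ~/.bashrc here so nothing real is modified:

```shell
# Append the setting to a startup file (a temporary stand-in for
# ~/.bashrc) and confirm a shell that sources it picks the limit up
rc=$(mktemp)
echo 'ulimit -c unlimited' >> "$rc"
bash -c "source '$rc'; ulimit -c"    # prints: unlimited
rm -f "$rc"
```

With the line in the real ~/.bashrc, every new interactive bash session applies it automatically.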
#2


15  

You might also want to edit the /etc/security/limits.conf file instead of adding ulimit -c unlimited to ~/.bashrc.

limits.conf is the "correct" place to specify core dump settings on most Linux distros.

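For illustration, entries of roughly this shape (wildcard domain, soft and hard limits) would go in /etc/security/limits.conf; the values here are an example, and the file is only honored where pam_limits is in use:

```
# /etc/security/limits.conf
# <domain>   <type>   <item>   <value>
*            soft     core     unlimited
*            hard     core     unlimited
```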
#3


9  

That's because, by default, your distribution limits the core file size to 0 blocks. The ulimit command you mentioned raises that limit to unlimited.

I don't know about Ubuntu, but most distros have a file /etc/limits with system defaults for resource limits.

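The effect of the limit can be seen directly; each line below runs in its own subshell, and the second assumes the hard limit permits raising the soft limit:

```shell
# A soft limit of 0 blocks disables core files entirely
bash -c 'ulimit -c 0; ulimit -c'          # prints: 0

# Any other value caps the maximum core file size instead
bash -c 'ulimit -S -c 1024; ulimit -c'    # prints: 1024
```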
#4


1  

In my case, the segmentation fault was due to incorrect values in path environment variables. On my system the user is sidd@sidd-Lenovo-G460, and the contents added are as below.

PATH=$PATH:/home/sidd/ns-allinone-2.35/bin:/home/sidd/ns-allinone-2.35/tcl8.5.10/unix:/home/sidd/ns-allinone-2.35/tk8.5.10/unix

LD_LIBRARY_PATH=/home/sidd/ns-allinone-2.35/otcl-1.14:/home/sidd/ns-allinone-2.35/lib

TCL_LIBRARY=/home/sidd/ns-allinone-2.35/tcl8.5.10/library

Please refer to this blog post (VERY IMPORTANT).
