Where are all my inodes being used?

Date: 2021-01-07 18:06:01

How do I find out which directories are responsible for chewing up all my inodes?

Ultimately the root directory will be responsible for the largest number of inodes, so I'm not sure exactly what sort of answer I want.

Basically, I'm running out of available inodes and need to find an unneeded directory to cull.

Thanks, and sorry for the vague question.

14 solutions

#1


19  

So basically you're looking for which directories have a lot of files? Here's a first stab at it:

find . -type d -print0 | xargs -0 -n1 count_files | sort -n

where "count_files" is a shell script that does (thanks Jonathan)

echo $(ls -a "$1" | wc -l) $1
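
For reference, a minimal count_files could be the one-liner above wrapped in an executable script (a sketch; the script body comes straight from the answer, the wrapper is assumed):

#!/bin/sh
# count_files: print "<entry count> <directory>" for the directory given as $1
echo $(ls -a "$1" | wc -l) $1

Make it executable (chmod +x) and put it somewhere on your PATH so the find | xargs pipeline above can invoke it.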

#2


80  

If you don't want to make a new file (or can't because you ran out of inodes) you can run this query:

for i in `find . -type d `; do echo `ls -a $i | wc -l` $i; done | sort -n

As insider mentioned in another answer, using a solution with find will be much quicker, since recursive ls is quite slow; check below for that solution! (Credit where credit is due!)
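
One caveat worth hedging: the backtick loop word-splits find's output, so it misbehaves on directory names containing spaces. A variant of the same idea that tolerates embedded spaces (essentially the one-liner in answer #7 below) would be:

find . -type d | while read -r dir; do echo "$(ls -a "$dir" | wc -l) $dir"; done | sort -n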

#3


34  

The provided methods with recursive ls are very slow. Just for quickly finding the parent directory consuming most of the inodes, I used:

cd /partition_that_is_out_of_inodes
for i in *; do echo -e "$(find "$i" | wc -l)\t$i"; done | sort -n

#4


10  

This is my take on it. It's not so different from the others, but the output is pretty, and I think it counts more valid inodes than the others do (it includes directories and symlinks). It counts the number of files in each subdirectory of the working directory; it sorts and formats the output into two columns; and it prints a grand total (shown as ".", the working directory). It will not follow symlinks but will count files and directories that begin with a dot. It does not count device nodes and special files like named pipes; just remove the "-type l -o -type d -o -type f" test if you want to count those, too. Because this command is split into two find commands, it cannot correctly discriminate against directories mounted on other filesystems (the -mount option will not work). For example, it should really ignore the "/proc" and "/sys" directories. You can see that, when running this command in "/", including "/proc" and "/sys" grossly skews the grand total count.

for ii in $(find . -maxdepth 1 -type d); do 
    echo -e "${ii}\t$(find "${ii}" -type l -o -type d -o -type f | wc -l)"
done | sort -n -k 2 | column -t

Example:

# cd /
# for ii in $(find -maxdepth 1 -type d); do echo -e "${ii}\t$(find "${ii}" -type l -o -type d -o -type f | wc -l)"; done | sort -n -k 2 | column -t
./boot        1
./lost+found  1
./media       1
./mnt         1
./opt         1
./srv         1
./lib64       2
./tmp         5
./bin         107
./sbin        109
./home        146
./root        169
./dev         188
./run         226
./etc         1545
./var         3611
./sys         12421
./lib         17219
./proc        20824
./usr         56628
.             113207

#5


10  

I used the following to work out (with a bit of help from my colleague James) that we had a massive number of PHP session files which needed to be deleted on one machine:

1. How many inodes have I got in use?

 root@polo:/# df -i
 Filesystem     Inodes  IUsed  IFree IUse% Mounted on
 /dev/xvda1     524288 427294  96994   81% /
 none           256054      2 256052    1% /sys/fs/cgroup
 udev           254757    404 254353    1% /dev
 tmpfs          256054    332 255722    1% /run
 none           256054      3 256051    1% /run/lock
 none           256054      1 256053    1% /run/shm
 none           256054      3 256051    1% /run/user

2. Where are all those inodes?

 root@polo:/# find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
 [...]
    1088 /usr/src/linux-headers-3.13.0-39/include/linux
    1375 /usr/src/linux-headers-3.13.0-29-generic/include/config
    1377 /usr/src/linux-headers-3.13.0-39-generic/include/config
    2727 /var/lib/dpkg/info
    2834 /usr/share/man/man3
  416811 /var/lib/php5/session
 root@polo:/#
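
(For reference: GNU find's %h directive prints each entry's parent directory, so sort | uniq -c turns that stream into a per-directory entry count, and the final sort ranks directories by inode consumption.)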

That's a lot of PHP session files on the last line.

3. How to delete all those files?

Delete all files in the directory which are older than 1440 minutes (24 hours):

root@polo:/var/lib/php5/session# find ./ -cmin +1440 | xargs rm
root@polo:/var/lib/php5/session#
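
(A hedged side note, not from the original answer: find | xargs rm breaks on filenames containing spaces or newlines. With GNU find, a more robust equivalent of the same cleanup would be:

find /var/lib/php5/session -type f -cmin +1440 -delete

Session filenames are machine-generated here, so the original pipeline was safe enough in practice.)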

4. Has it worked?

 root@polo:~# find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
 [...]
    1088 /usr/src/linux-headers-3.13.0-39/include/linux
    1375 /usr/src/linux-headers-3.13.0-29-generic/include/config
    1377 /usr/src/linux-headers-3.13.0-39-generic/include/config
    2727 /var/lib/dpkg/info
    2834 /usr/share/man/man3
    2886 /var/lib/php5/session
 root@polo:~# df -i
 Filesystem     Inodes  IUsed  IFree IUse% Mounted on
 /dev/xvda1     524288 166420 357868   32% /
 none           256054      2 256052    1% /sys/fs/cgroup
 udev           254757    404 254353    1% /dev
 tmpfs          256054    332 255722    1% /run
 none           256054      3 256051    1% /run/lock
 none           256054      1 256053    1% /run/shm
 none           256054      3 256051    1% /run/user
 root@polo:~#

Luckily we had a Sensu alert emailing us that our inodes were almost used up.

#6


6  

Here's a simple Perl script that'll do it:

#!/usr/bin/perl -w

use strict;

sub count_inodes($);
sub count_inodes($)
{
  my $dir = shift;
  if (opendir(my $dh, $dir)) {
    my $count = 0;
    while (defined(my $file = readdir($dh))) {
      next if ($file eq '.' || $file eq '..');
      $count++;
      my $path = $dir . '/' . $file;
      count_inodes($path) if (-d $path);
    }
    closedir($dh);
    printf "%7d\t%s\n", $count, $dir;
  } else {
    warn "couldn't open $dir - $!\n";
  }
}

push(@ARGV, '.') unless (@ARGV);
while (@ARGV) {
  count_inodes(shift);
}

If you want it to work like du (where each directory's count also includes the recursive count of its subdirectories), then change the recursive function to return $count, and at the recursion point say:

$count += count_inodes($path) if (-d $path);
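
Put together, the du-style variant of the function would look something like this (an untested sketch of the change just described):

sub count_inodes($)
{
  my $dir = shift;
  my $count = 0;
  if (opendir(my $dh, $dir)) {
    while (defined(my $file = readdir($dh))) {
      next if ($file eq '.' || $file eq '..');
      $count++;
      my $path = $dir . '/' . $file;
      # accumulate the subtree's total, du-style, instead of just recursing
      $count += count_inodes($path) if (-d $path);
    }
    closedir($dh);
    printf "%7d\t%s\n", $count, $dir;
  } else {
    warn "couldn't open $dir - $!\n";
  }
  return $count;
}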

#7


2  

An actually functional one-liner (GNU find; for other kinds of find you'd need your own equivalent of -xdev to stay on the same filesystem):

find / -xdev -type d | while read -r i; do printf "%d %s\n" $(ls -a "$i" | wc -l) "$i"; done | sort -nr | head -10

The head -10 at the end is, obviously, customizable.

As with many other suggestions here, this will only show you the number of entries in each directory, non-recursively.

P.S.

A fast but imprecise one-liner (detection by directory inode size; a directory's own size grows with the number of entries it holds, or once held, so unusually large directories are likely culprits):

find / -xdev -type d -size +100k

#8


1  

for i in dir.[01]
do
    # count distinct inodes under each tree (hard links counted once)
    find "$i" -printf "%i\n" | sort -u | wc -l | xargs echo "$i" --
done

dir.0 -- 27913
dir.1 -- 27913

#9


0  

The Perl script is good, but beware symlinks: recurse only when the -l filetest returns false, or you will at best over-count and at worst recurse indefinitely (which could, as a minor concern, invoke Satan's 1000-year reign).

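Concretely, the recursion guard in the script above would become something like this (-d follows symlinks, so the extra -l test is what prevents loops):

count_inodes($path) if (-d $path && ! -l $path);
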
The whole idea of counting inodes in a file system tree falls apart when there are multiple links to more than a small percentage of the files.

#10


0  

Just wanted to mention that you could also search indirectly using the directory size, for example:

find /path -type d -size +500k

Where 500k could be increased if you have a lot of large directories.

Note that this method is not recursive. This will only help you if you have a lot of files in one single directory, but not if the files are evenly distributed across its descendants.
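
To see how many entries each of those large directories actually holds, one possible follow-up sketch (the path and threshold are placeholders) is:

find /path -type d -size +500k -exec sh -c 'echo "$(ls -A "$1" | wc -l) $1"' sh {} \;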

#11


0  

Just a note: when you finally find some mail spool directory and want to delete all the junk that's in there, rm * will not work if there are too many files (the shell-expanded argument list exceeds the kernel's limit, and you get "Argument list too long"). You can run the following command to quickly delete everything in that directory:

* WARNING * THIS WILL DELETE ALL FILES QUICKLY FOR CASES WHEN rm DOESN'T WORK

find . -type f -delete
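
(This works where rm * fails because find -delete never builds a giant argument list; it unlinks entries one at a time as it walks the tree.)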

#12


0  

This counts files under the current directory. It is supposed to work even if filenames contain newlines. It uses GNU Awk. Change the value of d to set the maximum path depth to group by; 0 means unlimited depth.

find . -mount -not -path . -print0 | gawk -v d=2 '
BEGIN{RS="\0";FS="/";SUBSEP="/";ORS="\0"}
{
    s="./"
    for(i=2;i!=d+1 && i<NF;i++){s=s $i "/"}
    ++n[s]
}
END{for(val in n){print n[val] "\t" val "\n"}}' | sort -gz -k 1,1
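
Since the records are NUL-terminated (ORS="\0", sort -z) and each already ends in a newline, pipe the output through tr -d '\0' to make it readable in a terminal.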

The same in Bash 4; in my experience this is significantly slower:

declare -A n;
d=2
while IFS=/ read -d $'\0' -r -a a; do
  s="./"
  for ((i=2; i!=$((d+1)) && i<${#a[*]}; i++)); do
    s+="${a[$((i-1))]}/"
  done
  ((++n[\$s]))
done < <(find . -mount -not -path . -print0)

for j in "${!n[@]}"; do
    printf '%i\t%s\n\0' "${n[$j]}" "$j"
done | sort -gz -k 1,1 

#13


0  

use

ncdu -x <path>

then press Shift+C to sort by item count, where each file is one item

#14


-1  

This command works in the highly unlikely case that your directory structure is identical to mine (the {3} groups files by their first three path components; adjust it to taste):

find / -type f | grep -oP '^/([^/]+/){3}' | sort | uniq -c | sort -n
