The mount process:
Every filesystem has its own Super Block, Inodes, and Data Blocks. To read from or write to a filesystem, the system first needs a way to reach its entry point. As the earlier sections showed, Linux locates files by walking directories, so a filesystem must be attached to the directory tree; once attached, it can be entered through the directory it is associated with. This attaching step is what we usually call mounting.
Essentially, the main job of mounting is to mask the Inode that the mount directory's Dir Entry points to and redirect it to the entry Inode of the filesystem being mounted, so that accessing a file under the mount directory walks down, level by level, from that entry Inode. For example:
[root@DanCentOS65 sdc]# stat /mnt/sdc
  File: `/mnt/sdc'
  Size: 4096            Blocks: 8          IO Block: 4096   directory
Device: 801h/2049d      Inode: 1704120     Links: 3
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2017-04-27 10:54:50.364707408 +0000
Modify: 2016-11-29 03:48:54.730756524 +0000
Change: 2016-11-29 03:48:54.730756524 +0000
[root@DanCentOS65 sdc]# mount /dev/sdc1 /mnt/sdc
[root@DanCentOS65 sdc]# stat /mnt/sdc
  File: `/mnt/sdc'
  Size: 4096            Blocks: 8          IO Block: 4096   directory
Device: 821h/2081d      Inode: 2           Links: 4
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2017-04-25 08:35:11.000000000 +0000
Modify: 2017-04-21 09:14:29.000000000 +0000
Change: 2017-04-21 09:14:29.000000000 +0000
As the output shows, before mounting, the /mnt/sdc directory had Inode number 1704120; once /dev/sdc1 is mounted on /mnt/sdc, the directory points to the entry Inode of the mounted filesystem instead (Inode 2, which for ext2/ext3/ext4 is always the inode number of a filesystem's root directory). Concretely, when the filesystem is mounted on /mnt/sdc, the system fetches this directory's Dir Entry from the Dir Entry hash table and sets the DCACHE_MOUNTED flag in its d_flags field; once the flag is set, the Inode that /mnt/sdc originally pointed to is masked (a sketch of this marking step follows the next listing). If we then umount the directory, its Inode reverts to the original one:
[root@DanCentOS65 sdc]# umount /dev/sdc1
[root@DanCentOS65 sdc]# stat /mnt/sdc
  File: `/mnt/sdc'
  Size: 4096            Blocks: 8          IO Block: 4096   directory
Device: 801h/2049d      Inode: 1704120     Links: 3
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2017-04-27 10:54:50.364707408 +0000
Modify: 2016-11-29 03:48:54.730756524 +0000
Change: 2016-11-29 03:48:54.730756524 +0000
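What this marking step amounts to can be shown with a small user-space model. To be clear about what is real and what is not: DCACHE_MOUNTED is a genuine kernel flag, but its value below, the pared-down struct, and the mark_mounted/clear_mounted helpers are all invented here for illustration.

#include <stdio.h>

#define DCACHE_MOUNTED 0x1 /* illustrative value, not the kernel's */

/* Pared-down stand-in for the kernel's struct dentry (shown in full below) */
struct dentry {
    const char   *d_name;
    unsigned long d_ino;    /* stand-in for the d_inode pointer */
    unsigned int  d_flags;
};

/* mount: the original inode stays in the dentry, but the flag tells path
 * lookup to divert; umount clears it and the original inode is visible again */
static void mark_mounted(struct dentry *d)  { d->d_flags |=  DCACHE_MOUNTED; }
static void clear_mounted(struct dentry *d) { d->d_flags &= ~DCACHE_MOUNTED; }

static void show(const struct dentry *d)
{
    printf("%s: inode %lu, mounted: %s\n", d->d_name, d->d_ino,
           (d->d_flags & DCACHE_MOUNTED) ? "yes" : "no");
}

int main(void)
{
    struct dentry sdc = { "/mnt/sdc", 1704120, 0 };
    show(&sdc);          /* before mount: original inode in effect */
    mark_mounted(&sdc);
    show(&sdc);          /* lookup would now divert to the fs entry inode */
    clear_mounted(&sdc);
    show(&sdc);          /* after umount: original inode visible again */
    return 0;
}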
The structure of a Dir Entry:
struct dentry {
        atomic_t d_count;                       /* usage count */
        unsigned long d_vfs_flags;              /* dentry cache flags */
        spinlock_t d_lock;                      /* per-dentry lock */
        struct inode *d_inode;                  /* associated inode */
        struct list_head d_lru;                 /* unused list */
        struct list_head d_child;               /* list of dentries within */
        struct list_head d_subdirs;             /* subdirectories */
        struct list_head d_alias;               /* list of alias inodes */
        unsigned long d_time;                   /* revalidate time */
        struct dentry_operations *d_op;         /* dentry operations table */
        struct super_block *d_sb;               /* superblock of file */
        unsigned int d_flags;                   /* dentry flags */
        int d_mounted;                          /* is this a mount point? */
        void *d_fsdata;                         /* filesystem-specific data */
        struct rcu_head d_rcu;                  /* RCU locking */
        struct dcookie_struct *d_cookie;        /* cookie */
        struct dentry *d_parent;                /* dentry object of parent */
        struct qstr d_name;                     /* dentry name */
        struct hlist_node d_hash;               /* list of hash table entries */
        struct hlist_head *d_bucket;            /* hash bucket */
        unsigned char d_iname[DNAME_INLINE_LEN_MIN]; /* short name */
};
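Of these fields, the two the mount path relies on are d_flags, where DCACHE_MOUNTED gets set, and d_mounted, which is nonzero when a filesystem is mounted on this dentry.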
So, after masking the original Inode, how does the system map the directory to the filesystem's entry Inode? Read on:
At mount time, the kernel creates an in-memory Super Block object for the filesystem, reads the Super Block from the filesystem being mounted to initialize it, and then adds the object to a global list of Super Block objects that the kernel maintains. The s_root field of the Super Block structure points to the Dir Entry of the filesystem's root directory; this is the entry point of the mounted filesystem. At the same time, the VFS (Virtual Filesystem Switch) creates a vfsmount object, which holds all the information about the mount point:
struct vfsmount {
        struct list_head mnt_hash;      /* link into the vfsmount hash table */
        struct vfsmount *mnt_parent;    /* parent node in the mount tree */
        struct dentry *mnt_mountpoint;  /* dentry of the mount point */
        struct dentry *mnt_root;        /* root dentry of the mounted filesystem */
        struct super_block *mnt_sb;     /* superblock of the mounted filesystem */
#ifdef CONFIG_SMP
        struct mnt_pcp __percpu *mnt_pcp;
        atomic_t mnt_longterm;          /* how many of the refs are longterm */
#else
        int mnt_count;
        int mnt_writers;
#endif
        struct list_head mnt_mounts;    /* list of child vfsmount objects */
        struct list_head mnt_child;     /* link into the parent vfsmount's list */
        int mnt_flags;
        /* 4 bytes hole on 64bits arches without fsnotify */
#ifdef CONFIG_FSNOTIFY
        __u32 mnt_fsnotify_mask;
        struct hlist_head mnt_fsnotify_marks;
#endif
        const char *mnt_devname;        /* device the filesystem lives on, e.g. /dev/sdb */
        struct list_head mnt_list;
        struct list_head mnt_expire;    /* link in fs-specific expiry list */
        struct list_head mnt_share;     /* circular list of shared mounts */
        struct list_head mnt_slave_list;/* list of slave mounts */
        struct list_head mnt_slave;     /* slave list entry */
        struct vfsmount *mnt_master;    /* slave is on master->mnt_slave_list */
        struct mnt_namespace *mnt_ns;   /* containing namespace */
        int mnt_id;                     /* mount identifier */
        int mnt_group_id;               /* peer group identifier */
        int mnt_expiry_mark;            /* true if marked for expiry */
        int mnt_pinned;
        int mnt_ghosts;
};
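To make these relationships concrete, here is a minimal user-space sketch of the wiring just described. The field names mirror the kernel structs above, but the pared-down types and the wire_mount helper are invented for illustration; the real kernel does this under locks inside the mount code.

#include <stdio.h>

/* Pared-down stand-ins for the kernel structures */
struct dentry      { const char *name; };
struct super_block { struct dentry *s_root; };   /* root dentry of the fs */

struct vfsmount {
    struct dentry      *mnt_mountpoint; /* dentry of the mount point */
    struct dentry      *mnt_root;       /* root dentry of the mounted fs */
    struct super_block *mnt_sb;         /* in-memory superblock object */
    const char         *mnt_devname;    /* e.g. /dev/sdc1 */
};

/* Conceptual wiring done at mount time: the vfsmount ties the mountpoint
 * dentry to the mounted filesystem's root dentry (the superblock's s_root) */
static void wire_mount(struct vfsmount *mnt, struct super_block *sb,
                       struct dentry *mountpoint, const char *dev)
{
    mnt->mnt_sb         = sb;
    mnt->mnt_root       = sb->s_root;   /* entry into the mounted fs */
    mnt->mnt_mountpoint = mountpoint;
    mnt->mnt_devname    = dev;
}

int main(void)
{
    struct dentry fs_root = { "(root of /dev/sdc1)" };
    struct dentry mntpt   = { "/mnt/sdc" };
    struct super_block sb = { &fs_root };
    struct vfsmount mnt;

    wire_mount(&mnt, &sb, &mntpt, "/dev/sdc1");
    printf("%s on %s, entry dentry: %s\n",
           mnt.mnt_devname, mnt.mnt_mountpoint->name, mnt.mnt_root->name);
    return 0;
}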
vfsmount objects are likewise maintained in a hash table, with the hash computed from the path of the directory being accessed; the mnt_root field of the vfsmount structure points to the Super Block object's s_root. So once the mount is complete and the vfsmount object has been created and hashed, directory lookup works like this:
- After mounting (say the mount point is /mnt/sdc), we try to access /mnt/sdc/test/1.txt
- During path resolution, the lookup first resolves the Dir Entry of /mnt/sdc and finds that its d_flags field is marked DCACHE_MOUNTED, so it hashes /mnt/sdc and looks up the corresponding vfsmount object in the vfsmount hash table; from that object, the mnt_root field leads to the Dir Entry of the filesystem's root directory, and thus to the filesystem's entry Inode (see the sketch after this list)
- From there, lookup proceeds exactly as in the directory-walk steps described in earlier sections until the file's contents are reached, so we will not repeat them here
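The divert step in this walkthrough can be sketched as follows. lookup_mnt and follow_mount echo the names of real kernel functions, but this user-space model, with a linear scan standing in for the vfsmount hash table, only illustrates the logic; it is not the kernel implementation.

#include <stdio.h>
#include <string.h>

#define DCACHE_MOUNTED 0x1 /* illustrative value, not the kernel's */

struct dentry   { const char *path; unsigned int d_flags; };
struct vfsmount { const char *mountpoint; struct dentry *mnt_root; };

/* Stand-in for the kernel's vfsmount hash table: a linear scan here */
static struct dentry   sdc_root = { "(root of /dev/sdc1)", 0 };
static struct vfsmount mounts[] = { { "/mnt/sdc", &sdc_root } };

static struct vfsmount *lookup_mnt(const char *path)
{
    for (unsigned i = 0; i < sizeof(mounts) / sizeof(mounts[0]); i++)
        if (strcmp(mounts[i].mountpoint, path) == 0)
            return &mounts[i];
    return NULL;
}

/* The divert: if a dentry is marked as a mount point, continue the walk
 * from the mounted filesystem's root dentry instead of the dentry itself */
static struct dentry *follow_mount(struct dentry *d)
{
    if (d->d_flags & DCACHE_MOUNTED) {
        struct vfsmount *mnt = lookup_mnt(d->path);
        if (mnt)
            return mnt->mnt_root;
    }
    return d;
}

int main(void)
{
    struct dentry mntpt = { "/mnt/sdc", DCACHE_MOUNTED };
    /* resolving /mnt/sdc/test/1.txt first lands on /mnt/sdc ... */
    printf("walk continues from: %s\n", follow_mount(&mntpt)->path);
    return 0;
}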
Besides the vfsmount hash table, the kernel also maintains a tree of mount objects. This tree captures how the mounted filesystems relate to one another, and findmnt prints it:
[root@DanCentOS65 daniel]# findmnt
TARGET                       SOURCE         FSTYPE      OPTIONS
/                            /dev/sda1      ext4        rw,relatime,barrier=1,data=ordered
├─/proc                      proc           proc        rw,relatime
│ ├─/proc/bus/usb            /proc/bus/usb  usbfs       rw,relatime
│ └─/proc/sys/fs/binfmt_misc                binfmt_misc rw,relatime
├─/sys                       sysfs          sysfs       rw,relatime
├─/dev                       devtmpfs       devtmpfs    rw,relatime,size=3496016k,nr_inodes=874004,mode=755
│ ├─/dev/pts                 devpts         devpts      rw,relatime,gid=5,mode=620,ptmxmode=000
│ └─/dev/shm                 tmpfs          tmpfs       rw,relatime
├─/misc                      /etc/auto.misc autofs      rw,relatime,fd=7,pgrp=1503,timeout=300,minproto=5,maxproto=5,indirect
├─/net                       -hosts         autofs      rw,relatime,fd=13,pgrp=1503,timeout=300,minproto=5,maxproto=5,indirect
├─/mnt/resource              /dev/sdb1      ext4        rw,relatime,barrier=1,data=ordered
├─/mnt/sdd                   /dev/sdd1      ext4        rw,relatime,barrier=1,data=ordered
├─/mnt/sdc                   /dev/sdc1      ext4        rw,relatime,barrier=1,data=ordered
└─/mnt/sde                   /dev/sde1      ext3        rw,relatime,errors=continue,barrier=1,data=writeback
This tree gives an at-a-glance picture of how everything in the system is mounted.
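(As an aside: findmnt reads this information from /proc/self/mountinfo, and given a target, e.g. findmnt /mnt/sdc, it prints only the entry for that one mount point.)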
A small umount tip:
If umount fails with "device is busy", it can be resolved as follows:
[root@DanCentOS65 test]# umount /mnt/sdd
umount: /mnt/sdd: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
Use fuser to see which process is holding the device:
[root@DanCentOS65 test]# fuser -m -v /mnt/sdd
                     USER        PID ACCESS COMMAND
/mnt/sdd:            root       4255 ..c.. bash
Then kill the offending process with kill:
[root@DanCentOS65 test]# kill -9 4255
Killed
[root@DanCentOS65 daniel]# umount /mnt/sdd
[root@DanCentOS65 daniel]#
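Two shortcuts worth knowing (not part of the session above): fuser -km /mnt/sdd combines the two steps by killing every process using the mount, and umount -l performs a lazy unmount, detaching the mount point immediately and cleaning up once it is no longer busy. Killing with -9 is heavy-handed, so check what the processes are first.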
Support for other filesystem types:
Besides the ext2/ext3/ext4 filesystems covered in depth earlier, Linux supports many other filesystem types, for example:
Traditional filesystems: ext2, minix, MS-DOS, FAT (via the vfat module), iso9660 (CD-ROM), etc.
Journaling filesystems: ext3, ReiserFS, Windows' NTFS, IBM's JFS, SGI's XFS, etc.
Network filesystems: NFS, SMBFS, etc.
To see which filesystem types your Linux installation ships modules for, look in this directory:
[root@DanCentOS65 /]# ls -l /lib/modules/$(uname -r)/kernel/fs
total 132
drwxr-xr-x 2 root root  4096 Apr 18 05:57 autofs4
drwxr-xr-x 2 root root  4096 Apr 18 05:57 btrfs
drwxr-xr-x 2 root root  4096 Apr 18 05:57 cachefiles
drwxr-xr-x 2 root root  4096 Apr 18 05:57 cifs
drwxr-xr-x 2 root root  4096 Apr 18 05:57 configfs
drwxr-xr-x 2 root root  4096 Apr 18 05:57 cramfs
drwxr-xr-x 2 root root  4096 Apr 18 05:57 dlm
drwxr-xr-x 2 root root  4096 Apr 18 05:57 ecryptfs
drwxr-xr-x 2 root root  4096 Apr 18 05:57 exportfs
drwxr-xr-x 2 root root  4096 Apr 18 05:57 ext2
drwxr-xr-x 2 root root  4096 Apr 18 05:57 ext3
drwxr-xr-x 2 root root  4096 Apr 18 05:57 ext4
drwxr-xr-x 2 root root  4096 Apr 18 05:57 fat
drwxr-xr-x 2 root root  4096 Apr 18 05:57 fscache
drwxr-xr-x 2 root root  4096 Apr 18 05:57 fuse
drwxr-xr-x 2 root root  4096 Apr 18 05:57 gfs2
drwxr-xr-x 2 root root  4096 Apr 18 05:57 jbd
drwxr-xr-x 2 root root  4096 Apr 18 05:57 jbd2
drwxr-xr-x 2 root root  4096 Apr 18 05:57 jffs2
drwxr-xr-x 2 root root  4096 Apr 18 05:57 lockd
-rwxr--r-- 1 root root 19944 Apr 11 17:30 mbcache.ko
drwxr-xr-x 2 root root  4096 Apr 18 05:57 nfs
drwxr-xr-x 2 root root  4096 Apr 18 05:57 nfs_common
drwxr-xr-x 2 root root  4096 Apr 18 05:57 nfsd
drwxr-xr-x 2 root root  4096 Apr 18 05:57 nls
drwxr-xr-x 2 root root  4096 Apr 18 05:57 squashfs
drwxr-xr-x 2 root root  4096 Apr 18 05:57 ubifs
drwxr-xr-x 2 root root  4096 Apr 18 05:57 udf
drwxr-xr-x 2 root root  4096 Apr 18 05:57 xfs
The following command shows which filesystem types the VFS currently has loaded into memory:
[root@DanCentOS65 /]# cat /proc/filesystems
nodev   sysfs
nodev   rootfs
nodev   bdev
nodev   proc
nodev   cgroup
nodev   cpuset
nodev   tmpfs
nodev   devtmpfs
nodev   binfmt_misc
nodev   debugfs
nodev   securityfs
nodev   sockfs
nodev   usbfs
nodev   pipefs
nodev   anon_inodefs
nodev   inotifyfs
nodev   devpts
nodev   ramfs
nodev   hugetlbfs
        iso9660
nodev   pstore
nodev   mqueue
        ext4
nodev   autofs
        ext2
        ext3
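A quick way to ask the VFS which of these types a given path lives on is the statfs(2) system call: its f_type field holds the filesystem's magic number, which can be compared against the constants in <linux/magic.h> (ext2/ext3/ext4, for example, all report 0xEF53). A minimal sketch:

#include <stdio.h>
#include <sys/vfs.h>    /* statfs(2) */

int main(int argc, char **argv)
{
    struct statfs buf;
    const char *path = argc > 1 ? argv[1] : "/";

    if (statfs(path, &buf) != 0) {
        perror("statfs");
        return 1;
    }
    /* e.g. 0xef53 for ext2/3/4, 0x58465342 ("XFSB") for xfs */
    printf("%s: f_type = 0x%lx\n", path, (unsigned long)buf.f_type);
    return 0;
}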