Some unknown issue with my old EC2 instance means I can no longer SSH into it. So I created a new EBS volume from a snapshot of the old volume and tried to attach and mount it on a new instance. Here is what I did:
- Created a new volume from a snapshot of the old one.
- Created a new EC2 instance and attached the volume to it as /dev/xvdf (or /dev/sdf).
- SSHed into the instance and attempted to mount the old volume with:
$ sudo mkdir -m 000 /vol
$ sudo mount /dev/xvdf /vol
And the output was:
mount: block device /dev/xvdf is write-protected, mounting read-only
mount: you must specify the filesystem type
Now, I know I should specify the filesystem as ext4, but since the volume contains a lot of important data, I cannot format it with $ sudo mkfs -t ext4 /dev/xvdf. Still, I know of no other way of preserving the data and specifying the filesystem at the same time. I've searched a lot about it and I'm currently at a loss.
By the way, the mounting as 'read-only' also worries me, but I haven't looked into it yet since I can't mount the volume at all.
Thanks in advance!
Edit:
When I do sudo mount /dev/xvdf /vol -t ext4 (no formatting), I get:
mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
And dmesg | tail gives me:
[ 1433.217915] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.222107] FAT-fs (xvdf): bogus number of reserved sectors
[ 1433.226127] FAT-fs (xvdf): Can't find a valid FAT filesystem
[ 1433.260752] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.265563] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.270477] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.274549] FAT-fs (xvdf): bogus number of reserved sectors
[ 1433.277632] FAT-fs (xvdf): Can't find a valid FAT filesystem
[ 1433.306549] ISOFS: Unable to identify CD-ROM format.
[ 2373.694570] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
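Before retrying the mount, it can help to check what is actually on the device. A hedged diagnostic sketch (assuming the volume is attached as /dev/xvdf; adjust device names to yours):

```shell
# Show every block device with its partitions, filesystem type, and UUID:
sudo lsblk -f
# Inspect what is at the start of the disk and of its first partition:
sudo file -s /dev/xvdf    # "DOS/MBR boot sector" here suggests the disk is partitioned
sudo file -s /dev/xvdf1   # an ext4 volume reports "... ext4 filesystem data ..."
```

If the filesystem lives on a partition such as /dev/xvdf1 rather than on the bare disk, that partition is what must be passed to mount.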
6 Answers
#1
73
The One Liner
Use this command to mount it if your filesystem type is ext4:
sudo mount /dev/xvdf /vol -t ext4
Many people have success with the following (if disk is partitioned):
sudo mount /dev/xvdf1 /vol -t ext4
where:
- /dev/xvdf is changed to the EBS Volume device being mounted
- /vol is changed to the folder you want to mount to
- ext4 is the filesystem type of the volume being mounted
Common Mistakes
How To: List Attached Devices
Check your mount command for correct EBS Volume device names and filesystem types. The following will list them all:
sudo lsblk --output NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT,LABEL
If your EBS Volume displays with an attached partition, mount the partition; not the disk.
If it doesn't show at all, you didn't Attach your EBS Volume in the AWS web console.
Auto Remounting on Reboot
These devices become unmounted again if the EC2 Instance ever reboots.
A way to make them mount again upon startup is to edit the server file listed below and insert just the single mount command that you originally used.
/etc/rc.local
(Place your change above exit 0, the last line in this file.)
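For example, a minimal /etc/rc.local might look like this (a sketch only; the device, mount point, and filesystem type here match the commands above and should be adjusted to your setup):

```shell
#!/bin/sh -e
# /etc/rc.local -- executed at the end of boot.
# Re-mount the EBS volume; adjust device, mount point, and fs type to yours.
mount /dev/xvdf1 /vol -t ext4
exit 0
```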
#2
22
I noticed that for some reason the volume was located at /dev/xvdf1, not /dev/xvdf.
Using
sudo mount /dev/xvdf1 /vol -t ext4
worked like a charm.
#3
15
I encountered this problem too after adding a new 16GB volume and attaching it to an existing instance. First of all, you need to know what disks are present. Run:
sudo fdisk -l
You'll get output like the one shown below, detailing information about your disks (volumes):
Disk /dev/xvda: 12.9 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders, total 25165824 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/xvda1 * 16065 25157789 12570862+ 83 Linux
Disk /dev/xvdf: 17.2 GB, 17179869184 bytes
255 heads, 63 sectors/track, 2088 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/xvdf doesn't contain a valid partition table
As you can see, the newly added disk /dev/xvdf is present. To make it available you need to create a filesystem on it and mount it at a mount point. You can achieve that with the following commands:
sudo mkfs -t ext4 /dev/xvdf
Making a new filesystem clears everything in the volume, so only do this on a fresh volume without important data.
Then mount it, perhaps in a directory under the /mnt folder:
sudo mount /dev/xvdf /mnt/dir/
Confirm that you have mounted the volume to the instance by running:
df -h
This is what you should see:
Filesystem Size Used Avail Use% Mounted on
udev 486M 12K 486M 1% /dev
tmpfs 100M 400K 99M 1% /run
/dev/xvda1 12G 5.5G 5.7G 50% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 497M 0 497M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/xvdf 16G 44M 15G 1% /mnt/ebs
And that's it; the volume is attached to your existing instance and ready for use.
#4
11
I encountered this problem too, and I've figured it out now:
[ec2-user@ip-172-31-63-130 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 8G 0 disk
└─xvdf1 202:81 0 8G 0 part
You should mount the partition, /dev/xvdf1 (its TYPE is part), not the disk, /dev/xvdf (its TYPE is disk).
#5
0
You do not need to create a filesystem on a volume newly created from a snapshot. Simply attach the volume and mount it to the folder you want. I attached the new volume to the same location as the previously deleted volume and it worked fine.
[ec2-user@ip-x-x-x-x vol1]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 10G 0 disk /home/ec2-user/vol1
#6
0
I had a different issue. When I checked the dmesg logs, the problem was that the attached volume had the same UUID as the instance's existing root volume (both were root volumes). To fix this, I mounted it on an EC2 instance running a different flavor of Linux, and it worked.
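A hedged sketch of how such a UUID clash can be detected and, for an ext4 volume, repaired in place without moving to another instance (device names are examples):

```shell
# List the UUID of every attached filesystem; a clash with the root volume
# shows up as two entries sharing the same UUID:
sudo blkid
# For an ext4 volume, assign a fresh random UUID so it can be mounted
# alongside the root volume (run on the *new* volume, never the mounted root):
sudo tune2fs -U random /dev/xvdf1
# XFS volumes can instead be mounted once with -o nouuid, or relabeled with:
# sudo xfs_admin -U generate /dev/xvdf1
```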