mdadm is the Linux command for creating and managing software RAID arrays; it is a modal command, meaning its options are grouped into distinct operating modes. Because modern servers usually ship with hardware RAID controller cards, those cards have become inexpensive, and software RAID has drawbacks of its own (it is awkward to use for the boot partition, as the metadata warning in the output below shows, and it consumes CPU cycles that could otherwise serve applications), software RAID is rarely the right choice in production. It remains a good way to learn and understand RAID principles and management, so a detailed walkthrough follows:
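The modes used in this walkthrough, for quick orientation:

mdadm -C ...                  # Create: build a new array
mdadm -A ...                  # Assemble: restart an existing array from its members
mdadm -G ...                  # Grow: change the size or layout of an array
mdadm -S ...                  # stop (deactivate) an array
mdadm -D ...                  # print detailed information about an array
mdadm /dev/mdX -f|-r|-a ...   # Manage: fail, remove, or add member devices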
1.1 Creating a RAID array
mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb{1,2}         # member list via brace expansion
mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb1 /dev/sdc2   # or list each member explicitly (here partitions from two different disks)
mkfs.ext4 /dev/md0
Note: when formatting, you can pass the stride parameter via the -E option to specify how many filesystem blocks make up one RAID chunk, which improves software RAID performance to some extent. For example, with the default block size of 4k and the default chunk size of 64k, stride is 64k / 4k = 16; this saves the RAID layer from having to work out the stripe layout on every access, e.g.:
mkfs.ext4 -E stride=16 -b 4096 /dev/md0
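For parity RAID levels such as the RAID5 array created below, mke2fs also accepts a stripe-width extended option, which should equal stride times the number of data-bearing disks. A sketch for a 3-disk RAID5 (2 data disks per stripe) with the same 64k chunk and 4k block sizes:

# stride       = chunk size / block size = 64k / 4k = 16
# stripe-width = stride * data disks     = 16 * 2   = 32
mkfs.ext4 -b 4096 -E stride=16,stripe-width=32 /dev/md2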
Create a RAID1 array in the same way:
[root@localhost ~]# mdadm -C /dev/md1 -a yes -n 2 -l 1 /dev/sdb{5,6}
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@localhost ~]# mkfs.ext4 /dev/md1
Create a RAID5 array from three partitions:
[root@localhost ~]# mdadm -C /dev/md2 -a yes -l 5 -n 3 /dev/sdb{5,6,7}
mdadm: /dev/sdb5 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sun Jul 14 09:14:25 2013
mdadm: /dev/sdb6 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sun Jul 14 09:14:25 2013
mdadm: /dev/sdb7 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sun Jul 14 09:14:25 2013
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
[root@localhost ~]# mkfs.ext4 /dev/md2
Add /dev/sdb8 to the RAID5 array:
[root@localhost ~]# mdadm /dev/md2 -a /dev/sdb8
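Because /dev/md2 already has its full set of three active devices, the new member joins as a hot spare. In /proc/mdstat a spare is tagged with an (S) suffix; an illustrative line (device numbering may differ on your system):

md2 : active raid5 sdb8[3](S) sdb7[2] sdb6[1] sdb5[0]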
mdadm -D /dev/md#    # show detailed information for the given RAID device (replace # with the device number)
You can also get a summary of all arrays from /proc/mdstat:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md0 : active raid0 sdb2[1] sdb1[0]
4206592 blocks super 1.2 512k chunks
md1 : active raid1 sdb6[1] sdb5[0]
2103447 blocks super 1.2 [2/2] [UU]
unused devices: <none>
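During a resync or rebuild, /proc/mdstat also shows a progress bar; to follow it live, re-run the command periodically, for example:

watch -n 1 cat /proc/mdstat    # refresh every second, Ctrl-C to quit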
cat /proc/partitions    # check which partitions the kernel currently knows about
kpartx /dev/sdb    # or: partprobe /dev/sdb (re-read the partition table so the kernel sees new partitions)
#mkdir /md
Mount the RAID created above (here we mount md0):
#mount /dev/md0 /md
#vi /etc/fstab
Add the last line shown below:
#
# /etc/fstab
# Created by anaconda on Tue Apr 10 21:05:33 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=7d9131a9-9b81-4e04-8beb-f6c834e1ac41 / xfs defaults 0 0
UUID=10d14473-dfd4-4a5d-a4e9-bd66117b450c /boot xfs defaults 0 0
UUID=9ad8d627-7699-4b7a-8c70-f03fcc63180e swap swap defaults 0 0
#mount RAID
/dev/md0 /md ext4 defaults 0 0
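Note that md device numbering is not guaranteed to stay stable across reboots, so it is more robust to reference the filesystem UUID instead of /dev/md0: look it up with blkid and substitute it into the fstab line (the UUID below is a placeholder, not a real value):

blkid /dev/md0
UUID=<uuid-printed-by-blkid> /md ext4 defaults 0 0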
If you mount several arrays, add one line per array in the same format.
Simulate a failed member, remove it, then add a replacement:
mdadm /dev/md1 -f /dev/sdb5    # mark /dev/sdb5 as faulty
mdadm /dev/md1 -r /dev/sdb5    # remove the faulty device from the array
mdadm /dev/md1 -a /dev/sdb7    # add /dev/sdb7 as the replacement
Option: -S = --stop (deactivate an array)
mdadm -S /dev/md1
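Stopping an array does not erase it: the md superblocks stay on the member devices, and the array may be auto-assembled again at the next boot. To dismantle it permanently, wipe the superblocks as well; a sketch assuming md1 was built from sdb5 and sdb6:

mdadm -S /dev/md1
mdadm --zero-superblock /dev/sdb5 /dev/sdb6    # destroys the RAID metadata on the members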
Grow /dev/md2 from 3 to 4 active devices:
[root@localhost ~]# mdadm -G /dev/md2 -n 4
[root@localhost ~]# mdadm -D /dev/md2
…… (part of the output omitted) ……
Number Major Minor RaidDevice State
0 8 21 0 active sync /dev/sdb5
1 8 22 1 active sync /dev/sdb6
3 8 23 2 active sync /dev/sdb7
4 8 24 3 active sync /dev/sdb8
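Growing the array only enlarges the underlying block device; the filesystem on top has to be resized separately once the reshape finishes. A sketch, assuming the ext4 filesystem created on /dev/md2 above:

cat /proc/mdstat      # first wait until the reshape is no longer listed as in progress
resize2fs /dev/md2    # with no size argument, ext4 grows to fill the device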
Assemble a previously stopped array from its member devices:
mdadm -A /dev/md1 /dev/sdb5 /dev/sdb6
When mdadm runs it automatically reads /etc/mdadm.conf and tries to auto-assemble the arrays listed there, so after setting up RAID for the first time you can export the array information into /etc/mdadm.conf with:
[root@localhost ~]# mdadm -Ds >/etc/mdadm.conf
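The file then contains one ARRAY line per array. Using the name and UUID from the mdadm -D output shown further below, the entry for md1 would look roughly like this:

ARRAY /dev/md1 metadata=1.2 name=localhost.localdomain:1 UUID=401d70fc:1d8675af:2adb9bb4:0b9b37e6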
After replacing a failed disk, add the new member back into the array:
#mdadm /dev/md1 -a /dev/sdb1
Once this runs, the array automatically starts syncing data onto the newly added /dev/sdb1 from the surviving member /dev/sdc1. Watch the progress with mdadm -D /dev/md1:
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Thu Jan 25 07:15:06 2018
Raid Level : raid1
Array Size : 629014528 (599.88 GiB 644.11 GB)
Used Dev Size : 629014528 (599.88 GiB 644.11 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun May 27 07:04:57 2018
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Consistency Policy : unknown
Rebuild Status : 1% complete
Name : localhost.localdomain:1 (local to host localhost.localdomain)
UUID : 401d70fc:1d8675af:2adb9bb4:0b9b37e6
Events : 16931
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
2 8 17 1 spare rebuilding /dev/sdb1
[root@localhost ~]#
Wait until the rebuild reaches 100%; the recovery is then complete.
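If you would rather script the wait than re-run mdadm -D by hand, a minimal sketch that polls /proc/mdstat until no rebuild is in progress:

while grep -Eq 'recovery|resync' /proc/mdstat; do
    sleep 60    # check once a minute
done
echo "rebuild finished"

Source: https://www.leftso.com/article/359.html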