Let's run through an experiment on RHEL4:
The operating system is Red Hat Enterprise Linux 4 Update 3;
the kernel version is 2.6.9-34.EL;
besides the system disk, there are five 1GB SCSI disks. The whole system lives on the first disk (/dev/sda); of the five 1GB disks, one is held back for a later test, and the other four make up the RAID 5 array.
Software RAID on RHEL4u3 is implemented with the mdadm tool: a single program that makes creating and managing arrays convenient, and it is stable as well. The raidtools package used on earlier Linux systems was hard to maintain and limited in performance, and is no longer supported on RHEL4.
Implementation
Environment analysis: my server has four idle SCSI disks, which I plan to build the RAID 5 from. Three of them are the required active members; the extra one serves as a hot spare. When an active member fails, the spare is automatically added to the array and filled with data reconstructed from the parity.
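Before creating anything, it helps to sanity-check the expected capacity. This is a quick sketch using the member partition size that fdisk reports below: RAID 5 usable capacity is (active members − 1) × member size, and the hot spare contributes nothing until it replaces a failed member.

```shell
# Expected usable size of a 3-member RAID 5 built from 1GB partitions.
member_kb=1044193      # 1KB blocks per partition, as reported by fdisk below
active=3               # --raid-devices=3; the spare adds no capacity
usable_kb=$(( (active - 1) * member_kb ))
echo "expected usable size: ${usable_kb} KB"
# mkfs later reports 522048 4KB blocks = 2088192 KB; the small gap
# is md metadata and chunk-size rounding.
```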
Step 1: partition each disk and set the partition type to fd (Linux raid autodetect).
[root@localhost ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130):
Using default value 130
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Repeat the steps above for the remaining four disks.
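Repeating the interactive fdisk session by hand is tedious; the same result can be scripted with sfdisk. Since sfdisk rewrites partition tables, this sketch only prints the commands it would run; drop the outer echo to execute them for real (destructive!). The input `,,fd` means one partition covering the whole disk, type fd (Linux raid autodetect).

```shell
# Print (not run) the sfdisk command for each remaining disk.
for d in sdc sdd sde sdf; do
  echo "echo ',,fd' | sfdisk /dev/$d"
done
```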
Let's check the result:
[root@localhost ~]# fdisk -l
Disk /dev/sda: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 56 345397+ 82 Linux swap
/dev/sda3 57 261 1646662+ 83 Linux
Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 130 1044193+ fd Linux raid autodetect
Disk /dev/sdc: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 130 1044193+ fd Linux raid autodetect
Disk /dev/sdd: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 130 1044193+ fd Linux raid autodetect
Disk /dev/sde: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 130 1044193+ fd Linux raid autodetect
Disk /dev/sdf: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdf1 1 130 1044193+ fd Linux raid autodetect
The following single command creates our RAID array:
[root@localhost ~]# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1
pIII_sse : 2660.000 MB/sec
raid5: using function: pIII_sse (2660.000 MB/sec)
raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2
RAID5 conf printout:
--- rd:3 wd:2 fd:1
disk 0, o:1, dev:sdb1
disk 1, o:1, dev:sdc1
RAID5 conf printout:
--- rd:3 wd:2 fd:1
disk 0, o:1, dev:sdb1
disk 1, o:1, dev:sdc1
disk 2, o:1, dev:sdd1
RAID5 conf printout:
--- rd:3 wd:2 fd:1
disk 0, o:1, dev:sdb1
disk 1, o:1, dev:sdc1
disk 2, o:1, dev:sdd1
mdadm: array /dev/md0 started.
At this point the disks start churning away as the array syncs and the parity is written out. We can watch the progress in /proc/mdstat:
[root@localhost ~]# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat Fri May 26 10:46:29 2006
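The watch output refreshes every two seconds while the initial build runs. For scripting, the rebuild percentage can be pulled out with a one-line awk filter; the mdstat content below is an illustrative sample, not captured from this machine.

```shell
# Extract the rebuild progress field from sample /proc/mdstat content.
progress=$(awk '/resync|recovery/ {print $4}' <<'EOF'
Personalities : [raid5]
md0 : active raid5 sde1[3](S) sdd1[2] sdc1[1] sdb1[0]
      2088192 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [======>..............]  recovery = 23.5% (245760/1044096) finish=0.8min
EOF
)
echo "rebuild progress: $progress"
```

On the real machine, replace the here-document with `awk '...' /proc/mdstat`.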
Now let's put a filesystem on it, then mount and use it:
[root@localhost ~]# mkfs.ext3 /dev/md0
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
261120 inodes, 522048 blocks
26102 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
16320 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
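Once mkfs finishes, the array can be mounted with `mount /dev/md0 /mnt/raid` (the mount point /mnt/raid is an assumption; the original does not name one). To mount it at every boot, an /etc/fstab entry along these lines would work; the kernel auto-assembles the array at boot because its partitions are type fd:

```
/dev/md0    /mnt/raid    ext3    defaults    0 0
```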