While reinstalling an old server yesterday I found that its Intel hardware RAID controller was faulty: it could not detect all of the drives, even though the OS installer could see every one of them. There was a second problem too: the OS installed fine but would not boot afterwards; for some reason the BIOS could not boot the system from the disks. So the plan is to install the OS onto a USB drive, boot from that, build a software RAID 10 array out of the six drives, and mount it in the system.
Software RAID does not require identical disks, but drives of the same vendor, model, and size are strongly recommended. Why RAID 10 rather than RAID 0, RAID 1, or RAID 5? RAID 0 is too risky, RAID 1 has somewhat lower performance, and RAID 5 performs poorly under write-heavy workloads. RAID 10 seems to be the best choice for disk arrays today, and it is especially well suited as the local storage of a KVM/Xen/VMware virtualization host (if SAN and distributed storage are off the table).
This server has six identical drives. Create a single partition on each drive and set its partition type to Linux software RAID (type fd):
# fdisk /dev/sda
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-91201, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-91201, default 91201):
Using default value 91201
Command (m for help): p
Disk /dev/sda: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005c259
Device Boot Start End Blocks Id System
/dev/sda1 1 91201 732572001 83 Linux
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Following the /dev/sda example above, partition the remaining five drives sdc, sdd, sde, sdf, sdg the same way and change their partition types:
# fdisk /dev/sdc
...
# fdisk /dev/sdd
...
# fdisk /dev/sde
...
# fdisk /dev/sdf
...
# fdisk /dev/sdg
...
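Instead of repeating the interactive fdisk session five times, the partition table can also be cloned with sfdisk. This is a sketch, assuming (as here) that all six drives are the same size; it overwrites the target drives' partition tables:

```shell
# Dump /dev/sda's partition table and replay it onto each remaining drive.
# Destructive: the existing partition table on each target drive is lost.
for disk in sdc sdd sde sdf sdg; do
    sfdisk -d /dev/sda | sfdisk /dev/$disk
done
```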
With partitioning done, the RAID can be created. Build a RAID 10 array on the six equally sized partitions:
# mdadm --create /dev/md0 -v --raid-devices=6 --level=raid10 /dev/sda1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 732440576K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
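So that the array is assembled under a stable name at boot, it is worth recording it in mdadm's configuration file. A sketch, assuming a RHEL/CentOS-style layout where the file is /etc/mdadm.conf (on Debian/Ubuntu it lives at /etc/mdadm/mdadm.conf and the initramfs should be refreshed afterwards):

```shell
# Append the array definition so it is assembled as /dev/md0 at boot
# instead of getting a random name like /dev/md127.
mdadm --detail --scan >> /etc/mdadm.conf

# Debian/Ubuntu equivalent:
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# update-initramfs -u
```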
Watch the array being built (initialized); depending on disk size and speed, the whole process takes a few hours:
# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat Tue Feb 11 12:51:25 2014
Personalities : [raid10]
md0 : active raid10 sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sda1[0]
2197321728 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
[>....................] resync = 0.2% (5826816/2197321728) finish=278.9min speed=130948K/sec
unused devices: <none>
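If the rebuild is too slow, or conversely is starving production I/O, the kernel's md resync speed limits can be adjusted on the fly. A sketch; the values are in KB/s per device, and the 50000 figure below is just an illustrative choice:

```shell
# Current limits (KB/s per device)
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Raise the floor to speed up the initial build (example value)
echo 50000 > /proc/sys/dev/raid/speed_limit_min
```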
Once the array has finished initializing, create a partition and a filesystem on the md0 device; with a filesystem in place it can be mounted into the system:
# fdisk /dev/md0
# mkfs.ext4 /dev/md0p1
# mkdir /raid10
# mount /dev/md0p1 /raid10
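mkfs.ext4 can also be told about the RAID geometry so allocation aligns with the stripes. With the 512K chunk size above, 4K filesystem blocks, and 6 drives in near-2 RAID 10 (so 3 data-bearing disks), stride = 512/4 = 128 and stripe-width = 128 × 3 = 384. A sketch of the arithmetic; adjust the numbers to your own chunk and block size:

```shell
# stride       = chunk size / filesystem block size
# stripe-width = stride * number of data disks (6 drives / 2 copies = 3)
chunk_kb=512; block_kb=4; data_disks=3
stride=$((chunk_kb / block_kb))
stripe_width=$((stride * data_disks))
echo "stride=$stride stripe-width=$stripe_width"   # → stride=128 stripe-width=384

# Then, for example:
# mkfs.ext4 -E stride=128,stripe-width=384 /dev/md0p1
```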
Edit /etc/fstab so the filesystem is mounted automatically at every boot:
# vi /etc/fstab
...
/dev/md0p1 /raid10 ext4 noatime,rw 0 0
Using the device name /dev/md0p1 in /etc/fstab above is not a good idea: because of udev, this device name often changes after a reboot, so it is better to mount by UUID. Use the blkid command to find the UUID of the partition:
# blkid
...
/dev/md0p1: UUID="093e0605-1fa2-4279-99b2-746c70b78f1b" TYPE="ext4"
Then edit fstab accordingly to mount by UUID:
# vi /etc/fstab
...
#/dev/md0p1 /raid10 ext4 noatime,rw 0 0
UUID=093e0605-1fa2-4279-99b2-746c70b78f1b /raid10 ext4 noatime,rw 0 0
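Before rebooting, it is worth checking that the new fstab entry actually works:

```shell
# Remount from fstab; an error here means a bad entry
umount /raid10
mount -a
df -h /raid10
```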
Check the status of the RAID array:
# mdadm --query --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Feb 11 12:50:38 2014
Raid Level : raid10
Array Size : 2197321728 (2095.53 GiB 2250.06 GB)
Used Dev Size : 732440576 (698.51 GiB 750.02 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Tue Feb 11 18:48:10 2014
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : local:0 (local to host local)
UUID : e3044b6c:5ab972ea:8e742b70:3f766a11
Events : 70
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 8 65 3 active sync /dev/sde1
4 8 81 4 active sync /dev/sdf1
5 8 97 5 active sync /dev/sdg1
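When a drive eventually fails, the usual mdadm sequence is to fail, remove, and then add the replacement. A sketch, using /dev/sdc1 purely as an example member:

```shell
# Mark the member failed and pull it out of the array
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1

# ...swap the physical drive, partition it like the others (type fd)...

# Add the new partition; the array rebuilds onto it automatically
mdadm /dev/md0 --add /dev/sdc1
watch cat /proc/mdstat
```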