Degrade a RAID1 array by removing one disk, then rebuild it as RAID4; a RAID4 with only its two data disks present is effectively RAID0, so the RAID4 can then be converted down to RAID0. The full procedure log follows (summary at the end).
root@ns******:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda2[1] sdb2[0]
1952855040 blocks super 1.2 [2/2] [UU]
bitmap: 1/15 pages [4KB], 65536KB chunk
unused devices: <none>
root@ns******:~# mdadm /dev/md2 --fail /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md2
root@ns******:~# mdadm /dev/md2 --remove /dev/sdb2
mdadm: hot removed /dev/sdb2 from /dev/md2
root@ns******:~# wipefs -a /dev/sdb2
/dev/sdb2: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
root@ns******:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda2[1]
1952855040 blocks super 1.2 [2/1] [_U]
bitmap: 3/15 pages [12KB], 65536KB chunk
unused devices: <none>
root@ns******:~# mdadm --detail /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Mon Nov 22 15:19:28 2021
Raid Level : raid1
Array Size : 1952855040 (1862.39 GiB 1999.72 GB)
Used Dev Size : 1952855040 (1862.39 GiB 1999.72 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Nov 23 01:46:30 2021
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : md2
UUID : 6849a02d:97f671af:786a5197:14d8f4a8
Events : 20
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 2 1 active sync /dev/sda2
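As a quick sanity check (not part of the original transcript), mdadm's size line can be reproduced from the raw 1 KiB block count it reports:

```shell
# mdadm reports "1952855040 (1862.39 GiB 1999.72 GB)"; converting the
# 1 KiB block count to binary GiB and decimal GB reproduces both figures.
awk 'BEGIN { kib = 1952855040
  printf "%.2f GiB %.2f GB\n", kib / (1024 * 1024), kib * 1024 / 1e9 }'
```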
root@ns******:~# mdadm --grow /dev/md2 --level=0
mdadm: level of /dev/md2 changed to raid0
root@ns******:~# mdadm --misc --detail /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Mon Nov 22 15:19:28 2021
Raid Level : raid0
Array Size : 1952855040 (1862.39 GiB 1999.72 GB)
Raid Devices : 1
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Tue Nov 23 01:47:35 2021
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Chunk Size : 64K
Consistency Policy : none
Name : md2
UUID : 6849a02d:97f671af:786a5197:14d8f4a8
Events : 24
Number Major Minor RaidDevice State
1 8 2 0 active sync /dev/sda2
root@ns******:~# mdadm --add /dev/md2 /dev/sdb2
mdadm: level of /dev/md2 changed to raid4
mdadm: added /dev/sdb2
root@ns******:~# mdadm --misc --detail /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Mon Nov 22 15:19:28 2021
Raid Level : raid4
Array Size : 1952855040 (1862.39 GiB 1999.72 GB)
Used Dev Size : 1952855040 (1862.39 GiB 1999.72 GB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Nov 23 01:48:38 2021
State : active, FAILED, reshaping
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Chunk Size : 64K
Consistency Policy : resync
Reshape Status : 0% complete
Delta Devices : 1, (2->3)
Name : md2
UUID : 6849a02d:97f671af:786a5197:14d8f4a8
Events : 45
Number Major Minor RaidDevice State
1 8 2 0 active sync /dev/sda2
2 8 18 1 spare rebuilding /dev/sdb2
- 0 0 2 removed
root@ns******:~# watch cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid4 sdb2[2] sda2[1]
1952855040 blocks super 1.2 level 4, 64k chunk, algorithm 5 [3/2] [U__]
[>....................] reshape = 0.9% (19050868/1952855040) finish=750.7min speed=42927K/sec
unused devices: <none>
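The finish estimate in that mdstat line can be reproduced from the numbers it prints. This is a rough back-of-the-envelope check, so it can differ from the kernel's figure by a tenth of a minute:

```shell
# mdstat shows 19050868/1952855040 blocks done at 42927K/sec; remaining
# work divided by speed gives the ETA in minutes (~750.8, vs 750.7 shown).
awk 'BEGIN { done = 19050868; total = 1952855040; speed = 42927
  printf "finish=%.1fmin\n", (total - done) / speed / 60 }'
```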
The reshape grinds along; after a long wait and a reboot, the array comes up as RAID0.
root@ns******:~# mdadm --misc --detail /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Tue Nov 23 01:45:45 2021
Raid Level : raid0
Array Size : 3905710080 (3724.78 GiB 3999.45 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Nov 23 11:31:20 2021
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 64K
Consistency Policy : none
Name : md2
UUID : 305efdee:76ec5b85:d66afca1:002a4f1f
Events : 20170
Number Major Minor RaidDevice State
1 8 2 0 active sync /dev/sda2
2 8 18 1 active sync /dev/sdb2
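With both disks now active as data members of a RAID0, the usable capacity is simply double the single-disk size (figures from the transcript above):

```shell
# Two data disks in RAID0: array size = 2 x per-disk size, in 1 KiB blocks.
echo $(( 1952855040 * 2 ))   # matches the Array Size of 3905710080
```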
root@ns******:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 900K 3.2G 1% /run
/dev/md2 1.8T 1.2G 1.7T 1% /
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.2G 0 3.2G 0% /run/user/0
root@ns******:~# resize2fs /dev/md2
resize2fs 1.46.2 (28-Feb-2021)
Filesystem at /dev/md2 is mounted on /; on-line resizing required
old_desc_blocks = 117, new_desc_blocks = 233
The filesystem on /dev/md2 is now 976427520 (4k) blocks long.
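resize2fs reports the new size in 4 KiB filesystem blocks; converting back to the 1 KiB units mdadm uses confirms the filesystem now spans the whole array:

```shell
# 976427520 ext4 blocks x 4 KiB each = 3905710080 KiB, the md2 array size.
echo $(( 976427520 * 4 ))
```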
root@ns******:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 900K 3.2G 1% /run
/dev/md2 3.6T 1.2G 3.5T 1% /
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.2G 0 3.2G 0% /run/user/0
root@ns******:~#
Reposted from https://www.taterli.com/8394/
Summary:
Once the system is installed on RAID1, the whole conversion can also be performed with the following sequence:
cat /proc/mdstat
mdadm /dev/md2 --fail /dev/sdb2
mdadm /dev/md2 --remove /dev/sdb2
wipefs -a /dev/sdb2
mdadm --grow /dev/md2 --level=0
mdadm --grow /dev/md2 --level=0 --raid-devices=2 --add /dev/sdb2
watch cat /proc/mdstat   # wait for the reshape to finish
mdadm --misc --detail /dev/md2
resize2fs /dev/md2
df -h