# mdadm

Some tips and tricks regarding mdadm.
## raid 6 ioperf tests

### 4k random read write tests
```
fio --name TEST --filename=temp.file --rw=randrw --size=4g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=16 --runtime=60

Run status group 0 (all jobs):
   READ: bw=489KiB/s (500kB/s), 28.9KiB/s-32.5KiB/s (29.5kB/s-33.2kB/s), io=28.7MiB (30.1MB), run=60002-60062msec
  WRITE: bw=499KiB/s (511kB/s), 29.4KiB/s-33.2KiB/s (30.1kB/s-33.0kB/s), io=29.3MiB (30.7MB), run=60002-60062msec
```
### 128k random read write tests
```
fio --name TEST --filename=temp.file --rw=randrw --size=4g --io_size=10g --blocksize=128k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=16 --runtime=60

Run status group 0 (all jobs):
   READ: bw=16.5MiB/s (17.2MB/s), 1007KiB/s-1107KiB/s (1031kB/s-1133kB/s), io=988MiB (1036MB), run=60003-60043msec
  WRITE: bw=16.7MiB/s (17.6MB/s), 1017KiB/s-1126KiB/s (1042kB/s-1153kB/s), io=1005MiB (1054MB), run=60003-60043msec
```
### 4M random read write tests
```
fio --name TEST --filename=temp.file --rw=randrw --size=4g --io_size=10g --blocksize=4M --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=16 --runtime=60

Run status group 0 (all jobs):
   READ: bw=208MiB/s (218MB/s), 11.9MiB/s-14.1MiB/s (12.5MB/s-14.7MB/s), io=12.2GiB (13.1GB), run=60011-60081msec
  WRITE: bw=205MiB/s (215MB/s), 11.9MiB/s-13.8MiB/s (12.4MB/s-14.5MB/s), io=12.1GiB (12.9GB), run=60011-60081msec
```
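If you want to re-run the same comparison later, the three tests above can be driven from a small loop. This is just a minimal sketch reusing the exact fio flags from this section, with nothing new besides the loop and the grep filter on the summary lines:

```
# run the same randrw test at the three block sizes used above
# and keep only the summary READ/WRITE lines
for bs in 4k 128k 4M; do
    echo "=== blocksize ${bs} ==="
    fio --name TEST --filename=temp.file --rw=randrw --size=4g --io_size=10g \
        --blocksize=${bs} --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 \
        --numjobs=16 --runtime=60 | grep -E 'READ:|WRITE:'
done
```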
## Reconfigure to raid10
We have 14 free disks to use, so let's create a raid10 array with 10 disks and keep the remaining 4 as spare devices for auto-rebuild in case of failure.
```
mdadm -v --create /dev/md0 --level=raid10 --layout=f2 --raid-devices=10 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl
```
```
mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jun 22 12:20:31 2021
        Raid Level : raid10
        Array Size : 48831523840 (45.48 TiB 50.00 TB)
     Used Dev Size : 9766304768 (9.10 TiB 10.00 TB)
      Raid Devices : 10
     Total Devices : 10
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Jun 22 12:20:58 2021
             State : clean, resyncing
    Active Devices : 10
   Working Devices : 10
    Failed Devices : 0
     Spare Devices : 0

            Layout : far=2
        Chunk Size : 512K

Consistency Policy : bitmap

     Resync Status : 0% complete

              Name : storage02.rdu2.centos.org:0  (local to host storage02.rdu2.centos.org)
              UUID : 521a6cbb:cb76e0c1:dc7fe98c:fe1b0bec
            Events : 5

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       48        1      active sync   /dev/sdd
       2       8       64        2      active sync   /dev/sde
       3       8       80        3      active sync   /dev/sdf
       4       8       96        4      active sync   /dev/sdg
       5       8      112        5      active sync   /dev/sdh
       6       8      128        6      active sync   /dev/sdi
       7       8      144        7      active sync   /dev/sdj
       8       8      160        8      active sync   /dev/sdk
       9       8      176        9      active sync   /dev/sdl
```
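To have the array assembled automatically at boot, the usual step is to record it in the mdadm config file. A minimal sketch, assuming the EL-style /etc/mdadm.conf path (Debian-based systems use /etc/mdadm/mdadm.conf instead):

```
# append the ARRAY definition for /dev/md0 so it is assembled at boot
mdadm --detail --scan >> /etc/mdadm.conf
```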
Let's increase the raid array resync speed limits so the initial sync finishes faster:
```
md_device=md0
echo max > /sys/block/${md_device}/md/sync_max
echo 500000 > /sys/block/${md_device}/md/sync_speed_min
echo 500000 > /proc/sys/dev/raid/speed_limit_max
```
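The resync progress (and estimated finish time) can then be followed from /proc/mdstat; the 5 second interval below is arbitrary:

```
# refresh the resync progress every 5 seconds
watch -n 5 cat /proc/mdstat
```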
## Adding other disks as Spare disks in array
```
for i in m n o p ; do mdadm --add /dev/md0 /dev/sd${i} ; done
mdadm: added /dev/sdm
mdadm: added /dev/sdn
mdadm: added /dev/sdo
mdadm: added /dev/sdp
```
```
mdadm --detail /dev/md0 | grep Devices
      Raid Devices : 10
     Total Devices : 14
    Active Devices : 10
   Working Devices : 14
    Failed Devices : 0
     Spare Devices : 4
```
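To check that a spare really kicks in, a member can be marked as failed; only do this on a test array, as it triggers a full rebuild. /dev/sdc below is simply the first member from the layout shown earlier:

```
# mark one member as failed: one of the spares should start rebuilding immediately
mdadm --manage /dev/md0 --fail /dev/sdc
cat /proc/mdstat

# afterwards remove the failed device and add it back, where it becomes a spare again
mdadm --manage /dev/md0 --remove /dev/sdc
mdadm --manage /dev/md0 --add /dev/sdc
```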
## VolumeGroup create and LV/filesystem
Let's now create a vg_array volume group on top of the new /dev/md0 raid10 device:
```
pvcreate /dev/md0
vgcreate vg_array /dev/md0
```
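For the LV/filesystem part mentioned in the section title, a minimal sketch could look like the following; the lv_storage name, the 100%FREE sizing and the xfs filesystem are assumptions for illustration, not taken from the actual setup:

```
# hypothetical LV using the whole VG, formatted as xfs
lvcreate -l 100%FREE -n lv_storage vg_array
mkfs.xfs /dev/vg_array/lv_storage
mount /dev/vg_array/lv_storage /mnt
```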
## raid10 ioperf tests