RAID 5 is faster than RAID 1 (reading)
I've been using my two 500G drives in a mixed RAID 1/RAID 5 setup. Each drive is partitioned into three partitions -- 100M, 10G, and 460G. The first partition of every drive is in a RAID 1 -- this is my boot partition. The 10G partitions are also in a RAID 1 (the root partition). And the big partitions are assembled into a RAID 5. All was fine, and testing each RAID device with hdparm showed read performance equal to that of a single hard disk -- about 60MB/s.
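For reference, the test is just hdparm's buffered sequential read timing against each md device; the device names below (/dev/md0 through /dev/md2) are an assumption -- adjust them to whatever your arrays are called.

    # Buffered sequential read timing for each array (run as root)
    hdparm -t /dev/md0   # RAID 1 -- boot
    hdparm -t /dev/md1   # RAID 1 -- root
    hdparm -t /dev/md2   # RAID 5 -- data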
Today I added another disk. Same partitioning -- I grew the RAID 5 array to double its usable size, and I also added another device to the RAID 1 arrays so that there are now three mirrors. Curious as I am, I again tested the RAID devices with hdparm. The results were intriguing -- the RAID 5 array read at 120MB/s while the RAID 1 arrays were still stuck at 60MB/s. It is also interesting that when there were only two drives in the RAID 5 array, hdparm -t reported 60MB/s.
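Roughly, the expansion looks like this -- a sketch, assuming the new disk shows up as /dev/sdc and is partitioned the same way as the first two:

    # Add the new partitions to the existing arrays (they join as spares first)
    mdadm --add /dev/md0 /dev/sdc1
    mdadm --add /dev/md1 /dev/sdc2
    mdadm --add /dev/md2 /dev/sdc3

    # Turn the RAID 1 spares into active members (three-way mirrors)
    mdadm --grow /dev/md0 --raid-devices=3
    mdadm --grow /dev/md1 --raid-devices=3

    # Reshape the RAID 5 from two to three devices (doubles the usable space)
    mdadm --grow /dev/md2 --raid-devices=3

    # The reshape takes a while; progress shows up in /proc/mdstat
    cat /proc/mdstat

(After the RAID 5 reshape finishes, the filesystem on it still has to be grown separately before the new space is usable.)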
I figure that the 64k chunks of the RAID 5 array force reads to be spread across different drives, while in RAID 1, where the kernel is free to choose where to read from, it simply picks a single non-busy drive. Interleaving the reads on RAID 1 would have been nice, but I guess I'll figure that out some other time.
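For the record, the chunk size is easy to check -- something like the following (the exact output wording varies between mdadm versions):

    # RAID 5 chunk size as mdadm reports it
    mdadm --detail /dev/md2 | grep -i chunk

    # /proc/mdstat also shows the chunk size on the raid5 line
    cat /proc/mdstat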
This figure illustrates the read performance when reading from the first, second and third hard disk and then from the three RAID arrays in turn. It is interesting to see that when reading from md1 (the root partition, a RAID 1 array) the reads come from one drive at a time -- it starts reading from sdb and then switches over to sda. When reading from md2 (the RAID 5 array), however, we see concurrent reading at a constant speed from all drives.
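If you want to reproduce this kind of per-disk picture on the command line, here is a minimal sketch, assuming the same device names (iostat comes from the sysstat package):

    # In one terminal: per-device throughput, refreshed every second
    iostat -x sda sdb sdc 1

    # In another terminal: sequential reads from the RAID 1 root array...
    dd if=/dev/md1 of=/dev/null bs=1M count=4096 iflag=direct

    # ...and then from the RAID 5 array
    dd if=/dev/md2 of=/dev/null bs=1M count=4096 iflag=direct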
Well, it seems that this is common knowledge after all: http://marc.info/?l=linux-raid&m=119281351420650&w=2