#1
Max number of drives in RAID5 set
Hi all,

I have a new SATA tray for one of my SANs and was wondering if anyone has preferences (and why) as to the number of drives to use in each RAID5 set. I can either make a single RAID5 volume of 13 × 250 GB drives plus a hot spare and then carve that up into three or four virtual volumes, or I could create, say, three RAID5 volumes with one volume set on each. Obviously I lose more storage with three RAID5 volumes.

Is 13 drives in a RAID5 set too many? Is rebuild time a problem?

Thanks,
Steve
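For reference, the usable-capacity trade-off can be sketched with quick shell arithmetic. The 5 + 4 + 4 split below is an assumed layout for the three-set option, not anything Steve specified; RAID5 loses one drive's worth of capacity per set to parity:

```shell
drive_gb=250                       # each drive is 250 GB

# Single RAID5 of 13 drives (+ hot spare): one drive lost to parity
echo "$(( (13 - 1) * drive_gb )) GB usable"

# Three RAID5 sets (assumed 5 + 4 + 4 drives): one parity drive lost per set
echo "$(( ((5 - 1) + (4 - 1) + (4 - 1)) * drive_gb )) GB usable"
```

So the single large set yields 3000 GB usable versus 2500 GB for the three-set layout, which is the 500 GB Steve would "lose" by splitting.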
#2
Steve Christall wrote:
> Is 13 drives in a RAID5 set too many? Rebuild time a problem?

Hi Steve,

If you do not intend to separate volumes for different applications, one big volume should be OK, and you save on storage. Generally, though, small and application-specific volumes are better: backup/restore cycles, which are usually performed per volume, are shorter; rebuild time is less; and snapshots or mirroring are also easier.

Thanks,
Pras
#3
On Wed, 22 Dec 2004 12:50:53 -0000, "Steve Christall"
wrote:
> Is 13 drives in a RAID5 set too many? Rebuild time a problem?

Rebuild times are definitely a concern at 250 GB. SATA is not as bad, but ATA drives that size can take 24 hours to rebuild, and that's a long time to be vulnerable. Depending on the vendor, the array may do some slick things like copying all the viable data from the failing drive first and then reconstructing only what's missing. This saves loads of time but is still fairly uncommon.

Another problem with large drives like this is spindle performance: you want a lot of spindles, but the size of the volume is overkill (in a lot of cases, maybe not yours). My personal preference would be to have multiple RAID 5 sets and use an LVM (Logical Volume Manager) to make them all appear as one volume. That way you get the benefits of spindle performance, a larger volume size (though not as large as a single RAID 5 set), and extra protection against multi-drive failures.

~F
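On Linux, the multiple-RAID5-sets-under-LVM layout Faeandar describes might look something like the sketch below. The device names (/dev/sd[b-n]), array counts, and volume group name (vg_san) are all assumptions for illustration; this is not a tested recipe for any particular SAN:

```shell
# Build three small RAID5 sets from the 13 data drives, keeping one hot spare
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[f-i]
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[j-m] \
      --spare-devices=1 /dev/sdn

# Glue the three arrays together with LVM so they appear as one pool
pvcreate /dev/md0 /dev/md1 /dev/md2
vgcreate vg_san /dev/md0 /dev/md1 /dev/md2

# Carve a virtual volume out of the pool
lvcreate -L 500G -n lv_data vg_san
```

A nice property of this layout is the one Faeandar mentions: a second drive failure only kills the array if it lands in the same 4-drive set as the first, and each rebuild touches only that set's drives.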
#4
Faeandar writes:
> My personal preference would be to have multiple Raid 5 sets and use
> an LVM (Logical Volume Manager) to make them all seem like one volume.

I've been working with Linux software RAID on 400 GB ATA drives. The Linux md driver supports up to 27 disks in one array.

My co-worker found that rebuild times under the Linux 2.6 kernel are higher than they need to be. He was building a RAID 5 on nine EtherDrive storage blades and found that the per-blade I/O rate was only 1200 KB/s. At that rate, the RAID initialization was going to take days. We know 1200 KB/s is lower than it should be, so he checked the 2.6 kernel sources and thought he saw a problem with the way it determines whether devices are idle. He did this:

echo 100000 > /proc/sys/dev/raid/speed_limit_max
echo 100000 > /proc/sys/dev/raid/speed_limit_min

and the per-blade throughput went up to about 5300 KB/s, meaning the array could fully initialize in about 18 hours. I've been meaning to look at the md code myself, but I wonder if anybody else has noticed this when initializing software RAIDs. It might be that there's something different about the aoe block driver.

--
Ed L Cashin
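For anyone wanting to check this on their own box, the limits Ed tuned and the rebuild progress are both visible through the standard md proc interface (values in KB/s per device; setting them requires root):

```shell
# Inspect the current md resync speed limits
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Raise both limits so the rebuild is not throttled
echo 100000 > /proc/sys/dev/raid/speed_limit_min
echo 100000 > /proc/sys/dev/raid/speed_limit_max

# Watch rebuild progress and the current per-device speed
cat /proc/mdstat
```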