#11
A few questions before assembling Linux 7.5TB RAID 5 array
Steve Cousins wrote:
> OK. I've never used JFS. XFS has worked really well for us. One nice
> thing when testing different configurations is that the file system
> creates very quickly.

mkfs.jfs also runs very quickly. What takes a long time, and is of course filesystem-independent, is the RAID-creation process. Benchmarking multiple chunk sizes is going to be extremely time-consuming, alas.

> Another thing that I ran into is that if you ever want to do an
> xfs_check on a volume this big, it takes a lot of memory and/or swap
> space.

I appreciate the suggestion. The box will only have 2GB of RAM; it doesn't need any more for my purposes, but I'll be sure to give it lots of swap.

--
URL:http://www.pobox.com/~ylee/ PERTH
* Homemade 2.8TB RAID 5 storage array: URL:http://groups.google.ca/groups?selm=slrnd1g04a.5mt.ylee%40pobox.com
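The steps discussed above could be sketched roughly as follows. This is illustrative only: the device names (/dev/md0 for the finished array, /dev/sdq2 as a spare partition for swap) are hypothetical, and the commands need root.

```shell
# Create the filesystem on the finished RAID device.
# (mkfs.xfs and mkfs.jfs both complete quickly even on multi-TB volumes;
# it is the md RAID initial sync that takes hours.)
mkfs.xfs /dev/md0

# Give the box plenty of swap so a filesystem check on a volume this
# big doesn't exhaust the 2GB of RAM. /dev/sdq2 is a hypothetical
# spare partition; a swap file would also work.
mkswap /dev/sdq2
swapon /dev/sdq2

# Confirm the extra swap is active:
swapon -s
```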
#12
Guy Dawson wrote:
> Yeechang Lee wrote:
>> I'm shortly going to be setting up a Linux software RAID 5 array
>> using 16 500GB SATA drives with one HighPoint RocketRAID 2240 PCI-X
>> controller (i.e., the controller will be used for its 16 SATA ports,
>> not its "hardware" fakeraid).
>
> How long are you expecting a rebuild to take in the event of a disk
> failure? You may well be better off creating a bunch of smaller
> 5-disk RAID 5 arrays rather than one big one.
>
> An aside - we've just taken delivery of an EMC CX300 storage system.
> We've configured a RAID 5 array with 15 146GB Fibre Channel disks and
> a hot spare. We've just pulled one of the disks from the array and
> are watching the rebuild take place. I'll let you know how long it
> takes!

Well, the data was in long ago, but then I went on holiday. After pulling one 146GB disk from the 14-disk RAID 5 array, it took the CX300 35 minutes to bring the hot spare into the array. When I replaced the pulled drive, the CX300 took 10 minutes to rebuild the array so that the hot spare was spare again.

Guy
--
Guy Dawson   I.T. Manager   Crossflight Ltd
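For the Linux software-RAID setup being discussed, the trade-off Guy raises (one big array versus several smaller ones) could be sketched with mdadm. The device names here are hypothetical, and the commands are a sketch rather than a tested recipe:

```shell
# Hypothetical device names (/dev/sd[b-q] = the 16 SATA drives);
# mdadm must run as root, and --create destroys existing data.

# Option 1: one large 16-drive RAID 5. Maximum capacity, but long
# rebuilds and only one disk of redundancy across all 16 drives:
mdadm --create /dev/md0 --level=5 --raid-devices=16 /dev/sd[b-q]

# Option 2: three 5-disk RAID 5 arrays, leaving /dev/sdq unused as a
# spare (it can be shared between arrays via a spare-group entry in
# mdadm.conf). Rebuilds touch 5 disks instead of 16:
mdadm --create /dev/md1 --level=5 --raid-devices=5 /dev/sd[b-f]
mdadm --create /dev/md2 --level=5 --raid-devices=5 /dev/sd[g-k]
mdadm --create /dev/md3 --level=5 --raid-devices=5 /dev/sd[l-p]

# Rebuild/resync progress shows up here:
cat /proc/mdstat
```

The smaller-array layout trades about two drives of usable capacity for shorter rebuild windows and the ability to survive one failure per array rather than one failure total.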
#13
Guy Dawson wrote:
> [snip]
>
> Well, the data was in long ago but then I went on holiday. After
> pulling one 146GB disk from the 14-disk RAID 5 array it took the
> CX300 35 mins to bring the hot spare in to the array. When I
> replaced the pulled drive the CX300 took 10 mins to rebuild the
> array so that the hot spare was spare again.

35 minutes sounds way too short to me. We have Clariions with 5-disk RAID groups of 146GB drives, and they take longer than that. Clariion arrays do RAID rebuilds based on LUNs, so, for example, if you only had a 100GB LUN bound in that RAID group, that's all you rebuilt. The array knows not to bother rebuilding dead space where no LUNs are bound. If you had bound LUNs to fill that whole 14-disk RAID 5 array (~2TB), I suspect your rebuild would take considerably longer.
#14
Jon Metzger wrote:
> 35 minutes sounds way too short to me. We have Clariions with 5-disk
> RAID groups of 146GB drives and they take longer than that. Clariion
> arrays do RAID rebuilds based on LUNs, so for example if you only had
> a 100GB LUN bound in that RAID group that's all you rebuilt. The
> array knows not to bother to rebuild dead space where no LUNs are
> bound. If you had bound LUNs to fill that whole 14-disk RAID 5 array
> (~2TB) I suspect your rebuild would take considerably longer.

Ah. That does indeed change things. We had a 10GB LUN and a 250GB LUN on the 14-disk array at the time of the test.

Guy
--
Guy Dawson   I.T. Manager   Crossflight Ltd
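A back-of-the-envelope check of Jon's point, using the numbers from this exchange (260GB of bound LUNs rebuilt in 35 minutes) and assuming, purely for illustration, that rebuild time scales linearly with bound capacity:

```shell
#!/bin/sh
# Rough linear-scaling estimate; real rebuild rates also depend on
# host I/O load and array settings, so treat this as an upper bound
# on how misleading the 35-minute figure could be.
bound_gb=260      # the 10GB + 250GB LUNs bound during Guy's test
observed_min=35   # measured hot-spare rebuild time
full_gb=2000      # ~2TB if LUNs filled the 14-disk array

est_min=$(( observed_min * full_gb / bound_gb ))
echo "Estimated full-array rebuild: ${est_min} minutes"
```

With these inputs the estimate comes out around 269 minutes, roughly 4.5 hours, which supports Jon's suspicion that a fully bound array would rebuild considerably slower than the test suggested.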
Similar Threads
Thread | Thread Starter | Forum | Replies | Last Post |
A few questions before assembling Linux 7.5TB RAID 5 array | Yeechang Lee | Storage & Hardrives | 13 | January 4th 07 02:24 PM |
RAID 1 vs RAID 5 and to the bottom of it ! | John | Storage (alternative) | 12 | September 21st 06 10:55 PM |
How I built a 2.8TB RAID storage array | Yeechang Lee | Storage (alternative) | 42 | March 3rd 05 12:04 AM |
How to set up RAID 0+1 on P4C800E-DLX MB -using 4 SATA HDD's & 2 ATA133 HHD? | Data Wing | Asus Motherboards | 2 | June 5th 04 03:47 PM |
help with motherboard choice | S.Boardman | General | 30 | October 20th 03 10:23 PM |