#1
RAID Controller Failover
I'm an IT admin starting to look at building a SAN for my company, and I'm curious whether anyone out there can explain how some of the popular RAID vendors (e.g. EMC, Chaparral, Infortrend) handle failover, that is, resuming I/O with another RAID controller after one has failed.

I'm mostly interested in failover on the storage side (as opposed to the host side). Specifically, I'd like to know whether failover is generally accomplished by a surviving controller taking over the failed controller's (or failed port's) AL_PAs, or whether surviving controllers actually alias the failed controller's WWNs. Or is this something that's generally handled at the switch level? I'm trying to better understand how failover is accomplished transparently to the host.

Many thanks for any input regarding this.
#2
#3
On Fri, 07 Nov 2003 21:33:49 +0000, Steve Holly wrote:

> Specifically I'm interested in knowing if failover is generally
> accomplished by a surviving controller taking over the failed
> controller's (or failed port's) AL_PAs, or if surviving controllers
> actually alias the failed controller's WWNs?

I'm assuming that you are talking about failover within the same storage unit, and not between two physical units.

I'm not too familiar with how EMC does it, but most vendors (I'm sure not all; everyone does things differently) will present the controllers with one WWN, and the failover is completely transparent to the host.

Some vendors will instead present the controllers with separate WWNs, which relies on the host to fail over. This is handled much like a lost-disk/path failover, where each controller is its own path to the same disk. When that path is lost (or the controller dies), the software (the LVM or vendor multipathing software) fails the I/O over after a certain amount of time.

If you are talking about failover between two physical arrays (this would only happen under very strange circumstances), then it has to be handled by another piece of software: possibly a high-availability package, or the LVM, for the case where the disk along with all paths to it is lost. The software in that case is responsible for detecting the failure and switching to the secondary disk.

I hope this helps.

- Jake
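A rough sketch of the host-side failover Jake describes, where each controller is just another path to the same disk and the host retries on the survivor after a path error. All names here (`Path`, `MultipathDevice`, `read_block`) are illustrative, not any vendor's actual API:

```python
class PathFailedError(Exception):
    """Raised when an I/O fails on one controller path."""

class Path:
    """One route to a LUN through a single RAID controller."""
    def __init__(self, name, alive=True):
        self.name = name
        self.alive = alive

    def read_block(self, lba):
        if not self.alive:
            raise PathFailedError(self.name)
        return f"data@{lba} via {self.name}"

class MultipathDevice:
    """Presents one logical disk over several controller paths."""
    def __init__(self, paths):
        self.paths = list(paths)
        self.active = 0

    def read_block(self, lba):
        # Try each path in turn; on failure, fail over to the next.
        for _ in range(len(self.paths)):
            try:
                return self.paths[self.active].read_block(lba)
            except PathFailedError:
                self.active = (self.active + 1) % len(self.paths)
        raise IOError("all paths to LUN failed")

dev = MultipathDevice([Path("controller-A"), Path("controller-B")])
dev.paths[0].alive = False        # simulate controller A dying
print(dev.read_block(42))         # I/O resumes via controller-B
```

In the single-WWN design Jake mentions first, retry logic of this kind lives inside the array rather than on the host, which is what makes the failover transparent.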
#4
"canotto" wrote in message ...

> i'm not an IT manager but a raid 5 solution is imho the best!
>
> -- can8

If you want to save money on disks, yes. If the disk drive cost is negligible for you, then RAID 1+0 is better by far. RAID 4 and RAID 5 are very slow on writes.

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
http://www.storagecraft.com
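Maxim's trade-off can be put in numbers. A minimal sketch, assuming a hypothetical 8-drive shelf of 146 GB disks (an illustrative configuration, not anyone's quoted hardware):

```python
def usable_gb(level, disks, disk_gb):
    """Usable capacity for the same number of physical drives."""
    if level == "raid10":
        return disks // 2 * disk_gb   # half the drives hold mirror copies
    if level == "raid5":
        return (disks - 1) * disk_gb  # one drive's worth of parity
    raise ValueError(level)

for level in ("raid10", "raid5"):
    print(level, usable_gb(level, 8, 146), "GB usable from 8 x 146 GB")
```

RAID 5 buys back nearly a whole mirror's worth of capacity, which is exactly the "save money on disks" case; the write-speed cost is what the rest of the thread argues about.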
#5
On Sat, 08 Nov 2003 18:45:17 +0300, Maxim S. Shatskih wrote:

> If you want to save money on disks, yes. If the disk drive cost is
> negligible for you, then RAID 1+0 is better by far. RAID 4 and RAID 5
> are very slow on writes.

I've noticed that as the controllers get more advanced, the caching and other algorithms used minimize the actual write times that the host system sees. My tests on the recent HP equipment show that the difference in write throughput between RAID-1 and RAID-5 is within about 1 MB/s. I have a hard time believing that anyone would be driven away from RAID-5 due to performance factors on high-end equipment. I have the bonnie++ stats if you'd like to see them.

- Jake
#6
Jake Roersma wrote:

> My tests on the recent HP equipment show that the difference in write
> throughput between RAID-1 and RAID-5 is within about 1 MB/s. I have a
> hard time believing that anyone would be driven away from RAID-5 due
> to performance factors on high-end equipment.

Random small writes can still kill you. One write turns into read-read-write-write. Latency doubles, throughput is a quarter.

Thomas
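Thomas's read-read-write-write cycle can be sketched with XOR parity. This is a toy model of the parity math only (byte-sized "blocks", three disks), not real controller code:

```python
def raid5_small_write(old_data, old_parity, new_data):
    # Reads 1 and 2: fetch the old data block and the old parity block.
    # Then recompute parity without touching the other data disks:
    new_parity = old_parity ^ old_data ^ new_data
    # Writes 1 and 2: the new data block and the new parity block.
    return new_data, new_parity

# Three-disk example: parity covers two data disks, d0 ^ d1.
d0, d1 = 0b1010, 0b0110
parity = d0 ^ d1
new_d0, new_parity = raid5_small_write(d0, parity, 0b1111)
assert new_parity == new_d0 ^ d1   # parity still covers both disks
```

Four disk I/Os for one logical write is where the "throughput is a quarter" figure comes from.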
#7
Hi Jake,

> I have the bonnie++ stats if you'd like to see them.

I'd be interested. Would you mind sharing/posting them, along with the specs for the hardware you've used?

Many thanks in advance,
Jochen
#8
On Sun, 09 Nov 2003 10:58:51 +0100, Jochen Kaiser wrote:

> I'd be interested. Would you mind sharing/posting them, along with the
> specs for the hardware you've used?

Jochen,

Here is the link to my testing (sorry, due to the amount of information I had to leave it as HTML). As time progresses I will be adding more to it. If you need any more information on the hardware/software, please let me know.

I tried to be as scientific as possible: each reported figure is the average of three test runs. The tests were also done before the EVA 5000 was placed into production, so there was no other traffic on the controller and drives. All volumes were created with 80GB of space, so they should be spread out across the same number of disks. Note that RAID-1 here is actually RAID-1+0.

http://www.copiosus.net/bonnie/results.html

- Jake
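Jake's "average of three runs" methodology in miniature, with made-up throughput numbers (not his actual figures); reporting the spread alongside the mean also shows whether three runs were enough to trust the number:

```python
def mean(samples):
    return sum(samples) / len(samples)

# Hypothetical bonnie++ sequential-write results in MB/s, three runs each:
runs = {
    "raid1_seq_write": [118.0, 121.0, 119.0],
    "raid5_seq_write": [117.0, 120.0, 118.0],
}
for test, samples in runs.items():
    spread = max(samples) - min(samples)
    print(f"{test}: {mean(samples):.1f} MB/s "
          f"(spread {spread:.1f} over {len(samples)} runs)")
```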
#9
Jake,

> Here is the link to my testing ...
> http://www.copiosus.net/bonnie/results.html

Thank you very much. I wouldn't have thought that the performance of RAID 1+0 and RAID 5 would be that close, and the same is true for the disks: I would've expected the 10k disks to be much slower in random r/w than the 15k disks. Makes me rethink our database sizing concept.

BTW, have you ever tested RAID 1+0 vs. 5 with a lot of small r/w transactions (like typical OLTP RDBMS behavior)?

Thanks again,
Jochen
#10
"Jochen Kaiser" wrote in message ...

> I wouldn't have thought that the performance of RAID 1+0 and RAID 5
> would be that close, and the same is true for the disks. ...
> Makes me rethink our database sizing concept.

Well, to be precise, the performance of RAID-1 and RAID-5 (and the performance of disks) varies a lot more than those results indicate: among other things, the use of an extensive stable write-back cache and (on Linux) reiserfs smooths out a lot of this variation.

But it can't completely obscure the fundamentals. While per-operation latency on a lightly-loaded system may seem similar, when the going gets tough down at the disk level, for a given usable storage size RAID-1 offers nearly twice the streaming sequential read bandwidth and nearly twice the read IOPS that RAID-5 does. And for *truly* random small write operations (rather than operations like file creation/deletion, where much of the updating is concentrated in a small number of blocks, parent-directory updates for example), where all that the large write-back cache can do is allow the writes to be queue-optimized rather than coalesced, RAID-1 will out-perform RAID-5 by a factor of around 2.

> BTW, have you ever tested RAID 1+0 vs. 5 with a lot of small r/w
> transactions (like typical OLTP RDBMS behavior)?

That might well tend to highlight the differences better (as long as the database working set significantly exceeded the size of the cache), for the reasons noted above.

- bill
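Bill's "factor of around 2" falls straight out of the per-write I/O counts: a mirrored write costs 2 disk I/Os (one per copy), while a RAID-5 small write costs 4 (read data, read parity, write data, write parity). A back-of-the-envelope sketch, assuming a hypothetical 8-spindle array at roughly 120 random IOPS per disk:

```python
def random_write_iops(disks, per_disk_iops, write_penalty):
    """Array-level random small-write IOPS once the cache can no
    longer coalesce writes, only queue-optimize them."""
    return disks * per_disk_iops // write_penalty

DISKS, DISK_IOPS = 8, 120   # assumed figures, not measured ones
raid10 = random_write_iops(DISKS, DISK_IOPS, 2)  # mirror: 2 I/Os per write
raid5  = random_write_iops(DISKS, DISK_IOPS, 4)  # parity RMW: 4 I/Os
print(f"RAID-1+0: {raid10} IOPS, RAID-5: {raid5} IOPS")
```

This ignores reads and cache effects entirely; it only shows why the gap reappears once the working set outgrows the write-back cache.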