#1
there has to be a better way, or we are stuffed!
Company I work for has a SAN of about 50 TB. It is configured as 4 logical disks, so when there is a drive failure, that logical disk is out of action while the RAID rebuilds itself. "Hot swapping" doesn't help much when a quarter of the system is paralysed for hours afterwards. About once a month a hard drive fails and has to be replaced, triggering the fiasco all over again. Then the system engineer says it would be a good idea to run complete diagnostics. That means taking everything offline for 172,800 seconds (48 hours) = gazillions of dollars lost.
#2
On May 23, 6:55 pm, wrote:
> Company I work for has a SAN of about 50 TB. It is configured as 4 logical disks. [snip] That means taking all offline for 172800 seconds = gazillions of dollars lost.

How many physical disks are we talking about? Even if there are a lot, losing one a month seems like a really high failure rate to me. What type of drives are they? Also, I don't know who your RAID vendor is, but with RAID 5 you should be able to continue writing to the LUN even with a failed disk. The LUN should be in a critical state but still accessible, just at a slower speed (the overhead of the rebuild process). I'd talk to my RAID vendor about that massive failure rate. What you could also do is make smaller LUNs; that way, when a drive fails, you don't take as much of a hit: maybe 1/15 of your storage is degraded instead of 1/4.

Just some suggestions.

AJ
#4
"Nik Simpson" wrote in message
> 2. Failure rates seem very high

It all depends on the number of drives involved. 50 TB of RAID capacity might be about 60 TB native. If that's made up of 146 GB disks, you'd be talking about 400+ drives. With a 3% annual failure rate (which is not unusual), that would be about 12 failures per year.

Rob
#5
Rob Turk wrote:
"Nik Simpson" wrote in message .. . wrote: 2. Failure rates seem very high It all depends on the number of drives involved. 50TB RAID capacity might be about 60TB native. If that's made up of 146GB disks you'd be talking about 400+ drives. With 3% annual failure rate (which is not unusual) that would be about 12 per year. Rob True, I guess I was just assuming larger drives, but good point, OP needs to tell us a little more about the configuration. -- Nik Simpson |
#6
On May 24, 8:55 am, wrote:
> Company I work for has a SAN of about 50 TB. It is configured as 4 logical disks. [snip]

Either the storage system is configured very badly or it's a very poor design. A single disk failure should not have such a significant impact on performance. You should be able to replace the drive and let the system rebuild it in the background while still allowing user/application access to the logical disks. For that quantity of storage, and given the cost of downtime you mention, this should be a highly available, enterprise-level solution. If that's what you've paid for, it certainly sounds like that's not what you've got. Care to elaborate on what systems you're actually running?

Graeme