Raid5 vs Raid6 under degraded mode



 
 
  #1
May 29th 09, 09:46 AM, posted to comp.arch.storage
howa

Hey,

Assume a system with nine 1TB SATA disks and one hot spare.

They are to be configured as either RAID-5 or RAID-6.

If one of the disks fails, which setup will perform better in degraded
mode: RAID-5 or RAID-6? And by roughly what percentage?

Thanks.

  #2
May 29th 09, 04:35 PM, posted to comp.arch.storage
Ed Wilts

On May 29, 3:46 am, howa wrote:
> Assume a system with nine 1TB SATA disks and one hot spare.
>
> They are to be configured as either RAID-5 or RAID-6.
>
> If one of the disks fails, which setup will perform better in degraded
> mode: RAID-5 or RAID-6? And by roughly what percentage?


It's not a performance question - it's an availability question. If
you configure those volumes as raid5, you are almost guaranteed to
lose your entire array if a drive fails. The math does not work in
your favor: the odds are extremely high that, under the heavy I/O load
of a raid5 rebuild, you will hit an unrecoverable read error on one of
the remaining members. At that point, your raid5 array will fail.

Don't think it won't happen to you - I've seen double-disk failures
happen on raid5 sets too often for it to be funny.

Go with raid6. If you want performance, create 4 mirror sets instead
of raid6.
  #3
May 30th 09, 01:21 AM, posted to comp.arch.storage
Bill Todd

Ed Wilts wrote:
> On May 29, 3:46 am, howa wrote:
>> Assume a system with nine 1TB SATA disks and one hot spare.
>>
>> They are to be configured as either RAID-5 or RAID-6.
>>
>> If one of the disks fails, which setup will perform better in degraded
>> mode: RAID-5 or RAID-6? And by roughly what percentage?


> It's not a performance question - it's an availability question.


No, it's not. You just didn't choose to answer the question that was asked.

The answer to that question is that the RAID-5 and RAID-6 options should
perform about the same after the disk failure since both will likely use
the same algorithm to reconstruct data when it's needed and also to
rebuild the failed disk's data on the hot spare (though the latter is a
bit more complex for RAID-6 and thus might consume marginally more
resources).
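
For a feel of what that reconstruction amounts to, here is a rough Python
sketch (purely illustrative, not any array's actual code) of a degraded
read using plain XOR parity; a single-disk failure in RAID-6 is typically
handled the same way, using its XOR parity:

# Rough illustration of a degraded read: the missing chunk is the XOR of
# the surviving data chunks and the parity chunk for that stripe.

def xor_blocks(blocks):
    """XOR together a list of equal-length byte strings."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def degraded_read(chunks, parity, failed_index):
    """Rebuild the chunk that lived on the failed disk."""
    survivors = [c for i, c in enumerate(chunks) if i != failed_index]
    return xor_blocks(survivors + [parity])

# Tiny example: three data disks plus parity, disk 1 has failed.
d0, d1, d2 = b"\x01\x02", b"\x0f\x0f", b"\x10\x20"
parity = xor_blocks([d0, d1, d2])
assert degraded_read([d0, None, d2], parity, failed_index=1) == d1

The rebuild onto the hot spare is essentially the same computation run
across every stripe, which is why the degraded-mode cost is so similar
for the two layouts.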

> If you configure those volumes as raid5, you are almost guaranteed to
> lose your entire array if a drive fails.


Not unless the array is brain-damaged.

> The math does not work in your favor: the odds are extremely high
> that, under the heavy I/O load of a raid5 rebuild, you will hit an
> unrecoverable read error on one of the remaining members.


That is correct.
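
For anyone who wants to see the arithmetic, a back-of-envelope version
(assuming the 1-error-per-10^14-bits unrecoverable read error rate
commonly quoted for consumer SATA drives, and treating errors as
independent):

# Odds of hitting at least one unrecoverable read error while reading all
# eight surviving 1 TB disks during a RAID-5 rebuild. The 1e-14 per-bit
# error rate is an assumed, spec-sheet-style figure; real drives vary.
import math

surviving_disks = 8
bits_per_disk = 1e12 * 8                 # 1 TB = 1e12 bytes = 8e12 bits
ure_per_bit = 1e-14

expected_errors = surviving_disks * bits_per_disk * ure_per_bit   # ~0.64
p_at_least_one = 1 - math.exp(-expected_errors)                   # ~47%

print(f"expected UREs during rebuild: {expected_errors:.2f}")
print(f"chance of at least one URE:   {p_at_least_one:.0%}")

Call it roughly a coin flip at this capacity and error rate.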

> At that point, your raid5 array will fail.


That is not correct unless the array is (as I said above) brain-damaged.
Rather, the stripe(s) containing an unreadable sector will be unable
to reconstruct the sector from the failed disk and thus at most two
sectors in the stripe (only one if the other was the parity sector) will
be lost.

A single disk doesn't take all its marbles and go home when a sector
becomes unreadable; it just soldiers on without it (reporting the error,
of course, and usually revectoring the damaged sector to a healthy one
for future use). Why should an array act differently? It should report
the sector(s) as lost, create a healthy stripe out of the valid
remainder of the damaged stripe, and revector subsequent requests
transparently to that new location.
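
As a conceptual sketch (no particular product's implementation) of that
behaviour, the damaged stripe loses only the unreadable block and
everything else is salvaged and remapped:

# Conceptual sketch: a URE during degraded operation costs at most the
# affected block(s); the readable remainder of the stripe is kept and
# remapped rather than the whole array being failed.

LOST = None   # stands in for a sector that returned an unrecoverable error

def salvage_stripe(stripe_blocks):
    """Split a damaged stripe into (salvaged blocks, indexes reported lost)."""
    salvaged, lost = [], []
    for i, block in enumerate(stripe_blocks):
        if block is LOST:
            lost.append(i)            # report this block as unrecoverable
        else:
            salvaged.append(block)    # keep and later revector everything else
    return salvaged, lost

# One sector in a four-chunk stripe hit a URE during the rebuild:
salvaged, lost = salvage_stripe([b"A", b"B", LOST, b"D"])
assert salvaged == [b"A", b"B", b"D"] and lost == [2]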


> Don't think it won't happen to you - I've seen double-disk failures
> happen on raid5 sets too often for it to be funny.


Two whole-disk failures very close together in a RAID-5 set of this size
are certainly possible but of very low probability. The most likely
cause would be some external environmental catastrophe of a nature which
could quite possibly affect more than two disks anyway (in which case
RAID-6 wouldn't have bought you anything). The next most likely cause
would be some common flaw in the batch of disks being used, but even
then the second disk would have to fail so soon that the hot spare would
not have been rebuilt yet - a very unlikely occurrence.
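
To put a rough number on "very unlikely", assuming independent failures,
an annualised failure rate of around 3% per drive, and a one-day rebuild
window (all assumed figures for illustration):

# Rough odds of a second whole-disk failure among the eight survivors
# before the hot spare has been rebuilt. The 3% AFR and 24-hour rebuild
# window are assumptions, not measured values.
surviving_disks = 8
annual_failure_rate = 0.03
rebuild_hours = 24.0

hourly_rate = annual_failure_rate / (365 * 24)
p_second_failure = 1 - (1 - hourly_rate * rebuild_hours) ** surviving_disks

print(f"chance of a second whole-disk failure: {p_second_failure:.2%}")   # ~0.07%

Compare that with the roughly even odds of an unreadable sector above,
and the two risks are clearly in completely different classes.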

So if the original poster can indeed tolerate very minor data loss in
the array should a disk fail (the probability of which is significant
over a period of years), RAID-5 could be a very reasonable option. The
likelihood that such minor data loss would happen to fall in a critical
area of the file system, such that far more data effectively became
inaccessible, is far smaller than the likelihood that some external
problem would trash his/her data.

- bill
  #4
May 31st 09, 02:02 PM, posted to comp.arch.storage
Gary R. Schmidt

Bill Todd wrote:
[SNIP]
> Two whole-disk failures very close together in a RAID-5 set of this size
> are certainly possible but of very low probability. The most likely
> cause would be some external environmental catastrophe of a nature which
> could quite possibly affect more than two disks anyway (in which case
> RAID-6 wouldn't have bought you anything). The next most likely cause
> would be some common flaw in the batch of disks being used, but even
> then the second disk would have to fail so soon that the hot spare would
> not have been rebuilt yet - a very unlikely occurrence.


An example - disks in a RAID set being on a single controller which is
starting to fail. Been there, done that, replaced controller and
restored from backup.

But this sort of problem does not often show up with enterprise-level
hardware; it is seen more with consumer-grade equipment being pushed
harder than it was costed and designed for. And as RAID is increasingly
targeted as a consumer-level data storage solution, it will become more
common.

(Not disputing any of the points you made, just expanding on this one.)

Cheers,
Gary B-)
  #5
June 4th 09, 02:42 PM, posted to comp.arch.storage
John S.

On May 29, 3:35 pm, Ed Wilts wrote:
> On May 29, 3:46 am, howa wrote:
>> Assume a system with nine 1TB SATA disks and one hot spare.
>>
>> They are to be configured as either RAID-5 or RAID-6.
>>
>> If one of the disks fails, which setup will perform better in degraded
>> mode: RAID-5 or RAID-6? And by roughly what percentage?
>
> It's not a performance question - it's an availability question. If
> you configure those volumes as raid5, you are almost guaranteed to
> lose your entire array if a drive fails. The math does not work in
> your favor: the odds are extremely high that, under the heavy I/O load
> of a raid5 rebuild, you will hit an unrecoverable read error on one of
> the remaining members. At that point, your raid5 array will fail.
>
> Don't think it won't happen to you - I've seen double-disk failures
> happen on raid5 sets too often for it to be funny.
>
> Go with raid6. If you want performance, create 4 mirror sets instead
> of raid6.


Guess you need to buy better disk arrays....


The disk arrays we use are constantly doing scrubs, preemptive
replacements, etc. I have not had a disk array choke on a RAID-5 in
YEARS... hmmm... about the same time we STOPPED using low end, bargain
priced arrays...
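
The point of the scrubbing, loosely sketched below (a toy model, not any
vendor's code), is to find latent unreadable sectors while the redundancy
to repair them still exists, instead of discovering them in the middle of
a rebuild:

# Toy model of a background scrub: read every chunk while the array is
# healthy, and rewrite anything unreadable from the rest of its stripe,
# so latent sector errors are cleared before a rebuild ever needs them.

def rebuild_from_stripe(stripe, bad_index):
    """Toy reconstruction: XOR the readable chunks (last chunk is parity)."""
    out = 0
    for i, chunk in enumerate(stripe):
        if i != bad_index:
            out ^= chunk
    return out

def scrub_pass(stripes):
    repaired = 0
    for stripe in stripes:
        for i, chunk in enumerate(stripe):
            if chunk is None:                       # latent unreadable sector
                stripe[i] = rebuild_from_stripe(stripe, i)
                repaired += 1
    return repaired

# One stripe of small integer "chunks" with XOR parity; chunk 1 has gone bad.
stripe = [5, None, 7, 5 ^ 3 ^ 7]                    # data was [5, 3, 7]
assert scrub_pass([stripe]) == 1 and stripe[1] == 3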
 



