HardwareBanter - a computer components & hardware forum


Raid 6 in use? - Reactivation



 
 
#1 - May 19th 05, 11:01 AM - albatros66

In July 2004 there was a thread on comp.arch.storage about this - see
e.g.:

Previous thread on "Raid 6 in use":
http://groups.google.com/groups?threadm=GoudncugpcwXwmnd4p2dnA%40csd.net&rnum=32&prev=/groups%3Fq%3DRaid%2B6%26start%3D30%26hl%3Dpl%26lr%3D%26group%3Dcomp.arch.storage%26scoring%3Dd%26selm%3DGoudncugpcwXwmnd4p2dnA%2540csd.net%26rnum%3D32

Unfortunately it turned up not a single report of real use
(experimental data or a case study), so I'd like to ask again. Any
non-theoretical opinions, expertise, success (or failure) stories,
etc.?
It seems that more and more low-cost SATA-to-SCSI/FC boxes are
offering this feature ...
And recently I even found Intel people saying that RAID 6 will
replace RAID 5:

Intel goes for RAID 6:
http://www.intel.com/technology/magazine/computing/RAID-6-0505.htm

So please share your EXPERIENCE!

al
#2 - May 19th 05, 06:21 PM - _firstname_@lr_dot_los-gatos_dot_ca.us

In article ,
albatros66 wrote:

> [original post quoted in full - snipped]


Zeroest: In the good old days the RAB (RAID Advisory Board)
standardized the use of the terms RAID-1 through RAID-5, with RAID-0
having become an accepted term for non-redundant layouts (striping).
Then it standardized RAID-6 as the accepted term for a variant of
RAID-5 with double parity. The problem is that there are several
different ways to implement RAID-6, and they are highly
non-equivalent. The ones commonly labelled RAID-6 seem to be either
PQ-parity or the EVENODD scheme described in the Menon/BBB papers
(the three other authors all have names that start with B).
Furthermore, today there are many more interesting redundancy schemes
with similar or better space efficiency than RAID-6, and similar or
better other properties (real-life resiliency to correlated failures,
performance, difficulty of implementation). So while I agree that
multi-failure-redundant data layouts are the thing of the future, it
is not clear that the thing traditionally called "RAID-6" will have
significant impact.

(Side remark: How many oldies remember the good folks at Storage
Computer and their attempt to make RAID-7 a trademark for RAID-4 with
cache? Their lawsuit against Hitachi, claiming that they had a patent
on all parity-based RAID? They seem to have gone under, which would
be well deserved. They did act as if they were a bunch of thieves,
liars and crooks, but there may be more to the story.)

First: I would guess that those of us who work for a corporation
that builds RAID arrays will not discuss our experiences in public,
other than in oblique references. The reasons are obvious: The
implementation techniques, target markets, and performance/cost
tradeoffs between RAID-n (for various values of n, not limited to
1...6) are difficult, multi-faceted, and very important trade secrets.

Second: The academic literature has pretty much left research on data
layouts as trivial as RAID-n behind. The thrust of research today is
in more interestingly complex data layouts (for an example of an
interesting, complex, and probably pointless system, see the
Oceanstore project at Berkeley). This means that looking at the
published research literature will not give too many clues on what is
really implemented in the field in disk arrays. Still, a good study of
data layout papers by authors with an industry affiliation will give
some insights.

Third: Without wanting to name names, I happen to know that there is
at least one product line from a major vendor out there (the product
may have been cancelled in the meantime, it existed a few years ago)
that uses a traditional RAID-6 layout (either PQ-parity or EVENODD,
don't remember). But for marketing reasons, the RAID layout was not
called RAID-6, because the term "RAID-5" has such a strong brand
recognition (SAN administrators "know" that RAID-0 means no
fault tolerance, RAID-1 means fault tolerance with good performance
but at a high cost, and that RAID-5 is a compromise between cost and
performance suitable for some workloads; all of these statements are
oversimplifications). Therefore, the product was marketed as
something like "enhanced RAID-5" or "RAID-5++". The marketing team
feared that calling it "RAID-6" would scare off customers, who are
often quite innovation-shy.

--
The address in the header is invalid for obvious reasons. Please
reconstruct the address from the information below (look for _).
Ralph Becker-Szendy
#3 - May 19th 05, 09:18 PM - Joshua Baker-LePain

In article , albatros66 wrote:

> Unfortunately it turned up not a single report of real use
> (experimental data or a case study), so I'd like to ask again. Any
> non-theoretical opinions, expertise, success (or failure) stories,
> etc.?

Probably not what you're looking for, but search the archives of
either linux-kernel or linux-raid. The 2.6 kernel has a software
implementation of RAID 6 that I do believe some folks are using
in production. Also, for an internal product, the new Areca SATA
RAID controller claims to do RAID 6.

--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
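
For anyone who wants to poke at the md implementation Joshua
mentions, creating a RAID 6 set looks roughly like this (device names
are illustrative, and you need a 2.6 kernel with the raid6
personality plus a reasonably recent mdadm):

    mdadm --create /dev/md0 --level=6 --raid-devices=4 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    cat /proc/mdstat    # watch the initial sync

Four drives is the minimum: two drives' worth of capacity goes to the
two parities.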
#4 - May 20th 05, 11:15 PM - Curious George

On Thu, 19 May 2005 17:21:16 -0000,
_firstname_@lr_dot_los-gatos_dot_ca.us wrote:

> [snip]
>> So please share your EXPERIENCE!
> [snip]

I can only share the obvious generalization that RAID 6 adds
complexity and parity overhead to RAID 5 - which carries a potential,
and often real, penalty, especially during degradation/rebuilding.
The quality of the implementation (whose details, as was noted, are
usually obscured) would seem quite important. IMHO the business case
is very often not as clear-cut as with the other currently viable
levels. I can't see the case for it at all in the low-end enthusiast
market - where it is presently being heavily hyped.

> Third: Without wanting to name names, I happen to know that there is
> at least one product line from a major vendor out there [...]
> Therefore, the product was marketed as something like "enhanced
> RAID-5" or "RAID-5++". [snip]


The one with the "E"s (RAID 5E) is still, and will continue to be,
supported for some time; only "EE" (RAID 5EE) is replacing "E".
#5 - May 21st 05, 02:20 AM - Faeandar

On Fri, 20 May 2005 22:15:12 GMT, Curious George wrote:

> [snip]
>
> I can only share the obvious generalization that RAID 6 adds
> complexity and parity overhead to RAID 5 [...] I can't see the case
> for it at all in the low-end enthusiast market - where it is
> presently being heavily hyped.


RAID 6 for the low-end enthusiast market? Sheesh, that *is* hype.

I don't think there is an actual standard definition of RAID 6 yet,
but I haven't looked at the standards either.


> Third: Without wanting to name names, I happen to know that there is
> at least one product line from a major vendor out there [...]
> Therefore, the product was marketed as something like "enhanced
> RAID-5" or "RAID-5++". [snip]


Some vendors (NetApp) are calling it RAID-DP - not double parity but
rather diagonal parity. Don't ask me how it works, because I have no
clue; I just know that some pretty smart people say it's far safer
than non-DP parity RAID.

~F

#6 - May 21st 05, 05:10 AM - _firstname_@lr_dot_los-gatos_dot_ca.us

In article ,
Faeandar wrote:

> Some vendors (NetApp) are calling it RAID-DP [...] some pretty smart
> people say it's far safer than non-DP parity RAID.


I think a few people from NetApp gave a talk about a new parity
scheme at one of the USENIX FAST (File And Storage Technologies)
conferences in San Francisco a little while ago. The only thing I
remember is that it was either early in the morning or right after
lunch, so I had a hard time staying awake, but the talk seemed very
interesting.

Grep the FAST proceedings for "NetApp", "parity", "RAID" or such
(since there have been only 4 or so FAST conferences, a visual grep
will probably suffice). I'm at home now, the proceedings are in the
office, and I don't feel like doing this online over a low-speed
link.

Reminder: the fact that a few researchers from NetApp give a talk at
a research conference doesn't mean that NetApp ships exactly that
idea in a product.

--
The address in the header is invalid for obvious reasons. Please
reconstruct the address from the information below (look for _).
Ralph Becker-Szendy
#8 - May 21st 05, 09:16 AM - Bill Todd

Zak wrote:
> _firstname_@lr_dot_los-gatos_dot_ca.us wrote:
>
>> Reminder: the fact that a few researchers from NetApp give a talk
>> at a research conference doesn't mean that NetApp ships exactly
>> that idea in a product.
>
> Diagonal parity sounded like a simple enough idea to actually work.
> But obviously there are implementation details...


IIRC it is relatively simple, and somewhat elegant. It involves a
normal RAID-5-style set of stripes, each of which includes an additional
segment used for the 'diagonal' parity that does not otherwise
participate in the stripe. So when a single disk fails, data is rebuilt
exactly as it is in a normal RAID-5 configuration.

If a second disk fails, then the 'diagonal' parity comes into play.
It's generated by XORing segments from successive stripes (on successive
disks), such that it's always possible to regenerate one of the two
missing segments in any normal RAID-5-style stripe from information in
surrounding stripes (plus the diagonal parity) - and once you've done
that, the other missing segment can then be regenerated by the normal
RAID-5 mechanism.

While with a single disk failure degraded performance is therefore
similar to a normal RAID-5 array, with a double failure this two-stage
approach to reconstructing each stripe slows things down to a crawl:
far better than losing your data entirely and having to reconstruct it
from backup material, but in many cases not adequate for acceptable
continuing performance in a production environment.

There's also, of course, additional write overhead during normal
operation, but NetApp may be in a better position than most to
tolerate it: thanks to the NVRAM that's an integral part of their
server architecture, plus their 'write-anywhere file layout', they
can collect dirty data and dump it to disk in full-stripe or even
multi-stripe batches, where they can simply blind-write the parity
information along with the data rather than reconstructing it with
the multiple disk accesses that small write operations require.

- bill
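
Bill's point about blind-writing parity is easy to see in miniature.
A sketch, using single-parity XOR for brevity; nothing here is
NetApp-specific:

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def small_write_parity(old_data, old_parity, new_data):
        """Update one block of a stripe: two reads (old data, old
        parity) plus two writes - the classic read-modify-write."""
        return xor(xor(old_parity, old_data), new_data)

    def full_stripe_parity(new_blocks):
        """A whole stripe buffered (e.g. in NVRAM): parity comes
        straight from the new data - no reads at all."""
        parity = bytes(len(new_blocks[0]))
        for b in new_blocks:
            parity = xor(parity, b)
        return parity

With double parity the read-modify-write gets correspondingly more
expensive, which is why batching writes into full stripes matters
even more for a RAID-6-style layout.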
#9 - May 21st 05, 10:12 AM - Paul Rubin

Bill Todd writes:

> If a second disk fails, then the 'diagonal' parity comes into play.
> It's generated by XORing segments from successive stripes (on
> successive disks) [...] and once you've done that, the other missing
> segment can then be regenerated by the normal RAID-5 mechanism.


I'm having trouble wrapping my mind around that. There are well-known
ways of doing this kind of thing with finite-field arithmetic, but I
don't see how to do it with simple XORs. Then again, maybe I don't
understand the RAID terminology and that's what's confusing me. Is
there a more detailed explanation online somewhere? Thanks.
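
XOR alone does in fact suffice once the second parity runs along
diagonals - that is the trick in the row-diagonal parity (RDP) scheme
NetApp presented at FAST '04 (Corbett et al.). Here is a toy model,
assuming the paper's prime-sized geometry; the names, block sizes,
and brute-force "peeling" loop are illustrative, not anyone's
shipping code:

    import copy, os

    P = 5                  # a prime: disks 0..P-2 hold data,
    ROWPAR = P - 1         # disk P-1 holds row (RAID-5) parity,
    DIAGPAR = P            # disk P holds diagonal parity
    NROWS = P - 1          # blocks per disk
    BLK = 8                # bytes per block

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def build(data):
        """data[r][i] = block at row r of data disk i; returns the
        full (NROWS x P+1) array including both parity disks."""
        arr = [row[:] + [None, None] for row in data]
        for r in range(NROWS):
            rp = bytes(BLK)
            for i in range(P - 1):
                rp = xor(rp, arr[r][i])
            arr[r][ROWPAR] = rp
        # Diagonal d collects the blocks (r, i) with (r + i) % P == d,
        # over disks 0..P-1 (row parity included). Diagonal P-1 is
        # deliberately never stored - that's what makes recovery work.
        for d in range(NROWS):
            dp = bytes(BLK)
            for r in range(NROWS):
                dp = xor(dp, arr[r][(d - r) % P])
            arr[d][DIAGPAR] = dp
        return arr

    def reconstruct(arr, failed):
        """Recover any two failed disks among 0..P-1 by peeling: find
        a row or stored diagonal missing exactly one block, rebuild
        that block by XOR, repeat - Bill's two-stage chain."""
        missing = {(r, i) for r in range(NROWS) for i in failed}
        cons = [([(r, i) for i in range(P)], bytes(BLK))
                for r in range(NROWS)]                       # rows
        cons += [([(r, (d - r) % P) for r in range(NROWS)],
                  arr[d][DIAGPAR]) for d in range(NROWS)]    # diagonals
        while missing:
            before = len(missing)
            for members, const in cons:
                unknown = [m for m in members if m in missing]
                if len(unknown) == 1:
                    val = const
                    for m in members:
                        if m != unknown[0]:
                            val = xor(val, arr[m[0]][m[1]])
                    arr[unknown[0][0]][unknown[0][1]] = val
                    missing.remove(unknown[0])
            if len(missing) == before:
                raise RuntimeError("more than a double failure?")

    data = [[os.urandom(BLK) for _ in range(P - 1)] for _ in range(NROWS)]
    good = build(data)
    broken = copy.deepcopy(good)
    for r in range(NROWS):
        broken[r][0] = broken[r][2] = None   # lose two whole disks
    reconstruct(broken, failed={0, 2})
    assert broken == good

The peeling order is exactly the crawl Bill describes: with two disks
gone, most blocks are only reachable through a diagonal-then-row
chain instead of a single XOR pass.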

#10 - May 21st 05, 07:09 PM - Bill Todd

Paul Rubin wrote:

> I'm having trouble wrapping my mind around that. There are
> well-known ways of doing this kind of thing with finite-field
> arithmetic, but I don't see how to do it with simple XORs. [snip]
> Is there a more detailed explanation online somewhere? Thanks.

About a minute Googling for raid-dp at www.netapp.com yielded
http://www.netapp.com/tech_library/3298.html?fmt=print

- bill
 



