RAID5 vs RAID10 speed benchmark?



 
 
#1 - May 29th 04, 12:49 AM
- C -

Has anyone done a benchmark of RAID5 vs RAID10? I am looking for large
sequential reads and writes... Thanks in advance...

Clayton


#2 - May 29th 04, 08:57 PM
Ali-Reza Anghaie

- C - wrote:
Has anyone done a benchmark of RAID5 vs RAID10? I am looking for large
sequential reads and writes... Thanks in advance...


That's not something that'd be independent of the hardware, drivers,
OS/filesystem, number of disks in the volumes, etc.

-Ali

--
OpenPGP Key: 030E44E6
--
Was I helpful?: http://svcs.affero.net/rm.php?r=packetknife
--
After all, there is but one race - humanity. -- George Moore
#3 - May 30th 04, 02:07 AM
Ron Reaugh


"- C -" wrote in message
hlink.net...
Has anyone do a benchmark of RAID5 vs RAID10? I am looking for large
sequential reads and writes... Thanks in advance...


You need to provide more information. Let's say one has a RAID 5 array
using N disks and a RAID 10 array of equivalent overall storage using
2x(N-1) disks. That's apples vs. apples. Then, if both are well
implemented, the RAID 10 will always be faster than the RAID 5 in both
single-queued sequential reading and single-queued sequential writing.
The reading might be only imperceptibly faster, but the writing would be
significantly faster. Non-single-queued I/O, whether sequential or
small-record random, gives the RAID 10 an even greater advantage in most
cases.
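
To see why 2x(N-1) is the apples-to-apples count, here is a quick
back-of-the-envelope sketch in Python (illustrative only; disk sizes are
normalized to one unit, and the function names are just for this example):

    # RAID 5 on N disks stores N-1 disks' worth of data (one disk's
    # worth goes to parity); RAID 10 on M disks stores M/2 (everything
    # is mirrored). Equal capacity means (N-1) = M/2, i.e. M = 2*(N-1).

    def raid5_capacity(n_disks, disk_size=1.0):
        return (n_disks - 1) * disk_size    # one disk's worth of parity

    def raid10_capacity(m_disks, disk_size=1.0):
        return (m_disks / 2) * disk_size    # mirrored pairs

    for n in (3, 5, 8):
        m = 2 * (n - 1)
        assert raid5_capacity(n) == raid10_capacity(m)
        print("RAID 5 on %d disks == RAID 10 on %d disks "
              "(%.0f disks of data)" % (n, m, raid5_capacity(n)))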


#4 - May 30th 04, 05:57 AM
Ron Reaugh

One could also claim that a real apples vs apples comparison would be the
same number of drives in a RAID 5 array vs that same number of drives in
RAID 10. Then the balance swings toward RAID 5, performance-wise.

"Ron Reaugh" wrote in message
...

"- C -" wrote in message
hlink.net...
Has anyone do a benchmark of RAID5 vs RAID10? I am looking for large
sequential reads and writes... Thanks in advance...


You need to provide more information. Let's say one has a RAID 5 array
using N disks and an equivalent overall storage sized RAID 10 array using
2x(N-1) disks. That's apples vs apples. Then if both are well

implemented,
the RAID 10 will always be faster in both single queued sequential reading
and single queued sequential writing than the RAID 5. The reading might
only be imperceptibly faster but the writing would be significantly

faster.
Non-single queued I/O whether sequential or small record random provides
even a greater advantage to the RAID 10 in most cases.




#5 - May 30th 04, 06:43 PM
Robert Wessel

"Ron Reaugh" wrote in message ...
One could also claim that a real apples vs apples comparison would be the
same number of drives in a RAID 5 array vs that same number of drivers in
RAID 10. Then the balance swings towards the RAID 5 performance wise.



True for read performance, but not for write performance, where RAID5
continues to get clobbered by the required read-modify-write cycles
for each update.

And please don't top-post.
#6 - May 31st 04, 10:14 PM
Ron Reaugh


"Robert Wessel" wrote in message
om...
"Ron Reaugh" wrote in message

...
One could also claim that a real apples vs apples comparison would be

the
same number of drives in a RAID 5 array vs that same number of drivers

in
RAID 10. Then the balance swings towards the RAID 5 performance wise.



True for read performance, but not for write performance, where RAID5
continues to get clobbered by the required read-modify-write cycles
for each update.


Not so much when the number of drives is that different. RAID 5 has a
write performance hit, but not necessarily a crippling one. There are also
fancier implementations of RAID 5 where writes are held pending in a
small RAID 10 array (a sub-part) and, during slow periods, written to the
full RAID 5 array.

And please don't top-post.


Except where appropriate, which it was in this case, since I was
responding to my own post.


#7 - June 2nd 04, 08:20 AM
Robert Wessel

"Ron Reaugh" wrote in message ...
"Robert Wessel" wrote in message
om...
"Ron Reaugh" wrote in message

...
One could also claim that a real apples vs apples comparison would be

the
same number of drives in a RAID 5 array vs that same number of drivers

in
RAID 10. Then the balance swings towards the RAID 5 performance wise.



True for read performance, but not for write performance, where RAID5
continues to get clobbered by the required read-modify-write cycles
for each update.


Not so much when the number of drives is that much different.



Errr...? The item under discussion is when the number of drives is
equal.


RAID 5 has a write performance hit, but not necessarily a crippling one.



In almost all cases, the RAID5 write will require more work (three or
four I/Os rather than two) compared to RAID1(0). You can get three
on a simple update if the data block happens to be cached. In the
(unusual) case where an entire set of stripes is being overwritten,
the write overhead can actually be lower for RAID5. It's a question
of write load and latency requirements. Increased read performance
may free up more (potential) I/Os for use in write operations. Below
some relative level (of writes vs. reads), this may allow write
performance (throughput) to increase as well. How latency is measured
in your system is important as well: in the RAID5 case, the latency
of completing the write to disk will usually be higher (especially at
low write loads), even when throughput increases. If you measure
latency only to the write to NVRAM, the difference can be minimal.
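
To put numbers on "three or four I/Os rather than two", here is an
illustrative Python sketch (the function names are invented for this
example; real controllers coalesce and cache far more aggressively):

    # Small random update on RAID 5: read old data and old parity, then
    # write new data and new parity (new parity = old parity XOR old
    # data XOR new data). A mirrored (RAID 1/10) update is two writes.

    def raid5_small_write(data_cached=False):
        reads = 1 if data_cached else 2    # old parity (+ old data)
        writes = 2                         # new data + new parity
        return reads + writes              # 3 or 4 I/Os

    def raid10_small_write():
        return 2                           # same block to both mirrors

    def full_stripe_overwrite(n_disks):
        # Overwriting an entire stripe: parity comes from the new data
        # alone, so RAID 5 needs no reads at all and wins on overhead.
        raid5_ios = n_disks                # n-1 data blocks + 1 parity
        raid10_ios = 2 * (n_disks - 1)     # each payload block mirrored
        return raid5_ios, raid10_ios

    print(raid5_small_write(), raid5_small_write(True))   # 4 3
    print(raid10_small_write())                           # 2
    print(full_stripe_overwrite(8))                       # (8, 14)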

The ratio of reads to writes is quite environment-dependent. In file
service applications reads usually predominate, while in transactional
(e.g. database) systems writes can be the dominant workload.



There are also fancier implementations of RAID 5 where writes are held
pending in a small RAID 10 array (a sub-part) and, during slow periods,
written to the full RAID 5 array.



Which is then no longer a RAID5 system. And which does not improve
sustained performance.
#8 - June 2nd 04, 09:16 AM
Bill Todd


"Robert Wessel" wrote in message
om...
"Ron Reaugh" wrote in message

...

....

There are also
fancier implementations of RAID 5 where the writes are held pending in a
small RAID 10 array(sub part) and during slow periods written to the

full
RAID 5 array.



Which is then no longer a RAID5 system.


Indeed. I think the name for this HP innovation is 'AutoRAID'; John Wilkes'
group there published some papers on it in the 1990s. It's actually more
sophisticated than a simple RAID-10 array acting like a stable write-back
cache in front of the main RAID-5 array: the two arrays share the same
disks, and their relative capacities may even vary dynamically according to
space availability.

And which does not improve
sustained performance.


Well, it can (in much the same manner as a large, stable write-back cache
can), if the updates have either physical locality (in which case entire
RAID-5 stripe updates may accumulate in the RAID-1 section and be propagated
to the RAID-5 section as full-stripe writes after the activity moves
elsewhere) or temporal locality (in which case repeated updates to the same
data occur in the RAID-1 section and, again, are only propagated to the
RAID-5 section once, after things calm down in that area).
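
As a toy illustration of those two locality effects (hypothetical Python,
nothing like AutoRAID's real machinery; the stripe size and destage
policy are invented for the example):

    STRIPE_BLOCKS = 4     # data blocks per RAID-5 stripe (assumed)

    mirror = {}           # block number -> latest data (RAID-1 section)
    raid5_io_count = 0

    def update(block, data):
        # Temporal locality: rewriting a hot block just replaces it in
        # the mirrored section; nothing is propagated to RAID 5 yet.
        mirror[block] = data

    def destage():
        # Physical locality: blocks that fill a whole stripe migrate as
        # one full-stripe write (data + parity, no read-modify-write).
        global raid5_io_count
        stripes = {}
        for block in mirror:
            stripes.setdefault(block // STRIPE_BLOCKS, set()).add(block)
        for blocks in stripes.values():
            if len(blocks) == STRIPE_BLOCKS:      # complete stripe only
                raid5_io_count += STRIPE_BLOCKS + 1
                for b in blocks:
                    del mirror[b]

    for i in range(4):                    # four adjacent blocks,
        update(i, "v1"); update(i, "v2")  # each rewritten twice
    destage()
    print(raid5_io_count)    # 5 disk I/Os for 8 logical updates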

- bill



#9 - June 2nd 04, 11:21 PM
Robert Wessel

"Bill Todd" wrote in message ...

And which does not improve
sustained performance.


Well, it can (in much the same manner as a large, stable write-back cache
can), if the updates have either physical locality (in which case entire
RAID-5 stripe updates may accumulate in the RAID-1 section and be propagated
to the RAID-5 section as full-stripe writes after the activity moves
elsewhere) or temporal locality (in which case repeated updates to the same
data occur in the RAID-1 section and, again, are only propagated to the
RAID-5 section once, after things calm down in that area).



That's true, although I expect it's a net gain (in sustained
performance) only for fairly rare workloads. For bursty workloads
(where you can do the extra work in an idle period), sure, but for
sustained workloads you're going to have to merge a lot of I/Os to
make up for the extra copy that needs to be done.
#10 - June 3rd 04, 01:12 AM
Bill Todd


"Robert Wessel" wrote in message
om...
"Bill Todd" wrote in message

...

And which does not improve
sustained performance.


Well, it can (in much the same manner as a large, stable write-back

cache
can), if the updates have either physical locality (in which case entire
RAID-5 stripe updates may accumulate in the RAID-1 section and be

propagated
to the RAID-5 section as full-stripe writes after the activity moves
elsewhere) or temporal locality (in which case repeated updates to the

same
data occur in the RAID-1 section and, again, are only propagated to the
RAID-5 section once, after things calm down in that area).



That's true, although I expect it's a net gain (in sustained
performance) only for fairly rare workloads. For bursty workloads
(where you can do the extra work in an idle period), sure, but for
sustained workloads you're going to have to merge a lot of I/Os to
make up for the extra copy that needs to be done.


They may be smart enough to avoid a copy: if you lay out the RAID-10
material suavely (leaving an unused parity block in each copy's stripe,
since space efficiency is not paramount for the temporary mirrored part of
the storage) and have an explicit per-stripe map (which you probably want
anyway if you're going to allow the division between RAID-10 and RAID-5
space to vary dynamically), you can just leave one copy of the data in place
and only write the parity when you convert it to RAID-5.

Assuming that none of the RAID-10 data is still cached when the conversion
is performed, this results in N reads and one write, rather than the N
reads and N+1 writes a copy would require (this can be further optimized
by creating parity in memory as each RAID-10 disk segment is evicted from
the cache, avoiding the reads at the cost of a single chunk of cache which
itself could be staged to disk - using the 'spare' disk in the RAID-10
stripe - if space got tight; you'd have to model this to decide exactly
what strategy was best). The stripe map is likely maintained in stable RAM
and updated on disk only occasionally: these aren't low-end arrays.

And conversion doesn't necessarily occur that often: the only reason you
*ever* need to move data from RAID-10 to RAID-5 storage is that you're
getting tight on overall space and need the resulting near-factor-of-2
compression - and for many update-intensive workloads the net accumulation
of data is relatively slow (if the workload is not update-intensive, you
might be better off just using vanilla-flavored RAID-5 anyway, fronted by
a stable cache, if small-chunk extension of existing data is common, to
consolidate it into larger writes).
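
The savings pencil out as follows (a sketch under the assumptions above;
N counts the data blocks in a stripe, and the function names are invented
for the illustration):

    # Converting one mirrored stripe of N data blocks to RAID 5:
    #  - by copy: read all N blocks, rewrite them into the RAID-5 area
    #    plus one new parity block (N reads, N+1 writes);
    #  - in place: read all N blocks, write only the parity block into
    #    the slot left free in the mirrored layout (N reads, 1 write);
    #  - with parity accumulated at cache-eviction time: only the
    #    parity write remains (0 reads, 1 write).

    def convert_by_copy(n):
        return (n, n + 1)                  # (reads, writes)

    def convert_in_place(n):
        return (n, 1)

    def convert_with_cached_parity(n):
        return (0, 1)

    for n in (4, 8):
        print(n, convert_by_copy(n), convert_in_place(n),
              convert_with_cached_parity(n))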

- bill



 



