A computer components & hardware forum. HardwareBanter

Raid0 or Raid5 for network to disk backup (Gigabit)?



 
 
  #1  
Old March 29th 07, 04:05 AM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
willbill
external usenet poster
 
Posts: 103
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

markm75 wrote:
> Anyone have any thoughts on which is going to give me better write
> speeds? I know RAID0 should be much better, and if I combine it with
> RAID1, it's redundant too.
>
> But I'm assuming that when I back up my servers to this backup server
> across the gigabit network, my write speeds would max out at, say,
> 60 MB/s, wouldn't they?
>
> I think right now on RAID5 (SATA II) I'm getting a write speed of
> 57 MB/s (bypassing the Windows cache, using SiSoftware Sandra to
> benchmark). If you don't bypass the Windows cache this becomes more
> like 38 MB/s.
>
> Any thoughts?



Two-disk RAID0 with the write cache turned on: better read/write
performance all around, and twice the disk space. Run the
backup machine on a UPS.

So what if the RAID0 fails every few years? It's a minor backup
machine, and odds are that it won't be the end of the world.

If the backup is not "minor" (meaning it is totally critical),
then go with slower RAID5 (or something similar).

bill
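The tradeoff described here can be put in rough numbers. As a sketch (the per-disk figures below are assumptions for illustration, not measurements from this thread): RAID0 stripes every write across all its disks, while RAID5 either does a full-stripe sequential write (losing one disk's worth of bandwidth to parity) or, for small random writes, pays a read-modify-write penalty of roughly four disk I/Os per host write:

```python
# Rough, idealized throughput model for RAID0 vs RAID5 writes.
# All numbers are illustrative assumptions, not measurements.

def raid0_seq_write(n_disks: int, disk_mbps: float) -> float:
    """Sequential writes stripe across all disks."""
    return n_disks * disk_mbps

def raid5_seq_write(n_disks: int, disk_mbps: float) -> float:
    """Full-stripe sequential writes: one disk's worth goes to parity."""
    return (n_disks - 1) * disk_mbps

def raid5_random_write_iops(n_disks: int, disk_iops: float) -> float:
    """Small random writes: read data + read parity + write data +
    write parity = 4 disk I/Os per host write."""
    return n_disks * disk_iops / 4

if __name__ == "__main__":
    print(raid0_seq_write(2, 60.0))        # two-disk RAID0, assumed 60 MB/s per disk
    print(raid5_seq_write(4, 60.0))        # four-disk RAID5, full-stripe writes
    print(raid5_random_write_iops(4, 80))  # four disks at an assumed 80 IOPS each
```

A controller with a battery-backed write cache can hide much of the RAID5 penalty by coalescing writes into full stripes, which is one reason measured numbers vary so widely.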
  #2  
Old March 29th 07, 06:33 AM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
Maurice Volaski
external usenet poster
 
Posts: 5
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

markm75 wrote:
> I think right now on RAID5 (SATA II) I'm getting a write speed of
> 57 MB/s (bypassing the Windows cache, using SiSoftware Sandra to
> benchmark). If you don't bypass the Windows cache this becomes more
> like 38 MB/s.

Aren't these numbers reversed? Anyway, good drives should do 75
MB/s when the cache is bypassed, and not bypassing it should give
significantly greater performance.
  #3  
Old March 29th 07, 10:59 AM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
Arno Wagner
external usenet poster
 
Posts: 2,796
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

In comp.sys.ibm.pc.hardware.storage Maurice Volaski wrote:
> markm75 wrote:
>> I think right now on RAID5 (SATA II) I'm getting a write speed of
>> 57 MB/s (bypassing the Windows cache, using SiSoftware Sandra to
>> benchmark). If you don't bypass the Windows cache this becomes more
>> like 38 MB/s.
>
> Aren't these numbers reversed? Anyway, good drives should do 75
> MB/s when the cache is bypassed, and not bypassing it should give
> significantly greater performance.

With a reasonable buffer (not cache) implementation, yes. Something
seems wrong, or MS screwed up rather badly in implementing this.

But the 75 MB/s figure only applies to the start of the disk.
At the end it is typically somewhere in the 35-50 MB/s range,
since the inner cylinders contain fewer sectors.

Arno
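The start-of-disk vs. end-of-disk difference comes from zoned recording: outer tracks hold more sectors per revolution, so sustained transfer falls as the heads move inward. A toy model (the linear fall-off and the 75/40 MB/s endpoints are assumptions for illustration):

```python
def transfer_rate(fraction: float, outer: float = 75.0, inner: float = 40.0) -> float:
    """Approximate sustained MB/s at a given position on the disk
    (0.0 = outermost track, 1.0 = innermost), assuming a linear fall-off.
    Real drives step down in discrete zones rather than linearly."""
    return outer - (outer - inner) * fraction

print(transfer_rate(0.0))  # start of the disk
print(transfer_rate(0.5))  # midpoint
print(transfer_rate(1.0))  # end of the disk
```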
  #4  
Old March 29th 07, 04:29 PM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
markm75
external usenet poster
 
Posts: 222
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

On Mar 29, 5:59 am, Arno Wagner wrote:
> [snip]
>
> But the 75 MB/s figure only applies to the start of the disk.
> At the end it is typically somewhere in the 35-50 MB/s range,
> since the inner cylinders contain fewer sectors.
>
> Arno

Apologies. Yeah, when bypassing the cache I got an index of 57 MB/s.

  #5  
Old March 29th 07, 05:01 PM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
Folkert Rienstra
external usenet poster
 
Posts: 1,297
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

"markm75" wrote in message oups.com
> On Mar 29, 5:59 am, Arno Wagner wrote:
> [snip]
>
> Apologies. Yeah, when bypassing the cache I got an index of 57 MB/s.

No, really?
Who would have thought that from your first post. Thanks for clearing that up.
It all becomes much clearer now.
  #6  
Old March 29th 07, 06:21 PM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
markm75
external usenet poster
 
Posts: 222
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

On Mar 29, 11:29 am, "markm75" wrote:
> On Mar 29, 5:59 am, Arno Wagner wrote:
> [snip]
>
> Apologies. Yeah, when bypassing the cache I got an index of 57 MB/s.

If I check the "Bypass Windows cache" option, I do in fact
get HIGHER values than when not bypassing the cache.

I know this sounds reversed, but it is what happens.

  #7  
Old March 30th 07, 01:15 AM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
Arno Wagner
external usenet poster
 
Posts: 2,796
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

In comp.sys.ibm.pc.hardware.storage markm75 wrote:
> [snip]
>
> If I check the "Bypass Windows cache" option, I do in fact
> get HIGHER values than when not bypassing the cache.
>
> I know this sounds reversed, but it is what happens.

It is possible. It does, however, point to some serious problem
in the write-buffer design.

Arno
  #8  
Old March 30th 07, 02:22 AM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
Maxim S. Shatskih
external usenet poster
 
Posts: 87
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

> If I check the "Bypass Windows cache" option, I do in fact
> get HIGHER values than when not bypassing the cache.
>
> I know this sounds reversed, but it is what happens.

It depends on the data access pattern; on some patterns bypassing really is
profitable. For instance, databases like MS SQL Server also use cache bypass.

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation

http://www.storagecraft.com
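The effect can be checked on one's own machine by timing a large write that lands in the OS cache against the same write forced out to disk with fsync. This is a crude, portable sketch; it does not reproduce the raw unbuffered path (FILE_FLAG_NO_BUFFERING on Windows) that a benchmark like Sandra uses:

```python
import os
import tempfile
import time

def time_write(data: bytes, sync: bool) -> float:
    """Return the seconds taken to write `data` to a temp file.
    sync=True flushes and fsyncs so the data actually reaches the disk;
    sync=False lets it land in the OS write cache."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            if sync:
                f.flush()
                os.fsync(f.fileno())
        return time.perf_counter() - start
    finally:
        os.unlink(path)

if __name__ == "__main__":
    blob = os.urandom(64 * 1024 * 1024)  # 64 MB of incompressible data
    print("cached:", time_write(blob, sync=False))
    print("synced:", time_write(blob, sync=True))
```

On a healthy system the cached run should be much faster; if it is not, something in the write-buffer path is misbehaving, which is the anomaly being discussed.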

  #9  
Old March 30th 07, 05:50 PM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
markm75
external usenet poster
 
Posts: 222
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

As a side note: much to my disappointment, I got bad results last
night in my over-the-network Acronis backup of one of my servers to my
current backup server (the test server I was doing the HDD-to-HDD
testing on, where a 300 GB partition was done in 4 hrs with Acronis on
the same machine).

My results going across gigabit Ethernet using Acronis, set to normal
compression (not high or max; I wonder if increased compression would
speed things along):

Size of partition: 336 GB, or 334,064 MB (RAID5, SATA 150)
Time to complete: 9 hrs 1 min (541 mins, or 32,460 seconds)
Compressed size at normal: 247 GB
Destination: single volume, SATA II (sequential writes 55 MB/s
bypassing the Windows cache, 38 MB/s not bypassing it). Yes, I don't
know why the numbers are LOWER when testing with Sandra and not
bypassing the cache, but they are.

Actual rate: 10.29 MB/s

*Using Qcheck going from the source to the destination I get 450 Mbps
to 666 Mbps (using 450 as the average = 56.25 MB/s).

So the maximum rate I could possibly expect would be 56 MB/s, if the
writes on the destination occurred at that rate.
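The arithmetic behind the figures above, written out (numbers taken from this post):

```python
# Check the rates quoted above.
size_mb = 334064               # partition size in MB
elapsed_s = 9 * 3600 + 60      # 9 hrs 1 min = 32,460 seconds

actual = size_mb / elapsed_s
print(round(actual, 2))        # ~10.29 MB/s, matching the observed rate

link_mbps = 450                # low end of the Qcheck measurement
link_mbs = link_mbps / 8       # 56.25 MB/s: ceiling imposed by the network
print(link_mbs)
print(round(size_mb / link_mbs / 3600, 2))  # hours the job would take at that ceiling
```

At the measured link ceiling the whole 336 GB would move in well under two hours, so the 9-hour run means the bottleneck is somewhere other than the raw network rate.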


Any thoughts on how to get this network backup rate up?

Also, thoughts on running simultaneous jobs across the network if I
enable both gigabit ports on the destination server? How would I do
this, i.e. do I have to do trunking, or just set another IP on the
other port and direct the backup to that ip\e$? If I end up using
Acronis, there is no way to create one job that backs up each server
in sequence; I'd have to know when one job finishes in order to start
the next server's backup for each week's full. So the only way I
figured around this was to do two servers at once over dual gigabit.

I have Intel Pro cards in a lot of the servers, but I don't see any
way to set jumbo frames either.
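For what jumbo frames would actually buy: every Ethernet frame carries fixed overhead (preamble and inter-frame gap, Ethernet header and FCS, plus IP and TCP headers inside it), so a larger MTU raises the usable fraction of the link. A back-of-the-envelope, best-case sketch:

```python
def tcp_goodput_mbps(mtu: int, link_mbps: float = 1000.0) -> float:
    """Approximate best-case TCP goodput over Ethernet for a given MTU.
    Per frame on the wire: preamble + inter-frame gap 20 B,
    Ethernet header 14 B, FCS 4 B; inside the frame: IP header 20 B
    and TCP header 20 B (no options assumed)."""
    payload = mtu - 20 - 20        # TCP payload bytes per frame
    wire = mtu + 14 + 4 + 20       # bytes consumed on the wire per frame
    return link_mbps * payload / wire

print(round(tcp_goodput_mbps(1500), 1))  # standard 1500-byte frames
print(round(tcp_goodput_mbps(9000), 1))  # 9000-byte jumbo frames
```

Either way the theoretical ceiling is far above the ~10 MB/s observed, so jumbo frames alone cannot explain the gap; they mostly reduce per-packet CPU load.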

My switch is a D-Link DGS-1248T (gigabit, managed).

The controller card on the source server in this case is a 3ware
Escalade 8506-4LP (PCI-X, SATA I), while the one on the destination is
an ARC-1120 (PCI-X, 8-port, SATA II). I'm assuming these are both very
good cards; I don't know how they compare to the RaidCore, though.

Still a little confused on the SATA vs. SCSI argument too. The basic
rule is supposed to be that SCSI is better when a lot of simultaneous
hits are going on, but why? I'm still unsure whether each drive on a
SCSI chain divides a shared bandwidth of, say, 320 MB/s, and likewise
for SATA: is the 3 Gbps rate divided per cable, or does each drive get
its own 3 Gbps? If both interfaces give any given drive dedicated
bandwidth, then what makes SCSI superior to SATA?
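On the bandwidth half of that question: parallel SCSI is a shared bus, so all drives on the chain split the 320 MB/s, while SATA is point-to-point, so each drive gets its own 150 or 300 MB/s link. A quick sketch of the arithmetic (the drive counts are made up):

```python
def scsi_per_drive(n_drives: int, bus_mbs: float = 320.0) -> float:
    """Parallel SCSI (e.g. Ultra320): one shared bus divided among
    drives that are transferring at the same time."""
    return bus_mbs / n_drives

def sata_per_drive(link_mbs: float = 300.0) -> float:
    """SATA: a dedicated link per drive, regardless of drive count."""
    return link_mbs

for n in (2, 4, 8):
    print(n, "drives:", scsi_per_drive(n), "MB/s each (SCSI) vs",
          sata_per_drive(), "MB/s each (SATA)")
```

That said, SCSI's traditional edge under many simultaneous requests owes more to deep tagged command queueing and faster drive mechanics than to bus bandwidth; NCQ on SATA II narrows that gap.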

  #10  
Old March 30th 07, 09:15 PM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
Arno Wagner
external usenet poster
 
Posts: 2,796
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

In comp.sys.ibm.pc.hardware.storage markm75 wrote:
> [snip]
>
> Size of partition: 336 GB, or 334,064 MB (RAID5, SATA 150)
> Time to complete: 9 hrs 1 min (541 mins, or 32,460 seconds)
> Compressed size at normal: 247 GB
>
> 10.29 MB/sec actual rate
>
> [snip]

Are you sure your bottleneck is not the compression? Retry
this without compression for a reference value.

Arno
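That suggestion can be checked directly: time the compressor on representative data and see whether it sustains more than the ~10 MB/s the backup achieved. A sketch using zlib as a stand-in for Acronis's "normal" compression (the real codec and the real data will differ):

```python
import os
import time
import zlib

def compress_rate_mbs(data: bytes, level: int = 6) -> float:
    """MB/s at which zlib compresses `data` at the given level."""
    start = time.perf_counter()
    zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    return len(data) / (1024 * 1024) / elapsed

if __name__ == "__main__":
    # Half incompressible, half trivially compressible, as a rough stand-in
    # for mixed server data.
    sample = os.urandom(8 * 1024 * 1024) + bytes(8 * 1024 * 1024)
    for level in (1, 6, 9):
        print("level", level, ":", round(compress_rate_mbs(sample, level), 1), "MB/s")
```

If the single-threaded compression rate on the backup host comes out near 10 MB/s, the CPU, not the disks or the network, is the bottleneck, and a lower compression level (or none) should speed the job up.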

 





