HardwareBanter » Storage & Hardrives » Raid0 or Raid5 for network to disk backup (Gigabit)?

willbill March 29th 07 04:05 AM

Raid0 or Raid5 for network to disk backup (Gigabit)?
 
markm75 wrote:

Anyone have any thoughts on which is going to give me better write
speeds? I know RAID0 should be much faster, and if I combine it with
RAID1 it's also redundant.

But I'm assuming that when I back up my servers to this backup server
across the gigabit network, my write speeds would max out at around
60 MB/s, wouldn't they?

I think right now on RAID5 (SATA II) I'm getting a write speed of
57 MB/s (bypassing the Windows cache, using SiSoftware Sandra to
benchmark). If you don't bypass the Windows cache, this becomes more
like 38 MB/s.

Any thoughts?



Two-disk RAID0 with the write cache turned on: better read/write
performance all around and twice the disk space. Run the backup
machine on a UPS.

So what if it (the RAID0) fails every few years? It's a minor backup
machine, and odds are that it won't be the end of the world.

If the backup is not "minor" (meaning it is totally critical), then go
with slower RAID5 (or something similar).

bill
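A quick back-of-the-envelope comparison makes the trade-off concrete.
This is a rough sketch only; the per-disk rate for the hypothetical
RAID0 pair and the usable fraction of gigabit Ethernet are assumed
values for illustration, not measurements from this thread:

# Back-of-the-envelope bottleneck check for a network-to-disk backup.
GIGE_USABLE = 0.60 * (1000 / 8)   # ~75 MB/s usable of ~125 MB/s raw (assumed)

candidates = {
    "RAID5": 57,         # MB/s, measured write speed from the thread
    "RAID0 x2": 2 * 60,  # MB/s, hypothetical two-disk stripe at ~60 MB/s each
}

for name, disk_rate in candidates.items():
    bottleneck = min(disk_rate, GIGE_USABLE)
    hours = 300_000 / bottleneck / 3600   # time to move a 300 GB image
    print(f"{name}: effective {bottleneck:.0f} MB/s -> {hours:.1f} h per 300 GB")

On these assumed numbers, the network becomes the ceiling before a
two-disk RAID0 does, so striping wider than two disks would not help
this particular job.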

Maurice Volaski March 29th 07 06:33 AM

Raid0 or Raid5 for network to disk backup (Gigabit)?
 
markm75 wrote:


I think right now on RAID5 (SATA II) I'm getting a write speed of
57 MB/s (bypassing the Windows cache, using SiSoftware Sandra to
benchmark). If you don't bypass the Windows cache, this becomes more
like 38 MB/s.


Aren't these numbers reversed? Anyway, good drives should be 75
MB/second when the cache is bypassed. Not bypassing it should give
significantly greater performance.

Arno Wagner March 29th 07 10:59 AM

Raid0 or Raid5 for network to disk backup (Gigabit)?
 
In comp.sys.ibm.pc.hardware.storage Maurice Volaski wrote:
markm75 wrote:

[57 MB/s vs 38 MB/s figures quoted above; snipped]

Aren't these numbers reversed? Anyway, good drives should be 75
MB/second when the cache is bypassed. Not bypassing it should give
significantly greater performance.


With a reasonable buffer (not cache) implementation, yes. Something
seems wrong, or MS screwed up rather badly in implementing this.

But the 75 MB/s figure only applies to the start of the disk. At the
end it is typically somewhere in the 35-50 MB/s range, since the inner
cylinders hold fewer sectors.

Arno

markm75 March 29th 07 04:29 PM

Raid0 or Raid5 for network to disk backup (Gigabit)?
 
On Mar 29, 5:59 am, Arno Wagner wrote:
In comp.sys.ibm.pc.hardware.storage Maurice Volaski wrote:

Aren't these numbers reversed?

[rest of quoted thread snipped]


Apologies, yeah: when bypassing the cache I got an index of 57 MB/s.


Folkert Rienstra March 29th 07 05:01 PM

Raid0 or Raid5 for network to disk backup (Gigabit)?
 
"markm75" wrote in message oups.com
On Mar 29, 5:59 am, Arno Wagner wrote:
In comp.sys.ibm.pc.hardware.storage Maurice Volaski wrote:

markm75 wrote:


I think right now on Raid5, sataII, i'm getting a write speed of 57 MB/
s (bypassing windows cache, using SIsandra to benchmark).. if you dont
bypass the windows cache this becomes more like 38 MB/s..
Aren't these numbers reversed? Anyway, good drives should be 75
MB/second when the cache is bypassed. Not bypassing it should give
significantly greater performance.


With a reasonable buffer (not cache) implementation, yes. Something
seems wrong or MS screwed up rather badly in implementing this.

But the 75MB/s figure only applies to the start of the disk.
At the end it is typically is somewere in the 35...50MB/s range,
since the cylinders contain less sectors.

Arno


Apologies.. Yeah when bypassing the cache I got an index of 57MB/s...


No, really?
Who would have thought that from your first post. Thanks for clearing that up.
It all becomes much clearer now.

markm75 March 29th 07 06:21 PM

Raid0 or Raid5 for network to disk backup (Gigabit)?
 
On Mar 29, 11:29 am, "markm75" wrote:
On Mar 29, 5:59 am, Arno Wagner wrote:





In comp.sys.ibm.pc.hardware.storage Maurice Volaski wrote:


markm75 wrote:


I think right now on Raid5, sataII, i'm getting a write speed of 57 MB/
s (bypassing windows cache, using SIsandra to benchmark).. if you dont
bypass the windows cache this becomes more like 38 MB/s..
Aren't these numbers reversed? Anyway, good drives should be 75
MB/second when the cache is bypassed. Not bypassing it should give
significantly greater performance.


With a reasonable buffer (not cache) implementation, yes. Something
seems wrong or MS screwed up rather badly in implementing this.


But the 75MB/s figure only applies to the start of the disk.
At the end it is typically is somewere in the 35...50MB/s range,
since the cylinders contain less sectors.


Arno


Apologies.. Yeah when bypassing the cache I got an index of 57MB/s...- Hide quoted text -

- Show quoted text -


If I check the "Bypass Windows cache" option, I do in fact get HIGHER
values than when not bypassing the cache.

I know this sounds reversed, but it is what happens.



Arno Wagner March 30th 07 01:15 AM

Raid0 or Raid5 for network to disk backup (Gigabit)?
 
In comp.sys.ibm.pc.hardware.storage markm75 wrote:
On Mar 29, 11:29 am, "markm75" wrote:
On Mar 29, 5:59 am, Arno Wagner wrote:





In comp.sys.ibm.pc.hardware.storage Maurice Volaski wrote:


markm75 wrote:


I think right now on Raid5, sataII, i'm getting a write speed of 57 MB/
s (bypassing windows cache, using SIsandra to benchmark).. if you dont
bypass the windows cache this becomes more like 38 MB/s..
Aren't these numbers reversed? Anyway, good drives should be 75
MB/second when the cache is bypassed. Not bypassing it should give
significantly greater performance.


With a reasonable buffer (not cache) implementation, yes. Something
seems wrong or MS screwed up rather badly in implementing this.


But the 75MB/s figure only applies to the start of the disk.
At the end it is typically is somewere in the 35...50MB/s range,
since the cylinders contain less sectors.


Arno


Apologies.. Yeah when bypassing the cache I got an index of 57MB/s...- Hide quoted text -

- Show quoted text -


If I use the option, checked off, "Bypass windows cache" I do in fact
get HIGHER values than when not bypassing the cache..


I know this sounds reversed, but it is what happens.


It is possible. It does, however, point to a serious problem in the
write-buffer design.

Arno

Maxim S. Shatskih March 30th 07 02:22 AM

Raid0 or Raid5 for network to disk backup (Gigabit)?
 
If I check the "Bypass Windows cache" option, I do in fact get HIGHER
values than when not bypassing the cache.

I know this sounds reversed, but it is what happens.


It depends on the data access pattern; for some patterns bypassing the
cache really is profitable. For instance, databases such as MS SQL
Server also use cache bypass.

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation

http://www.storagecraft.com
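The effect is easy to reproduce in software. Below is a minimal sketch
(assuming Python 3 and about 0.5 GB free on the test volume) that uses
fsync() as a crude stand-in for a true cache bypass, which on Windows
would otherwise require FILE_FLAG_NO_BUFFERING and sector-aligned
buffers:

import os, time

SIZE_MB = 512
chunk = b"\0" * (1024 * 1024)   # 1 MiB per write

path = "bench.tmp"
start = time.perf_counter()
with open(path, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(chunk)                 # may be absorbed by the OS cache
    cached_rate = SIZE_MB / (time.perf_counter() - start)
    f.flush()
    os.fsync(f.fileno())               # force the data to stable storage
synced_rate = SIZE_MB / (time.perf_counter() - start)
os.remove(path)

print(f"apparent (cached) rate:  {cached_rate:.0f} MB/s")
print(f"sustained (synced) rate: {synced_rate:.0f} MB/s")

The first figure mostly reflects the buffer, the second the disk;
which one a benchmark reports depends on how it drives the cache.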


markm75 March 30th 07 05:50 PM

Raid0 or Raid5 for network to disk backup (Gigabit)?
 
As a side note: much to my disappointment, I got bad results last
night backing up one of my servers over the network with Acronis to my
current backup server (the test server I was doing the HDD-to-HDD
testing on, where a 300 GB partition completed in 4 hrs with Acronis
on the same machine).

My results going across gigabit Ethernet using Acronis, set to normal
compression (not high or max; I'm wondering if increased compression
would speed things along):

Size of partition: 336 GB or 334,064 MB (RAID5, SATA 150)
Time to complete: 9 hrs 1 min (541 mins or 32,460 seconds)
Compressed size at normal: 247 GB
Destination: single volume, SATA II (sequential writes 55 MB/s
bypassing the Windows cache, 38 MB/s not bypassing it). Yes, I don't
know why the numbers are LOWER when testing with SiSoftware Sandra and
not bypassing the cache, but they are.

Actual rate: 10.29 MB/s

*Using Qcheck going from the source to the destination I get 450 Mbps
to 666 Mbps (using 450 as the average = 56.25 MB/s).

So the max rate I could possibly expect would be 56 MB/s, if the
writes on the destination occurred at that rate.


Any thoughts on how to get this network backup rate up?

Thoughts on running simultaneous jobs across the network if I enable
both gigabit ports on the destination server? How would I do this,
i.e. do I have to do trunking, or just set another IP on the other
port and direct the backup to that \\ip\e$ share? If I end up using
Acronis, there is no way to define a job that backs up each server
sequentially; I'd have to know when one job stops in order to start
the next server on each week's full. So the only way I figured around
this was to do two servers at once over dual gigabit.

I have Intel Pro cards in a lot of the servers, but I don't see any
way to set jumbo frames either.

My switch is a D-Link DGS-1248T (gigabit, managed).

The controller card on the source server in this case is a 3ware
Escalade 8506-4LP (PCI-X, SATA I), while the one on the destination is
an Areca ARC-1120 (PCI-X, 8-port SATA II). I'm assuming these are both
very good cards; I don't know how they compare to the RaidCore though.

Still a little confused on the SATA vs SCSI argument too. The basic
rule seems to be that SCSI is better when a lot of simultaneous hits
are going on, but why? Still unsure whether each drive on a SCSI chain
gets a divided share of, say, 320 MB/s, and likewise for SATA, whether
each cable divides the 3 Gbps rate or each has its own 3 Gb/s. If both
give dedicated bandwidth to any given drive, then what makes SCSI
superior to SATA?
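Plugging the reported figures into a few lines makes the shortfall
obvious; this only restates the arithmetic from the post above:

# Achieved backup rate vs the two ceilings, using only the numbers
# reported in this post.
size_mb    = 334_064            # partition size in MB
seconds    = 32_460             # 9 hrs 1 min
achieved   = size_mb / seconds  # MB/s actually moved
net_limit  = 450 / 8            # MB/s from the low qcheck reading
disk_limit = 55                 # MB/s destination sequential write

print(f"achieved:        {achieved:.2f} MB/s")
print(f"network ceiling: {net_limit:.2f} MB/s")
print(f"disk ceiling:    {disk_limit} MB/s")
# The achieved rate sits roughly 5x below both ceilings, which suggests
# the bottleneck is neither the wire nor the destination array.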


Arno Wagner March 30th 07 09:15 PM

Raid0 or Raid5 for network to disk backup (Gigabit)?
 
In comp.sys.ibm.pc.hardware.storage markm75 wrote:
[markm75's full post quoted; snipped]

Are you sure your bottleneck is not the compression? Retry
this without compression for a reference value.

Arno
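One way to test that suspicion before rerunning the whole backup is to
measure raw single-core compression throughput. The sketch below uses
Python's zlib at level 6 as a rough stand-in for Acronis's "normal"
setting (an assumption; Acronis's actual codec is not documented in
this thread):

import time, zlib

# ~33 MB of moderately compressible synthetic data (an assumed workload).
data = b"backup payload sample " * 1_500_000

start = time.perf_counter()
zlib.compress(data, 6)                  # gzip-like "normal" compression level
elapsed = time.perf_counter() - start

print(f"~{len(data) / 1e6 / elapsed:.0f} MB/s compression on one core")

If the number comes out in the same range as the backup's actual
10.29 MB/s, then running without compression (or at a lighter level)
should recover most of the gap.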


