March 30th 07, 09:15 PM, posted to comp.sys.ibm.pc.hardware.storage, comp.arch.storage
Arno Wagner
Raid0 or Raid5 for network to disk backup (Gigabit)?

In comp.sys.ibm.pc.hardware.storage markm75 wrote:
As a side note: much to my disappointment, I got bad results last night
backing up one of my servers over the network with Acronis to my current
backup server (the test server I was doing the HDD-to-HDD testing on, where
a 300 GB partition finished in 4 hours with Acronis running on the same
machine).


My results going across Gigabit Ethernet using Acronis, set to normal
compression (not high or max; I'm wondering whether increasing the
compression would speed things up):


Size of partition: 336 GB (334,064 MB), RAID 5, SATA 150
Time to complete: 9 hrs 1 min (541 minutes, or 32,460 seconds)
Compressed size at normal compression: 247 GB
Destination: single volume, SATA II (sequential writes 55 MB/s bypassing the
Windows cache, 38 MB/s not bypassing it). Yes, I don't know why the numbers
come out LOWER in SiSandra when not bypassing the cache, but they do.



Actual rate achieved: 10.29 MB/s
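
For reference, that rate falls straight out of the figures above:

# Effective throughput of the backup job, from the numbers quoted above
size_mb = 334064              # source partition size in MB
seconds = 9 * 3600 + 60       # 9 hrs 1 min = 32,460 s
print(round(size_mb / seconds, 2), "MB/s")   # -> 10.29 MB/s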


Using Qcheck from the source to the destination I get 450 Mbps to
666 Mbps (using 450 Mbps as the average, that's 56.25 MB/s).


So the maximum rate I could possibly expect would be about 56 MB/s, if the
writes on the destination could keep up with that rate.
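
Converting the Qcheck figures to the same units for comparison (a rough
sketch that ignores protocol overhead):

# Network ceiling implied by the Qcheck measurements
low_bps, high_bps = 450e6, 666e6        # measured throughput in bits/s
print(low_bps / 8 / 1e6, "MB/s")        # 56.25 MB/s at the low end
print(high_bps / 8 / 1e6, "MB/s")       # 83.25 MB/s at the high end
# Either way the wire can carry several times the 10.29 MB/s actually seen.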



Any thoughts on how to get this network backup rate up?


Any thoughts on running simultaneous jobs across the network if I enable
both Gigabit ports on the destination server? How would I do this, i.e.,
do I have to set up trunking, or just assign a second IP to the other port
and point the backup at that ip\e$ share? If I end up using Acronis, there
is no way to define a job that backs up each server sequentially; I'd have
to know when one job finished in order to start the next server's job on
each week's full backup. So the only way I figured around this was to do
two servers at once over dual gigabit?
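
Alternatively, here is roughly how the jobs could be scripted to run one
after another instead. This is only a sketch: the backup command line and
server/share names are placeholders, since I haven't checked the exact
Acronis command-line syntax.

import subprocess

SERVERS = ["server1", "server2", "server3"]     # placeholder names

for server in SERVERS:
    # Placeholder command -- substitute whatever the Acronis CLI (or a
    # wrapper script on each source server) actually expects.
    cmd = ["backup_tool.exe", "/create",
           f"/target:\\\\backupsrv\\e$\\{server}.tib"]
    print(f"Backing up {server} ...")
    result = subprocess.run(cmd)                # blocks until the job ends
    if result.returncode != 0:
        print(f"Backup of {server} failed (exit code {result.returncode})")
        break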


I have Intel PRO cards in a lot of the servers, but I don't see any way to
set jumbo frames on them either.


My switch is a D-Link DGS-1248T (gigabit, managed).


The controller card on the source server in this case is a 3ware Escalade
8506-4LP (PCI-X, SATA I), while the one on the destination is an Areca
ARC-1120 (PCI-X, 8-port SATA II). I'm assuming these are both very good
cards, but I don't know how they compare to the RAIDCore.


I'm still a little confused by the SATA vs. SCSI argument too. The basic
rule seems to be that SCSI is better when a lot of simultaneous hits are
going on, but why? I'm still unsure whether each drive on a SCSI chain gets
a divided share of, say, 320 MB/s, and likewise for SATA, whether each cable
shares the 3 Gb/s rate or each gets its own 3 Gb/s. If both interfaces give
every drive dedicated bandwidth, then what makes SCSI superior to SATA?
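
Working the per-drive numbers both ways, as a rough sketch that ignores
protocol overhead:

# Parallel SCSI (e.g. Ultra320) is a shared bus: all drives on the chain
# split 320 MB/s.  SATA is point-to-point: each port has its own link
# (150 MB/s for SATA I, ~300 MB/s for 3 Gb/s SATA II).
drives = 8
scsi_per_drive  = 320 / drives     # shared bus split 8 ways -> 40 MB/s each
sata1_per_drive = 150              # dedicated link per drive
sata2_per_drive = 300              # dedicated link per drive (3 Gb/s)
print(scsi_per_drive, sata1_per_drive, sata2_per_drive)
# SCSI's traditional edge under heavy simultaneous load comes more from
# command queueing and 10k/15k rpm drives than from raw bus bandwidth.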



Are you sure your bottleneck is not the compression? Retry
this without compression for a reference value.
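
A quick back-of-the-envelope check with the numbers you posted:

# Rate at which compressed data actually hit the destination disk
compressed_mb = 247 * 1024          # 247 GB written
seconds = 32460                     # 9 hrs 1 min
print(round(compressed_mb / seconds, 1), "MB/s")   # ~7.8 MB/s
# The destination volume was measured at 38-55 MB/s sequential, so the
# disk is nowhere near saturated; the limit looks to be on the sending
# side (compression or the read pipeline).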

Arno