#1
Raid0 or Raid5 for network to disk backup (Gigabit)?
markm75 wrote:
> Anyone have any thoughts on which is going to give me better write
> speeds? I know RAID0 should be much better, and if I combine it with
> RAID1 it's also redundant. But I'm assuming that when I back up my
> servers to this backup server across the gigabit network, my write
> speeds would max out at around 60 MB/s, wouldn't they? I think right
> now on RAID5 (SATA II) I'm getting a write speed of 57 MB/s (bypassing
> the Windows cache, using SiSandra to benchmark). If you don't bypass
> the Windows cache this becomes more like 38 MB/s. Any thoughts?

Two-disk RAID0 with the write cache turned on: better read/write performance all around, and twice the disk space. Run the backup machine on a UPS. So what if it (RAID0) fails every few years? It's a minor backup machine, and odds are it won't be the end of the world. If the backup is not "minor" (meaning totally critical), then go with the slower RAID5 (or something similar).

Bill
#2
markm75 wrote:
> I think right now on RAID5 (SATA II) I'm getting a write speed of
> 57 MB/s (bypassing the Windows cache, using SiSandra to benchmark).
> If you don't bypass the Windows cache this becomes more like 38 MB/s.

Aren't these numbers reversed? Anyway, good drives should do about 75 MB/second when the cache is bypassed, and not bypassing it should give significantly greater performance.
#3
In comp.sys.ibm.pc.hardware.storage Maurice Volaski wrote:
> Aren't these numbers reversed? Anyway, good drives should do about
> 75 MB/second when the cache is bypassed, and not bypassing it should
> give significantly greater performance.

With a reasonable buffer (not cache) implementation, yes; something seems wrong, or MS screwed up rather badly in implementing this. But the 75 MB/s figure only applies to the start of the disk. At the end it is typically somewhere in the 35-50 MB/s range, since the cylinders there contain fewer sectors.

Arno
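As a back-of-the-envelope illustration of why the start of the disk is faster: the platter spins at constant angular velocity while linear bit density stays roughly constant, so sequential throughput falls with track radius toward the inner cylinders. A minimal sketch (the radius ratio below is an illustrative assumption, not a measured drive parameter):

```python
# Constant angular velocity + roughly constant linear bit density means
# sequential throughput scales with track radius.
outer_rate = 75.0         # MB/s at the outermost tracks (start of disk)
radius_ratio = 0.5        # inner tracks at roughly half the outer radius (assumption)
inner_rate = outer_rate * radius_ratio

print(inner_rate)  # 37.5 MB/s, within the 35-50 MB/s range quoted above
```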
#4
On Mar 29, 5:59 am, Arno Wagner wrote:
> With a reasonable buffer (not cache) implementation, yes. But the
> 75 MB/s figure only applies to the start of the disk. At the end it is
> typically somewhere in the 35-50 MB/s range, since the cylinders there
> contain fewer sectors.

Apologies; yeah, when bypassing the cache I got an index of 57 MB/s.
#5
"markm75" wrote in message oups.com
> Apologies; yeah, when bypassing the cache I got an index of 57 MB/s.

No, really? Who would have thought that from your first post. Thanks for clearing that up. It all becomes much clearer now.
#6
On Mar 29, 11:29 am, "markm75" wrote:
> Aren't these numbers reversed?

If I use the option "Bypass Windows cache" (checked off), I do in fact get HIGHER values than when not bypassing the cache. I know this sounds reversed, but it is what happens.
#7
In comp.sys.ibm.pc.hardware.storage markm75 wrote:
> If I use the option "Bypass Windows cache" (checked off), I do in fact
> get HIGHER values than when not bypassing the cache. I know this
> sounds reversed, but it is what happens.

It is possible. It does, however, point to some serious problem in the write-buffer design.

Arno
#8
> If I use the option "Bypass Windows cache" (checked off), I do in fact
> get HIGHER values than when not bypassing the cache. I know this
> sounds reversed, but it is what happens.

It depends on the data access pattern; for some patterns bypassing the cache really is profitable. For instance, databases like MS SQL Server also use cache bypass.

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
http://www.storagecraft.com
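To get a feel for what the cache-bypass option changes, here is a small, hypothetical Python sketch timing plain buffered writes against O_SYNC writes on POSIX. On Windows the real cache bypass is FILE_FLAG_NO_BUFFERING with sector-aligned buffers, which is what the databases mentioned above use, so this only approximates the idea:

```python
import os
import tempfile
import time

def write_rate(path, extra_flags=0, size_mb=8, block=1024 * 1024):
    """Write size_mb megabytes in 1 MB blocks and return throughput in MB/s."""
    buf = b"\0" * block
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | extra_flags)
    t0 = time.perf_counter()
    try:
        for _ in range(size_mb):
            os.write(fd, buf)
        os.fsync(fd)  # make the buffered run pay for its cached data, too
    finally:
        os.close(fd)
    return size_mb / (time.perf_counter() - t0)

with tempfile.TemporaryDirectory() as d:
    buffered = write_rate(os.path.join(d, "buffered.bin"))
    # O_SYNC forces each write to reach stable storage before returning;
    # fall back to 0 on platforms where the flag is missing.
    synced = write_rate(os.path.join(d, "synced.bin"), getattr(os, "O_SYNC", 0))
    print(f"buffered: {buffered:.0f} MB/s, O_SYNC: {synced:.0f} MB/s")
```

Which run wins depends on the access pattern and the OS write path, which is exactly the point of the discussion above.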
#9
As a side note: much to my disappointment, I got bad results last night backing up one of my servers over the network with Acronis to my current backup server (the test server I was doing the disk-to-disk testing on, where a 300 GB partition completed in 4 hours with Acronis running on the same machine).

My results going across gigabit Ethernet using Acronis, set to normal compression (not high or max; I'm wondering if increasing the compression would speed things along):

Size of partition: 336 GB, or 334064 MB (RAID5, SATA 150)
Time to complete: 9 hrs 1 min (541 mins, or 32460 seconds)
Compressed size at normal: 247 GB
Destination: single volume, SATA II (sequential writes 55 MB/s bypassing the Windows cache, 38 MB/s not bypassing it; yes, I don't know why the SiSandra numbers are LOWER when not bypassing the cache, but they are)
Actual rate: 10.29 MB/s

Using qcheck going from the source to the destination I get 450 Mbps to 666 Mbps; using 450 as the average, that's 56.25 MB/s. So the maximum rate I could possibly expect would be about 56 MB/s, if the writes on the destination occurred at that rate.

Any thoughts on how to get this network backup rate up? Thoughts on running simultaneous jobs across the network if I enable both gigabit ports on the destination server? (How would I do this: do I have to do trunking, or just set another IP on the other port and direct the backup to that ip\e$?) That is, if I end up using Acronis, there is no way to set up a job that will back up each server sequentially; I'd have to know when one job stops to start the next server's backup on each week's full, so the only way I figured around this was to do two servers at once on dual gigabit.

I have Intel Pro cards in a lot of the servers, but I don't see any way to set jumbo frames either. My switch is a D-Link DGS-1248T (gigabit, managed). The controller card on the source server in this case is a 3ware Escalade 8506-4LP (PCI-X, SATA I), while the one on the destination is an ARC-1120 (PCI-X, 8-port SATA II). I'm assuming these are both very good cards, though I don't know how they compare to the RaidCore.

Still a little confused on the SATA vs SCSI argument, too. The basic rule should be that if a lot of simultaneous hits are going on, SCSI is better, but why? Still unsure whether each drive on a SCSI chain shares a divided bandwidth of, say, 320 MB/s. Same question for SATA: is each cable's share divided out of the 3 Gbps rate, or does each have its own 3 Gb/s? If both give dedicated bandwidth to any given drive, then what makes SCSI superior to SATA?
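The figures in the post above can be sanity-checked with a little arithmetic; everything below is taken straight from the post, nothing new is measured:

```python
# Effective backup throughput, from the quoted figures
partition_mb = 334064                   # 336 GB source partition
elapsed_s = 9 * 3600 + 1 * 60           # 9 hrs 1 min = 32460 s
actual_rate = partition_mb / elapsed_s  # matches the observed 10.29 MB/s

# Network ceiling from the low-end qcheck measurement
link_mbps = 450                         # megabits per second
link_mBps = link_mbps / 8               # 56.25 MB/s theoretical maximum

print(round(actual_rate, 2), link_mBps)
```

So the backup ran at roughly a fifth of the network ceiling, which suggests the wire was not the limiting factor.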
#10
In comp.sys.ibm.pc.hardware.storage markm75 wrote:
> Any thoughts on how to get this network backup rate up in value?

Are you sure your bottleneck is not the compression? Retry this without compression for a reference value.

Arno
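Arno's suspicion can be roughed out from numbers already in the thread: the destination only had to absorb the compressed stream, and that rate is well below even the slower (cached) benchmark of the destination disk, so the destination writes are unlikely to be the limit:

```python
# Rate at which compressed data actually reached the destination disk
compressed_gb = 247
elapsed_s = 32460                               # 9 hrs 1 min
dest_write = compressed_gb * 1024 / elapsed_s   # MB/s actually written

dest_capacity = 38  # MB/s, the destination's slower (cached) benchmark
# The destination disk was loafing at roughly 20% of its measured
# sequential rate, so the bottleneck is likely upstream: compression
# on the source, or the source reads themselves.
print(round(dest_write, 2), dest_write < dest_capacity)
```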