
Raid0 or Raid5 for network to disk backup (Gigabit)?



 
 
  #11  
Old March 31st 07, 03:40 AM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
markm75
external usenet poster
 
Posts: 222
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

On Mar 30, 4:15 pm, Arno Wagner wrote:
In comp.sys.ibm.pc.hardware.storage markm75 wrote:





As a side note: much to my disappointment, I got bad results last
night in my over-the-network Acronis backup of one of my servers to my
current backup server (the test server I was doing the HDD-to-HDD
testing on, where I was getting a 300GB partition done in 4 hrs with
Acronis on the same machine)..
My results going across gigabit ethernet using Acronis, set to normal
compression (not high or max; wondering if increased compression would
speed things along)...
Size of partition: 336GB or 334064MB (RAID5, SATA 150)
Time to complete: 9 hrs 1 min (541 mins or 32460 seconds)
Compressed size at normal: 247 GB
Destination: single volume, SATA II (seq. writes 55 MB/s bypassing the
Windows cache, 38 MB/s not bypassing it).. Yes, I don't know why the
numbers are LESS when testing with SiSandra without bypassing the
cache, but they are.
10.29 MB/sec actual rate
*Using qcheck going from the source to the destination I get 450 Mbps
to 666 Mbps (using 450 as the average = 56.25 MB/s).
So the max rate I could possibly expect would be 56 MB/s, if the
writes on the destination occurred at this rate.
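
A quick cross-check of those figures (a small Python sketch; it only
re-derives the rates quoted above, and the 1024-based MB conversion is
an assumption):

# Back-of-the-envelope check of the rates quoted above.
partition_mb = 334064              # partition size in MB, from the post
elapsed_s = 9 * 3600 + 1 * 60      # 9 hrs 1 min = 32460 seconds

print(partition_mb / elapsed_s)    # ~10.29 MB/s effective backup rate

# qcheck reported 450-666 Mbit/s between source and destination;
# taking the low end as the usable figure:
print(450 / 8)                     # 56.25 MB/s upper bound for the backup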
Any thoughts on how to get this network backup rate up?
Any thoughts on running simultaneous jobs across the network if I
enable both Gigabit ports on the destination server? (How would I do
this, i.e. do I have to do trunking, or just set another IP on the
other port and direct the backup to that ip\e$ ?) If I end up using
Acronis there is no way to do a job that will sequentially back up
each server; I'd have to know when one job stops in order to start the
next server's backup on each week's full.. so the only way I figured
around this was to do 2 servers at once over dual gigabit?
I have Intel Pro cards in a lot of the servers, but I don't see any
way to set jumbo frames either.
My switch is a D-Link DGS-1248T (gigabit, managed).
The controller card on the source server in this case is a 3ware
Escalade 8506-4LP PCI-X SATA I, while the one on the destination is an
ARC-1120 PCI-X 8-port SATA II.. I'm assuming these are both very good
cards.. I don't know how they compare to the RAIDCore though?
Still a little confused on the SATA vs SCSI argument too.. the basic
rule seems to be that if a lot of simultaneous hits are going on, SCSI
is better.. but why? Still unsure whether each drive on a SCSI chain
gets a divided share of, say, 320 MB/s.. and likewise for SATA,
whether each cable shares the 3 Gbps rate or each gets its own 3
Gbps.. if both give dedicated bandwidth to any given drive, then what
makes SCSI superior to SATA...


Are you sure your bottleneck is not the compression? Retry
this without compression for a reference value.

Arno


Well, I started this one around 4:30pm and it's 10:30 now.. been 6
hours, and it says 3 to go.. that would still be 9 hours or so. I
turned compression off, so we shall see.. not looking good though..
still nowhere near the bandwidth it should be using (did a SiSandra
test to compare.. SiSandra was also coming in around 61 MB/sec).

  #12  
Old March 31st 07, 04:53 AM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
Arno Wagner
external usenet poster
 
Posts: 2,796
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

In comp.sys.ibm.pc.hardware.storage markm75 wrote:
On Mar 30, 4:15 pm, Arno Wagner wrote:
In comp.sys.ibm.pc.hardware.storage markm75 wrote:

[...]
Are you sure your bottleneck is not the compression? Retry
this without compression for a reference value.

Arno


Well, I started this one around 4:30pm and it's 10:30 now.. been 6
hours, and it says 3 to go.. that would still be 9 hours or so. I
turned compression off, so we shall see.. not looking good though..
still nowhere near the bandwidth it should be using (did a SiSandra
test to compare.. SiSandra was also coming in around 61 MB/sec).


Hmm. Did you do the Sandra test over the network? If not, maybe you
have a 100Mb/sec link somewhere in there? Your observed 10.29MB/s
would perfectly fit such a link-speed, as it is very close to 100Mb/s
raw speed (8 * 10.29 = 82.3Mb/s, add a bit for ethernet overhead...).
Might be a bad cable, router port or network card. I have had this
type of problem several times with Gigabit Ethernet.

Arno
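
As a quick check of that arithmetic (Python one-liners only; the
protocol-overhead remark is Arno's point above, not a measured figure):

# Converting the observed backup rate back to a raw line rate:
print(10.29 * 8)       # ~82.3 Mbit/s of payload
# Add Ethernet/IP/TCP framing overhead and this sits close to the
# 100 Mbit/s line rate of a Fast Ethernet link, which is why a stray
# 100 Mbit hop is a plausible suspect.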
  #13  
Old April 1st 07, 03:27 PM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
markm75
external usenet poster
 
Posts: 222
Default Raid0 or Raid5 for network to disk backup (Gigabit)?


Hmm. Did you do the Sandra test over the network? If not, maybe you
have a 100Mb/sec link somewhere in there? Your observed 10.29MB/s
would perfectly fit such a link-speed, as it is very close to 100Mb/s
raw speed (8 * 10.29 = 82.3Mb/s, add a bit for ethernet overhead...).
Might be a bad cable, router port or network card. I have had this
type of problem several times with Gigabit Ethernet.

Arno


Yep.. it was done over the network and yielded 60 MB/s

btw.. that uncompressed backup took about 11 hours to complete
(again, the same size on the same drive locally was about 4 hours);
with normal compression it was about 9 over the network.. I'm trying
max compression now...

  #14  
Old April 2nd 07, 03:17 PM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
markm75
external usenet poster
 
Posts: 222
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

On Apr 1, 10:27 am, "markm75" wrote:
[...]
I'm trying max compression now...



Very very bad results with max compression, took like 12 or 13
hours...

This baffles me.. as I know that I can do the backup on the same drive
locally in 4 hours on that machine.. and I know I can do the same type
backup on the remote (destination) as well.. so going from D on
ServerA to E on ServerB should just be a limitation of the network,
which benches at 60 MB/s in all of my tests.

Again, I don't think jumbo frames would help.. but I can't even turn
them on, as the NIC on each end doesn't have this setting. Not sure
what else to test or fix here.


  #15  
Old April 2nd 07, 03:32 PM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
Arno Wagner
external usenet poster
 
Posts: 2,796
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

In comp.sys.ibm.pc.hardware.storage markm75 wrote:
[...]



Very very bad results with max compression, took like 12 or 13
hours...


This baffles me.. as I know that I can do the backup on the same drive
locally in 4 hours on that machine..


With maximum compression? Ok, then it is not a CPU issue.

and I know I can do the same type
backup on the remote (destination) as well.. so going from D on
ServerA to E on ServerB should just be a limitation of the network,
which benches at 60 MB/s in all of my tests.


There seems to be some problem with the network. One test you can try is
pushing, say, 10 GB or so of data through the network to the target
drive and see how long that takes. If that works with expected speed,
then there is some issue with the type of traffic your software
generates. Difficult to debug without sniffing the traffic.

One other thing you should do is to create a test setup that allows
you to test the speed in 5-10 minutes; otherwise this will literally
take forever to figure out.
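
A minimal sketch of such a timed-write test in Python (the UNC path
and the 1 GB size are placeholders, not details from this thread;
adjust both to the actual setup):

# Push a known amount of data to the target drive/share and compute MB/s.
import os
import time

TARGET = r"\\ServerB\e$\speedtest.bin"   # hypothetical destination path
SIZE_MB = 1024                           # 1 GB keeps the run to a few minutes
CHUNK = b"\0" * (1024 * 1024)            # write 1 MB at a time

start = time.time()
with open(TARGET, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())                 # force the data out before stopping the clock
elapsed = time.time() - start

print(f"{SIZE_MB} MB in {elapsed:.1f} s = {SIZE_MB / elapsed:.1f} MB/s")
os.remove(TARGET)                        # clean up the test file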

Again, I don't think jumbo frames would help.. but I can't even turn
them on, as the NIC on each end doesn't have this setting. Not sure
what else to test or fix here.


I agree that jumbo-frames are not the issue. They can increase
throughput by 10% or so, but your problem is an order of magnitude
bigger.

Here is an additional test: connect the two computers directly with
a short CAT5e cable and see whether things get faster then.

Arno
  #16  
Old April 2nd 07, 04:43 PM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
markm75
external usenet poster
 
Posts: 222
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

On Apr 2, 10:32 am, Arno Wagner wrote:
[...]
There seems to be some problem with the network. One test you can try is
pushing, say, 10 GB or so of data through the network to the target
drive and see how long that takes.
[...]

Arno


Just tried a 10GB file across the ethernet.. 4 min 15 seconds for
9.31GB.. this seems normal to me.


  #17  
Old April 2nd 07, 06:52 PM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
Bill Todd
external usenet poster
 
Posts: 162
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

markm75 wrote:

....

This baffles me.. as I know that I can do the backup on the same drive
locally in 4 hours on that machine.. and I know I can do the same type
backup on the remote (destination) as well.. so going from D on
ServerA to E on ServerB should just be a limitation of the network,
which benches at 60 MB/s in all of my tests.


While I have no specific solution to suggest, it is possible that the
problem is not network bandwidth but network latency, which after the
entire stack is taken into account can add up to hundreds of
microseconds per transaction.

If the storage interactions performed by the backup software (in
contrast to simple streaming file copies) are both small (say, a few KB
apiece) and 'chatty' (such that such a transaction occurs for every
modest-size storage transfer) this could significantly compromise
network throughput (since the per-transaction overhead could increase by
close to a couple of orders of magnitude compared to microsecond-level
local ones).

Another remote possibility is that for some reason transferring across
the network when using the backup software is suppressing write-back
caching at the destination, causing a missed disk revolution on up to
every access (though the worst case would limit throughput to less than
8 MB/sec if Windows is destaging data in its characteristic 64 KB
increments, and you are apparently doing somewhat better than that).
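
To put rough numbers on both of Bill's points, here is a small Python
sketch; the 0.5 ms per-transaction overhead, the 4 KB and 64 KB
transfer sizes, and the 7200 rpm spindle speed are illustrative
assumptions, not measurements from this thread:

# 1) Latency-bound throughput: each transfer pays a fixed per-transaction
#    round-trip cost on top of its time on the wire.
link_mb_s = 60.0        # measured network throughput from the thread
latency_s = 0.0005      # assumed 0.5 ms of per-transaction overhead

for xfer_kb in (4, 64):
    size_mb = xfer_kb / 1024.0
    wire_s = size_mb / link_mb_s                 # time spent on the wire
    effective = size_mb / (wire_s + latency_s)   # MB/s including the overhead
    print(f"{xfer_kb:3d} KB transfers: ~{effective:.1f} MB/s")
# ~7 MB/s for 4 KB transfers, ~40 MB/s for 64 KB transfers.

# 2) Missed-revolution bound: one 64 KB destage per disk revolution.
rpm = 7200
revs_per_s = rpm / 60.0                          # 120 revolutions per second
print(f"Missed-revolution bound: ~{revs_per_s * 64 / 1024:.1f} MB/s")
# ~7.5 MB/s, i.e. the "less than 8 MB/sec" worst case mentioned above.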

- bill
  #18  
Old April 2nd 07, 11:25 PM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
Arno Wagner
external usenet poster
 
Posts: 2,796
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

In comp.sys.ibm.pc.hardware.storage markm75 wrote:
[...]
Just tried a 10GB file across the ethernet.. 4 min 15 seconds for
9.31GB.. this seems normal to me.


That is about 36MB/s, far lower than the stated 60MB/s benchmark.
If the slowdown on a linear, streamed write is that big, maybe the
slow backup you experience is just due to the write strategy of
the backup software. Seems to me the fileserver OS may not be well
suited to its task...
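
(Checking that figure, assuming GB here means 1024 MB:)

# 9.31 GB transferred in 4 min 15 s:
print(9.31 * 1024 / (4 * 60 + 15))   # ~37.4 MB/s, in line with the ~36 MB/s above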

Arno
  #19  
Old April 3rd 07, 03:08 AM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
markm75
external usenet poster
 
Posts: 222
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

On Apr 2, 6:25 pm, Arno Wagner wrote:
[...]
That is about 36MB/s, far lower than the stated 60MB/s benchmark.
If the slowdown on a linear, streamed write is that big, maybe the
slow backup you experience is just due to the write strategy of
the backup software. Seems to me the fileserver OS may not be well
suited to its task...

Arno


Well, I know when I did the tests with BackupExec, at least locally,
they would start off high, around 2000 MB/min.. by hour 4 it was down
to 1000, and by the end down to 385 MB/min. With BackupExec, if I did
one big 300GB file locally it stayed around 2000; if there was a
mixture of files it went down gradually.
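
(For comparison with the MB/s figures elsewhere in the thread:)

# BackupExec rates quoted above, converted to MB/s:
for rate_mb_min in (2000, 1000, 385):
    print(f"{rate_mb_min} MB/min = {rate_mb_min / 60:.1f} MB/s")
# 2000 MB/min ~ 33 MB/s, 1000 ~ 17 MB/s, 385 ~ 6.4 MB/s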

Of course Acronis imaging worked fine locally, so it must be some
network transfer issue with the software.. I'll try ShadowProtect next
and see how it fares.

  #20  
Old April 4th 07, 02:59 AM posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage
markm75
external usenet poster
 
Posts: 222
Default Raid0 or Raid5 for network to disk backup (Gigabit)?

On Apr 2, 6:25 pm, Arno Wagner wrote:
[...]


I'm currently running an image using ShadowProtect Server; it's
getting 27 MB/s... stating 3-4 hours remaining after 10 minutes.. we
shall see how it does in the end..
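
(A quick check of that estimate, assuming it is imaging the same
~336 GB partition discussed earlier:)

# 336 GB at a sustained 27 MB/s:
print(336 * 1024 / 27 / 3600)   # ~3.5 hours, consistent with the 3-4 hour estimate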

 



