A computer components & hardware forum. HardwareBanter


Disk to Disk Backup recommendations requested



 
 
  #1  
Old September 2nd 04, 03:32 PM
Michelle
external usenet poster
 
Posts: n/a
Default Disk to Disk Backup recommendations requested

Greetings,

I currently have a Win2K file server which contains:

(2) 80GB 7200/EIDE RAID 1 Mirror (Total working data 80GB)

I'm thinking of building a new box for the file server which contains:

(1) GB Ethernet
(6) 74GB 10,000/SATA RAID 1 Mirror (Total working data 222GB)

I'm also building a target backup server for this new file server
which will contain 1 tape drive and the following to perform disk to
disk backup:

(1) GB Ethernet
(4) 250GB 7200/SATA RAID 1 Mirror (Total Backup Capacity 500GB)

I'm trying to build a file server/backup system that can handle multiple
streams of data to allow for the fastest backup. There are many small
files to be backed up. My personal preference is RAID 1. Since none of
my file server partitions is larger than 74GB, I'm not worried that a
backup will be larger than any of my backup server partitions.

My question is: how do I build these systems to get the fastest backup
possible? Should I install multiple gigabit NICs? Is there an array
controller that is specifically designed to handle multiple streams of
data? I'm aware that disk I/O is usually the cause of the bottleneck
during backup. Would moving both the server and the backup system to
RAID 5 significantly increase backup speeds? Can someone make some
recommendations in terms of hardware/software?
I'm aware that the OS can also add file system overhead, so I'm electing
to try out Veritas Backup Exec (although quite frankly I've always been
a fan of Xcopy).

My current tests show that I copy approximately 1.09GB of data in
approximately 4:18 with Backup Exec (verify on) over a Gigabit
connection to a 7200rpm IDE drive. Backup Exec says that is a transfer
rate of 435MB/second. Is there a way to increase this to, say,
1000MB/sec?
Thanks in advance for sharing any advice or experience you may have!

Michelle
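A quick sanity check of the figures above, assuming "4:18" means 4 minutes 18 seconds (the post leaves the units ambiguous) and 1 GB = 1024 MB:

```python
# Sanity check of the quoted transfer figures. Assumptions (not stated
# in the post): "4:18" is 4 min 18 s, and 1 GB = 1024 MB.
data_gb = 1.09
seconds = 4 * 60 + 18            # 258 s

data_mb = data_gb * 1024         # ~1116 MB
rate_mb_s = data_mb / seconds    # MB per second
rate_mb_min = rate_mb_s * 60     # MB per minute

print(f"{rate_mb_s:.1f} MB/s = {rate_mb_min:.0f} MB/min")
```

Under that reading the run works out to roughly 4.3 MB/s, i.e. about 260 MB/min — nowhere near 435 MB/second, which is why the figure draws questions later in the thread.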
  #2  
Old September 3rd 04, 08:01 AM
Odie Ferrous
external usenet poster
 
Posts: n/a
Default

Michelle wrote:


I won't comment on the above - it's a minefield and everyone will have
their own solution for you.


My current tests show that I copy approximately 1.09GB of data in
approximately 4:18 with Backup Exec (verify on) over a Gigabit
connection to a 7200rpm IDE drive. Backup Exec says that is a transfer
rate of 435MB/second. Is there a way to increase this to, say,
1000MB/sec?
Thanks in advance for sharing any advice or experience you may have!



Gigabit technology is something I have been looking at a lot recently,
as I need to install a powerful data stream capability across my
recovery systems.

Firstly, Gigabit is still pretty much a theoretical speed, the bandwidth
of which is curtailed mainly by cable design.

The following speeds are an indication of the maximum I was able to
achieve using short lengths of cable running between very powerful
systems. They are real-life results - not mere hypothetical maxima
taken from some optimistic marketing gunk.

Cat 5e is said to be fine for Gigabit, but you will typically get only
250Mb ( Mb = megabit; MB = megabyte; check your post - there are
differences between GB and Gb ) per second transfer.

With Cat 6 cable you are looking at around 400Mb per second.

Even Cat 7 cable that promises more than 600Mb per second is difficult
to find.
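Since bit/byte confusion is a running theme in this thread, here is the conversion for the figures above (these are the quoted numbers, not independent measurements):

```python
# Convert the quoted megabit-per-second figures to megabytes per second.
# 1 byte = 8 bits; values are the ones claimed in the post above.
quoted_mbit_s = {"Cat 5e": 250, "Cat 6": 400, "Cat 7": 600, "wire rate": 1000}

for cable, mbit_s in quoted_mbit_s.items():
    print(f"{cable}: {mbit_s} Mb/s = {mbit_s / 8:.2f} MB/s")
```

So even the claimed Cat 5e figure of 250 Mb/s is about 31 MB/s — already far more than a single 7200rpm IDE drive of the era could sustain.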

However, I have been in touch with a company that recently re-wired a
Rolls Royce factory in the UK with Gigabit and they have apparently got
very close to 1Gb per second throughput.

They are http://www.krone.co.uk/ and are also talking about 10Gb
networks using copper cable...

They are preparing a quotation for me, but I suspect the price is going
to be horrific.



Odie
--

RetroData
Data Recovery Experts
www.retrodata.co.uk
  #3  
Old September 3rd 04, 09:21 AM
J. Clarke
external usenet poster
 
Posts: n/a
Default

Michelle wrote:

Greetings,

I currently have a Win2K file server which contains:

(2) 80GB 7200/EIDE RAID 1 Mirror (Total working data 80GB)

I'm thinking of building a new box for the file server which contains:

(1) GB Ethernet
(6) 74GB 10,000/SATA RAID 1 Mirror (Total working data 222GB)

I'm also building a target backup server for this new file server
which will contain 1 tape drive and the following to perform disk to
disk backup:

(1) GB Ethernet
(4) 250GB 7200/SATA RAID 1 Mirror (Total Backup Capacity 500GB)

I'm trying to build a file server/backup system that can handle multiple
streams of data to allow for the fastest backup. There are many small
files to be backed up. My personal preference is RAID 1. Since none of
my file server partitions is larger than 74GB, I'm not worried that a
backup will be larger than any of my backup server partitions.

My question is: how do I build these systems to get the fastest backup
possible? Should I install multiple gigabit NICs? Is there an array
controller that is specifically designed to handle multiple streams of
data? I'm aware that disk I/O is usually the cause of the bottleneck
during backup. Would moving both the server and the backup system to
RAID 5 significantly increase backup speeds? Can someone make some
recommendations in terms of hardware/software?
I'm aware that the OS can also add file system overhead, so I'm electing
to try out Veritas Backup Exec (although quite frankly I've always been
a fan of Xcopy).

My current tests show that I copy approximately 1.09GB of data in
approximately 4:18 with Backup Exec (verify on) over a Gigabit
connection to a 7200rpm IDE drive. Backup Exec says that is a transfer
rate of 435MB/second. Is there a way to increase this to, say,
1000MB/sec?
Thanks in advance for sharing any advice or experience you may have!


Something's suspicious here. 1 GB (B=byte, b=bit) of data is 8 Gb plus
overhead, which takes at least 8 seconds to transfer over 1 Gb/sec
Ethernet. If you're doing it in 4 then you're already getting 2 Gb/sec
over a 1 Gb/sec channel.

Backup Exec can do some compression--2:1 is reasonable, that might explain
how you managed to do the transfer at the rate you report.
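The arithmetic behind that objection, worked out (decimal units, protocol overhead ignored, 2:1 compression as suggested above):

```python
# Lower bound on transfer time for 1 GB over gigabit Ethernet,
# ignoring protocol overhead. Decimal units kept for simplicity.
payload_bits = 1 * 8e9                 # 1 GB is roughly 8 gigabits
link_bps = 1e9                         # gigabit Ethernet wire rate

t_raw = payload_bits / link_bps              # seconds, uncompressed
t_2to1 = (payload_bits / 2) / link_bps       # seconds, with 2:1 compression

print(f"{t_raw:.1f} s raw, {t_2to1:.1f} s at 2:1 compression")
```

That gives 8 s uncompressed and 4 s at 2:1 — which is why a 4-second transfer of 1+ GB implies compression rather than a faster-than-wire-rate link.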

Multiple gigabit NICs aren't going to help you. The maximum throughput of
the PCI bus is a little over a billion bits per second. When used to
transfer data from the disk via the network, you're typically going to be
bottlenecked at about 400 Mb/sec by the PCI bus. To get more than that you
need to go to 64-bit 66 MHz PCI, which you will find only on server boards,
or PCI-X or PCI Express, which you will find on some workstation boards as
well as some server boards. Note that both the disk subsystem and the
network interface need to be attached via the fast bus for this to confer
any benefit.
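The bus arithmetic behind that claim can be sketched as peak width-times-clock figures (real-world throughput on a shared bus is considerably lower, which is why ~400 Mb/s is a realistic ceiling for 32-bit/33 MHz PCI):

```python
# Peak theoretical bandwidth of the bus variants mentioned above:
# bus width (bits) x clock (Hz). Shared-bus arbitration and the fact
# that disk and NIC traffic cross the bus twice cut this down sharply.
variants = {
    "PCI 32-bit / 33 MHz": (32, 33e6),
    "PCI 64-bit / 33 MHz": (64, 33e6),
    "PCI 64-bit / 66 MHz": (64, 66e6),
    "PCI-X 64-bit / 133 MHz": (64, 133e6),
}

for name, (width_bits, clock_hz) in variants.items():
    peak_gbps = width_bits * clock_hz / 1e9
    print(f"{name}: {peak_gbps:.2f} Gb/s peak")
```

Plain 32-bit/33 MHz PCI peaks at about 1.06 Gb/s — barely the wire rate of a single gigabit NIC before any disk traffic shares the bus — while 64-bit/66 MHz peaks at about 4.2 Gb/s.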

Once you've got a fast bus then you need to put together a disk subsystem
that can fill that pipe--that's difficult and expensive and what's going to
work is going to depend on the particular data to be transferred. Note
that RAID5 does well in reads but there's a performance hit on writes.
Your RAID1 idea has merit _if_ you have a controller smart enough to
schedule the reads over multiple drives to reduce seek time, but even so
writing to another RAID 1 you're going to be limited by the seek time on
writes.

If you're using a second machine as a backup device and need fast transfers,
you might want to consider going to a clustering technology.

Michelle


--
--John
Reply to jclarke at ae tee tee global dot net
(was jclarke at eye bee em dot net)
  #4  
Old September 3rd 04, 09:24 AM
J. Clarke
external usenet poster
 
Posts: n/a
Default

Odie Ferrous wrote:

Michelle wrote:


I won't comment on the above - it's a minefield and everyone will have
their own solution for you.


My current tests show that I copy approximately 1.09GB of data in
approximately 4:18 with Backup Exec (verify on) over a Gigabit
connection to a 7200rpm IDE drive. Backup Exec says that is a transfer
rate of 435MB/second. Is there a way to increase this to, say,
1000MB/sec?
Thanks in advance for sharing any advice or experience you may have!



Gigabit technology is something I have been looking at a lot recently,
as I need to install a powerful data stream capability across my
recovery systems.

Firstly, Gigabit is still pretty much a theoretical speed, the bandwidth
of which is curtailed mainly by cable design.


I'd like to see your source for that. The Ethernet experts don't seem to
think that the cable is an issue. However the PCI bus most assuredly is,
as is the disk subsystem. To fill a gigabit pipe you need a 64-bit 66 MHz
or better PCI bus and if you're going to sustain transfers you also need a
very heavy duty RAID system with a large number of fast drives.

Gigabit was designed to run on CAT5. CAT5E just nails down some of the
numbers that nearly all existing properly installed CAT5 already meets but
for which it was never tested.

The following speeds are an indication of the maximum I was able to
achieve using short lengths of cable running between very powerful
systems. They are real-life results - not mere hypothetical maxima
taken from some optimistic marketing gunk.

Cat 5e is said to be fine for Gigabit, but you will typically get only
250Mb ( Mb = megabit; MB = megabyte; check your post - there are
differences between GB and Gb ) per second transfer.

With Cat 6 cable you are looking at around 400Mb per second.

Even Cat 7 cable that promises more than 600Mb per second is difficult
to find.


Would you care to describe your test configuration and the nature of the
tests you performed? Did you first confirm that your cable did in fact
meet CAT5E channel standards? I suspect that the limitations you're seeing
are not due to the cable. Incidentally, there is no such thing as "CAT 7
cable". That's cable manufacturers' hype.

However, I have been in touch with a company that recently re-wired a
Rolls Royce factory in the UK with Gigabit and they have apparently got
very close to 1Gb per second throughput.

They are http://www.krone.co.uk/ and are also talking about 10Gb
networks using copper cable...


Actually, Krone has nothing much to do with that technology--all they do is
make cable and connectors and 10 gig is probably going to run on CAT5E.

They are preparing a quotation for me, but I suspect the price is going
to be horrific.



Odie


--
--John
Reply to jclarke at ae tee tee global dot net
(was jclarke at eye bee em dot net)
  #5  
Old September 3rd 04, 02:01 PM
Al Dykes
external usenet poster
 
Posts: n/a
Default

Odie Ferrous wrote:
Michelle wrote:


I won't comment on the above - it's a minefield and everyone will have
their own solution for you.


My current tests show that I copy approximately 1.09GB of data in
approximately 4:18 with Backup Exec (verify on) over a Gigabit
connection to a 7200rpm IDE drive. Backup Exec says that is a transfer
rate of 435MB/second. Is there a way to increase this to, say,
1000MB/sec?
Thanks in advance for sharing any advice or experience you may have!



Gigabit technology is something I have been looking at a lot recently,
as I need to install a powerful data stream capability across my
recovery systems.

Firstly, Gigabit is still pretty much a theoretical speed, the bandwidth
of which is curtailed mainly by cable design.

The following speeds are an indication of the maximum I was able to
achieve using short lengths of cable running between very powerful
systems. They are real-life results - not mere hypothetical maxima
taken from some optimistic marketing gunk.

Cat 5e is said to be fine for Gigabit, but you will typically get only
250Mb ( Mb = megabit; MB = megabyte; check your post - there are
differences between GB and Gb ) per second transfer.

With Cat 6 cable you are looking at around 400Mb per second.

Even Cat 7 cable that promises more than 600Mb per second is difficult
to find.

However, I have been in touch with a company that recently re-wired a
Rolls Royce factory in the UK with Gigabit and they have apparently got
very close to 1Gb per second throughput.

They are http://www.krone.co.uk/ and are also talking about 10Gb
networks using copper cable...

They are preparing a quotation for me, but I suspect the price is going
to be horrific.


Odie




Well, if you go faster than 6.63Gb/sec you've broken the speed record.
Actually, this is a long-distance benchmark, which has many issues that
a computer-room network doesn't have, but the article does give you an
idea of what kind of equipment is required at each end to fill the
pipe. I'm sure if you made some inquiries you could get a technical
description of the equipment involved. Big bucks.

http://www.internetnews.com/infra/article.php/3403161

--
Al Dykes
-----------
adykes at p a n i x . c o m
  #6  
Old September 3rd 04, 02:42 PM
Odie Ferrous
external usenet poster
 
Posts: n/a
Default

"J. Clarke" wrote:



J Clarke,


Oh, dear. You and reality are an oxymoron, aren't you?



I'd like to see your source for that. The Ethernet experts don't seem to
think that the cable is an issue. However the PCI bus most assuredly is,
as is the disk subsystem. To fill a gigabit pipe you need a 64-bit 66 MHz
or better PCI bus and if you're going to sustain transfers you also need a
very heavy duty RAID system with a large number of fast drives.

Gigabit was designed to run on CAT5. CAT5E just nails down some of the
numbers that nearly all existing properly installed CAT5 already meets but
for which it was never tested.

Would you care to describe your test configuration and the nature of the
tests you performed? Did you first confirm that your cable did in fact
meet CAT5E channel standards? I suspect that the limitations you're seeing
are not due to the cable. Incidentally, there is no such thing as "CAT 7
cable". That's cable manufacturers' hype.


I have better things to do than to reply to your every point - you have
displayed a shocking lack of even basic knowledge of the standards.
And, hey, hey - your statement about cat7 cable not existing?

Have a look here

http://www.ieee802.org/3/10GBT/publi...way_1_0503.pdf

Next time, try to work things out a little before jumping down someone's
throat.

You might want to try www.google.com and type in the box at the top
something like, "ieee cat6" and press return. You'd be amazed at what
comes up!

If you'd like more assistance, please don't hesitate to ask - you know
me, always willing to help!!



Odie
--

RetroData
Data Recovery Experts
www.retrodata.co.uk
  #7  
Old September 3rd 04, 02:55 PM
Peter
external usenet poster
 
Posts: n/a
Default

Can you please verify your statement:
"Backup Exec says that is a transfer rate of 435MB/second"
That does not seem right.
Maybe it should be 435MBytes/minute?
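Peter's suspicion can be checked against the numbers in the original post (assuming 1 GB = 1024 MB; the units are exactly what is in question):

```python
# If the quoted "435" figure were MB/second versus MB/minute, how long
# would 1.09 GB take? Values are from the thread; units are the open question.
data_mb = 1.09 * 1024            # ~1116 MB

t_if_mb_per_sec = data_mb / 435        # seconds, if the rate were MB/s
t_if_mb_per_min = data_mb / 435 * 60   # seconds, if the rate were MB/min

print(f"{t_if_mb_per_sec:.1f} s if MB/s, {t_if_mb_per_min:.0f} s if MB/min")
```

At 435 MB/s the whole job would finish in under 3 seconds, which is implausible; at 435 MB/min it takes about 154 s, the same order of magnitude as the reported 4:18 (the gap is consistent with the verify pass). MB/minute looks like the right reading.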

"Michelle" wrote:
Greetings,

I currently have a Win2K file server which contains:

(2) 80GB 7200/EIDE RAID 1 Mirror (Total working data 80GB)

I'm thinking of building a new box for the file server which contains:

(1) GB Ethernet
(6) 74GB 10,000/SATA RAID 1 Mirror (Total working data 222GB)

I'm also building a target backup server for this new file server
which will contain 1 tape drive and the following to perform disk to
disk backup:

(1) GB Ethernet
(4) 250GB 7200/SATA RAID 1 Mirror (Total Backup Capacity 500GB)

I'm trying to build a file server/backup system that can handle multiple
streams of data to allow for the fastest backup. There are many small
files to be backed up. My personal preference is RAID 1. Since none of
my file server partitions is larger than 74GB, I'm not worried that a
backup will be larger than any of my backup server partitions.

My question is: how do I build these systems to get the fastest backup
possible? Should I install multiple gigabit NICs? Is there an array
controller that is specifically designed to handle multiple streams of
data? I'm aware that disk I/O is usually the cause of the bottleneck
during backup. Would moving both the server and the backup system to
RAID 5 significantly increase backup speeds? Can someone make some
recommendations in terms of hardware/software?
I'm aware that the OS can also add file system overhead, so I'm electing
to try out Veritas Backup Exec (although quite frankly I've always been
a fan of Xcopy).

My current tests show that I copy approximately 1.09GB of data in
approximately 4:18 with Backup Exec (verify on) over a Gigabit
connection to a 7200rpm IDE drive. Backup Exec says that is a transfer
rate of 435MB/second. Is there a way to increase this to, say,
1000MB/sec?
Thanks in advance for sharing any advice or experience you may have!

Michelle



  #8  
Old September 3rd 04, 03:54 PM
Eric Gisin
external usenet poster
 
Posts: n/a
Default

"Odie Ferrous" wrote:
"J. Clarke" wrote:

Gigabit was designed to run on CAT5. CAT5E just nails down some of the
numbers that nearly all existing properly installed CAT5 already meets but
for which it was never tested.

Correct.

Would you care to describe your test configuration and the nature of the
tests you performed? Did you first confirm that your cable did in fact
meet CAT5E channel standards? I suspect that the limitations you're

seeing
are not due to the cable. Incidentally, there is no such thing as "CAT 7
cable". That's cable manufacturers' hype.


What is the point of CAT7 if it is not backward compatible and more expensive
than fiber?

I have better things to do than to reply to your every point - you have
displayed a shocking lack of even basic knowledge of the standards.
And, hey, hey - your statement about cat7 cable not existing?

Another idiotic Odie troll. There is no CAT7 standard, so cables do not exist.

Have a look here

http://www.ieee802.org/3/10GBT/publi...way_1_0503.pdf

Next time, try to work things out a little before jumping down someone's
throat.

You might want to try www.google.com and type in the box at the top
something like, "ieee cat6" and press return. You'd be amazed at what
comes up!

Nobody mentioned CAT6, did they?

  #9  
Old September 3rd 04, 11:40 PM
Folkert Rienstra
external usenet poster
 
Posts: n/a
Default

"J. Clarke" wrote in message
Odie Ferrous wrote:
Michelle wrote:

I won't comment on the above - it's a minefield and everyone will have
their own solution for you.

My current tests show that I copy approximately 1.09GB of data in
approximately 4:18 with Backup Exec (verify on) over a Gigabit
connection to a 7200rpm IDE drive. Backup Exec says that is a transfer
rate of 435MB/second. Is there a way to increase this to, say,
1000MB/sec?
Thanks in advance for sharing any advice or experience you may have!


Gigabit technology is something I have been looking at a lot recently,
as I need to install a powerful data stream capability across my
recovery systems.

Firstly, Gigabit is still pretty much a theoretical speed, the bandwidth
of which is curtailed mainly by cable design.


I'd like to see your source for that. The Ethernet experts don't seem to
think that the cable is an issue. However the PCI bus most assuredly is,
as is the disk subsystem.


To fill a gigabit pipe you need a 64-bit 66 MHz or better PCI bus


Huh? 64-bit or 66-MHz should be fine.
For benchmarking, or when one of the two (network, disk subsystem) is
not on the PCI bus, even a standard 32-bit/33MHz bus should suffice.

and if you're going to sustain transfers you also need a very
heavy duty RAID system with a large number of fast drives.

Gigabit was designed to run on CAT5. CAT5E just nails down some of the
numbers that nearly all existing properly installed CAT5 already meets
but for which it was never tested.

The following speeds are an indication of the maximum I was able to
achieve using short lengths of cable running between very powerful
systems. They are real-life results - not mere hypothetical maxima
taken from some optimistic marketing gunk.

Cat 5e is said to be fine for Gigabit, but you will typically get only
250Mb ( Mb = megabit; MB = megabyte; check your post - there are
differences between GB and Gb ) per second transfer.

With Cat 6 cable you are looking at around 400Mb per second.

Even Cat 7 cable that promises more than 600Mb per second is difficult
to find.


Would you care to describe your test configuration and the nature of the
tests you performed? Did you first confirm that your cable did in fact
meet CAT5E channel standards? I suspect that the limitations you're seeing
are not due to the cable. Incidentally, there is no such thing as "CAT 7
cable". That's cable manufacturers' hype.

However, I have been in touch with a company that recently re-wired a
Rolls Royce factory in the UK with Gigabit


and they have apparently got very close to 1Gb per second throughput.


Which of course isn't even possible.


They are http://www.krone.co.uk/ and are also talking about 10Gb
networks using copper cable...


Actually, Krone has nothing much to do with that technology--all they do is
make cable and connectors and 10 gig is probably going to run on CAT5E.

They are preparing a quotation for me, but I suspect the price is going
to be horrific.



Odie

  #10  
Old September 4th 04, 03:53 AM
J. Clarke
external usenet poster
 
Posts: n/a
Default

Odie Ferrous wrote:

"J. Clarke" wrote:



J Clarke,


Oh, dear. You and reality are an oxymoron, aren't you?


I'm sorry, I fail to see how the letter "J." preceding the noun "Clarke"
constitutes a contradiction in terms. Or perhaps you are laboring under
the misconception that an "oxymoron" is a creature of some sort.

I'd like to see your source for that. The Ethernet experts don't seem to
think that the cable is an issue. However the PCI bus most assuredly is,
as is the disk subsystem. To fill a gigabit pipe you need a 64-bit 66
MHz or better PCI bus and if you're going to sustain transfers you also
need a very heavy duty RAID system with a large number of fast drives.

Gigabit was designed to run on CAT5. CAT5E just nails down some of the
numbers that nearly all existing properly installed CAT5 already meets
but for which it was never tested.

Would you care to describe your test configuration and the nature of the
tests you performed? Did you first confirm that your cable did in fact
meet CAT5E channel standards? I suspect that the limitations you're
seeing
are not due to the cable. Incidentally, there is no such thing as "CAT 7
cable". That's cable manufacturers' hype.


I have better things to do than to reply to your every point


You are the one who made assertions about the performance of various cables
and claimed to have test results to back up those assertions. But when
pressed to present enough details for someone else to be able to decide
whether your test methodology was valid instead of doing so you start
hurling insults. That would lead the unbiased observer to believe that you
did not in fact have any such test results.

- you have
displayed a shocking lack of even basic knowledge of the standards.


In what manner?

And, hey, hey - your statement about cat7 cable not existing?

Have a look here

http://www.ieee802.org/3/10GBT/publi...way_1_0503.pdf


A year old working group paper discussing potential solutions. The cable
standards are set by EIA/TIA, not IEEE, and EIA/TIA has not released a
standard for category 7 cable. That paper is not a standard of any kind,
it's discussion of what _might_ go into a standard.

Yes, cable manufacturers sell cable that they call "Category 7". They sold
a lot of "Category 6" before a standard was released, and they ended up
eating a lot of it when the standard came out and the cable wasn't
compliant.

Next time, try to work things out a little before jumping down someone's
throat.


Go over to comp.dcom.lans.ethernet and make the same claims you have
made here and see what happens.

You might want to try www.google.com and type in the box at the top
something like, "ieee cat6" and press return. You'd be amazed at what
comes up!


Nothing that the cable manufacturers say will amaze me. There is no such
thing as "IEEE CAT6"--the standard is set by EIA/TIA and is properly
"EIA/TIA Category 6". The Ethernet spec is a free download from the IEEE
site. If you download it and search it for "Cat 6" or any variation
thereof you will find that such cabling is not mentioned in the standard.
If you go over to the Ethernet newsgroup and ask the people who wrote the
spec what cable gigabit was designed to run on they'll tell you more or
less what I told you. The Ethernet spec defines certain electrical
properties that the cable must possess, it does not define category
numbers.

If you'd like more assistance, please don't hesitate to ask - you know
me, always willing to help!!


Then post your test configuration. Assuming of course that you actually did
conduct the tests that you claim to have conducted.


--
--John
Reply to jclarke at ae tee tee global dot net
(was jclarke at eye bee em dot net)
 








Copyright ©2004-2024 HardwareBanter.
The comments are property of their posters.