A computer components & hardware forum. HardwareBanter


Backup performance is not what we expected.



 
 
  #1  
Old June 4th 04, 07:39 PM
Dennis Herrick
Default Backup performance is not what we expected.

Hi Folks:

Sorry for the long detailed story.

I haven't contacted the vendors yet; that's the next step. But my
backup performance isn't quite what I expected, and I'm interested in
hearing what other folks are seeing with backup. Hardware, software,
configuration and strategy are set for this project; it's basically
what fit into some rather tight price points. Things like high-end
Fibre Channel arrays were out of the question from the start. I have
to make do with what I have, which really isn't too bad. Here is the
configuration.

HP DL580 G2 server with dual 2.8 GHz Xeon processors and 2 GB of memory.
HP MSL6060 tape library with 2 LTO-2 tape drives.
HP MSA1000 storage controller with 28 146.8 GB 10,000 RPM SCSI drives.
There are 3 1-terabyte arrays of 9 drives each; the final drive is a
hot spare. Each array makes use of all 4 SCSI channels, with a
maximum of 3 drives on each channel (usually only 2).
The SCSI tape library and SCSI disk array are connected over a 2.0 Gbit
SAN via a switch.

The operating system is Windows 2000 Advanced Server at Service Pack 3.
Microsoft SQL Server 2000 is also running here, servicing a local
application.
The backup software is CommVault Galaxy 5.0 at Service Pack 2.
This system is the MediaAgent machine in the CommCell.
Backups are either local or over the SAN; no backups run over the network.
This system is the only backup client machine.

Because Microsoft SQL Server 2000 is already running on the main server,
we had to add a second server to run the CommVault Galaxy backup database.
Its configuration is as follows:

HP DL360 server with a single 2.4 GHz Xeon processor and 1 GB of memory.
This system is the Galaxy CommCell controller; it provides the backup
user interface. The network connection between the two servers is switched
Ethernet at 100 Mbps.
The operating system is Windows 2000 Server.

I tried to back up data from one of the arrays to a single tape drive
and got 80 gigabytes per hour. I tried 2 backups, 2 arrays to 2 tapes,
and got a combined total of 100 gigabytes per hour: one backup ran at
60 gigabytes per hour and the other ran at 40 gigabytes per hour. This
seemed low, so I started to try to figure out where the bottleneck was.
After all, each tape drive has a maximum throughput of 30-35 megabytes
per second, or 108-126 gigabytes per hour, and that doesn't even take
compression into consideration. The MSA1000 is rated at a maximum of 200
megabytes per second, so the tape drives should not be starved for data.
I loaded Passmark's PerformanceTest 5.0 (www.passmark.com) and used the
Advanced Disk Test to see what type of throughput I could expect from the
array. I played around with the number of simultaneous clients running
and the transfer block size, and managed to get 173 megabytes per second
transferring from one array to another. I think there were nine clients,
data was "raw mode" and the block size was something midrange. Next I
wanted to see what performance I could get on a tape-to-tape copy. I
haven't done this yet and won't get a chance this week. All I can say
is the Galaxy AUX copies, which are tape to tape, seem to run quite fast.
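[For what it's worth, the unit arithmetic above checks out. A quick sketch of the conversion, not from the original post:]

```python
# Sanity check of the tape-throughput figures above
# (decimal units: 1 GB = 1000 MB, matching the numbers in the post).
def mb_per_s_to_gb_per_h(mb_per_s):
    return mb_per_s * 3600 / 1000

print(mb_per_s_to_gb_per_h(30))  # 108.0 GB/hour
print(mb_per_s_to_gb_per_h(35))  # 126.0 GB/hour

# Two LTO-2 drives at full native speed would need 60-70 MB/s sustained,
# well within the MSA1000's rated 200 MB/s.
```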

To try to speed things up, I bought CommVault's "Image iDataAgent". It does
bit-level backups for speed instead of file-level backups. There was a
12% performance improvement; however, the driver conflicts with another
software package we're running. My Galaxy backups seemed to be eating up
tapes too fast, so I purchased CommVault's "Direct to Disk" option and created
a magnetic library. Now I back up to disk first and run an AUX copy to tape
offline. This cut way down on the tape usage. Unfortunately, backup to
disk is only giving me 60 gigabytes per hour; I was hoping for double that.
When I roll this environment out, there will be three configurations. One
setup will need to back up 1 terabyte of data, the second setup will need
to back up 2 terabytes of data, and the final setup will need to back up 5
terabytes of data. The first setup will be no problem, and the second setup,
described above, can be managed. However, the third setup will be trouble,
even with a 4-processor server, 4 gigabytes of memory, 2 MSA1000s and a tape
library with 4 LTO-2 drives. I really need that 120 GB/hour throughput
at a minimum. These servers will also be running 7 x 24 production.
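[A rough backup-window estimate, my own sketch using the figures above, shows why the 5 TB setup is the worry:]

```python
# Hours for a full backup at a given sustained rate
# (decimal units: 1 TB = 1000 GB).
def backup_window_hours(data_tb, gb_per_hour):
    return data_tb * 1000 / gb_per_hour

print(round(backup_window_hours(5, 60), 1))   # 83.3 h at the observed 60 GB/h
print(round(backup_window_hours(5, 120), 1))  # 41.7 h even at 120 GB/h
```

[Even at the 120 GB/hour minimum, a full pass over 5 TB takes close to two days, which is why throughput matters so much on a 7 x 24 system.]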

Things I've heard recently are to add more memory to the main server,
because SQL Server is a memory hog, and that the larger the SCSI drives
in the array, the slower the overall performance. Should we have used 72
GB drives instead? That 173 megabyte-per-second array performance test
seems just about right.

I'm hoping for a vendor neutral discussion here before calling the parties
back in to look at the numbers.

Thanks for your time,

Dennis Herrick

  #2  
Old June 6th 04, 12:00 AM
Charles Morrall
Default


"Dennis Herrick" wrote in message
om...
Hi Folks:

HP DL580 G2 server with dual 2.8 GHz Xeon processors and 2 GB of memory.
HP MSL6060 tape library with 2 LTO-2 tape drives.
HP MSA1000 storage controller with 28 146.8 GB 10,000 RPM SCSI drives.
There are 3 1-terabyte arrays of 9 drives each; the final drive is a
hot spare. Each array makes use of all 4 SCSI channels, with a
maximum of 3 drives on each channel (usually only 2).
The SCSI tape library and SCSI disk array are connected over a 2.0 Gbit
SAN via a switch.

The operating system is Windows 2000 Advanced Server at Service Pack 3.
Microsoft SQL Server 2000 is also running here, servicing a local
application.
The backup software is CommVault Galaxy 5.0 at Service Pack 2.
This system is the MediaAgent machine in the CommCell.
Backups are either local or over the SAN; no backups run over the network.
This system is the only backup client machine.


I'm not too sure it's such a good idea to mix a backup application and SQL
Server.


Because Microsoft SQL Server 2000 is already running on the main server,
we had to add a second server to run the CommVault Galaxy backup database.
Its configuration is as follows:

HP DL360 server with a single 2.4 GHz Xeon processor and 1 GB of memory.
This system is the Galaxy CommCell controller; it provides the backup
user interface. The network connection between the two servers is switched
Ethernet at 100 Mbps.
The operating system is Windows 2000 Server.

Or is this server the actual backup server? I'm not versed in CommVault
Galaxy.


I tried to back up data from one of the arrays to a single tape drive
and got 80 gigabytes per hour. I tried 2 backups, 2 arrays to 2 tapes,
and got a combined total of 100 gigabytes per hour: one backup ran at
60 gigabytes per hour and the other ran at 40 gigabytes per hour. This
seemed low, so I started to try to figure out where the bottleneck was.
After all, each tape drive has a maximum throughput of 30-35 megabytes
per second, or 108-126 gigabytes per hour, and that doesn't even take
compression into consideration.


The tape drive may well run at that rate, provided you feed it data at
that rate. That is obviously not happening here.


The MSA1000 is rated at a maximum of 200 megabytes per second, so the
tape drives should not be starved for data.


The Fibre Channel interface is rated at 200 MB/s. The actual performance
might be nowhere near that, although in your particular case, with 28
146 GB 10k RPM drives, you should be able to push it.

I loaded Passmark's PerformanceTest 5.0, www.passmark.com, and used the
Advanced Disk Test to see what type of throughput I can expect from the
array. I played around with the number of simultaneous clients running
and the transfer block size and managed to get 173 Megabytes per second


And indeed you are getting close.
The only thing this proves is that the MSA1000 as such can deliver close
to its rated maximum under ideal conditions with regard to I/O size,
read/write ratio, etc. However, you have to adapt your configuration to the
application workload profile, not the other way around. In other words, just
because you can push an array to the performance level you'd like in a
benchmark program, there is no way you can take a given workload profile
(SQL Server, for example) and expect it to perform just as well.

transferring from one array to another. I think there were nine clients,
data was "raw mode" and the block size was something midrange. Next I
wanted to see what performance I could get on a tape-to-tape copy. I
haven't done this yet and won't get a chance this week. All I can say
is the Galaxy AUX copies, which are tape to tape, seem to run quite fast.

To try to speed things up, I bought CommVault's "Image iDataAgent". It does
bit-level backups for speed instead of file-level backups. There was a
12% performance improvement; however, the driver conflicts with another


The file system is slowing things down, it would seem.
What are you actually backing up? I'm led to believe it's SQL Server
databases. Are you using a module to back up the databases online, or are
you dumping them to disk first from SQL and backing up the dumps?
Either way, you should get fairly good performance, but it can vary with
the sizes of the databases.

If it's regular files, this can be an absolute performance killer. Millions
and millions of small files (20-100 kB) will drag your performance down and
become the bottleneck. You say you're getting 60-80 GB/hour, which
translates to roughly 17-22 MB/s. This is the level you can expect when it
comes to file backup in the Windows world, in my experience.
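[The small-file penalty described here can be modeled roughly: a fixed per-file cost (open, attribute reads, catalog update) dominates once files are tiny. The numbers below are hypothetical, chosen only to illustrate the shape of the effect:]

```python
# Effective throughput when each file carries a fixed per-file overhead.
def effective_mb_per_s(avg_file_kb, per_file_overhead_ms, stream_mb_per_s):
    size_mb = avg_file_kb / 1024.0
    transfer_s = size_mb / stream_mb_per_s           # time actually moving data
    total_s = transfer_s + per_file_overhead_ms / 1000.0
    return size_mb / total_s

# 50 kB average files with just 2 ms of per-file overhead on a 100 MB/s stream:
print(round(effective_mb_per_s(50, 2, 100), 1))  # 19.6 MB/s
```

[That lands right in the 17-22 MB/s range observed, even though the raw stream could sustain 100 MB/s.]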

software package we're running. My Galaxy backups seemed to be eating up
tapes too fast, so I purchased CommVault's "Direct to Disk" option and
created a magnetic library. Now I back up to disk first and run an AUX
copy to tape offline. This cut way down on the tape usage. Unfortunately,
backup to disk is only giving me 60 gigabytes per hour; I was hoping for
double that.

Backup to disk helps; 60 GB/h isn't too hot, but it all depends on what
you are backing up.

When I roll this environment out, there will be three configurations. One
setup will need to back up 1 terabyte of data, the second setup will need
to back up 2 terabytes of data, and the final setup will need to back up 5
terabytes of data. The first setup will be no problem, and the second setup,
described above, can be managed. However, the third setup will be trouble,
even with a 4-processor server, 4 gigabytes of memory, 2 MSA1000s and a
tape library with 4 LTO-2 drives. I really need that 120 GB/hour throughput
at a minimum. These servers will also be running 7 x 24 production.


You should look into third-mirror break-off. This is what hardware vendors
love to sell you for big dollars; they call it snapshots, snapclones,
Business Continuance Volumes, etc. However, you can do the same with Veritas
Volume Manager, which will be considerably cheaper and can be used with the
existing MSA1000 (or any other array that fits into the SAN).
Provided you can allocate enough extra disk space for at least one more
copy, you can back up the entire dataset in seconds, and restore it in
seconds.

Things I've heard recently are to add more memory to the main server,
because SQL Server is a memory hog, and that the larger the SCSI drives
in the array, the slower the overall performance. Should we have used 72
GB drives instead? That 173 megabyte-per-second array performance test
seems just about right.

Doubling the number of spindles (disks) will indeed increase performance,
since you need twice the number of 72 GB drives to get the same volume. But
comparing one 146 GB drive to one 72 GB drive, all other things being equal,
the 146 GB drive is faster.
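[For a sense of the spindle-count difference, here is my own arithmetic, assuming roughly 1 TB usable per array and the common 72.8 GB drive size of the era, and ignoring RAID overhead:]

```python
import math

# Drives needed to reach a given usable capacity (RAID overhead ignored).
def spindles_needed(volume_gb, drive_gb):
    return math.ceil(volume_gb / drive_gb)

print(spindles_needed(1000, 146.8))  # 7 drives at 146.8 GB
print(spindles_needed(1000, 72.8))   # 14 drives at 72.8 GB
```

[Twice the spindles means roughly twice the aggregate platter bandwidth for the same capacity, which is the trade-off described above.]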

I'm hoping for a vendor neutral discussion here before calling the parties
back in to look at the numbers.


All in all, you're getting about what you can expect for traditional backup
to tape in Windows land. I doubt the vendors will be able to tweak it much.


Thanks for your time,

Dennis Herrick


/charles


 



