#1
Some general questions about backup systems, mainly speed of restore
We have an LTO 100GB/200GB drive backing up a variety of data spread across ~5 servers. We also back up a few local directories on development PCs "just in case". We're currently running an ancient copy of ArcServeIT 6.61 on NT4, for which the support contract ran out ages ago :-) The ArcServe host is a Dell PowerEdge 2400 server with a fast SCSI HDD and wide SCSI for the LTO (don't think it's LVD).

The majority of the data is stored on our main 70GB Back Office server - this is the mission-critical stuff. We've got the tape drive on the wrong server at the moment, so it's pulling the data across the network, but that's easy enough to fix. However, it's probably time to think about upgrading the software, and so, four years later:

o Is ArcServe still the way to go, or are Veritas BackupExec & ArcServe still playing the leapfrog game?
o Is there anything else worth considering?
o What about backwards compatibility for reading tapes backed up with ArcServe?
o Licensing seems to have changed drastically! Our existing copy of ArcServeIT can back up any network folder, but I get the impression (bit out of touch) that you now have to buy a) a central copy of ArcServe and b) remote agent licences for each of the other servers you plan to back up - right?

The key trigger for thinking about upgrading is that we've got a low-grade development server packed full of semi-important files (MSDN library, GHOST images, "just in case" copies, that kind of thing) which we back up once a month onto a couple of LTO tapes. It's based on an equally low-grade 800MHz PII IDE Windows 2000 Server with two 256GB IDE RAID drives. This is also our disaster recovery server in case the main Back Office server goes belly-up. Okay, so it'll run like a dog with three legs, but at least we'd be able to get the Exchange, SQL & file share backup going again quicker. Or so we thought!!!

The upgrade from a single 256GB drive to dual 256GB drives using a cheap-n-cheerful SATA IDE RAID-0 controller required a complete backup of the existing drive and a restore from tape.
The backup went fine - it trundled along at an acceptable speed and mostly finished overnight. Insert the IDE RAID card, configure the RAID array (bang, data gone) and tell ArcServe to restore the backup. WHAM - NOT!! The restore trickled along at a pathetic 6MB/min!! It had done about 2GB in four hours... I calculated it was going to take nearly two weeks to restore about 200GB. Hmm, not ideal!!

Tried a test restore to the host ArcServe server and that tears along at high speed. Thought it might be the performance of the development server, so tried a test restore across the network to another server of similar spec to the ArcServe host, and that too restored at a very low rate.

Did a few searches on "copying lots of small files" and discovered a CA support article that says "it happens with lots of small files". More background reading takes me into the realm of the MFT and b-tree efficiency when creating lots and lots of very small files one after the other. I suspect this might be where the bottleneck is - even moving the restored files from the ArcServe server to the development server is slow when it's copying lots of small files, but not bad when it gets its teeth into a 650MB ISO image. This is using TakeCommand's MOVE command.

So the question is: will this be significantly better if we upgrade ArcServe? Is it an ArcServe problem or something more fundamental with NTFS/MFT/slow disk systems etc.? Thanks in advance for any advice. Regards, Rob.
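[Ed: the per-file metadata overhead Rob suspects is easy to see for yourself. This is a generic illustration, not anything from the thread - the file counts and sizes are made up, and absolute numbers will vary wildly by disk and filesystem; the point is the ratio between writing many small files and one large file of the same total size.]

```python
import os
import tempfile
import time

def write_files(root, count, size):
    """Write `count` files of `size` bytes each into `root`; return elapsed seconds."""
    payload = b"x" * size
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(root, f"f{i:05d}.bin"), "wb") as fh:
            fh.write(payload)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Same total bytes both times: 1000 x 4KB vs 1 x 4MB.
    with tempfile.TemporaryDirectory() as d:
        small = write_files(d, 1000, 4 * 1024)
    with tempfile.TemporaryDirectory() as d:
        large = write_files(d, 1, 4 * 1024 * 1024)
    print(f"1000 small files: {small:.3f}s, one large file: {large:.3f}s")
```

On most systems the small-file run is dramatically slower, because each file costs a directory/MFT update on top of the data write - which is exactly the pattern a restore of "lots of small files" hits.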
#2
> a pathetic 6MB/min!! It had done about 2GB in four hours... I calculated it
> was going to take nearly two weeks to restore about 200GB. Hmm, not ideal!!

PS. As we can't afford to keep our only backup system busy restoring for two weeks, we're going to "borrow" an Adaptec SCSI card from another PC, install it in the development server, stick a copy of ArcServeIT on it and restore direct to the server, to see if that helps. Cheers, Rob.
#3
> ArcServe? Is it an ArcServe problem or something more fundamental with
> NTFS/MFT/slow disk systems etc?

NTFS. It is just plain very slow in the "lots of tiny files" scenario. FAT/FAT32 are noticeably faster.

Consider switching off the atime updates on NTFS via a registry parameter (nearly no Win32 software uses atimes anyway) - it's described on Microsoft's site; search for "last access time NTFS" or similar.

Consider using a FAT32 partition to keep such data. Note that FAT32 brings a) security issues - no more ACLs on files and directories, only on SMB shares, b) a 4GB file size limit, and c) worse fault tolerance - NTFS is nearly absolute in this, FAT is not so good but is by far better than non-journaling UNIX FSs (ext2 or FFS).

You can also consider using UNIX there, but this would require a survey of UNIX filesystems - I expect them to differ a lot in terms of speed, fault tolerance and bugginess. -- Maxim Shatskih, Windows DDK MVP StorageCraft Corporation http://www.storagecraft.com
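[Ed: the registry parameter Maxim refers to is the NtfsDisableLastAccessUpdate value documented by Microsoft. A minimal sketch, assuming `reg.exe` is available (it ships with XP and later; on Windows 2000 it came with the Support Tools, or you can set the same value by hand in regedit):]

```shell
:: Disable NTFS last-access-time updates; takes effect after a reboot.
:: Value name and location per Microsoft's NTFS performance documentation.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" ^
    /v NtfsDisableLastAccessUpdate /t REG_DWORD /d 1 /f
```

This saves one metadata write per file touched, which matters most in exactly the many-small-files case being discussed.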
#4
"Rob Nicholson" wrote in message
... a pathetic 6MB/m!! It had done about 2GB in four hours... I calculated it was going to take nearly two weeks to restore about 200GB. Hmm, not ideal!! PS. As we can't afford to keep our only backup system busy restoring for two weeks, we're going to "borrow" a Adaptec SCSI card from another PC, install it in the development server, stick a copy of ArcServeIT on and restore direct to the server, to see if that helps. Cheers, Rob. Is this by any chance an HP drive? I ran across an HP branded STK library a few weeks ago which also showed extremely low restore speeds on all drives. Write was fine, read was pathetic. I turned out to be a drive firmware issue. The latest-and-greatest firmware from HP fixed the issue. Rob |
#5
> few weeks ago which also showed extremely low restore speeds on all
> drives. Write was fine, read was pathetic. It turned out to be a drive
> firmware issue. The latest-and-greatest firmware from HP fixed the issue.

No, it's a Christie drive, which I believe is based on the Seagate mechanism, at least in part. Hmm, unless the HP is also based on the same mechanism. I suspect it's not entirely the drive, as it can restore to the local server's hard disk at a perfectly acceptable rate. Unfortunately, the local server only has about 5GB spare and the backup is 0.25TB (256GB). Cheers, Rob.
#6
> Consider switching off the atime updates (anyway nearly no Win32 software
> uses atimes) on NTFS by a registry parameter - described on Microsoft's
> site; search for "last access time NTFS" or such.

Thanks - will check that out.

> Consider using a FAT32 partition to keep such data.

Unfortunately, we've got some large SQL databases on there (10GB) which we occasionally use (copies of customer data for diagnosis), and those would blow through FAT32's 4GB file size limit. Security isn't that much of a problem, as I assume we can still control access at the share level. Regards, Rob.
#7
In article , Maxim S. Shatskih wrote:

> Note that FAT32 brings a) security issues - no more ACLs on files and
> directories, only on SMB shares b) the 4GB file size limit c) worse fault
> tolerance - NTFS is nearly absolute in this, FAT is not so good but is by
> far better than non-journaling UNIX FSs (ext2 or FFS).

I sure hope that was a typo. The reliability and robustness of FFS is very good - good enough that after the log-structured Berkeley LFS and the Sprite FS were introduced to BSD, they simply couldn't compete. Classic FFS gets reliability at the expense of performance by forcing synchronous metadata updates. Modern FFS with Softupdates uses ordered metadata updates. Either of these provides very good file system consistency guarantees. In practice NTFS and FFS have proven not greatly different in fault tolerance for me, and FFS has the edge in recovery because there are better recovery tools out there. FAT and ext2 are far behind.

> You can also consider using UNIX there, but this should require a survey
> on UNIX filesystems - I expect them to differ a lot in terms of speed,
> fault tolerance and bugginess.

Indeed. -- I've seen things you people can't imagine. Chimneysweeps on fire over the roofs of London. I've watched kite-strings glitter in the sun at Hyde Park Gate. All these things will be lost in time, like chalk-paintings in the rain. `-_-' Time for your nap. | Peter da Silva | Har du kramat din varg, idag? 'U`