#21
Raid0 or Raid5 for network to disk backup (Gigabit)?
Keep in mind that any application that writes enormous files to a Windows network share will experience gradual but steady performance degradation over time. This is due to a performance bug in Windows itself, and has nothing to do with the application that is writing the data. It can be easily reproduced by writing a simple app that does nothing but constantly write a continuous stream of data to a specified file. The degradation occurs more slowly if the host of the share has more memory.

Also interesting (and important) is the fact that if the app that is writing the file closes it and then opens a new file, performance jumps back up to its peak. This is important because it suggests a good workaround. If you are backing up huge volumes (hundreds of GB, or even TB) to network shares, you should configure your backup to split the backup image into ~50GB pieces. Most backup apps support splitting the backup image file. This way performance will stay at reasonable levels.
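The repro described above can be sketched as a loop that streams fixed-size chunks to a file and records per-chunk throughput. This is a minimal sketch under assumptions, not a tested repro of the Windows bug: the path, chunk size, and chunk count are illustrative demo values, and actually observing the degradation would require pointing the path at a Windows network share and writing hundreds of GB.

```python
import os
import time

def stream_write(path, chunk_mb=1, total_chunks=8):
    """Write total_chunks chunks of chunk_mb MiB each to path, returning
    a list of (chunk_index, MB_per_second) throughput samples."""
    buf = b"\xab" * (chunk_mb * 1024 * 1024)
    samples = []
    with open(path, "wb") as f:
        for i in range(total_chunks):
            t0 = time.perf_counter()
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # force the data out, as a real backup stream would
            dt = time.perf_counter() - t0
            samples.append((i, (len(buf) / 1e6) / dt))
    return samples

if __name__ == "__main__":
    # With huge totals on a Windows share, the reported rate should decay
    # from one chunk to the next; run locally with small sizes, this
    # mostly just demonstrates the measurement loop.
    for idx, rate in stream_write("stream.bin"):
        print(f"chunk {idx:3d}: {rate:8.1f} MB/s")
```

Plotting the samples over a long run is what would show the gradual decay the post describes.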
#22
Raid0 or Raid5 for network to disk backup (Gigabit)?
> Keep in mind that any application that writes enormous files to a Windows network share will experience gradual but steady performance degradation over time. This is due to a performance bug in Windows itself, and has nothing to do with the application that is writing the data. This can be easily reproduced by writing a simple app that does nothing but constantly write a continuous stream of data to a specified file.

Exactly so; we have noticed it and measured it. This is MS's issue, and is possibly related to cache pollution: polluting the cache faster than the lazy writer can flush it. Tweaking the cache settings in the registry (after finding the MS KB article about them) can be a good idea.

> [...] the backup image into ~50GB pieces. Most backup apps support splitting the backup image file.

ShadowProtect certainly supports this, and I think Acronis and Norton Ghost/LSR do too.

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
http://www.storagecraft.com
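The split-and-reopen workaround recommended above can be sketched as a small writer that rotates to a fresh output file whenever the current piece reaches a size cap; closing and reopening the file is exactly the action observed to restore peak throughput. This is a hedged sketch, not any backup product's actual mechanism: the class name, the `.NNN` piece-naming scheme, and the cap parameter are my own conventions (a real run would use something near the suggested ~50GB).

```python
class SplitWriter:
    """Write a byte stream across multiple files, starting a new piece
    (base_path.001, base_path.002, ...) every piece_size bytes."""

    def __init__(self, base_path, piece_size):
        self.base_path = base_path
        self.piece_size = piece_size
        self.piece_index = 0
        self.written_in_piece = 0
        self._file = None
        self._open_next_piece()

    def _open_next_piece(self):
        if self._file:
            self._file.close()  # closing and reopening is what resets the slowdown
        self.piece_index += 1
        self.written_in_piece = 0
        self._file = open(f"{self.base_path}.{self.piece_index:03d}", "wb")

    def write(self, data):
        offset = 0
        while offset < len(data):
            room = self.piece_size - self.written_in_piece
            if room == 0:
                self._open_next_piece()
                continue
            span = data[offset:offset + room]
            self._file.write(span)
            self.written_in_piece += len(span)
            offset += len(span)

    def close(self):
        if self._file:
            self._file.close()
            self._file = None
```

Usage is just `w = SplitWriter("backup.img", 50 * 10**9)`, then `w.write(...)` as data arrives, then `w.close()`.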
#23
Raid0 or Raid5 for network to disk backup (Gigabit)?
On Apr 4, 12:31 pm, "Maxim S. Shatskih" wrote:

> [...] Tweaking the cache settings in the registry (after finding the MS KB article about them) can be a good idea. [...] Most backup apps support splitting the backup image file. ShadowProtect certainly supports this, and I think Acronis and Norton Ghost/LSR do too.

Thanks for that tidbit. I'll either break them up and test again, or try to find the MS solution. Sure enough, ShadowProtect ended up at 9 hours as well.
#24
Raid0 or Raid5 for network to disk backup (Gigabit)?
In comp.sys.ibm.pc.hardware.storage markm75 wrote:

> [...] Thanks for that tidbit. I'll either break them up and test again, or try to find the MS solution. Sure enough, ShadowProtect ended up at 9 hours as well.

Well, that would explain it. Once again, MS is using substandard technology. I hope you find a solution to this, but I certainly have no clue what it could be.

Arno
#25
Raid0 or Raid5 for network to disk backup (Gigabit)?
> Well, that would explain it. Once again, MS is using substandard technology.

I would not say that an SMB slowdown on 100GB files is "substandard" for a mass-market commodity OS. This is in fact a rare corner case, with image backup software being nearly the only user of it, and it can split the image into smaller files. Note that lots of UNIX-derived OSes still have a 4GB file size limit :-)

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
http://www.storagecraft.com
#26
Raid0 or Raid5 for network to disk backup (Gigabit)?
In comp.sys.ibm.pc.hardware.storage Maxim S. Shatskih wrote:

> I would not say that an SMB slowdown on 100GB files is "substandard" for a mass-market commodity OS.

Hmm. I think that if it supports 100GB files, then it should support them without surprises. Of course, if you say "commodity" = "not really for mission-critical stuff", then I can agree.

> This is a rare corner case in fact, with the image backup software being nearly the only users of it, and they can split the image to smaller files. Note that lots of UNIX-derived OSes still have a 4GB file size limit :-)

I wouldn't know. Linux ext2/3 has a 2TB file size limit. But that was actually not my point. My point is that if something is supported, it should be supported well. Not supporting it at all is better than letting you think you can use it, only for things to go wrong in actual usage. I believe this whole thread shows that ;-)

So "substandard" = "the features are there, but you should not really use them to their limits", a.k.a. "we did it, but we did not really do it right".

Arno
#27
Raid0 or Raid5 for network to disk backup (Gigabit)?
On Apr 5, 5:23 am, Arno Wagner wrote:

> [...] So "substandard" = "the features are there, but you should not really use them to their limits", a.k.a. "we did it, but we did not really do it right".

Results are in. Used ShadowProtect, set to split the backup into 50GB files. Average throughput was 23 MB/s; it finished in 4hr 25min, the same time as a local backup took (this was across gigabit).

So I guess it's true: there is something to the cache pollution / registry issue? Does anyone have a KB article where I could find the tweak, so I can try this again without splitting the backup files? (Not sure what I'm searching for exactly.) Thanks
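As a rough sanity check of the figures above: at an average of 23 MB/s, a run of 4 hours 25 minutes moves about 366 GB. The post does not state the volume size, so this only shows the amount of data those two numbers imply, nothing more.

```python
def implied_data_gb(rate_mb_s, hours, minutes):
    """Data moved (in decimal GB) at a sustained rate over the given duration."""
    seconds = hours * 3600 + minutes * 60
    return rate_mb_s * seconds / 1000.0  # MB -> GB

print(f"{implied_data_gb(23, 4, 25):.0f} GB")  # prints 366 GB
```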
#28
Raid0 or Raid5 for network to disk backup (Gigabit)?
In comp.sys.ibm.pc.hardware.storage markm75 wrote:

> Results are in. Used ShadowProtect, set to split the backup into 50GB files. Average throughput was 23 MB/s; it finished in 4hr 25min, the same time as a local backup took (this was across gigabit).

Interesting.

> So I guess it's true: there is something to the cache pollution / registry issue? Does anyone have a KB article where I could find the tweak, so I can try this again without splitting the backup files? (Not sure what I'm searching for exactly.)

Why not just split the backup? This seems to work, after all. If you want this a bit better sorted, put each backup set into its own subdirectory.

Arno
#29
Raid0 or Raid5 for network to disk backup (Gigabit)?
On 5 Apr 2007 09:23:26 GMT, Arno Wagner wrote:

> [...] So "substandard" = "the features are there, but you should not really use them to their limits", a.k.a. "we did it, but we did not really do it right".

Careful the trail you blaze. The automounter and NFS client subsystems in Linux are beyond substandard: they exist, and they will work if you do not use them heavily. I dislike MS more than most, but throwing stones will only break your own windows (no pun intended on that one).

~F
#30
Raid0 or Raid5 for network to disk backup (Gigabit)?
> I wouldn't know. Linux ext2/3 has a 2TB file size limit.

Sorry, see the cite from include/linux/ext2_fs.h below, and the "__u32 i_size;" in it. ext2's limit is 4GB per file. I remember ext3 being compatible with ext2 in its on-disk structures in everything except the transaction log, so it looks like ext3 is also limited to 4GB per file.

Moreover, if you also find the superblock structure, you will see that ext2 is limited to 32-bit block numbers in the volume. There is a good chance this means a volume size limit of 2TB (if a "block" is really the disk sector and not a group of sectors).

    /*
     * Structure of an inode on the disk
     */
    struct ext2_inode {
        __u16  i_mode;        /* File mode */
        __u16  i_uid;         /* Owner Uid */
        __u32  i_size;        /* Size in bytes */
        __u32  i_atime;       /* Access time */
        __u32  i_ctime;       /* Creation time */
        __u32  i_mtime;       /* Modification time */
        __u32  i_dtime;       /* Deletion Time */
        __u16  i_gid;         /* Group Id */
        __u16  i_links_count; /* Links count */
        __u32  i_blocks;      /* Blocks count */
        __u32  i_flags;       /* File flags */
        [...]

> [...] if something is supported, it should be supported well. Not supporting it at all is better than letting you think you can use it, only for things to go wrong in actual usage. I believe this whole thread shows that ;-)

Let's wait for MS's hotfixes and service packs. Such "corner case" issues (circumstances rarely met in real life) do occur in any software.

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
http://www.storagecraft.com
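The arithmetic behind that reading of the header can be made explicit. Under the post's own assumptions (a 32-bit i_size counting bytes, and 32-bit block numbers counting 512-byte sectors), the limits work out as follows; note both are the post's assumptions, not verified ext2/ext3 behaviour across all block sizes.

```python
FILE_LIMIT = 2**32           # bytes addressable by the 32-bit i_size field
VOLUME_LIMIT = 2**32 * 512   # 32-bit block numbers x 512-byte sectors

print(FILE_LIMIT // 2**30, "GiB per-file limit")      # prints 4
print(VOLUME_LIMIT // 2**40, "TiB per-volume limit")  # prints 2
```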
Similar Threads

Thread | Thread Starter | Forum | Replies | Last Post
Raid0 or Raid5 for network to disk backup (Gigabit)? | markm75 | Storage (alternative) | 41 | April 18th 07 09:37 PM
change raid5 to raid1 / backup&restore partition / Arconis? | [email protected] | Storage (alternative) | 2 | February 8th 07 11:57 PM
I was unhappy with my Gigabit Network card | George Hester | General | 3 | July 5th 06 08:52 AM
SATA RAID5 disk replacement: same type of disk? | Richard NL | Storage (alternative) | 9 | February 3rd 06 01:42 PM
RAID0 vs. RAID5 - Benchmark | Ingo Seibold | Storage (alternative) | 3 | November 11th 04 05:07 PM