Defragmenter That Will Create New Hard Drive?



 
 
#1
October 19th 06, 03:53 AM, posted to comp.sys.ibm.pc.hardware.storage
Will (Posts: 338)

After seeing how much work some defragmenters will go through to do
their job, I'm wondering whether anyone makes a defragmenter that
could be ordered to actually construct a new drive as the result of
the defragmentation. In other words, rather than just shuffling
sectors around, the defragmenter would write out a whole new drive,
with all of the sectors nicely ordered.

This would likely be a much faster way to defragment in those extreme
cases where you want to physically order files by filename, so that
consecutive files are contiguous on the drive. This is important for
games like flight simulators that need to load thousands of very small
files in very short periods of time, where disk latency accumulates
across the large number of files and really does affect load times.
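
To make the idea concrete, here is a rough sketch of the effect I am
after, as a Python script (the paths are hypothetical, and it assumes
the target is a freshly formatted volume, where the filesystem tends
to allocate each new file right after the previous one):

    import os
    import shutil

    SOURCE = r"C:\FlightSim\Scenery"   # hypothetical source tree
    TARGET = r"E:\FlightSim\Scenery"   # hypothetical, freshly formatted volume

    # Copy the tree in sorted-by-name order. On an empty volume the
    # allocator will usually place each file right after the last one,
    # so files with adjacent names end up physically adjacent as well.
    for root, dirs, files in os.walk(SOURCE):
        dirs.sort()                    # recurse into subdirectories in name order
        rel = os.path.relpath(root, SOURCE)
        os.makedirs(os.path.join(TARGET, rel), exist_ok=True)
        for name in sorted(files):
            shutil.copy2(os.path.join(root, name),
                         os.path.join(TARGET, rel, name))

A real defragmenter could do the same thing below the filesystem and
guarantee the layout instead of relying on the allocator.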

It seems to me that this kind of defragmenter might also make an
interesting way to do an image backup of a drive while it is in use.
Obviously there is a risk in doing that for certain types of database
files that are actively in use at the time of the defragmentation,
but caveat emptor and all that.

--
Will


#2
October 19th 06, 03:59 AM, posted to comp.sys.ibm.pc.hardware.storage
CJT (Posts: 95)

Will wrote:

> [...]
>
> It seems to me that this kind of defragmenter might also make an
> interesting way to do an image backup of a drive while it is in use.
> Obviously there is a risk in doing that for certain types of database
> files that are actively in use at the time of the defragmentation,
> but caveat emptor and all that.

I've been doing that with Ghost for some time.

--
The e-mail address in our reply-to line is reversed in an attempt to
minimize spam. Our true address is of the form .
#3
October 19th 06, 05:03 AM, posted to comp.sys.ibm.pc.hardware.storage
Rod Speed (Posts: 8,559)

Will wrote:

> After seeing how much work some defragmenters will go through to do
> their job, I'm wondering whether anyone makes a defragmenter that
> could be ordered to actually construct a new drive as the result of
> the defragmentation. In other words, rather than just shuffling
> sectors around, the defragmenter would write out a whole new drive,
> with all of the sectors nicely ordered.


Some backup systems produce that result with the restore.

> This would likely be a much faster way to defragment in those
> extreme cases where you want to physically order files by filename,
> so that consecutive files are contiguous on the drive.


You don't need to defrag for that; you don't need to
move the files around, just the entries in the directories.

And you don't necessarily need to sort them anyway
if the directories are properly cached in memory.

> This is important for games like flight simulators that need
> to load thousands of very small files in very short periods
> of time, where disk latency accumulates across the large
> number of files and really does affect load times.


Nope, that's what the OS file cache is for.

> It seems to me that this kind of defragmenter might also make an
> interesting way to do an image backup of a drive while it is in use.


No point, real imagers do much more, particularly incremental images.

> Obviously there is a risk in doing that for certain types
> of database files that are actively in use at the time
> of the defragmentation, but caveat emptor and all that.



#4
October 19th 06, 06:00 AM, posted to comp.sys.ibm.pc.hardware.storage
Will (Posts: 338)

"CJT" wrote in message
...
It seems to me that this kind of defragmenter might also make an

interesting
way to do an image backup of a drive while it is in use. Obviously

there is
a risk if doing that for certain types of database files that are

actively
in use at the time of the defragmentation, but caveat emptor and all

that.

I've been doing that with Ghost for some time.


Ghost is just imaging, right? With my method you defrag and then end
up with an image backup as a side effect. To do things with Ghost
you need to defrag and then image separately, taking more time.

--
Will



#5
October 19th 06, 06:16 AM, posted to comp.sys.ibm.pc.hardware.storage
Will (Posts: 338)

"Rod Speed" wrote in message
...
After seeing how much work some defragmenters will go through
to do their job, I'm wondering does anyone make a defragmenter
that could be ordered to actually construct a new drive as the
result of the defragmentation? In other words, rather than just
shuffling sectors around, the fragmenter would write out a whole
new drive, with all of the sectors nicely ordered.


Some backup system produce that result with the restore.


True enough. As usual, it's all about how much time you have. That
method is not optimal because you must back up, then verify, then get a third
volume and restore (assuming you don't want to risk your original).


>> This would likely be a much faster way to defragment in those
>> extreme cases where you want to physically order files by filename,
>> so that consecutive files are contiguous on the drive.
>
> You don't need to defrag for that; you don't need to
> move the files around, just the entries in the directories.


No, that is just wrong. Assume that you need to open 1000 small
files that are highly fragmented on the disk. Each file, because it
is not contiguous with the previous file, requires a disk seek, with
a typical seek for a SATA drive being 9 ms. For a 7200 rpm drive,
once you have seeked to a track, you have an average 4 ms of
rotational latency to get to the start sector for the file. That is
13 ms of latency per file, and I need to open 1000 files. Simple
math tells you that you have 13,000 ms, 13 full seconds, of extra
loading time introduced in this scenario, attributable to the
physical fragmentation of the files on disk.
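
Spelled out, so there is no ambiguity about where the numbers come
from (a back-of-the-envelope sketch; the 9 ms figure is an assumed
typical average seek, not a measurement):

    SEEK_MS = 9.0                   # assumed typical average seek time
    RPM = 7200
    ROTATION_MS = 60000.0 / RPM     # 8.33 ms per revolution
    LATENCY_MS = ROTATION_MS / 2    # average rotational latency, ~4.2 ms
    FILES = 1000

    per_file_ms = SEEK_MS + LATENCY_MS      # ~13 ms of positioning per file
    total_s = per_file_ms * FILES / 1000.0  # ~13 seconds across 1000 files
    print(per_file_ms, total_s)

Run it and you get about 13 ms per file and about 13 seconds total.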

As a simple thought experiment, assume that all of the files are
organized by filename sequentially in the directory, which is your
proposed solution. The time to load that entire directory would
never be more than 100 or 200 ms, and that would be a huge directory.
The additional time spent by the computer to sort even 2000 filenames
in an unsorted directory is not more than maybe another 50 ms.
I'm being generous to your argument. So your solution is based on
optimizing a process that takes at most 250 ms and saves at most
maybe 50 ms, and you are ignoring the very real and very measurable
costs associated with the seek and rotational latency for loading
1000 files.
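
If you doubt the 50 ms figure, time it yourself; sorting a couple of
thousand names in memory is trivially cheap. A quick Python check
with made-up filenames:

    import random
    import string
    import time

    # 2000 random 12-character names, standing in for an unsorted directory.
    names = ["".join(random.choices(string.ascii_lowercase, k=12))
             for _ in range(2000)]

    t0 = time.perf_counter()
    names.sort()
    print((time.perf_counter() - t0) * 1000, "ms")  # a few ms, well under 50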

50 ms in savings versus up to 13,000 ms in savings is not a good tradeoff.


>> This is important for games like flight simulators that need
>> to load thousands of very small files in very short periods
>> of time, where disk latency accumulates across the large
>> number of files and really does affect load times.
>
> Nope, that's what the OS file cache is for.


In a perfect world I would have a 12 GB memory cache in front of the file
system and some tool to preload that cache. Maybe next year, but not on
this year's hardware, thanks.
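
A preload tool is not much more than a program that reads everything
once so the OS file cache is warm. A minimal Python sketch, assuming
the data actually fits in available RAM (the path is hypothetical):

    import os

    def warm_cache(root):
        # Read every file once and discard the data; the point is the
        # side effect of pulling it all into the OS file cache.
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                with open(os.path.join(dirpath, name), "rb") as f:
                    while f.read(1 << 20):   # 1 MiB chunks
                        pass

    warm_cache(r"C:\FlightSim\Scenery")

And that only helps the second load, and only while nothing else
evicts the cache in between.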

--
Will


#6
October 19th 06, 07:50 AM, posted to comp.sys.ibm.pc.hardware.storage
Rod Speed (Posts: 8,559)

Will wrote:
> Rod Speed wrote:
>> Will wrote:

>>> After seeing how much work some defragmenters will go through to do
>>> their job, I'm wondering whether anyone makes a defragmenter that
>>> could be ordered to actually construct a new drive as the result of
>>> the defragmentation. In other words, rather than just shuffling
>>> sectors around, the defragmenter would write out a whole new drive,
>>> with all of the sectors nicely ordered.

>> Some backup systems produce that result with the restore.


> True enough. As usual, it's all about how much time you have. That
> method is not optimal because you must back up, then verify,

You don't really need to verify.

> then get a third volume and restore
> (assuming you don't want to risk your original).

Yes, but defragging is a complete waste of time anyway.

>>> This would likely be a much faster way to defragment in those
>>> extreme cases where you want to physically order files by filename,
>>> so that consecutive files are contiguous on the drive.

>> You don't need to defrag for that; you don't need to
>> move the files around, just the entries in the directories.

> No, that is just wrong.

Nope.

> Assume that you need to open 1000 small
> files that are highly fragmented on the disk.

Small files are very unlikely to be fragmented; they normally won't
occupy more than one cluster, and so by definition can't be fragmented.

> Each file, because it is not contiguous with the previous file,

That's not what fragmented means. That is poor organisation.

> requires a disk seek, with a typical seek for a SATA drive being 9 ms.

You haven't established that that time is significant as
part of the total time involved in loading those 1K files.

> For a 7200 rpm drive, once you have seeked to a track, you have
> an average 4 ms of rotational latency to get to the start sector
> for the file. That is 13 ms of latency per file, and I need to
> open 1000 files.

You've mangled the maths there too.

> Simple math tells you that you have 13,000 ms, 13 full
> seconds, of extra loading time introduced in this scenario,
> attributable to the physical fragmentation of the files on disk.

A properly written app won't have 1K of small files.

> As a simple thought experiment, assume that all of the files are
> organized by filename sequentially in the directory, which is your
> proposed solution.

No it isn't. I was just proposing sorting the FILE ENTRIES
IN THE DIRECTORY, not the files themselves.

> The time to load that entire directory would
> never be more than 100 or 200 ms,

Wrong again.

> and that would be a huge directory.

And an obscenely badly written app.

> The additional time spent by the computer to sort even 2000 filenames
> in an unsorted directory is not more than maybe another 50 ms.

They aren't sorted.

> I'm being generous to your argument.

You haven't even managed to grasp what I was suggesting.

> So your solution is based on optimizing a process that
> takes at most 250 ms and saves at most maybe 50 ms,

Wrong.

> and you are ignoring the very real and very measurable costs
> associated with the seek and rotational latency for loading 1000 files.

Nope.

> 50 ms in savings versus up to 13,000 ms in savings is not a good tradeoff.

Only the most badly written app would attempt
to load 1K files from the drive as fast as it can.

>>> This is important for games like flight simulators that need
>>> to load thousands of very small files in very short periods
>>> of time, where disk latency accumulates across the large
>>> number of files and really does affect load times.

>> Nope, that's what the OS file cache is for.

> In a perfect world I would have a 12 GB memory cache in
> front of the file system and some tool to preload that cache.

You don't need a 12 GB cache when it's 1K of very small files.

> Maybe next year, but not on this year's hardware, thanks.

Don't need any new hardware, just put whatever
is in those 1K files in a single file, stupid.
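
What I'm describing is just a pack file. A minimal sketch of the
idea in Python, using an uncompressed zip as the container (the
paths are hypothetical):

    import os
    import zipfile

    def pack(root, archive):
        # Bundle many small files into one uncompressed archive, so a
        # later load is one mostly sequential read instead of
        # thousands of per-file seeks.
        with zipfile.ZipFile(archive, "w", zipfile.ZIP_STORED) as z:
            for dirpath, _dirs, files in os.walk(root):
                for name in sorted(files):
                    path = os.path.join(dirpath, name)
                    z.write(path, os.path.relpath(path, root))

    def load_all(archive):
        # Read every member back; the stored data sits back-to-back
        # in the archive, so this streams off the disk in one pass.
        with zipfile.ZipFile(archive) as z:
            return {info.filename: z.read(info.filename)
                    for info in z.infolist()}

    pack(r"C:\FlightSim\Scenery", r"C:\FlightSim\scenery.pack")
    data = load_all(r"C:\FlightSim\scenery.pack")

One open, one mostly sequential read, no per-file seeks.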


#7
October 19th 06, 08:04 PM, posted to comp.sys.ibm.pc.hardware.storage
Will (Posts: 338)

"Rod Speed" wrote in message
...
Each file, because it is not contiguous
with the previous file, requires a disk seek, with a typical seek for a

SATA
For a 7200 rpm drive, once you have seeked to a track, you have
an average 4 ms of latency to get to the start sector for the file.
That is 12 ms of latency per file, and I need to open 1000 files.


You've mangled the maths there too.


Do the math correctly for us. There isn't any point in responding
to your many generalizations, which have zero hard data or
mathematical analysis backing the points you want to make. No free
rides. Do the math and commit it to public scrutiny.

--
Will


#8
October 20th 06, 01:16 AM, posted to comp.sys.ibm.pc.hardware.storage
Rod Speed (Posts: 8,559)

Will wrote:

>>> Each file, because it is not contiguous with the previous file,
>>> requires a disk seek, with a typical seek for a SATA drive being
>>> 9 ms. For a 7200 rpm drive, once you have seeked to a track, you
>>> have an average 4 ms of rotational latency to get to the start
>>> sector for the file. That is 13 ms of latency per file, and I
>>> need to open 1000 files.
>>
>> You've mangled the maths there too.
>
> Do the math correctly for us.


Just how many of you are there between those ears?

No point in the math; any app designer with a clue wouldn't
have 1K of tiny files all loaded at once as fast as it can.

> There isn't any point in responding to your many
> generalizations, which have zero hard data or mathematical
> analysis backing the points you want to make.


Don't need any 'mathematical analysis' to realise that the
only thing that makes any sense at all is to have that data
in a single file instead of 1K separate tiny files, if you are
going to load all that data as quickly as possible, stupid.

> No free rides. Do the math and commit it to public scrutiny.


Go and **** yourself.


#9
October 20th 06, 01:52 AM, posted to comp.sys.ibm.pc.hardware.storage
CJT (Posts: 95)

Will wrote:

> "CJT" wrote in message ...
>
>>> It seems to me that this kind of defragmenter might also make an
>>> interesting way to do an image backup of a drive while it is in
>>> use. Obviously there is a risk in doing that for certain types of
>>> database files that are actively in use at the time of the
>>> defragmentation, but caveat emptor and all that.
>>
>> I've been doing that with Ghost for some time.
>
> Ghost is just imaging, right? With my method you defrag and then end
> up with an image backup as a side effect. To do things with Ghost
> you need to defrag and then image separately, taking more time.

Not so, at least on the version I use. You can create a true image by
setting an option, but the default does a file-by-file copy.

--
The e-mail address in our reply-to line is reversed in an attempt to
minimize spam. Our true address is of the form .
#10
October 20th 06, 03:02 AM, posted to comp.sys.ibm.pc.hardware.storage
Will (Posts: 338)

"Rod Speed" wrote in message
...
Will wrote
Each file, because it is not contiguous with the previous file,
requires a disk seek, with a typical seek for a SATA For a
7200 rpm drive, once you have seeked to a track, you have
an average 4 ms of latency to get to the start sector for the file.
That is 12 ms of latency per file, and I need to open 1000 files.


You've mangled the maths there too.


Do the math correctly for us.


Just how many of you are there between those ears ?

No point in the math, any app designer with a clue wouldnt
have 1K of tiny files all loaded at once as fast as it can.

There isn't any point to responding to your many
generalizations that have zero hard data or mathematical
analysis as backup to the points you want to make.


Dont need any 'mathematical analysis' to realise that the
only thing that makes any sense at all is to have that data
in a single file instead of 1K separate tiny files if you are
going to load all that data as quickly as possible, stupid.


Rod, you are changing the subject. I calculated the latency
attributable to opening 1000 non-contiguous files on a SATA drive.
You responded by saying I did the math wrong in that calculation,
and you supplied no math of your own. If I did the math wrong, then
do the math right and explain what was wrong with the number I gave.

Now you are trying to dodge your own claim and instead change the
topic to how a commercial game should have been designed. Okay, the
game is designed badly. It opens a lot of very small files in a very
short time. Sorry. You can go convince Microsoft that what they are
doing is wrong. I'm sure they had a reason for it, but I'm not here
to play Don Quixote with Microsoft. I'm here to solve the real-world
problem in front of me, which is to make the files contiguous on the
drive to avoid the latency of opening them when they are not.

Your solution of creating a drive that lines up the files contiguously
by using a backup and restore is a good one. No problems there. But
if you are going to tell someone that their calculation of some effect
is wrong, you should be prepared to present your own numbers, not just
change the topic to a variety of other subtopics. Otherwise your claim
doesn't have credibility. Saying that you don't need to back up your
claims because there are "other problems" doesn't establish your
initial claim either.

--
Will



 



