Posted to alt.comp.hardware.pc-homebuilt, December 6th 18, 06:24 PM
Paul[_28_]
Copying 60,000 items from Windows Live Mail - extreme performance degradation

Skybuck wrote:

But thanks for your tips, they were a bit interesting! =D

Especially the perfmon... it does show "total disk performance"...
something which Resource Monitor struggles with somewhat...
I think... not sure... there is total I/O... but I am not sure
it's separated into read vs write speeds for totals.

Bye,
Skybuck =D


The perfmon.msc snap-in has separate graphs for read and write,
if you select the appropriate "PhysicalDisk" counters.

Once you find a "Disk Write Bytes/sec" type counter, look
below it for the instance selectors, such as "_Total",
"Drive X", "Drive Y" and so on. I usually leave mine
set to "_Total", since I'm only doing single-source,
single-destination experiments, so the total still tells me about
a particular drive.
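
If you would rather log those numbers from a program than eyeball
the graph, here is a rough, untested sketch using the PDH API. The
English counter paths and the "_Total" instance are assumptions on
my part; swap "_Total" for something like "0 C:" to watch a single
drive.

#include <windows.h>
#include <pdh.h>
#include <stdio.h>
#pragma comment(lib, "pdh.lib")

int main(void)
{
    PDH_HQUERY q;
    PDH_HCOUNTER rd, wr;
    PDH_FMT_COUNTERVALUE v;

    PdhOpenQueryW(NULL, 0, &q);
    /* "_Total" sums every physical disk; use e.g. "0 C:" for one drive */
    PdhAddEnglishCounterW(q, L"\\PhysicalDisk(_Total)\\Disk Read Bytes/sec", 0, &rd);
    PdhAddEnglishCounterW(q, L"\\PhysicalDisk(_Total)\\Disk Write Bytes/sec", 0, &wr);

    PdhCollectQueryData(q);                  /* first sample is only a baseline */
    for (int i = 0; i < 30; i++) {
        Sleep(1000);                         /* one sample per second */
        PdhCollectQueryData(q);
        PdhGetFormattedCounterValue(rd, PDH_FMT_DOUBLE, NULL, &v);
        printf("read %7.1f MB/s   ", v.doubleValue / 1e6);
        PdhGetFormattedCounterValue(wr, PDH_FMT_DOUBLE, NULL, &v);
        printf("write %7.1f MB/s\n", v.doubleValue / 1e6);
    }
    PdhCloseQuery(q);
    return 0;
}

PDH hands back bytes per second directly, so there is no graph-scale
guesswork; link against pdh.lib (the #pragma covers that on MSVC).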

The graph scale is a bit baffling. I always modify that,
and make it "20000" for modern hard drives, as they
top out at 200MB/sec or so. The value "40000" might be
a good full scale for a cheap SATA SSD (400MB/sec
on some cheap drives). For example, the Intel 545S I
just got reads at 383MB/sec, and the Samsung gives
around 440MB/sec on a good day.

*******

I would have expected trouble with NTFS eventually.
You can safely put around one million files in a
single folder, but I have seen bugs in File Explorer
at just the 60,000-file level (File Explorer goes into
a loop and doesn't come back).

You can cheat, and use a backup utility that copies the
clusters sequentially while doing a partition backup. That
maintains the speed, because the head sweeps through the
disk in order instead of seeking from file to file.

If a file is fragmented, the transfer slows right down.
For example, this is one of my old SATA scratch drives.
Normally it reads at 100MB/sec near the beginning.

In these pictures, a 20GB file has been fragmented on purpose
with a utility intended for the job. The transfer rate when
reading the file drops to 8MB/sec. The tool cannot set
a high enough fragmentation level to hit 1MB/sec (which
I've seen before in practice). The fragmentation setup in
this example has a 20GB frag.bin file and a 10GB zero.bin
file. The zero.bin is not fragmented (only 3 fragments in it).
The frag.bin has 100,000 fragments. I read the 10GB zero.bin
file first, to flush the system read cache (so there won't
be any cheating during the benchmark run). Then I read
the frag.bin file and see how fast it'll go.

https://i.postimg.cc/zG5tz1sz/hard-d...ss-fragged.gif
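
If you want to time that read pass yourself instead of reading it
off the perfmon graph, something like this would do (a rough,
untested sketch; E:\frag.bin is a placeholder path). Opening with
FILE_FLAG_NO_BUFFERING bypasses the system cache, so the zero.bin
flush step isn't even needed for this version.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    const DWORD CHUNK = 4 * 1024 * 1024;           /* 4 MiB per read */
    HANDLE h = CreateFileW(L"E:\\frag.bin", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING,
                           FILE_FLAG_NO_BUFFERING | FILE_FLAG_SEQUENTIAL_SCAN, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "open failed, error %lu\n", GetLastError());
        return 1;
    }

    /* VirtualAlloc returns a page-aligned buffer, which satisfies the
       sector alignment FILE_FLAG_NO_BUFFERING demands */
    void *buf = VirtualAlloc(NULL, CHUNK, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    ULONGLONG total = 0, t0 = GetTickCount64();
    DWORD got = 0;

    while (ReadFile(h, buf, CHUNK, &got, NULL) && got > 0)
        total += got;

    double secs = (GetTickCount64() - t0) / 1000.0;
    printf("%llu bytes in %.1f s = %.1f MB/s\n",
           total, secs, secs > 0 ? total / secs / 1e6 : 0.0);
    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
    return 0;
}

Pointed at frag.bin it should report roughly the same single-digit
MB/sec figure the graph shows; pointed at zero.bin it should be
back near the drive's normal sequential rate.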

I created the 20GB file first, then applied fragmentation to
it with this tool. If you set the slider for very small fragments,
the process of creating the fragmentation is dog-slow. You have
to be *really* patient to use this tool; once you
know it's running OK, just walk away...

https://www.passmark.com/products/fragger.htm

I prepare "thrashing" test cases on my RAMdisk and
then clone them over to a real hard drive, when I know
the thrashing during preparation would be hard on the
movements of the disk arm. Using the fragger program
on the RAMdisk is *still* slow. Dog-slow. 300KB/sec slow.
One CPU core railed. But at least with that tool, I'm
getting better-quality fragmentation than with a
little C program I wrote. If you want good
fragmentation, you really need to use the Microsoft API
for moving clusters around.

https://docs.microsoft.com/en-us/win...gmenting-files

The API was intended to help people write defragmenter
programs, but in the Passmark case they use it to
make files fragmented again. The point of doing that
is to prepare disks which you intend to benchmark
with commercial defragmenter programs.
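
As a rough illustration of the query half of that interface (an
untested sketch; E:\frag.bin is a placeholder path), the following
counts a file's extents with FSCTL_GET_RETRIEVAL_POINTERS. The other
half, FSCTL_MOVE_FILE, is the call that actually relocates clusters,
and that's the part a fragmenter or defragmenter leans on.

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileW(L"E:\\frag.bin", FILE_READ_ATTRIBUTES,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "open failed, error %lu\n", GetLastError());
        return 1;
    }

    STARTING_VCN_INPUT_BUFFER in = { 0 };          /* start at VCN 0 */
    ULONGLONG out[8 * 1024];                       /* 64 KB, holds one batch of extents */
    RETRIEVAL_POINTERS_BUFFER *rp = (RETRIEVAL_POINTERS_BUFFER *)out;
    ULONGLONG extents = 0;
    DWORD bytes;

    for (;;) {
        BOOL ok = DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                                  &in, sizeof in, out, sizeof out,
                                  &bytes, NULL);
        if (!ok && GetLastError() != ERROR_MORE_DATA)
            break;                                 /* error, or a resident file */
        if (rp->ExtentCount == 0)
            break;
        extents += rp->ExtentCount;
        /* resume the next query where this batch left off */
        in.StartingVcn = rp->Extents[rp->ExtentCount - 1].NextVcn;
        if (ok)
            break;                                 /* that was the final batch */
    }
    printf("%llu extents\n", extents);
    CloseHandle(h);
    return 0;
}

Run against the frag.bin prepared above, it should print something
close to the 100,000-fragment figure.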

Paul