#11
Copying 60,000 items from Windows Live Mail: extreme performance degradation
OPERATION SUCCESSFUL.

MAIL ARCHIVE TRANSITIONED TO NEW DRIVE W: =D All is working again. The shutdown trick for the P drive, plus terminating and restarting Windows Live Mail, worked... no need to waste further time on its stupid recovery bull****. Phew...

The storage folder setting is under Options > Mail > Advanced > Maintenance > Storage folder.

This tiny little program called Windows Live Mail still has some fight left in it! =D Now it can enjoy another 4 GIGAWATTS/BYTES OF DARTH VADER FIREPOWER, YEAH =D LOL. This calls for a TIE Defender = ppzzzzzzzwieee... OK, and now I really have to go read my newly retrieved e-mails =D Bye for now, Skyvader over and out! =D
#13
On Wed, 5 Dec 2018 09:42:02 -0800 (PST), wrote:
> Disabling the mouse is done by clicking on the title bar of the cmd.exe
> window, going to Properties, and disabling "QuickEdit Mode". QuickEdit
> mode is insane.

Quick Edit Mode is my favorite Command Prompt option. I enable it on every system because it makes working with the Command Prompt so much easier. Interesting that you're disabling it instead.
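Besides the console's Properties dialog, the QuickEdit setting can be inspected programmatically. A minimal sketch, assuming the setting is stored as the `QuickEdit` DWORD value under `HKCU\Console` (the usual location; an absent value means the platform default applies):

```python
import sys

def quickedit_enabled():
    """Report whether the console QuickEdit option is on.

    Returns True/False on Windows, or None on other platforms.
    Assumes the setting lives in HKCU\\Console under the value
    name "QuickEdit"; if the value is absent, we assume the
    default (enabled on recent Windows versions).
    """
    if sys.platform != "win32":
        return None
    import winreg
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Console") as key:
            value, _ = winreg.QueryValueEx(key, "QuickEdit")
            return bool(value)
    except FileNotFoundError:
        return True  # value not set: assume the platform default

print(quickedit_enabled())
```

Per-shortcut overrides (stored with the .lnk file) are not covered by this sketch; only the global console default is read.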
#15
wrote:
> But thanks for your tips, they were a bit interesting! =D Especially
> perfmon... it does show "total disk performance", something which
> Resource Monitor struggles with somewhat... I think... not sure...
> there is total I/O, but I am not sure it's separated into read vs.
> write speeds for totals. Bye, Skybuck =D

Perfmon (perfmon.msc) has separate graphs for read and write, if you select the appropriate "PhysicalDisk" entries. Once you find a "Disk Write Bytes/sec" type entry, look down: it has additional selectors such as "Total", "Drive X", "Drive Y" and so on. I usually leave mine set to "Total", as I'm only doing single-source, single-destination experiments, so "Total" still tells me about a particular drive.

The graph scale is a bit baffling. I always modify that, and make it "20000" for modern hard drives, as they top out at 200 MB/sec or so. The value "40000" might be good for a cheap SATA SSD at full scale (400 MB/sec on some cheap drives). For example, the Intel 545s I just got reads at 383 MB/sec. The Samsung gives around 440 MB/sec on a good day.

*******

I would have expected trouble with NTFS eventually. You can safely do around one million files in a single folder, but I have seen bugs in File Explorer at just the 60,000-file level (File Explorer goes into a loop and doesn't come back).

You can cheat and use a backup utility that sequentially copies the clusters while doing a partition backup. That will maintain the speed, because the head-movement pattern is quite different.

If a file is fragmented, the transfer slows right down. For example, this is one of my old SATA scratch drives. Normally it reads at 100 MB/sec near the beginning. In these pictures, a 20 GB file has been fragmented on purpose with a utility intended for the job. The transfer rate when reading the file drops to 8 MB/sec. The tool cannot set a high enough fragmentation level to hit 1 MB/sec (which I've seen before in practice).
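The many-files-per-folder scenario is easy to reproduce at small scale. A minimal sketch (stdlib only) that creates a batch of tiny files in one folder and times enumeration; the thread talks about 60,000+ entries, but the same loop works at any count (just expect it to take a while on a real disk at that scale):

```python
import os
import tempfile
import time

def enumerate_folder_timing(n_files):
    """Create n_files tiny files in one folder, then time enumeration.

    Returns (count_seen, seconds_to_enumerate). A miniature version
    of the many-files-per-folder test discussed in the thread.
    """
    with tempfile.TemporaryDirectory() as folder:
        for i in range(n_files):
            # Hypothetical mail-store-like names, one byte each.
            with open(os.path.join(folder, f"msg{i:06d}.eml"), "w") as f:
                f.write("x")
        start = time.perf_counter()
        names = [entry.name for entry in os.scandir(folder)]
        elapsed = time.perf_counter() - start
        return len(names), elapsed

count, elapsed = enumerate_folder_timing(500)
print(f"{count} files enumerated in {elapsed:.4f}s")
```

At 500 files this completes almost instantly; the interesting (and slow) behavior only shows up tens of thousands of files later, which is exactly why File Explorer bugs at that level go unnoticed for so long.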
The fragmentation picture in this example has a 20 GB frag.bin file and a 10 GB zero.bin file. The zero.bin is not fragmented (only 3 fragments in it); the frag.bin has 100,000 fragments. I do a read of the 10 GB zero.bin file to flush the system read cache (so there won't be any cheating during the benchmark run). Then I read the frag.bin file and see how fast it'll go.

https://i.postimg.cc/zG5tz1sz/hard-d...ss-fragged.gif

I created the 20 GB file first, then applied fragmentation to it with this tool. If you set the slider for too-small fragments, the process of creating fragmentation is dog-slow. You have to be *really* patient to use this tool, and once you know it's running OK, just walk away...

https://www.passmark.com/products/fragger.htm

I prepare "thrashing" test cases on my RAM disk and then clone over to a real hard drive, when I know the thrashing during preparation would be hard on the movements of the disk arm. Using the Fragger program on the RAM disk is *still* slow. Dog-slow. 300 KB/sec slow. One CPU core railed. But at least with that tool, I'm getting better-quality fragmentation than with a little C program I wrote. If you want good fragmentation, you really need to use the Microsoft API for moving clusters around:

https://docs.microsoft.com/en-us/win...gmenting-files

The API was intended to help people write defragmenter programs, but in the Passmark case, they use it to make files fragmented again. The purpose of doing that is to prepare disks which you intend to benchmark with commercial defragmenter programs.

Paul
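The read-the-file-and-watch-the-rate benchmark described above can be sketched in a few lines. This is a stdlib-only illustration, not Paul's actual method: it times a sequential read and reports MB/sec. The demo file here is tiny; a real run would use a multi-gigabyte file, and, as the post notes, you must flush the OS read cache first (Paul reads a separate 10 GB zero.bin for that) or a repeat run mostly measures RAM, not the disk:

```python
import os
import tempfile
import time

CHUNK = 1024 * 1024  # read in 1 MiB chunks

def read_throughput(path):
    """Read a file sequentially; return (bytes_read, MB_per_sec)."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total, total / (1024 * 1024) / max(elapsed, 1e-9)

# Demo on a small scratch file (a real benchmark would use a
# multi-GB file on the drive under test, after a cache flush).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(4 * 1024 * 1024))  # 4 MiB of random data
    path = tmp.name
size, rate = read_throughput(path)
os.unlink(path)
print(f"read {size} bytes at {rate:.1f} MB/sec")
```

On a heavily fragmented file on a spinning disk, the same loop would report rates like the 8 MB/sec figure in the post, because each fragment boundary costs a seek.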