October 12th 18, 12:47 PM posted to alt.comp.hardware.pc-homebuilt
Paul
SSD and sleep mode?

Mike wrote:
> Another thing I fret over is what used to be called the file
> allocation table. If that's a fixed section of the drive, how do
> you wear level that? Seems like that would be the most-written
> section of the drive.


The SSD has a translation table.

There isn't a linear relationship between an external LBA
and an internal Flash address. Without the internal translation
table, or if the internal translation table is lost,
data recovery would be pretty damn difficult. The Flash
blocks inside do not have the same size as the clusters
the file system uses.

A write to LBA 0x1234 can go to location 0x9876 on
one occasion, and to location 0x5432 on the next occasion.
Unused "blocks" are kept in a queue. When you issue
a TRIM command, that gives the drive even more blocks
to use.

That sort of indirection is what gives the wear leveling
its leverage. Maybe 0x9876 has had 2001 writes
but 0x5432 has only had 2000 writes. So we write to
0x5432 to "bring it up to the same level" as 0x9876.
On average, the blocks all have about the same
number of writes. The flash location might have an
endurance of 3000 writes, and the writes are spread out.
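To make the indirection and wear leveling concrete, here's a toy
Python sketch. The block count, the 3000-write endurance figure, and
the "pick the least-worn free block" policy are illustrative
assumptions, not any particular drive's firmware:

```python
class ToyFTL:
    """Toy flash translation layer: maps logical LBAs to physical
    blocks, always writing to the least-worn free block."""

    def __init__(self, num_blocks, endurance=3000):
        self.mapping = {}                   # LBA -> physical block
        self.wear = [0] * num_blocks        # writes seen by each block
        self.free = set(range(num_blocks))  # free-block queue
        self.endurance = endurance

    def write(self, lba):
        # Wear leveling: pick the least-worn free block.
        target = min(self.free, key=lambda b: self.wear[b])
        self.free.remove(target)
        old = self.mapping.get(lba)
        if old is not None and self.wear[old] < self.endurance:
            # Old copy is now stale; recycle its block, unless it
            # has worn out -- then its wear leveling days are over.
            self.free.add(old)
        self.mapping[lba] = target
        self.wear[target] += 1
        return target

    def trim(self, lba):
        # TRIM: host declares the LBA unused, freeing its block.
        old = self.mapping.pop(lba, None)
        if old is not None and self.wear[old] < self.endurance:
            self.free.add(old)

ftl = ToyFTL(num_blocks=8)
first = ftl.write(0x1234)
second = ftl.write(0x1234)   # same LBA, different physical block
print(first != second)       # -> True
```

Two writes to the same LBA land on two different physical blocks,
which is exactly the behaviour described above.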

It means the internal (3 core) processor can be
very busy. After a pounding with 4KB writes (where
the internal Flash blocks are a much larger allocation),
the drive rearranges the 4KB blocks and consolidates
them, and this costs an extra write. With larger files,
some of the clusters won't need to be consolidated
and can be left alone.
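As rough arithmetic (the sizes and the assume-one-copy-each figure
are illustrative, not measured from any drive), that consolidation
pass is where write amplification comes from:

```python
# Small host writes land scattered in big flash blocks; a later
# consolidation pass copies the surviving 4KB pieces into full
# blocks, so the flash may see each byte written a second time.

FLASH_BLOCK = 128 * 1024     # assumed internal flash block size
HOST_WRITE  = 4 * 1024       # 4KB host writes

host_writes = 1000           # 1000 scattered 4KB writes
host_bytes  = host_writes * HOST_WRITE

# Assume each 4KB piece gets copied once during consolidation.
flash_bytes = host_bytes * 2

write_amplification = flash_bytes / host_bytes
print(write_amplification)   # -> 2.0
```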

Some internal processors also do error correction.
The TLC 512 byte sector might have 50 bytes of
error correction code, which is a lot. The internal
processor works out the error correction polynomial,
and fixes the bit(s) in error. On TLC or QLC, it's
possible every sector has errors (unlike on the older
hard drives). You could design a dedicated hardware
block to do the error correction, but the lazy way
is to do it in firmware inside the drive.

With each generation, the quantity (overhead) of
ECC bytes has gone up. SLC wouldn't have needed
50 bytes per sector. A smaller number would
have sufficed.
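The codes on a real drive are far heavier than this (and applied over
the whole sector), but a tiny single-error-correcting Hamming(7,4)
code shows the principle: compute a syndrome, and the syndrome points
straight at the bit in error:

```python
def hamming74_encode(d):
    """Encode 4 data bits into 7 bits with 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def hamming74_correct(c):
    """Locate and fix a single flipped bit via the syndrome."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else bad position
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
damaged = list(word)
damaged[4] ^= 1                             # flip one bit in "flash"
print(hamming74_correct(damaged) == word)   # -> True
```

The 50 bytes per 512-byte sector on a TLC drive buys correction of
many bits per sector, not just one, but the syndrome idea is the same.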

Blocks that are no longer correctable are taken
out of service and no longer sit in the free queue.
Their wear leveling days are over.

*******

The toolkit may have a SMART readout with the
absolute total of writes listed as a parameter.
You can check this each day when you get up,
and see how much usage the SSD got. This will
give you some idea how "optimal" your tuning is.
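The morning check is just a subtraction. Here's a small sketch; the
attribute name and unit are vendor-specific (many drives report
Total_LBAs_Written in 512-byte sectors, but check what your toolkit
actually shows):

```python
SECTOR = 512   # assumed unit of the writes counter

def daily_writes(readings):
    """Given successive Total_LBAs_Written samples (one per day),
    return bytes written between each pair of samples."""
    return [(b - a) * SECTOR for a, b in zip(readings, readings[1:])]

samples = [1_000_000, 1_200_000, 1_250_000]         # hypothetical
print([d // 2**20 for d in daily_writes(samples)])  # MiB per day
```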

In the past, it was possible to put the browser
cache in RAM. Or, you can set up a RAM Disk
yourself with its own drive letter, and use
F:\my_cache for the cached files. Seeing as my
Seamonkey can have 20K-30K files per day in that
cache, that will remove a tiny amount of wear.

If you watch the pagefile, it's been tuned pretty
well for SSD usage. The system doesn't do a lot
of paging, not nearly as much as in the past.
If you write a memory allocator and "run up"
the memory usage, you just get an out_of_memory
error and your test program stops. And the
"overshoot" hardly causes any usage of the
pagefile. I've started two of those programs
running simultaneously, and that does cause
a narrow "spike" and a tiny tiny bit of pagefile
usage. But that's not a typical user pattern.
So my synthetic tests didn't look overly scary.
I probably lack sufficient imagination to
tease out a pathological case for it.
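A minimal sketch of that kind of synthetic test, with an artificial
cap added as an assumption so the sketch stops before actually
exhausting RAM (drop the cap and the loop runs until out_of_memory):

```python
def run_up_memory(chunk_bytes, max_chunks):
    """Allocate chunks until a cap or a MemoryError is hit;
    return how many bytes were successfully allocated."""
    chunks = []
    try:
        for _ in range(max_chunks):
            # bytearray zero-fills, so real pages get touched.
            chunks.append(bytearray(chunk_bytes))
    except MemoryError:
        pass   # the allocator test just stops here
    return len(chunks) * chunk_bytes

print(run_up_memory(1024 * 1024, 16))   # 16 MiB allocated
```

On a real system it's the overshoot past physical RAM, just before
the MemoryError, that would barely touch the pagefile.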

Paul



