Posted March 25th 21, 12:15 AM to alt.comp.os.windows-10, comp.sys.ibm.pc.hardware.storage
VanguardLH[_2_]
Subject: SSD "overprovisioning"

"J. P. Gilliver (John)" wrote:

> How does the firmware (or whatever) in the SSD _know_ how much space
> you've left unallocated, if you use any partitioning utility other
> than one from the SSD maker (which presumably has some way of
> "telling" the firmware)?


The firmware doesn't need to be told. The controller only keeps data for
logical blocks that have actually been written (and not since trimmed);
space you never allocate to a partition never gets written, so the
controller is free to treat it as extra spare area. Changing the amount
of unallocated space is all the SSD makers' tools do, too. You can use
their tool, or you can use any partitioning tool.

> If, after some while using an SSD, it has used up some of the slack,
> because of some cells having been worn out, does the apparent total
> size of the SSD - including unallocated space - appear (either in
> manufacturer's own or some third-party partitioning utility) smaller
> than when that utility is run on it when nearly new?


The amount of overprovisioning space set at the factory is never
available for you to change. If they set aside 7% for overprovisioning,
you'll never be able to allocate that space to any partition. That
space is invisible to the OS, fixed in size, and set at the factory.
For example, a drive built with 128GB of flash might expose only 100GB
of usable capacity. This is the static overprovisioning set at the
factory.

Within the usable capacity of the drive, unallocated space serves as
dynamic overprovisioning. (Typically a little of the unallocated space
cannot be put into a partition anyway; beyond that, making your
partition(s) smaller leaves more unallocated space for dynamic
overprovisioning to use.) It's dynamic because how much of it actually
gets used changes with the write load (how much of the stored data is
changing). The unallocated space is a reserve; not all of it may get
used.
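
As a back-of-the-envelope illustration of how the two kinds of OP add up
(the figures below are just the example numbers above plus a made-up
unallocated amount, not anything from a vendor spec sheet):

    # Rough sketch of static vs. dynamic overprovisioning arithmetic.
    # All numbers are illustrative; real drives vary.

    raw_flash_gb   = 128   # NAND actually on the board
    usable_gb      = 100   # capacity the drive reports to the OS
    unallocated_gb = 10    # space you leave out of any partition

    static_op  = raw_flash_gb - usable_gb   # fixed at the factory
    dynamic_op = unallocated_gb             # whatever you leave unpartitioned

    # OP is often quoted as a percentage of the usable capacity
    print(f"static OP : {static_op} GB ({100 * static_op / usable_gb:.0f}%)")
    print(f"dynamic OP: {dynamic_op} GB ({100 * dynamic_op / usable_gb:.0f}%)")
    print(f"spare area the controller can draw on: {static_op + dynamic_op} GB")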

Individual cells don't get remapped; blocks of cells get remapped. If
you later reduce the dynamic OP (the unallocated space), anything the
controller had remapped into that space has to get re-remapped somewhere
else, because the bad blocks are still marked bad and cannot be used
again. Might you lose information held in the dynamic OP space when you
shrink it? That I don't know. Partition managers know nothing about how
the controller uses the content of unallocated space.
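
To make the block-remapping idea concrete, here is a toy model of my own
(no real controller works this simply, and the class and names are
purely illustrative): the controller keeps a table from logical blocks
to physical blocks, and when a physical block goes bad, its data is
rewritten to a block taken from the spare pool and the table entry is
updated, invisibly to the user.

    # Toy model of bad-block remapping in a flash translation layer.
    # Purely illustrative; real controllers manage pages, wear leveling,
    # and garbage collection in far more complex ways.

    class ToyFTL:
        def __init__(self, user_blocks, spare_blocks):
            # logical block -> physical block (identity mapping to start)
            self.map = {lb: lb for lb in range(user_blocks)}
            # spare pool = the overprovisioned physical blocks
            self.spares = list(range(user_blocks, user_blocks + spare_blocks))
            self.bad = set()

        def mark_bad(self, physical_block):
            """Retire a failing physical block and remap its logical block."""
            self.bad.add(physical_block)
            if not self.spares:
                raise RuntimeError("no spare blocks left - drive goes read-only")
            replacement = self.spares.pop()
            for lb, pb in self.map.items():
                if pb == physical_block:
                    self.map[lb] = replacement   # data gets rewritten here
                    break

    ftl = ToyFTL(user_blocks=8, spare_blocks=2)
    ftl.mark_bad(3)
    print(ftl.map)     # logical block 3 now points at a spare physical block
    print(ftl.spares)  # one spare left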

The SSD makers are so terse as to be sometimes unusably vague in their
responses. Samsung said "Over Provisioning can only be performed on the
last accessible partition." What does that mean? That the unallocated
space must sit after the last partition? Well, although by accident,
that's how I (and Samsung Magician) have done it: the SSD showed up with
one partition consuming all usable capacity, and I (or Samsung Magician)
shrank that partition to make room for unallocated space at the end.
However, SSD makers seem to be alchemists or witches: once they decide
on their magic brew of ingredients, they keep it a secret.

I have increased OP using Samsung Magician, and decreased it, too. All
it did was change the size of the unallocated space by shrinking or
enlarging the last partition, so the change in unallocated space was
after the last partition. When shrinking the unallocated space, it was
not apparent in Samsung Magician whether any bad cell blocks that had
been remapped into that space got re-remapped into the static OP space,
which would reduce endurance. Since the firmware has marked a block as
bad, it still has to be remapped into static or dynamic OP; if
unallocated space were reduced to zero (no dynamic OP), static OP gets
used for the remappings. However, I haven't found anything that
discusses what happens to remappings in dynamic OP when the unallocated
space is shrunk.

Samsung Magician's OP adjustment looks to be nothing more than a limited
partition manager that shrinks or enlarges the last partition, which is
the same thing you could do with any partition manager. I suspect any
remap targets in the dynamic OP do not get rewritten into the static OP,
so you could end up with data corruption: a bad block got mapped into
dynamic OP, you reduced the size of dynamic OP, which means some of
those mappings are gone, and they were not written into static OP.
Maybe Samsung's Magician is smart enough to move the dynamic-OP remaps
into static OP; I don't see that happening, yet it could keep that
invisible to the user. Only if I had a huge number of remappings stored
in dynamic OP and then shrank the unallocated space might I notice the
extra time spent copying those remappings into static OP, compared to
just using a partition tool to enlarge the last partition.

Since the information doesn't seem to be available, I err on the side of
caution: I only reduce dynamic OP immediately after enlarging it, if I
decide the extra OP consumed a bit more capacity in the last partition
than I want to lose. Once I have set dynamic OP and used the computer
for a while, I don't reduce it. I have yet to find out what happens to
the remappings in dynamic OP when it is reduced. If I later need more
space in the partition, I get a bigger drive, clone to it, and decide on
dynamic OP at that time. With a bigger drive, I will probably reduce
the percentage of dynamic OP, since the same percentage would be a huge
waste of space. For a drive clone, the static or dynamic remappings
from the old drive aren't copied to the new drive. The new drive will
have its own independent remappings, and the reads during the clone copy
the remapped data from the old drive into the new drive's partition(s).
Old remappings vaporize during the copy to a different drive.

Unless reducing the dynamic OP (the unallocated space) is done very soon
after creating it, so there is little chance of new remappings landing
there in between, I would be leery of shrinking unallocated space on an
SSD that has seen a lot of use over a long time. Cells will go bad in
SSDs; that is why remapping is needed. I don't see any tool that moves
remappings out of dynamic OP into static OP when the dynamic OP gets
reduced. You can decide not to use dynamic OP at all, and hope the
factory-set static OP works okay for however long you own the SSD. Or
you can decide to sacrifice some capacity to define dynamic OP, but I
would recommend only creating it, and perhaps later enlarging it, but
not shrinking it. I just can't find info on what happens to the remaps
in dynamic OP when it is shrunk.

Overprovisioning, whether static (fixed, set at the factory) or dynamic
(unallocated space within the usable capacity left after static OP),
always reduces the capacity of the drive. The reward is reduced write
amplification, better sustained performance (though never better than
when the drive was new), and more endurance. You trade some of one for
the other. It's like insurance: the more you buy, the less money you
have now, but you hope you won't be spending a lot more later.
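
The capacity side of that trade is easy to put numbers on. A quick
sketch of my own, using a nominal 1 TB (1000 GB) drive as the example:

    # How much usable space different dynamic OP choices cost on a
    # nominal 1 TB (1000 GB) drive. Illustrative arithmetic only.

    usable_gb = 1000
    for op_pct in (0, 7, 10, 20):
        reserved = usable_gb * op_pct / 100
        print(f"{op_pct:>2}% dynamic OP: {reserved:>5.0f} GB reserved, "
              f"{usable_gb - reserved:>5.0f} GB left for partitions")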

> If - assuming you _can_ - you reduce the space for overprovisioning to
> zero (obviously unwise), will the SSD "brick" either immediately, or
> very shortly afterwards (i. e. as soon as another cell fails)?


Since the cell block is still marked as bad, it still needs to be
remapped. With no dynamic OP, static OP gets used. If you create
dynamic OP (unallocated space) where some remaps could get stored, what
happens to those remaps when you shrink the dynamic OP? Sure, the bad
blocks are still marked bad, so future writes will remap into static OP,
but what happened to the data in the remaps that sat in dynamic OP when
it went away? I don't know. I don't see any SSD tool or partition
manager that will write the remaps from dynamic OP into static OP before
reducing dynamic OP. After defining dynamic OP, reducing it could cause
data loss.

If you just must reduce dynamic OP because you need that unallocated
space allocated into a partition, your real need is a bigger drive.
When you clone (copy) the old SSD to a new SSD, none of the remaps in
the old SSD carry over to the new SSD. When you get the new SSD, you
can change the size (percentage) of unallocated space to set the dynamic
OP, but I would do that immediately after the clone (or restore from a
backup image). I'd want to reduce the unallocated space on the new,
bigger SSD as soon as possible, and might even use a bootable partition
manager to do that before the OS loads for the first time. I cannot
find what happens to the remaps in dynamic OP when it gets reduced.

> If, once an SSD _has_ "bricked" [and is one of the ones that goes to
> read-only rather than truly bricking], can you - obviously in a dock on
> a different machine - change (increase) its overprovisioning allowance
> and bring it back to life, at least temporarily?


Never tested that. Usually I replace drives with bigger ones before
they run out of free space (within a partition), or I figure out how to
move data off the old drive to free up more space. If I had an SSD that
catastrophically failed into read-only mode, I'd get a new (and probably
bigger) SSD, clone from old to new, then discard the old one.

Besides my preference to move up in capacity with a new drive once an
old drive gets over about 80% full (if I don't want to move files off of
it to reclaim a big chunk of free space), I know SSDs are
self-destructive, so I expect them to fail unless I replace them
beforehand. From my readings, and although they only give a 1-year
warranty, most SSD makers seem to plan on an MTBF of about 10 years, but
that's under a write volume "typical" of consumer use (they have some
spec that simulates typical write volume, but I've not seen those docs).
Under business or server use, the expected lifetime is much lower. I
doubt I would keep any SSD for more than 5 years in my personal
computers. I increase the dynamic OP to add insurance, because I size
drives far beyond expected usage; doubling is usually my minimum upsize
step.

I wouldn't plan on getting my SSD anywhere near the maximum write-cycle
count that would brick it into read-only mode. SMART does not report
the number of write cycles, but Samsung's Magician tool does; it must
request info from the firmware that is not part of the SMART table. My
current 1 TB NVMe M.2 SSD is about 25% full after a year's use of my
latest build. Consumption won't change much in the future (it pretty
much flattened after a few months), but if it gets to 80% full, that's
when I'd consider adding another matching NVMe M.2 SSD, or replacing the
old 1 TB one with a 2 TB or larger drive; cloning would erase all those
old remaps in the old drive (the new drive won't have them). Based on
my past experience and usage, I expect my current build to last another
7 years until the itch to do a new build gets too unbearable. I gave up
20% for dynamic OP just as insurance toward an 8-year lifespan, but I
doubt I will ever get close to bricking the SSD.
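
To put a rough number on how far away that write limit is, here's a
back-of-the-envelope estimate based on an endurance (TBW) rating rather
than a cycle count. Every figure below is an assumption of mine for
illustration (a 600 TBW rating is in the ballpark for a 1 TB consumer
NVMe drive, and the daily write volume and write amplification are
guesses), not a measurement:

    # Rough lifespan estimate from an endurance (TBW) rating.
    # The numbers are assumptions for illustration, not measurements.

    tbw_rating_tb = 600    # rated terabytes written for a 1 TB consumer drive
    writes_gb_day = 30     # assumed average host writes per day
    write_amp     = 2.0    # assumed write amplification factor

    nand_writes_tb_per_year = writes_gb_day * write_amp * 365 / 1000
    years = tbw_rating_tb / nand_writes_tb_per_year
    print(f"~{years:.0f} years to reach the TBW rating at this write rate")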

I could probably get by with just the 10% minimum for OP, but I'm
willing to spend some capacity as insurance. More than for endurance, I
added dynamic OP to keep up the performance of the SSD. After a year or
more of use, lots of users have reported that their SSDs don't perform
like they did when new. The NVMe M.2 SSD is about 5 times faster
sequentially (and more than 4 times faster for random access), for both
reads and writes, than my old SATA SSD, and I don't want to lose the joy
of speed that I felt at the start.

I might be getting older and slower, but that's not something I want for
my computer hardware as it ages.