From: VanguardLH
Newsgroups: alt.comp.os.windows-10, alt.comp.hardware
Date: March 10th 21, 02:55 PM
Subject: How it is possible

"J. P. Gilliver (John)" wrote:

> Even leaving aside Jeff's point about bits versus bytes, speed isn't
> the only important parameter for an SSD: there are probably many,
> but the one that bugs me is the tolerated number of writes - which
> for the same size SSD in the same machine/use, more or less maps to
> lifetime. You also need to know how they behave when they reach their
> end of life: do they continue trying to work (I don't think any do),
> switch to read-only, or just become a brick (at least one make/range
> does).


Overprovisioning sets how much reserve space the drive has for
remapping worn-out blocks: the more you give it, the longer the SSD
survives before its catastrophic failure. Consumer SSDs are typically
overprovisioned at about 10%, server SSDs at about 20%, though I've
seen SSDs ship with a factory overprovisioning of only 6%. For the
SSDs in my home PC, I raise the overprovisioning from 10% to 20%.
With that extra headroom, I've never kept an SSD long enough to brick
it; eventually prices come down enough that I can afford to replace an
HDD or SSD with a larger one.
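
The arithmetic behind those percentages is simple; here is a small
Python sketch (the function name is mine, and the ~7% "free" factory
figure is the usual decimal-vs-binary gigabyte gap, not any vendor's
spec):

```python
def op_reserve_gb(user_capacity_gb: float, op_percent: float) -> float:
    """Reserve space implied by an overprovisioning percentage.

    OP% here is reserve relative to user-visible capacity, so a
    1000 GB drive at 10% OP holds ~100 GB aside for remapping and GC.
    """
    return user_capacity_gb * op_percent / 100.0

# Typical consumer (10%) vs. server (20%) reserve on a 1 TB drive:
print(op_reserve_gb(1000, 10))  # 100.0
print(op_reserve_gb(1000, 20))  # 200.0

# Much of the small factory OP comes free from decimal-vs-binary
# gigabytes: NAND is built in GiB but sold in GB, yielding roughly 7%.
inherent = (2**30 - 10**9) / 10**9 * 100
print(round(inherent, 2))  # 7.37
```

That built-in ~7% is close to the 6% factory figure above; anything
beyond it is space you set aside yourself.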

https://www.seagate.com/tech-insight...its-master-ti/
https://www.youtube.com/watch?v=Q15wN8JC2L4 (skip promo ad at end)

I use Samsung's Magician to change overprovisioning on my Samsung SSDs
(https://www.samsung.com/semiconducto...mer/magician/).
However, any partition manager will work: use it to grow or shrink the
unallocated space on the SSD. Because consumer SSDs typically ship
with 10% overprovisioning or less, you shrink an allocated partition,
if there is one, to create more unallocated space. I keep the
unallocated space in one contiguous block.
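
Whichever partition tool you use, the amount to shrink is just the
target reserve minus whatever is already unallocated; a sketch of that
arithmetic (the helper name is hypothetical):

```python
def shrink_needed_gb(user_capacity_gb: float,
                     current_unalloc_gb: float,
                     target_op_percent: float) -> float:
    """How much to shave off an allocated partition (hypothetical helper).

    Returns the extra unallocated space needed so the drive's firmware
    can treat it as additional overprovisioning.
    """
    target_unalloc = user_capacity_gb * target_op_percent / 100.0
    return max(0.0, target_unalloc - current_unalloc_gb)

# Taking a 1000 GB drive with no unallocated space to an extra 10% OP:
print(shrink_needed_gb(1000, 0, 10))    # 100.0
# Already 200 GB unallocated? Nothing more to shrink:
print(shrink_needed_gb(1000, 200, 10))  # 0.0
```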

I'd rather give my SSDs a longer potential lifespan than pinch on the
capacity of the [primary] partitions. If I eventually need more space,
I get a larger-capacity drive. However, when I spec a build, I usually
go a lot larger on drive space than I expect to need over an 8-year
lifespan, or than I've used before. I get fatter with every build.

Because SSDs fail catastrophically, I use them only for the OS, apps,
and temporary data. All of those are either replaceable or
reproducible: I can reinstall the OS, reinstall the apps, reproduce
the temporary data, or restore from backups. Any critical data goes on
a separate SSD (basically my reuse of a prior SSD) and gets backed up
to an internal HDD, which is mirrored to an external HDD. This PC is
not used for business, so I don't bother with another mirror to
removable media for off-site storage. SSDs *will* fail. Under "normal"
use, most SSD makers estimate 10 years. Consider this like an MTBF
spec: you might get that, or more, or you might get less. If you're
doing video editing, animation creation, high-volume data processing,
or file serving, you're not their normal user. SSDs are
self-destructive storage devices! Expect them to fail
catastrophically, and plan for it ahead of time. All SSDs will
eventually brick unless you write to them once to establish a state or
image, treat the drive as read-only thereafter, and never write to it
again. The same self-destruction applies to USB flash drives: great
for short-term use, but don't use them for long-term storage unless
you only read from them.

"The flame that burns twice as bright burns half as long" (Lao Tzu).

https://www.google.com/url?sa=t&rct=...hite-paper.pdf
Although there is no difference between the sequential and random
write performance for fresh-out-of-the-box (FOB) NAND,
the random write does not perform as well as the sequential write once
data has been written over the entire space of the NAND.
Random writes, smaller in size than sequential writes, mix valid and
invalid pages within blocks, which causes frequent GC and
results in decreased performance. If the OP is increased, more free
space that is inaccessible by the host can be secured, and the
resulting efficiency of GC contributes to improved performance. The
sustained performance is improved in the same manner.

Due to wear levelling, which spreads writes across blocks (to reduce
oxide stress in the junctions), new writes land somewhere other than
where the data was read. Eventually all space within a partition gets
written. The drive itself can initiate firmware-based garbage
collection (GC), or the OS can initiate it via TRIM. Before a
previously used block can be reused for writes, it must be erased, and
hopefully that happens as background GC or TRIM. The waits, if any, as
on a very busy drive, hurt write performance. More overprovisioning
helps maintain initial write performance; otherwise, as many SSD users
have noticed, the SSD gets slower with age (volume of writes) until
the next firmware GC or OS TRIM (which themselves keep the drive
busy). Windows XP doesn't have TRIM built in, which is why SSD users
on that OS have to use a utility to send a TRIM (GC) request to the
drive to restore write performance.
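
The interplay of overprovisioning and GC can be seen in a toy
simulation. This is a deliberately simplified model (greedy victim
selection, uniform random overwrites, made-up block and page counts),
not any vendor's firmware, but it shows write amplification - the
extra physical writes GC adds on top of host writes - dropping as
spare space grows:

```python
import random

def write_amplification(spare_blocks: int, host_writes: int = 20000,
                        blocks: int = 64, pages: int = 32,
                        seed: int = 1) -> float:
    """Toy flash model: greedy GC under uniform random overwrites."""
    rng = random.Random(seed)
    logical = (blocks - spare_blocks) * pages   # user-visible pages
    valid = [set() for _ in range(blocks)]      # valid logical pages per block
    written = [0] * blocks                      # pages programmed since erase
    loc = {}                                    # logical page -> block holding it
    free = set(range(1, blocks))                # fully erased blocks
    active = 0                                  # block currently being filled
    stats = {"physical": 0}                     # every page program, host or GC

    def collect():
        # Greedy GC: erase the sealed block with the fewest valid pages,
        # relocating its surviving data (those copies cost extra writes).
        victim = min((b for b in range(blocks)
                      if b != active and b not in free),
                     key=lambda b: len(valid[b]))
        moved, valid[victim] = list(valid[victim]), set()
        written[victim] = 0
        free.add(victim)
        for lp in moved:
            program(lp)

    def program(lp):
        nonlocal active
        while written[active] == pages:         # need an open block
            if free:
                active = free.pop()
            else:
                collect()                       # frees one, partially refills it
        old = loc.get(lp)
        if old is not None:
            valid[old].discard(lp)              # overwrite invalidates old copy
        valid[active].add(lp)
        loc[lp] = active
        written[active] += 1
        stats["physical"] += 1

    for lp in range(logical):                   # precondition: fill the drive
        program(lp)
    base = stats["physical"]
    for _ in range(host_writes):                # steady state: random overwrites
        program(rng.randrange(logical))
    return (stats["physical"] - base) / host_writes

# More spare area -> GC finds emptier victims -> fewer relocation writes:
print(write_amplification(spare_blocks=4))   # 4/64 spare: higher WA
print(write_amplification(spare_blocks=16))  # 16/64 spare: lower WA
```

The second call always reports the lower figure: with more spare
blocks, the emptiest victim has fewer valid pages left to copy, which
is exactly the GC-efficiency effect the whitepaper excerpt above
describes.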

Unless you penny-pinched on an SSD to barely get enough space for its
expected lifetime (however long you intend to use it), increase the
overprovisioning to extend its potential lifespan and maintain write
performance. The SSD will still be a lot faster than an HDD, but you
might wonder later why writes aren't as speedy as before; plus you
might want the extra lifespan if you plan on keeping your computer for
6 years or longer without ever replacing the SSD, or on repurposing it
after getting a larger one.

You can use an SSD as shipped and hope the 10-year MTBF estimate is
sufficient for you as a "normal" user. However, if you're buying SSDs
and doing your own builds, you're not normal; consumers who buy
pre-built, average-spec computers are the normal users.
Overprovisioning lengthens lifespan and maintains write performance,
and the more the better. It does reduce capacity, but you should be
over-building the parts you install yourself in your computers, so be
excessive in storage capacity beyond your expectations. If you're
willing to toss more money at an SSD over an HDD, you should also be
willing to toss more money at extra capacity.