A computer components & hardware forum. HardwareBanter



How it is possible



 
 
  #11  
Old March 10th 21, 10:34 AM posted to alt.comp.os.windows-10,alt.comp.hardware
Paul[_28_]
external usenet poster
 
Posts: 1,467
Default How it is possible

Carlos E.R. wrote:
On 08/03/2021 00.30, J. P. Gilliver (John) wrote:
On Sun, 7 Mar 2021 at 16:12:25, micky wrote
(my responses usually follow points raised):
How it is possible that one SSD is 17 times as fast as another but costs
less? Both are 240G. Why would anyone buy the slower one (like I did
last summer)?

[]
In the comparison list of the first one, 4 of them side by side, half
way to the bottom of the page, 2 others are the same speed as the first,
but the second one is 17 times as fast.

In a similar side-by-side comparison list on the second page, the same
thing is true. Only the PNY is so fast, and for less money. Does PNY
know something the others don't?


Even leaving aside Jeff's point about bits versus bytes, speed isn't
the only important parameter for an SSD: there are probably many, but
the one that bugs me is the tolerated number of writes - which for the
same size SSD in the same machine/use, more or less maps to lifetime.
You also need to know how they behave when they reach their end of
life: do they continue trying to work (I don't think any do), switch to
read-only, or just become a brick (at least one make/range does).


Which one bricks? That's important to know.


Intel SSDs stop both reads and writes when the wear life is exceeded.
Once the wear life hits, say, 3000 writes per location, the drive
stops responding. This makes it impossible to do a backup
or a clone.

As a consequence, the user is advised to keep a Toolkit handy
which has an end-of-life predictor, for a better-quality handoff.

Of course your drive is not near end of life. But you
only know the wear rate if you check the Toolkit occasionally
for its projections on life. And you look pretty bad if the
topic slips your mind, and you start asking for help with that
"too crusty backup I made two years ago". We don't want
this topic to be handled by people losing data.
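For anyone rolling their own check instead of relying on a vendor toolkit, the wear rate can be estimated from two readings of a SMART write counter. A minimal sketch; the `Total_LBAs_Written` style counter and 512-byte sectors are assumptions, since vendors report this differently:

```python
from datetime import date

def daily_write_rate(lbas_then, lbas_now, day_then, day_now,
                     sector_bytes=512):
    """Average bytes written per day between two snapshots of a
    SMART write counter such as Total_LBAs_Written."""
    days = (day_now - day_then).days
    if days <= 0:
        raise ValueError("snapshots must be at least a day apart")
    return (lbas_now - lbas_then) * sector_bytes / days

# Example: two snapshots taken 68 days apart.
rate = daily_write_rate(100_000_000, 107_000_000,
                        date(2021, 1, 1), date(2021, 3, 10))
```

Divide the drive capacity by that rate and you have a rough feel for how fast the wear budget is being consumed.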

It's a shame that several of the toolkits suck. I was
not impressed with a couple I checked. Hobbyists could
write better code - code that displayed the salient data
to keep users informed.

And on a drive I could not keep because the hardware sucked,
the toolkit was great. That's just how this computer stuff works.

*******

The point of making an example out of Intel, is to make you
aware of what the most extreme policy is. And Intel wins the
prize in this case. Some products from competitors, will
allow you to read, and they stop writing. This allows you to
make a backup using a Macrium CD, and prepare a replacement SSD.

The reason Intel stops reading is to guard against the possibility
that read errors are not being detected properly. Intel arbitrarily
decided that only "perfect" data need apply. And they weren't going to
allow a certain BER to leak out and then have customers blame Intel
for "accepting corrupt data".

One of the BER indicators in the SSD datasheets is 10x worse
than a hard drive's (one product might be 10^-15, the other 10^-14,
that kind of thing). And you may find review articles pointing out
that this difference is a bad thing.

The ECC on SSDs is already a pretty heavyweight item. A bit
more than 10% of flash cells are likely being used just to
hold the ECC. And it's that ECC calculation that keeps TLC flash
from ruining our data. On one of the first TLC drives, every
sector had errors, and it was the ECC that transparently
made the drive look "perfect" to the user. When this happens,
the drive can slow down (ECC done by ARM cores, not hardware),
and this makes the more aggressive storage techs (QLC flash)
look bad. It's the "stale slow" drive problem - one way to
fix it is for the drive to re-write itself at intervals,
which of course depletes the wear life.
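To put a number on that trade-off: if the drive refreshes itself by a full rewrite every so many days, and each full-drive rewrite costs roughly one program/erase cycle, the share of the wear budget spent on refreshes alone is easy to estimate. A rough sketch, reusing the 3000-cycle figure from above and ignoring write amplification (both simplifications):

```python
def refresh_wear_fraction(refresh_interval_days,
                          endurance_cycles=3000, years=1.0):
    """Fraction of the rated P/E budget spent on background
    full-drive rewrites at the given refresh interval."""
    rewrites = years * 365 / refresh_interval_days
    return rewrites / endurance_cycles

# A monthly self-refresh: about 12 full-drive rewrites per year.
frac = refresh_wear_fraction(30)
```

On those assumptions a monthly refresh eats well under 1% of the budget per year, so the refresh fix is cheap in wear terms - the real cost is the background write traffic.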

The topic is a lot like BEV (electric) cars :-) "Different,
in a bad way" :-) The populace will know, when everyone has
had the mechanic tell them "your battery pack needs to be
replaced".

Paul
  #12  
Old March 10th 21, 11:10 AM posted to alt.comp.os.windows-10,alt.comp.hardware
Carlos E.R.

On 10/03/2021 11.34, Paul wrote:
Carlos E.R. wrote:



Even leaving aside Jeff's point about bits versus bytes, speed
isn't the only important parameter for an SSD: there are
probably many, but the one that bugs me is the tolerated number
of writes - which for the same size SSD in the same machine/use,
more or less maps to lifetime. You also need to know how they
behave when they reach their end of life: do they continue trying
to work (I don't think any do), switch to read-only, or just become
a brick (at least one make/range does).


Which one bricks? That's important to know.


Intel SSDs stop both reads and writes when the wear life is
exceeded. Once the wear life hits, say, 3000 writes per location, the
drive stops responding. This makes it impossible to do a backup or
a clone.


Ok. I will have to check if I have any Intel; I don't remember.


Sure, of course one must have a backup, but even if one does a daily
backup (which most people don't), the incident can happen just after one
saves important files. And as the computer typically bricks, it is not
possible to save the file elsewhere. At best, a day's work is lost.



As a consequence, the user is advised to keep a Toolkit handy which
has an end-of-life predictor, for a better-quality handoff.


Sorry, Toolkit? What is that? Ah, you mean that one must have
"something" that predicts life. True.


Of course your drive is not near end of life. But you only know the
wear rate if you check the Toolkit occasionally for its projections
on life. And you look pretty bad if the topic slips your mind, and
you start asking for help with that "too crusty backup I made two
years ago". We don't want this topic to be handled by people losing
data.

It's a shame that several of the toolkits suck. I was not impressed
with a couple I checked. Hobbyists could write better code - code
that displayed the salient data to keep users informed.

And on a drive I could not keep because the hardware sucked, the toolkit
was great. That's just how this computer stuff works.



On the Windows side of my laptops I don't have anything. On the Linux
side I have the smartd daemon (from smartmontools), but I don't know
what it says about end-of-life warnings. It might send an email.
Otherwise, it will be in the warning log.
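If the daemon's warnings are opaque, the raw attributes can also be pulled from `smartctl -A` output and inspected directly. A minimal parser sketch; the sample lines below are illustrative only, and attribute names vary by vendor:

```python
# Illustrative lines in the shape `smartctl -A` prints for a SATA SSD.
SAMPLE = """\
177 Wear_Leveling_Count     0x0013   095   095   000    Pre-fail  Always       -       163
241 Total_LBAs_Written      0x0032   099   099   000    Old_age   Always       -       21845328382
"""

def smart_values(text):
    """Map attribute name -> raw value from smartctl -A style output."""
    values = {}
    for line in text.splitlines():
        parts = line.split()
        # Columns: ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW
        if len(parts) >= 10 and parts[0].isdigit() and parts[9].isdigit():
            values[parts[1]] = int(parts[9])
    return values
```

In practice you would feed it the stdout of `smartctl -A /dev/sda` (run as root) instead of the hard-coded sample.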



*******

The point of making an example out of Intel, is to make you aware of
what the most extreme policy is. And Intel wins the prize in this
case. Some products from competitors, will allow you to read, and
they stop writing. This allows you to make a backup using a Macrium
CD, and prepare a replacement SSD.


Right.

The reason Intel stops reading is to guard against the possibility
that read errors are not being detected properly. Intel
arbitrarily decided that only "perfect" data need apply. And they
weren't going to allow a certain BER to leak out and then have
customers blame Intel for "accepting corrupt data".


heh.


One of the BER indicators in the SSD datasheets is 10x worse
than a hard drive's (one product might be 10^-15, the other 10^-14,
that kind of thing). And you may find review articles pointing out
that this difference is a bad thing.

The ECC on SSDs is already a pretty heavyweight item. A bit more
than 10% of flash cells are likely being used just to hold the ECC.
And it's that ECC calculation that keeps TLC flash from ruining our
data. On one of the first TLC drives, every sector had errors, and it
was the ECC that transparently made the drive look "perfect" to the
user. When this happens, the drive can slow down (ECC done by ARM
cores, not hardware), and this makes the more aggressive storage techs
(QLC flash) look bad. It's the "stale slow" drive problem - one way
to fix it is for the drive to re-write itself at intervals, which of
course depletes the wear life.

The topic is a lot like BEV (electric) cars :-) "Different, in a bad
way" :-) The populace will know, when everyone has had the mechanic
tell them "your battery pack needs to be replaced".

Paul


{chuckle}

--
Cheers, Carlos.
  #13  
Old March 10th 21, 01:00 PM posted to alt.comp.os.windows-10,alt.comp.hardware
Paul[_28_]

Carlos E.R. wrote:
On 10/03/2021 11.34, Paul wrote:


As a consequence, the user is advised to keep a Toolkit handy which
has an end-of-life predictor, for a better-quality handoff.


Sorry, Toolkit? What is that? Ah, you mean that one must have
"something" that predicts life. True.


Just about every brand of SSD has a software download for Windows.
In it is a tool for displaying SMART data, and also for
logging per-day consumption, in an effort to predict
on what day, in some future year, the SSD will hit 3000 write cycles.
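The projection those toolkits make can be sketched in a few lines. Assuming perfect wear levelling, no write amplification, and a constant write rate (all simplifications, and the 3000-cycle figure is the one used above):

```python
from datetime import date, timedelta

def projected_end_of_life(capacity_bytes, bytes_per_day, cycles_used,
                          endurance_cycles=3000,
                          today=date(2021, 3, 10)):
    """Rough calendar date at which the drive reaches its rated
    write endurance, at the current average write rate."""
    cycles_per_day = bytes_per_day / capacity_bytes
    days_left = (endurance_cycles - cycles_used) / cycles_per_day
    return today + timedelta(days=days_left)

# Example: 200 GB drive, 25 GB/day average, 100 cycles already used.
eol = projected_end_of_life(200_000_000_000, 25_000_000_000, 100)
```

With those (deliberately gentle) numbers the projected date is decades out, which is the usual toolkit result for light desktop use.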

How important it is to use the Toolkit download software on
Windows depends on how bad the exit behavior of the drive is
when it hits 3000 writes per cell.

One of the enthusiast tech sites did a test involving different
brands of SSDs. And for the ones which don't stop working at
3000 writes, the devices lasted at least 50% longer than that.
The devices eventually brick when the "critical data" section
of the device gets corrupted. On one device, the triggering
event was a loss of power in the lab where the test was
being carried out. And a bit later, the device died and
could not continue.

But the test was good fun while it lasted. And it shows that the
policy of killing the drive at 3000 is a very conservative one.

But if you don't automate the termination process, people will
just ignore the SMART warning and keep using the device.

Paul
  #14  
Old March 10th 21, 01:40 PM posted to alt.comp.os.windows-10,alt.comp.hardware
Carlos E.R.

On 10/03/2021 14.00, Paul wrote:
Carlos E.R. wrote:
On 10/03/2021 11.34, Paul wrote:


As a consequence, the user is advised to keep a Toolkit handy which
has an end-of-life predictor, for a better-quality handoff.


Sorry, Toolkit? What is that? Ah, you mean that one must have
"something" that predicts life. True.


Just about every brand of SSD has a software download for Windows.
In it is a tool for displaying SMART data, and also for
logging per-day consumption, in an effort to predict
on what day, in some future year, the SSD will hit 3000 write cycles.


Ah, ok, that one.


How important it is to use the Toolkit download software on
Windows depends on how bad the exit behavior of the drive is
when it hits 3000 writes per cell.

One of the enthusiast tech sites did a test involving different
brands of SSDs. And for the ones which don't stop working at
3000 writes, the devices lasted at least 50% longer than that.
The devices eventually brick when the "critical data" section
of the device gets corrupted. On one device, the triggering
event was a loss of power in the lab where the test was
being carried out. And a bit later, the device died and
could not continue.

But the test was good fun while it lasted. And it shows that the
policy of killing the drive at 3000 is a very conservative one.

But if you don't automate the termination process, people will
just ignore the SMART warning and keep using the device.


Quite possible.


--
Cheers, Carlos.
  #15  
Old March 10th 21, 02:55 PM posted to alt.comp.os.windows-10,alt.comp.hardware
VanguardLH[_2_]

"J. P. Gilliver (John)" wrote:

Even leaving aside Jeff's point about bits versus bytes, speed isn't
the only important parameter for an SSD: there are probably many,
but the one that bugs me is the tolerated number of writes - which
for the same size SSD in the same machine/use, more or less maps to
lifetime. You also need to know how they behave when they reach their
end of life: do they continue trying to work (I don't think any do),
switch to read-only, or just become a brick (at least one make/range
does).


Overprovisioning affects how much reserve space there is for remapping:
the more you have, the more remapping space is available, and the longer
your SSD survives before its catastrophic failure. Consumer SSDs get
overprovisioned 10%. Server SSDs get overprovisioned 20%. However,
I've seen SSDs get an initial (factory) overprovisioning of only 6%.
For my SSDs in my home PC, I up the overprovisioning from 10% to 20%.
Because of increased overprovisioning, I've not had an SSD long enough
to brick it. Eventually prices come down enough that I can afford to
replace an HDD or SSD with a larger one.

https://www.seagate.com/tech-insight...its-master-ti/
https://www.youtube.com/watch?v=Q15wN8JC2L4 (skip promo ad at end)

I use Samsung's Magician to change overprovisioning on my Samsung SSDs
(https://www.samsung.com/semiconducto...mer/magician/).
However, any partition manager will work: you use it to increase or
decrease the size of unallocated space on the SSD. Because consumer
SSDs typically ship with 10% or less by default, you steal
space from an allocated partition, if there is one, and leave it
unallocated. I keep the unallocated space in one block.
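The arithmetic behind "stealing" that space is simple. A sketch, with the caveat that vendors compute OP percentages differently (some against raw capacity, some against usable capacity), so treat the percentages here as an approximation:

```python
def unallocated_bytes_for_op(drive_bytes, target_op_pct,
                             factory_op_pct=0):
    """User-visible bytes to leave unallocated so that total
    overprovisioning (factory + dynamic) reaches the target."""
    extra_pct = max(0, target_op_pct - factory_op_pct)
    return int(drive_bytes * extra_pct / 100)

# Going from 10% to 20% OP on a 1 TB drive: shrink partitions
# by a further 10% of the drive, i.e. 100 GB left unallocated.
extra = unallocated_bytes_for_op(1_000_000_000_000, 20, 10)
```

The drive's firmware treats never-written (or TRIMmed) space the same as reserved area, which is why simply leaving it unallocated is enough.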

I'd rather give my SSDs a longer potential lifespan than pinch on the
capacity of the [primary] partitions. If I eventually need more space,
I get a larger capacity drive. However, usually when I spec my fab, I
go a lot larger on drive space than expected over an 8-year lifespan or
than I've used before. I get fatter with every build.

Because of catastrophic failure of SSDs, I use those only for the OS,
apps, and temporary data. All those are either replaceable or
reproducible: I can reinstall the OS, reinstall the apps, or reproduce
the temporary data, or get them from backups. Any critical data goes on
a separate SSD (basically my reuse of a prior SSD), and gets backed up
to an internal HDD which is mirrored to an external HDD. This PC
is not used for business, so I don't bother with another mirror to
removable media for off-site storage. SSDs *will* fail. Under "normal"
use, most SSD makers give an estimate of 10 years. Consider this like an
MTBF spec: you might get that, or higher, or you might get less. If
you're doing video editing, animation creation, high-volume data
processing, or file services, you're not their normal user. SSDs are
self-destructive storage devices! Expect them to catastrophically fail,
so plan on it ahead of time. All SSDs will eventually brick unless you
write once to establish a state or image, after which the drive becomes
a read-only device and you never write to it again. The same
self-destruct occurs with USB flash drives: great for short-term use,
but don't use them for long-term storage unless you only read from them.

"The flame that burns twice as bright burns half as long" (Lao Tzu).

https://www.google.com/url?sa=t&rct=...hite-paper.pdf
Although there is no difference between the sequential and random
write performance for fresh-out-of-the-box (FOB) NAND,
the random write does not perform as well as the sequential write once
data has been written over the entire space of the NAND.
Random writes, smaller in size than sequential writes, mix valid and
invalid pages within blocks, which causes frequent GC and
results in decreased performance. If the OP is increased, more free
space that is inaccessible by the host can be secured, and the
resulting efficiency of GC contributes to improved performance. The
sustained performance is improved in the same manner.

Due to wear levelling to reduce writes on a particular block (to reduce
oxide stress in the junctions), new writes are positioned somewhere else
than where the data was read. Eventually all space within a partition
gets written. The drive itself can initiate a firmware-based garbage
collection (GC), or the OS can initiate via TRIM. Before a previously
used block can get reused for writes, it must be zeroed, and hopefully
that occurs as a background GC or TRIM. The waits, if any, like on a
super busy drive, affect write performance. More overprovisioning helps
maintain initial write performance; else, as noticed by many SSD users,
the SSD can get slower with age (volume of writes) until the next
firmware GC or OS TRIM (which are still keeping the drive busy).
Windows XP doesn't have TRIM built into it, which is why SSD users on
that OS have to use a utility to send a TRIM (GC) request to the drive
to restore write performance.

Unless you overly penny-pinched on an SSD to barely get enough space
over its expected lifetime (how long you intend to use it), increase the
overprovisioning to up its potential lifespan and maintain write
performance. The SSD will still be a lot faster than an HDD, but you
might wonder later why it isn't as speedy as before for writes, plus you
might want to increase potential lifespan if you plan on keeping your
computer for 6 years, or longer, without ever replacing the SSD, or
continuing to repurpose it after getting a larger one.

You can use an SSD as shipped and hope the MTBF of 10 years is
sufficient for you as a "normal" user. However, if you're buying SSDs
and doing your own builds, you're not normal. Consumers that buy
pre-built average-spec computers are normal users. Overprovisioning
lengthens lifespan and maintains write performance, and the more the
better. It does reduce capacity, but then you should be over-building
on the parts you install yourself in your computers, so you should be
excessive in storage capacity beyond your expectations. If you're willing
to toss more money to get an SSD over an HDD, you should also be willing
to toss more money to get extra capacity.
  #16  
Old March 10th 21, 03:45 PM posted to alt.comp.os.windows-10,alt.comp.hardware
J. P. Gilliver (John)[_3_]

On Wed, 10 Mar 2021 at 08:55:04, VanguardLH wrote (my
responses usually follow points raised):
[]
https://www.seagate.com/tech-insight...its-master-ti/
https://www.youtube.com/watch?v=Q15wN8JC2L4 (skip promo ad at end)

I use Samsung's Magician to change overprovisioning on my Samsung SSDs
(https://www.samsung.com/semiconducto...t/consumer/magician/).
However, any partition manager will work where you use it to increase or
decrease the size of unallocated space on the SSD. Because the consumer


Is that all these routines do - declare space to be unallocated?
[]
"The flame that burns twice as bright burns half as long" (Lao Tzu).

I like it (-:
[]
You can use an SSD as shipped and hope the MTBF of 10 years is
sufficient for you as a "normal" user. However, if you're buying SSDs
and doing your own builds, you're not normal. Consumers that buy
pre-built average-spec computers are normal users. Overprovisioning
lengthens lifespan and maintains write performance, and the more the
better. It does reduce capacity, but then you should be over-building
on the parts you install yourself in your computers, so you should be
excessive in storage capacity beyond your expectations. If you're willing
to toss more money to get an SSD over an HDD, you should also be willing
to toss more money to get extra capacity.


Do you have an (approximate) table of overprovisioning versus
(predicted) lifespan extension (as a percentage, not years)? For
example, what percentage increase in lifespan does increasing
overprovisioning from 10% to 20% give you? Does overprovisioning by 50%
give you double the (predicted) lifespan? And so on.
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)[email protected]+H+Sh0!:`)DNAf

Never raise your hand to your children. It leaves your mid-section unprotected
  #17  
Old March 10th 21, 06:18 PM posted to alt.comp.os.windows-10,alt.comp.hardware
VanguardLH[_2_]

"J. P. Gilliver (John)" wrote:

VanguardLH wrote:

I use Samsung's Magician to change overprovisioning on my Samsung SSDs
(https://www.samsung.com/semiconducto...mer/magician/).
However, any partition manager will work where you use it to increase
or decrease the size of unallocated space on the SSD.


Is that all these routines do - declare space to be unallocated?


There is a fixed minimum overprovisioning space assigned within the
firmware for space consumed on the drive. You can't overprovision for
less than that. You can't touch that. It's fixed. You can only add
more overprovisioning space (aka dynamic overprovisioning) by
unallocating more space on the SSD. Yep, that's all user-configurable
overprovisioning does: unallocate space on the drive. Use the
partitioning tool that came with the SSD (or available online from the
SSD maker), or use any partitioning tool to change the amount of
unallocated space.

The articles to which I linked also mention using a partition manager to
change the unallocated space on a drive. Not everyone installs 3rd-party
partition managers, and the Disk Mgmt tool included in Windows is
dismal. So, some SSD makers provide their own utility to report lots of
stats on the SSD along with some performance and endurance functions.

Do you have an (approximate) table of overprovisioning versus
(predicted) lifespan extension (as a percentage, not years)? For
example, what percentage increase in lifespan does increasing
overprovisioning from 10% to 20% give you? Does overprovisioning by
50% give you double the (predicted) lifespan? And so on.


How long have SSDs been around at a price tolerable by consumers? Yeah,
and you want me to somehow have experience with SSDs for over 10 years
to check on actual stats across multitudes of different brands and
models of SSDs. Maybe BackBlaze has stats for you.

The less mileage you put on your car, the longer it will last. The less
oxide stress you put on NAND junctions, the longer the junctions last.
Is there a direct linear relationship between mileage or writes that
lets you know exactly how much you extend the lifespan of your car or
SSD? Nope. Your car will rust away, rust rings will form on the slide
rod for the brake calipers, tires will go flat and can get permanently
distorted if not rotated, the lighter solvents in the gasoline will
escape despite a closed fuel system, the wipers will deteriorate, and so
on while your car is parked for years in your driveway. Instead of
catastrophically failing after 10 years or 10 million writes, your SSD
could fail in a month after only a couple thousand writes. It takes a
lot of samples to calculate MTBF, and that's just a statistical
guesstimate.

Increased OP increases endurance, and that's due to the self-destructive
technology of SSDs. I'm not a datacenter employing thousands of SSDs, so
I don't have a sufficient history and sampling size to equate percent
increase of OP to percent increase in endurance. Why do the SSD
manufacturers add a minimum and fixed level of OP if it didn't help
lengthen lifespan (and reduce warranty costs)? Why do they recommend
increasing OP if your SSD experiences an elevated volume of writes? Why
do they even bother to project an expected lifespan (in writes) of
their SSDs?

https://www.kingston.com/unitedstate...erprovisioning
When we compare each paired capacities, we can see the following:
1. The higher capacity drives (less OP) in each pair can maintain the
same transfer speeds (Bandwidth), but Random Write IOs per Second
(IOPS) are significantly reduced. That means that drives with less
OP will perform well in Read intensive applications, but may be
slower in write-intensive applications compared to drives with 32%
OP.
2. Less over-provisioning also means that Total Bytes Written (TBW) in
Terabytes on each drive will be lower. The greater the OP
percentage, the longer an SSD can last. A 960GB DC500R can
accommodate up to 876TBW of data written, whereas the 800GB DC500R
can achieve 860TBW. TBW numbers are derived by Kingston using JEDEC
workloads as a reference.
3. When the TBW numbers are translated into Drive Writes Per Day
(DWPD) over the warranty period, we can see that drives with 32% OP
almost reach double the amount of writes per day. This is why 32%
OP is recommended for applications that are more write-intensive.
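The DWPD translation in point 3 is just arithmetic over the warranty period. A sketch using the 960GB DC500R figures quoted above (876 TBW; the 5-year warranty period is my assumption, it isn't stated in the quote):

```python
def dwpd(tbw_terabytes, capacity_gb, warranty_years):
    """Drive Writes Per Day implied by a TBW rating over the
    warranty period (365-day years assumed)."""
    bytes_total = tbw_terabytes * 1e12
    bytes_per_day = bytes_total / (warranty_years * 365)
    return bytes_per_day / (capacity_gb * 1e9)

# 876 TBW on a 960 GB drive over 5 years works out to 0.5 DWPD.
rating = dwpd(876, 960, 5)
```

Run the same formula on the paired lower-capacity/higher-OP model and you can see directly how the extra OP shows up as a higher DWPD figure.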

I'll let you contact Kingston on obtaining their testing records. I
don't design nor manufacture SSDs. I refer to the manufacturers
(already cited, and above is another) and defer to what they say about
their products. They add enough OP to reliably get past the warranty
period to reduce their costs for replacement, and increase lifespan
sufficiently to not impact their image regarding quality. If you want
more endurance than what they gave, you up the OP. You're upping the
potential lifespan, not guaranteeing it. However, SSDs do have a
maximum number of writes, so it's your choice, based on your usage and
write volume, what level of OP you are comfortable with.

You can keep the fixed OP level set at the plant for the SSD you buy,
and hope your write volume doesn't exceed the "normal" level upon which
the TBW was calculated (i.e., you hope to never reach nor even approach
the max writes threshold). If capacity is so tight that you consider an
increase in OP a waste and desperately need that extra capacity as
usable file space, then you bought an undersized SSD. Hell, most users
never even bother to check both the read and *write* speeds, and buy
merely based on capacity and price. The makers aren't ignorant of who
their customers are: they fix a lower OP for consumer drives, knowing
consumers won't "normally" produce the same write volume as corporate
customers employing SSDs in server operations (like file servers), where
the higher write volume would shorten the life of a consumer-configured
SSD. So they up the OP for server-oriented SSDs, or at least recommend
that the OP be increased in server deployments.

You can use the SSD as you bought it, and hope you get the endurance of
10 years (although the maker only warranties the SSD for 1 or 5 years).
Or you can buy insurance by upping the OP at the cost of capacity. When
did you ever get insurance for free? Just like with anti-malware, it's
up to you to decide where is your comfort level. Only you know what is
your write volume, assuming you ever monitor it, to know how to project
the lifespan of your SSD.

There is no SMART attribute to monitor the TBW limit of SSDs. At best,
the utility provided by the SSD maker might give the total writes so
far on the SSD. Alas, you'll find it nearly impossible to find the max
TBW for an SSD; you only know the limit exists. For longer endurance,
you want to push that out. You could reduce your write volume, or you
could increase the OP.

I have a Samsung 970 Pro 1TB NVMe SSD in an m.2 slot on the mobo. I
also have a Samsung 850 EVO 250GB on a SATA3 port. The NVMe SSD drive
is 5 times faster on both reads and writes than the SATA3 SSD. The
SATA3 SSD used to be my OS+apps drive, but got repurposed in a new build
where the NVMe SSD was used for the OS and apps (and some temp data),
and the old SATA3 SSD got used for non-critical data. As for the total
bytes written (TBW):

NVMe: 13.1 TBW
SATA3: 33.4 TBW

The SATA3 SSD's TBW includes when it used to be the OS+app drive, and
later when it became a data drive used to hold captured video streams,
encrypted volumes (Veracrypt), movies (which eventually get moved to a
USB HDD on my BD player to watch movies on my TV), and WSUS Offline (for
an offline update store of my OS). I didn't bother to see what was the
SATA3 SSD's TBW before repurposed in the new build.

For the 1 TB NVMe SSD, I upped OP from 10% to 20% at the cost of
consuming 190 GB in unallocated space on the drive. I also upped OP
from 10% to 20% on the 256 GB SATA3 SSD at the cost of 46 GB. A year
after the build, and after upping the OP, I'm currently using 25% of the
remaining capacity of the 1 TB NVMe SSD, and 61% of the remaining
capacity of the 256 GB SATA3 SSD (of which 50% is for a 100 GB Veracrypt
container). So my planning worked out: I had far more capacity on the
SSDs than I'd use, and upping the OP was an easy choice.

Samsung's Magician shows me the TBW of each SSD, but it doesn't show the
maximum writes for each SSD, so I don't know how close the TBW is to the
max writes. Samsung just says the MTBF for the SSDs is 10 years using
some JEDEC "normal" write volume per day specification. No way to see
how close I am to the cliff to prevent falling off.

This is similar to the SMART attribute showing number of remapped
sectors, but nothing about the max size of the G-list reserved for
remapping of sectors during use. The P-list (primary defect list) is
defined during testing when manufactured; i.e., the manufacturer found
and recorded the remaps. The G-list (grown defects list) is how many
remaps happened later, like when the drive is placed into use; i.e., it
is the remap list after manufacture. When I see a sudden increase of,
say, 300 remapped sectors in SMART, is that a low or high percentage of
the G-list? Don't know, because I don't know what is the size of the
G-list (how many entries it can hold), and the HDD makers won't tell
you. Oooh, a jump, but is that big, and how much is left for further
remaps? Don't know. HD Sentinel alerted that a HDD experienced a large
jump in the remapped sector count, but it can't show what is the max
(G-list size). I get an alert that's meaningless, because I don't know
how big a jump that was (percentage), or how close I'm to the cliff
(when there is no more G-list remapping space). Did I replace the HDD?
Nah, I used it for another 2 years as the primary drive for the OS and
apps, and later put it in an external USB enclosure (because my next
build used an NVMe SSD), and have continued using it for another 3
years. I got an alert, but an uninformative one, that would've scared
most users into buying a new HDD. If the G-list held only 1000 entries,
a jump of 300 would've been scary, and had me replace it. If the G-list
held 10,000 entries, a jump of 300 would be only 3% of the table size,
I'd have another 97% available, and the alert exaggerates the problem.
But I cannot find out from SMART, mfr specs, or anywhere else what is
the size of the G-list.
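The point about the missing denominator is easy to make concrete. A trivial sketch; both G-list sizes below are hypothetical, which is exactly the problem:

```python
def remap_jump_severity(jump, glist_size):
    """Jump in remapped sectors as a fraction of the (unknown)
    G-list capacity."""
    return jump / glist_size

# The same 300-sector jump reads very differently depending on
# whether the G-list holds 1,000 entries or 10,000.
small_table = remap_jump_severity(300, 1_000)    # 30% consumed
large_table = remap_jump_severity(300, 10_000)   # 3% consumed
```

Without the table size, the alert is a numerator with no denominator, which is why it scares people without informing them.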

Without knowing max writes on an SSD, the TBW doesn't tell you how close
you're getting to catastrophic failure (the cliff). Without SMART
telling you the size of the G-list, you don't know how close you are
getting to an HDD failure. You have X failures. Okay, so how many are
left, or what max are allowed? Don't know. Intensity (how many remaps
versus total allowed, or the current TBW versus the max writes allowed)
is a measurement that is not available to users.

You can find some specs, like Samsung saying their NVMe SSDs will endure
1200 TBW per year, and are warrantied for 5 years. Does that mean the
max writes are 1200 TBW/year times 5 years for 6000 TBW total because
you assume they manufacture a product to just make it past warranty to
reduce their costs for replacements? If so, and with 13.1 TBW, so far,
on my NVMe SSD after deployed for 2 years, that means the expected
lifespan is 916 years. Yeah, right. Would doubling the OP give me a
1832-year lifespan? Only in a wet dream. I certainly design my builds
to definitely out-live an expected 8-year lifespan. I'm known for
overbuilding. You should see my carpentry projects. Mike Holmes would
be very impressed, but would remark that I wasn't cost efficient.
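For the record, the 916-year figure above checks out as arithmetic, taking the author's own assumption that max writes = rated TBW/year times the warranty years:

```python
# Assumed ceiling: 1200 TBW/year over a 5-year warranty.
max_tbw = 1200 * 5            # 6000 TBW total
tbw_per_year = 13.1 / 2       # 13.1 TBW written over 2 years
years_to_limit = max_tbw / tbw_per_year
```

Which is to say the absurd result comes from the assumption about max writes, not from the division.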

Overprovisioning, whether you use the fixed amount reserved by the
manufacturer or how much more you add, is insurance to prolong endurance
and maintain write performance. It is NOT a guarantee.
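As a rough illustration of the factory-reserved amount: if a "240 GB"
consumer drive is built from 256 GiB of raw NAND (an assumption for
illustration; actual raw capacities vary by model), the built-in
overprovisioning works out to about 14.5%:

```shell
raw=$((256 * 1024 * 1024 * 1024))    # 256 GiB of physical NAND, assumed
user=$((240 * 1000 * 1000 * 1000))   # 240 GB advertised user capacity
op=$(awk -v r="$raw" -v u="$user" 'BEGIN { printf "%.1f", (r - u) * 100 / u }')
echo "built-in overprovisioning: ${op}%"   # ~14.5%
```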
  #18  
Old March 10th 21, 07:04 PM posted to alt.comp.os.windows-10,alt.comp.hardware
nospam

In article , VanguardLH
wrote:


How long have SSDs been around at a price tolerable by consumers?


about 10-15 years.

Yeah,
and you want me to somehow have experience with SSDs for over 10 years
to check on actual stats across multitudes of different brands and
models of SSDs.


hard drives don't normally last that long, so why demand it from an ssd?

the reality is that ssds are more reliable than spinning hard drives in
addition to being significantly faster, and that's without needing to
jump through hoops to manually reprovision anything.

other parts of the computer will need replacing before the ssd wears
out.

more commonly, users want more space and upgrade a perfectly fine ssd,
just as they did with hard drives.

Maybe BackBlaze has stats for you.


in fact, they do.
https://www.backblaze.com/blog/wp-co.../blog-drivestats-quarter-failure.jpg
https://www.backblaze.com/blog/wp-co.../blog-drivestats-3-lifecycles.jpg
https://www.backblaze.com/blog/wp-co.../blog-drivestats-6-year-life.jpg

after about 3 years, hard drive failures dramatically increase, and by
6 years, half are expected to fail.
  #19  
Old March 10th 21, 07:21 PM posted to alt.comp.os.windows-10,alt.comp.hardware
David W. Hodgins

On Wed, 10 Mar 2021 09:55:04 -0500, VanguardLH wrote:
I use Samsung's Magician to change overprovisioning on my Samsung SSDs
(https://www.samsung.com/semiconducto...mer/magician/).
However, any partition manager will work: use it to increase or
decrease the size of the unallocated space on the SSD. Because consumer
SSDs typically ship with 10% or less overprovisioning by default, you
steal space from an allocated partition, if there is one, to make it
unallocated. I keep the unallocated space in one block.


Using Linux here. If you reduce a partition size to leave some
unallocated space, how would the drive controller be informed? The
fstrim command only works on mounted partitions.

I have one spinning rust drive and three ssd drives. For the ssd drives ...

# smartctl -a /dev/sdb | grep -i -e life -e wear -e model
Model Family: Indilinx Barefoot_2/Everest/Martini based SSDs
Device Model: OCZ-AGILITY4
232 Lifetime_Writes 0x0000 100 100 000 Old_age Offline - 79371192420
233 Media_Wearout_Indicator 0x0000 094 000 000 Old_age Offline - 94

# smartctl -a /dev/sdc | grep -i -e life -e wear -e model
Model Family: Phison Driven SSDs
Device Model: KINGSTON SEDC400S37960G
231 SSD_Life_Left 0x0013 100 100 000 Pre-fail Always - 100
241 Lifetime_Writes_GiB 0x0012 100 100 000 Old_age Always - 605
242 Lifetime_Reads_GiB 0x0012 100 100 000 Old_age Always - 472

Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# smartctl -a /dev/sdd | grep -i -e life -e wear -e model
Model Family: Intel 53x and Pro 1500/2500 Series SSDs
Device Model: INTEL SSDSC2BW240A4
226 Workld_Media_Wear_Indic 0x0032 100 100 000 Old_age Always - 65535
233 Media_Wearout_Indicator 0x0032 100 100 000 Old_age Always - 0
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error

The OCZ-AGILITY4 at nearly 80 billion writes is getting close to its limit.
I've been using it since early 2013.
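If the OCZ's raw Lifetime_Writes value counts 512-byte sectors (an
assumption; vendors differ on the units of attribute 232), the total
written works out to roughly 40.6 TB:

```shell
sectors=79371192420                  # raw Lifetime_Writes from smartctl above
tb=$(awk -v s="$sectors" 'BEGIN { printf "%.1f", s * 512 / 1e12 }')
echo "lifetime writes: ${tb} TB"     # ~40.6 TB, if the raw value is 512-byte sectors
```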

Thanks for the heads up about losing read access too!

Regards, Dave Hodgins

--
Change to for
email replies.
  #20  
Old March 10th 21, 11:42 PM posted to alt.comp.os.windows-10,alt.comp.hardware
Paul[_28_]

David W. Hodgins wrote:
On Wed, 10 Mar 2021 09:55:04 -0500, VanguardLH wrote:
I use Samsung's Magician to change overprovisioning on my Samsung SSDs
(https://www.samsung.com/semiconducto...mer/magician/).

However, any partition manager will work: use it to increase or
decrease the size of the unallocated space on the SSD. Because consumer
SSDs typically ship with 10% or less overprovisioning by default, you
steal space from an allocated partition, if there is one, to make it
unallocated. I keep the unallocated space in one block.


Using Linux here. If you reduce a partition size to leave some
unallocated space, how would the drive controller be informed? The
fstrim command only works on mounted partitions.


Create a partition in the unallocated area, use fstrim,
then remove that partition again.
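A sketch of that sequence on Linux. The device name, partition number,
mount point, and the 90%-100% partition bounds are all assumptions to
adjust for your own layout; with DRY_RUN=1 (the default here) the
commands are only printed, not executed:

```shell
# Temporarily give the unallocated area a filesystem so fstrim can reach
# it, then remove the partition again, leaving the space unallocated.
trim_unallocated() {
  dev="$1"; partnum="$2"; part="${dev}${partnum}"; mnt="$3"
  run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi; }
  run parted -s "$dev" mkpart primary ext4 90% 100%  # cover the free tail of the disk
  run mkfs.ext4 -q "$part"                           # any TRIM-aware filesystem works
  run mount "$part" "$mnt"
  run fstrim -v "$mnt"                               # tell the controller those blocks are free
  run umount "$mnt"
  run parted -s "$dev" rm "$partnum"                 # back to unallocated space
}

trim_unallocated /dev/sdb 2 /mnt/scratch   # dry run: prints the six commands
```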

Paul