A computer components & hardware forum. HardwareBanter


"Why I Will Never Buy a Hard Drive Again"



 
 
  #11  
VanguardLH, August 9th 18, 09:47 AM, posted to alt.comp.hardware.pc-homebuilt

Remapping (redirection from old bad sectors/blocks to spare
sectors/blocks in the reserve space) is done in the drive's
firmware, although the OS can direct the firmware to flag a
sector/block as bad to trigger the remap. Before a reserved
sector can be used, it is tested; if it passes, the data is
copied there and the old sector/block is flagged bad by the
drive's firmware.

Even if you reformat the drive, the drive's defect table still
records the blocks the firmware flagged as bad. Sectors/blocks
that get remapped will NEVER be reusable by the OS or any
process; once flagged, they are marked bad for life.

Drives already come with some bad sectors or memory blocks.
Those get masked at the time of manufacture, when the devices
are tested before release. That table of factory defects is
recorded in ROM on the drive's PCB. There is also EEPROM on the
PCB so more blocks can be flagged during use of the drive;
i.e., the OS or tools can determine a block is flaky and get
the drive's firmware to remap it. That's why moving the PCB
from one drive to another, even between the exact same make,
model, and revision, results in mismatched after-manufacture
defect lists: a block the transplanted PCB considers good may
be bad on the drive it was moved to, and a block it has marked
bad may be perfectly good there.

https://www.mjm.co.uk/articles/bad-s...remapping.html

Note the difference between the primary remapping (P-list) done
at the time of manufacture and the G-list remapping performed
during use of the drive.
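
You can watch the G-list grow (the P-list isn't exposed) through
SMART attribute 5, Reallocated Sector Count. A minimal sketch in
Python, assuming smartmontools is installed and the drive sits
at /dev/sda:

    # Minimal sketch: read the grown-defect count via smartmontools.
    # Assumes smartctl is installed; usually needs root.
    import subprocess

    def reallocated_sectors(device="/dev/sda"):
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "Reallocated_Sector_Ct" in line:   # SMART attribute 5
                return int(line.split()[-1])      # raw value, usually the last column
        return None

    print("Sectors remapped into the reserve so far:", reallocated_sectors())

A rising raw value means the firmware is steadily eating into
the reserve area.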

SSDs are a little different: every write is a remap. That's part
of the wear-leveling algorithm that spreads writes out across
the entire capacity of the SSD. You write some data in one
place; when you later rewrite it, it lands somewhere else on the
SSD. When a block is determined to be bad, it gets flagged as
unusable and remains unusable for the life of the SSD. So it
looks like I mixed HDD and SSD together regarding remapping:
HDDs remap only when a sector is determined to be bad, which
redirects that sector to a reserve sector; SSDs remap on every
write as part of wear leveling, and blocks found bad get flagged
unusable forever.
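
As a toy illustration of that remap-on-every-write idea (nothing
like real firmware, just the bookkeeping):

    # Toy flash-translation-layer sketch, purely illustrative.
    class ToyFTL:
        def __init__(self, nblocks):
            self.wear = [0] * nblocks   # program/erase count per physical block
            self.bad = set()            # flagged-unusable blocks, bad for life
            self.l2p = {}               # logical block address -> physical block

        def retire(self, pblock):
            self.bad.add(pblock)        # e.g. after a failed program/verify

        def write(self, lba, data, flash):
            in_use = set(self.l2p.values())
            free = [p for p in range(len(self.wear))
                    if p not in self.bad and p not in in_use]
            target = min(free, key=lambda p: self.wear[p])   # least-worn block
            flash[target] = data
            self.wear[target] += 1
            self.l2p[lba] = target      # remap: same LBA, new physical home
            return target

    flash = {}
    ftl = ToyFTL(nblocks=8)
    print(ftl.write(0, "v1", flash), ftl.write(0, "v2", flash))  # two different blocks

Writing the same logical block twice lands it on two different
physical blocks, and anything added to the bad set is simply
never offered again.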

I've seen benchmarks where SSDs get slower on writes as more
blocks are flagged unusable, and I'm not sure why, since
remapping already happens on every write to move the data
somewhere else. Users complain about gradual degradation of SSD
performance even while using TRIM and GC. Since remapping is
constant and flagged blocks can never be reused, you would
expect merely a gradual reduction in the SSD's capacity as
blocks go bad, yet I've seen them fail catastrophically, and
that has been an artifact attributed to SSDs for quite a while.

With blocks getting marked unusable forever, which would reduce
the capacity of the SSD, how is it the partition doesn't shrink?
SSDs use overprovisioning: they have about 7%, or more, extra
space beyond what the OS can get at. When blocks get flagged
unusable, a replacement apparently comes out of that
overprovisioned reserve. This is akin to how HDDs have reserve
space to which bad sectors get remapped (in the drive's
firmware, not in the file system). It seems the catastrophic
failure of SSDs that others have noted happens when the total of
flagged unusable blocks exceeds the reserve in the
overprovisioning; any remapping after that would corrupt the
file system within the partition. Health monitors can use SMART,
for HDDs or SSDs, to determine if a drive is running out of
reserve space.

https://www.seagate.com/tech-insight...its-master-ti/
https://www.kingston.com/us/ssd/overprovisioning

Where could a sector for a cluster in the file system reside if
no more bad blocks can be absorbed by the reserve? I have never
attempted to adjust the overprovisioning of an SSD. If I
increase the overprovisioning, the SSD's usable capacity
decreases, which means the [aggregate] size of my partition[s]
would have to decrease.
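
The capacity trade-off is plain arithmetic. Using the usual
definition (reserve divided by user-visible space), a sketch
with made-up figures:

    # Illustrative only; the capacities are made up, not my drive's.
    physical_gib = 256                  # raw NAND on the drive
    op = 0.10                           # 10% overprovisioning

    user_gib = physical_gib / (1 + op)  # space the OS can partition
    reserve_gib = physical_gib - user_gib
    print(f"user-visible {user_gib:.1f} GiB, reserve {reserve_gib:.1f} GiB")
    # Raise op to 0.20 and user_gib shrinks, so the partitions on it must too.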

https://www.youtube.com/watch?v=bHf6rCDUTYU

I have a Samsung SSD and also have their Magician software;
however, I don't touch overprovisioning. It shows my SSD has 10%
set aside as overprovision space. While the YouTube video shows
how to remove overprovisioning, that looks stupid and dangerous.
The articles I've read about *improving* an SSD's performance
say to *increase* the overprovisioning. Also, with more
overprovisioning you presumably get a larger reserve, so more
blocks can go bad before the SSD fails catastrophically
(hopefully by becoming a read-only device, though that means the
OS won't load, since it also needs to write).

http://www.atpinc.com/Memory-insider...-wear-leveling

Upon further reading, the progressive slowdown of SSDs that
others experience (but not me) might be due to them piling too
much onto their SSD. At a certain point, an SSD slows down as it
fills up.

https://www.howtogeek.com/165542/why...-fill-them-up/
https://pureinfotech.com/why-solid-s...ce-slows-down/

That explains why SSDs slow down as you fill them up. I keep my
SSDs at low consumption; for example, on my home PC, I'm
currently using only 20% of the SSD's rated capacity. It could
be that the progressive slowdown others notice on their PCs is
due to overfilling their SSDs, like beyond 70%. I've never
really kept track of how full other users' SSDs are, only that
they complain their SSD has slowed down and is nowhere near as
speedy as when they first installed it.
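
If you want to keep an eye on that yourself, a small sketch (the
70% line is only the rule of thumb from those articles, not a
hard limit):

    # Warn when a drive is filled past a rule-of-thumb threshold.
    import shutil

    def fill_report(path="C:\\", threshold=0.70):
        usage = shutil.disk_usage(path)
        fill = usage.used / usage.total
        note = "expect slower writes" if fill > threshold else "plenty of headroom"
        print(f"{path} is {fill:.0%} full ({note})")

    fill_report()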

Note that catastrophic failure for an SSD does *not* mean the
drive becomes unreadable. The SSD will [hopefully] switch into
read-only mode, with no more writing. Since the OS wants to
write to its own partition, that is likely a cause of the
"catastrophic failure": the OS cannot load. So you can probably
still get your data off a failed SSD, used as a data drive,
unless the cause of failure was in non-media components.

SSDs wear out and stop being writable but should still be
readable, unless the failure was not in the media (memory).
Degrading into a read-only drive is the best-case scenario for a
failed SSD, and it seems that is how they are designed. HDDs die
due to failure of their mechanicals, not because the media went
bad, which is why you can still get data off the platters or
even move them into another housing.

Ain't silicon voodoo fun?
  #12  
Flasherly, August 9th 18, 12:09 PM, posted to alt.comp.hardware.pc-homebuilt

On Thu, 9 Aug 2018 00:52:41 -0400, Bill wrote:


I've heard that the reliability of SSDs far exceeds that of the
mechanical hard drives (for, in fact, an obvious reason--no
moving parts). The "trim" software for my Intel SSD even provides
an indication of the drive's reliability (I'm not sure how well
that works). I do regular backups too.


I get a little confused by these new memory array schemes, the
3D stacking and the 2- or 3-bits-per-cell trade-offs. Presumably
MLC is the less risky step, apart from its use (and SLC's) as a
cache when employed, and apart from the upper-end drives that
carry a price premium when it's used exclusively. And then
there's the whole controller issue, which is perhaps easier to
characterize, enough to give an indication of baseline
performance.

Crucial and Samsung seem to dominate, although I must admit all
my SSDs are theirs.

That leaves TLC NAND, which is presently going through marketing
loops and spins. TLC SSDs seem to hit a new all-time-low price
every day, and the vendors can be very aggressive in purporting
unique merits of both the memory and the controller, merits hard
to verify given the limits of what we understand about the
technology.

Cost-to-performance almost becomes a sideshow next to the
warranty terms, which boil down to the total terabytes written a
drive is hypothetically projected to withstand.

It's a wide field now, with no small number of options, and the
industry is marketing a real distinction between today's SSDs
and the SSDs that shortly preceded them.

For these "everyday" sale TLC items, your nickel will indeed
stretch far before the buffalo finally squats; and, as usual,
there will be a dearth of real-world reviews, and among the
scant few, too conspicuous to ignore, are the buyers who report
near or immediate failure of their "everyday" TLC SSD, usually
with a side-barb about a warranty process skewed by industry
clout.

Worth a moment's pause, though, is that neither Samsung nor
Crucial would realistically ignore the same TLC 3D NAND
technology; given their popularity, they're the ones favored
when stepping out onto the razor's edge of this technological
unknown.

In the end I defer less to end users than to the gloss reviews
from the actual hardware sites, where manufacturers
traditionally supply the merchandise for testing, assessment,
and publicity.
  #13  
Paul, August 9th 18, 12:28 PM, posted to alt.comp.hardware.pc-homebuilt

Bill wrote:
mike wrote:

How's the reliability?
I'm still reading that they fail catastrophically without warning.


I've heard that the reliability of SSDs far exceeds that of the
mechanical hard drives (for, in fact, an obvious reason--no moving
parts). The "trim" software for my Intel SSD even provides an indication
of the drive's reliability (I'm not sure how well that works). I do
regular backups too.


You're the perfect customer for an SSD.

You're mixing up reliability and wear life.

Reliability consists of two components. Say a solder joint
on the PCB fails. It causes the device to stop delivering
the intended function. That's part of the reliability
number. Let's pretend for the sake of argument, it's
an MTBF of 2 million hours. In some cases, just the
tiny power converter inside, making VCore for some chip,
might dominate the reliability calc (you can't make
a power converter better than about 10 million hours
or 100 FITs).

OK, well, at what rate do bugs show up in the SSD firmware ?
We don't know. We do know that early SSDs "bricked"
due to firmware. In some cases, the drive even "bricked"
during a firmware update (but of course the owner backed
up the data first, making the situation not quite the same).
At work, our reliability expert (a guy with a PhD in the
subject) warned that for some large products we were selling,
it was quite possible the software was dropping the system
reliability by a factor of 10.

Now the MTBF is down to 200,000 hours. You will find
Seagate and WDC unwilling to factor this in. While our
reliability expert argued for this, only field data
could indicate how sucky our software was.
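
The arithmetic behind those figures, for anyone who wants to
play with it (same hypothetical numbers as above, not vendor
data):

    # 1 FIT = 1 failure per 1e9 device-hours; independent failure rates add.
    def mtbf_to_fit(hours): return 1e9 / hours
    def fit_to_mtbf(fit):   return 1e9 / fit

    hw_fit = mtbf_to_fit(2_000_000)    # hardware MTBF of 2 million hours -> 500 FIT
    floor  = mtbf_to_fit(10_000_000)   # the ~100 FIT power-converter limit
    sw_fit = 9 * hw_fit                # pretend firmware fails 9x as often as hardware
    print(fit_to_mtbf(hw_fit + sw_fit))  # 200,000 hours: the factor-of-10 drop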

*******

Wear life is different. Both hard drives and SSDs wear.
In the case of the SSD, the mechanism is known and
predictable. If you know the temperature when the
writes were done and the temperature of the media over
its long-term life, you can make a reasonably accurate
prediction of wear. (High temperatures anneal defects,
but high temperatures might also shorten retention time.)

Hard drives are different. The manufacturer won't admit
to wear. The manufacturer won't prepare large quantities
of drives, and simulate life conditions, and provide
curves related to wear. But, third party studies have
noted wear characteristics in the failure population
curves. Instead of a traditional bathtub curve, drive
failures have another shape in the graph. There are
tremendous differences between various model numbers
for this (things that might be noted by Newegg reviewers
if a model is for sale for long enough).

*******

Now, let's summarize:

What do you have to know as an SSD owner ?

1) Consider the history of the technology. You're doing
basically what my PhD guy at work was doing, consulting
a "field return data" log and noting brickage, brickage
caused by bad firmware. For early SSD drives, you
wouldn't touch them with a barge pole. Especially
the ones with "predictable brickage", where the
device fails after being powered for exactly
30 days. Owners who didn't hear about the 30-day
brickage might not have known (in time) that there
was a firmware update for it, to be applied in advance.
If it bricked and you had no backup (because it was
"reliable"), well, "fool you once". Now you're learning.

2) Consider the wear life. The drives are taking fewer
and fewer write cycles per flash location, as the
technology "advances". The storage cells are getting
"mushy". SLC, MLC, TLC, QLC. SLC is great stuff. Maybe
100,000 write cycles and 10 year retention. QLC might
be 1,000 write cycles and ?? year retention. A Samsung
TLC was showing signs of being "mushy", by requiring
significant error correction inside (to the point it
was slowing the read rate). Roughly 10% of the storage
capacity on the drive, is reserved for ECC code storage,
protecting the data from errors. That is a very high ratio,
much higher than hard drives in the past. It's quite possible
every sector has at least one error in it, corrected
by the CPU inside before you get it. And now, they're just
starting to ship QLC.
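
To put rough numbers on "mushy", the usual back-of-envelope
endurance estimate is capacity times rated write cycles divided
by write amplification. The cycle counts and the 2x write
amplification below are illustrative, not anyone's spec sheet:

    # Back-of-envelope TBW estimate; all figures illustrative.
    def tbw_estimate(capacity_gb, pe_cycles, write_amplification=2.0):
        return capacity_gb * pe_cycles / write_amplification / 1000  # TB written

    for cell, cycles in [("SLC", 100_000), ("MLC", 10_000),
                         ("TLC", 3_000), ("QLC", 1_000)]:
        print(f"{cell}: ~{tbw_estimate(500, cycles):,.0f} TB written, 500 GB drive")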

3) Consider the end of life policy. Not all the drive
brands have the same policy. Some return an error
on each write at end of life (as a cheap way of warning
you), causing the SSD to enter "read-only state". That
is a reasonable policy, helping to warn and cover people
who refuse to make backups. Windows won't run on a read only
device, so you'll be smothered in error dialogs. That
will get your attention, and make you back up the drive.

But Intel just "bricks" the drive, when the *computed*
wear value is exceeded. With an Intel brand SSD, you
had better be monitoring the "life remaining percentage"
*very very carefully*. That's why the promotion in that
Tom's article above is particularly egregious. The dude is
promoting an Intel QLC SSD (yuck!) which has a total-brickage
end-of-life policy (double yuck!). What could go wrong ?
If you're not paying attention, Bueller, you suddenly
lose access to your data. Did you have backups ? No ?
"Fool you twice".

So, yeah, SSDs have no moving parts, and hey, they're
"reliable". A stupid MIL spec calc prepared by the
marketing department (not by engineers), says so.

The firmware could have bugs. Not quantified in a
MIL spec calc. They could have included field data in the
MIL spec calc, but they'd be nuts to do so. No one is
there to slap their fingers for failing to do this.
The history of SSDs would mean dropping the MIL spec calc by a
factor of ten. No marketing guy is going to allow that.
But if your Sherman Tank is booted off an SSD, you
can be damn sure two PhDs got into a spat about what
the real reliability is. Between big companies doing
business, the MTBF is "negotiated". The customer
would say "hey, idiot, include firmware reliability
in your calc".

The wear life is tangible. There's an indicator in
SMART. What is the brickage policy of your brand ?
Pay attention!
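
Paying attention can be automated. A sketch that polls the wear
attribute with smartctl; attribute names differ by brand, so the
list here is a guess to adjust for your drive:

    import subprocess

    # Vendor-specific names; adapt to what your drive reports.
    WEAR_NAMES = ("Media_Wearout_Indicator", "Wear_Leveling_Count", "SSD_Life_Left")

    def wear_remaining(device="/dev/sda"):
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if any(name in line for name in WEAR_NAMES):
                return int(line.split()[3])   # normalized VALUE, counts down
        return None

    left = wear_remaining()
    if left is not None and left < 20:
        print("Life remaining is low; back up and plan the replacement now.")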

Is an SSD the same as a hard drive ?

No, it is not.

HTH,
Paul
  #14  
Paul, August 9th 18, 01:29 PM, posted to alt.comp.hardware.pc-homebuilt

VanguardLH wrote:


Since I leave my computer running 24x7, I've *never* had the thermal
creeping problem with connectors, memory modules, etc.


The temperature variation on always-running equipment
is not zero.

Basically, any connector technology with a "walkout"
problem will eventually manifest it.

The DIMM slots have lock latches.

The ATX main connector and ATX12V connector have latches.

The newer SATA connectors have a metal jaw for security.

You can have local heating effects, that have a higher
amplitude variation per day, than the internal case
air temperature.

The Molex aux connector on my video card walked out
on its own. That's because the connector carries
5+ amps when a game starts to play, and that caused
the connector to heat up and walk out. When it got to
the point that one pin was starting to separate (go ohmic),
that's when the pin burned. It burned badly enough to cause
the video output to stop (the red "ATI warning box" appears
on the screen, saying to plug in the cable). That was the
first warning I got that there was a problem. Since I didn't
have the right connector in my junk box, I had to
solder a pigtail to the video card (with a Molex
on the end). That lasted until the card was retired.
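
The physics is plain I-squared-R heating: once a contact goes
ohmic, a fraction of an ohm at gaming current is real wattage
concentrated in one pin. Illustrative numbers:

    # 5 A through a connector contact at various resistances.
    current_a = 5.0
    for r_ohm in (0.005, 0.05, 0.5):
        watts = current_a ** 2 * r_ohm
        print(f"{r_ohm} ohm at {current_a} A -> {watts:.2f} W in one pin")
    # 0.005 ohm is a healthy crimp (~0.13 W); 0.5 ohm cooks the pin (12.5 W).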

Even the solder balls on a badly designed video card
can crack, just from the heating from gaming. The fact
that you left the machine on at night doesn't remove the
variation when the card is used for gaming. This is why
it's important that they select the correct underfill
polymer to put under the BGA GPU package.

While leaving a PC powered removes some reliability
issues, it doesn't solve all of them.

Paul
  #15  
Mr. Man-wai Chang, August 9th 18, 05:12 PM, posted to alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage

On 8/9/2018 9:37 AM, Lynn McGuire wrote:
"Why I Will Never Buy a Hard Drive Again"

https://www.tomshardware.com/news/ch...ves,37563.html


"It’s been years since I was willing to work on any PC that boots from a
mechanical hard drive. Once you get used to the snappy response times
and speedier gameload times of an SSD, going back to a hard drive feels
like computing through a thick layer of molasses."


But hard disks are still cheaper and more proven, despite the
new encoding methods.

You should copy important data to some old hard disks which use
less risky encoding and could be longer-lasting than newer ones!

It's also an interesting experiment to find out which old hard
disk fails first, assuming that no "experts" sneaked into your
home to do damage!

--
@~@ Remain silent! Drink, Blink, Stretch! Live long and prosper!!
/ v \ Simplicity is Beauty!
/( _ )\ May the Force and farces be with you!
^ ^ (x86_64 Ubuntu 9.10) Linux 2.6.39.3
No borrowing! No scams! No *! No compensated dating! No fights! No robbery! No suicide! No praying to gods! Please consider CSSA
(Comprehensive Social Security Assistance):
http://www.swd.gov.hk/tc/index/site_...sub_addressesa
  #16  
Paul, August 9th 18, 05:26 PM, posted to alt.comp.hardware.pc-homebuilt

Rene Lamontagne wrote:


Does anyone here remember the Apple III walkout era?

Rene


Socketed chips ? Ouch.

https://www.hardwaresecrets.com/inside-the-apple-iii/3/

Some fun for the owner I guess. I hope
the cover comes off the unit easily :-)

Paul

  #17  
Michael Black, August 9th 18, 05:42 PM, posted to alt.comp.hardware.pc-homebuilt

On Thu, 9 Aug 2018, Bill wrote:

mike wrote:

How's the reliability?
I'm still reading that they fail catastrophically without warning.


I've heard that the reliability of SSDs far exceeds that of the mechanical
hard drives (for, in fact, an obvious reason--no moving parts). The "trim"
software for my Intel SSD even provides an indication of the drive's
reliability (I'm not sure how well that works). I do regular backups too.

But I've never had a hard drive problem. That goes back to 1993, when I
got my first hard drive.

I've moved on to different hard drives, but that's because of a different
computer or wanting more space. But none have failed, not even the ones
that had been used when I got them.


I'm sure that when I get around to turning on that computer from 2003,
which was used when I got it, the hard drive will be fine. Though I
splurged on a new hard drive, a 160 GB, around 2006. But it stayed on most
of the time till I moved to a different computer in 2012.

Hard drives became reliable at some point, and so cheap.

And I've yet to be convinced that an SSD is appreciably faster than a
mechanical hard drive. Though it helps that I leave the computer on, so
any "slowness" of booting is an irregular thing.

Michael



  #18  
Neill Massello, August 9th 18, 06:07 PM, posted to alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage

VanguardLH wrote:

Ever look at the cost of a 2TB or 4TB SSD? Ouch!!!


Note the phrase "boots from" in the Tom's Hardware clip. Run system and
apps from a modestly sized SSD and store your big data, such as audio
and video, on a big HDD. These days, it's the only way to fly.

  #19  
Ant, August 9th 18, 07:04 PM, posted to alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage

Can I buy a 2 TB SSD really cheap yet, like at HDD prices?


In alt.comp.hardware.pc-homebuilt Lynn McGuire wrote:
"Why I Will Never Buy a Hard Drive Again"


https://www.tomshardware.com/news/ch...ves,37563.html


"It???s been years since I was willing to work on any PC that boots from a
mechanical hard drive. Once you get used to the snappy response times
and speedier gameload times of an SSD, going back to a hard drive feels
like computing through a thick layer of molasses."


Lynn


--
Quote of the Week: "You feel the faint grit of ants beneath your shoes,
but keep on walking because in this world you have to decide what you're
willing to kill." --Tony Hoagland from "Candlelight"
Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
/\___/\Ant(Dude) @ http://antfarm.home.dhs.org / http://antfarm.ma.cx
/ /\ /\ \ Please nuke ANT if replying by e-mail privately. If credit-
| |o o| | ing, then please kindly use Ant nickname and URL/link.
\ _ /
( )
  #20  
Rene Lamontagne, August 9th 18, 08:44 PM, posted to alt.comp.hardware.pc-homebuilt

On 08/09/2018 7:29 AM, Paul wrote:
VanguardLH wrote:


Since I leave my computer running 24x7, I've *never* had the thermal
creeping problem with connectors, memory modules, etc.


The temperature variation on always-running equipment
is not zero.

Basically, any connector technology with a "walkout"
problem will eventually manifest it.

[...]

While leaving a PC powered removes some reliability
issues, it doesn't solve all of them.

   Paul


Does anyone here remember the Apple III walkout era?

Rene

 



