A computer components & hardware forum. HardwareBanter

How it is possible



 
 
  #1  
Old March 7th 21, 09:12 PM posted to alt.comp.os.windows-10,alt.comp.hardware
micky

How it is possible that one SSD is 17 times as fast as another but costs
less? Both are 240G. Why would anyone buy the slower one (like I did
last summer)?

https://www.amazon.com/dp/B01N5IB20Q...t_details&th=1
350 Mb per second $35

https://www.amazon.com/PNY-CS900-240...01N5IB20Q?th=1
6 Gb per second $30 plus $6 for a bracket if you need one.


In the comparison list of the first one, 4 of them side by side, half
way to the bottom of the page, 2 others are the same speed as the first,
but the second one is 17 times as fast.

In a similar side-by-side comparison list on the second page, the same
thing is true. Only the PNY is so fast, and for less money. Does PNY
know something the others don't know?
  #2  
Old March 7th 21, 09:39 PM posted to alt.comp.os.windows-10,alt.comp.hardware
Jeff Barnett

On 3/7/2021 2:12 PM, micky wrote:
How it is possible that one SSD is 17 times as fast as another but costs
less? Both are 240G. Why would anyone buy the slower one (like I did
last summer)?

https://www.amazon.com/dp/B01N5IB20Q...t_details&th=1
350 Mb per second $35

https://www.amazon.com/PNY-CS900-240...01N5IB20Q?th=1
6 Gb per second $30 plus $6 for a bracket if you need one.


In the comparison list of the first one, 4 of them side by side, half
way to the bottom of the page, 2 others are the same speed as the first,
but the second one is 17 times as fast.

In a similar side-by-side comparison list on the second page, the same
thing is true. Only the PNY is so fast, and for less money. Does PNY
know something the others don't know.


I have two suggestions: 1) follow the two URLs you provided and read
carefully - there is little difference between the two advertised read
speeds - and 2) note that MB and Mb differ by a factor of eight, as do
GB and Gb.

Confusion between the latter two has many a proud computer scientist
bragging about their Gigabyte LAN; it's really Gigabit, and that's quite
different.
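
A quick sketch in Python of what that factor of eight does to the quoted
numbers (the ~500-535 MB/s read speeds are the ones given later in the
thread; whether the listing's "350 Mb" really meant megabits is anyone's
guess):

    # Megabits (Mb) vs megabytes (MB): the same digits, an 8x difference in meaning.
    def mbit_to_mbyte(mbit_per_s):
        return mbit_per_s / 8               # 8 bits per byte

    def gbit_to_mbyte(gbit_per_s):
        return gbit_per_s * 1000 / 8

    print(mbit_to_mbyte(350))   # 43.75 MB/s -- if "350 Mb" were really megabits
    print(gbit_to_mbyte(6))     # 750.0 MB/s -- the raw SATA III line rate, not a drive speed
    # Both drives actually advertise roughly 500-535 MB/s sequential reads,
    # so neither headline figure describes real-world throughput by itself.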
--
Jeff Barnett
  #3  
Old March 7th 21, 10:39 PM posted to alt.comp.os.windows-10,alt.comp.hardware
David W. Hodgins

On Sun, 07 Mar 2021 16:12:25 -0500, micky wrote:

How it is possible that one SSD is 17 times as fast as another but costs
less? Both are 240G. Why would anyone buy the slower one (like I did
last summer)?

https://www.amazon.com/dp/B01N5IB20Q...t_details&th=1
350 Mb per second $35

https://www.amazon.com/PNY-CS900-240...01N5IB20Q?th=1
6 Gb per second $30 plus $6 for a bracket if you need one.


The first drive has a read speed of 500 megabytes per second. The second reads at
535 MB/s, so it is only slightly faster.

A sata iii Hardware Interface works with a sata iii controller that supports a max
bus speed of 6 Gb/s. See https://en.wikipedia.org/wiki/Serial_ATA

The first drive is actually a sata ii drive, the second sata iii.

The term "Style" has no technical meaning. It's just a marketing term. The hardware
interface speed tells you whether it's sata, sata ii, or sata iii.
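
For reference, a sketch of the bus caps per SATA generation (the usable
figures assume the standard 8b/10b line encoding):

    # SATA link rate vs. roughly what a drive can actually move over it.
    SATA_GENERATIONS = {"sata": 1.5, "sata ii": 3.0, "sata iii": 6.0}   # Gb/s

    for name, gbps in SATA_GENERATIONS.items():
        usable_mb_s = gbps * 1000 * (8 / 10) / 8   # Gb/s -> MB/s after 8b/10b overhead
        print(f"{name}: {gbps} Gb/s link, ~{usable_mb_s:.0f} MB/s usable")

So the 6 Gb/s on the PNY page is the bus, the 535 MB/s read speed fits
comfortably under it, and a sata ii drive would be capped near 300 MB/s.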

Newer drives are often lower in price per MB than older ones.

Regards, Dave Hodgins

--
Change to for
email replies.
  #4  
Old March 7th 21, 11:30 PM posted to alt.comp.os.windows-10,alt.comp.hardware
J. P. Gilliver (John)[_3_]

On Sun, 7 Mar 2021 at 16:12:25, micky wrote (my
responses usually follow points raised):
How it is possible that one SSD is 17 times as fast as another but costs
less? Both are 240G. Why would anyone buy the slower one (like I did
last summer)?

[]
In the comparison list of the first one, 4 of them side by side, half
way to the bottom of the page, 2 others are the same speed as the first,
but the second one is 17 times as fast.

In a similar side-by-side comparison list on the second page, the same
thing is true. Only the PNY is so fast, and for less money. Does PNY
know something the others don't know.


Even leaving aside Jeff's point about bits versus bytes, speed isn't the
only important parameter for an SSD: there are probably many, but the
one that bugs me is the tolerated number of writes - which, for the same
size SSD in the same machine/use, more or less maps to lifetime. You
also need to know how they behave when they reach their end of life: do
they continue trying to work (I don't think any do), switch to read-only,
or just become a brick (at least one make/range does)?
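
Roughly how the tolerated number of writes maps to lifetime - a sketch
with invented numbers (neither figure comes from the drives above):

    # Rated endurance (terabytes written) divided by daily writes gives a crude lifetime.
    rated_tbw_tb = 80        # hypothetical endurance rating for a 240 GB drive
    daily_writes_gb = 20     # hypothetical average host writes per day

    days = rated_tbw_tb * 1000 / daily_writes_gb
    print(f"~{days:.0f} days, i.e. ~{days / 365:.1f} years of rated write endurance")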
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

A leader who keeps his ear to the ground allows his rear end to become a
target. - Angie Papadakis
  #5  
Old March 7th 21, 11:53 PM posted to alt.comp.os.windows-10,alt.comp.hardware
nospam

In article , J. P. Gilliver (John)
wrote:

Even leaving aside Jeff's point about bits versus bytes, speed isn't the
only important parameter for an SSD: there are probably many, but the
one that bugs me is the tolerated number of writes - which for the same
size SSD in the same machine/use, more or less maps to lifetime.


an ssd will very likely outlast the computer it's in, certainly a lot
longer than a spinning hard drive would have, and with a lot less noise
and heat.

You
also need to know how they behave when they reach their end of life: do
they continue trying to work (I don't think any), switch to read-only,


many do.

or just become a brick (at least one make/range does).


that's what backups are for.

drive failure is not unique to ssd. hard drives crashed, often without
warning.
  #6  
Old March 10th 21, 10:11 AM posted to alt.comp.os.windows-10,alt.comp.hardware
Carlos E.R.

On 08/03/2021 00.30, J. P. Gilliver (John) wrote:
On Sun, 7 Mar 2021 at 16:12:25, micky wrote (my
responses usually follow points raised):
How it is possible that one SSD is 17 times as fast as another but costs
less? Both are 240G. Why would anyone buy the slower one (like I did
last summer)?

[]
In the comparison list of the first one, 4 of them side by side, half
way to the bottom of the page, 2 others are the same speed as the first,
but the second one is 17 times as fast.

In a similar side-by-side comparison list on the second page, the same
thing is true. Only the PNY is so fast, and for less money. Does PNY
know something the others don't know.


Even leaving aside Jeff's point about bits versus bytes, speed isn't the
only important parameter for an SSD: there are probably many, but the
one that bugs me is the tolerated number of writes - which for the same
size SSD in the same machine/use, more or less maps to lifetime. You
also need to know how they behave when they reach their end of life: do
they continue trying to work (I don't think any), switch to read-only,
or just become a brick (at least one make/range does).


Which one bricks? That's important to know.

--
Cheers, Carlos.
  #7  
Old March 10th 21, 10:34 AM posted to alt.comp.os.windows-10,alt.comp.hardware
Paul[_28_]

Carlos E.R. wrote:
On 08/03/2021 00.30, J. P. Gilliver (John) wrote:
On Sun, 7 Mar 2021 at 16:12:25, micky wrote
(my responses usually follow points raised):
How it is possible that one SSD is 17 times as fast as another but costs
less? Both are 240G. Why would anyone buy the slower one (like I did
last summer)?

[]
In the comparison list of the first one, 4 of them side by side, half
way to the bottom of the page, 2 others are the same speed as the first,
but the second one is 17 times as fast.

In a similar side-by-side comparison list on the second page, the same
thing is true. Only the PNY is so fast, and for less money. Does PNY
know something the others don't know.


Even leaving aside Jeff's point about bits versus bytes, speed isn't
the only important parameter for an SSD: there are probably many, but
the one that bugs me is the tolerated number of writes - which for the
same size SSD in the same machine/use, more or less maps to lifetime.
You also need to know how they behave when they reach their end of
life: do they continue trying to work (I don't think any), switch to
read-only, or just become a brick (at least one make/range does).


Which one bricks? That's important to know.


Intel SSDs stop both reads and writes when the wear life is exceeded.
Once the wear life hits, say, 3000 writes per location, the drive
stops responding. This makes it impossible to do a backup
or a clone.

As a consequence, the user is advised to keep a Toolkit handy
which has an end-of-life predictor, for better quality handoff.

Of course your drive is not near end of life. But, you
only know the wear rate, if you check the Toolkit occasionally
for the projections on life. And, you look pretty bad, if the
topic slips your mind, and you start asking for help with that
"too crusty backup I made two years ago". We don't want
this topic to be handled by people losing data.

It's a shame that several of the toolkits suck. I was
not impressed with a couple I checked. Hobbyists could
write better code - code that displayed the salient data
to keep users informed.

And on one drive I could not keep because the hardware sucked,
the toolkit was great. That's just how this computer stuff works.

*******

The point of making an example out of Intel is to make you
aware of what the most extreme policy is. And Intel wins the
prize in this case. Some products from competitors will
allow you to read, and they stop writing. This allows you to
make a backup using a Macrium CD, and prepare a replacement SSD.

The reason Intel stops reading is to guard against the possibility
that read errors are not getting detected properly. Intel arbitrarily
decided that only "perfect" data need apply. And they weren't going to
allow a certain BER to leak out and then have customers blame Intel
for "accepting corrupt data".

One of the BER indicators in the SSD datasheets is 10x less good
than a hard drive's (one product might be 10^-15, the other 10^-14,
that kind of thing). And you may find review articles referring
to this difference as a bad thing.
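
What that order-of-magnitude gap works out to, as a sketch (the 10 TB
workload is an arbitrary example):

    # Expected uncorrectable bit errors per amount read, for the two quoted error rates.
    bits_read = 10 * 1e12 * 8                       # an arbitrary 10 TB workload, in bits

    for ber in (1e-15, 1e-14):
        print(f"UBER {ber:g}: ~{bits_read * ber:.2f} expected uncorrectable errors per 10 TB read")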

The ECC on SSDs is already a pretty heavyweight item. A bit
more than 10% of flash cells are likely being used just to
hold the ECC. And it's that ECC calc that keeps TLC flash
from ruining our data. On one of the first TLC drives, every
sector had errors, and it was the ECC that transparently
made the drive look "perfect" to the user. When this happens,
the drive can slow down (ECC done by ARM cores, not hardware),
and this makes the more aggressive storage techs (QLC flash)
look bad. It's the "stale slow" drive problem - one way to
fix it is for the drive to re-write itself at intervals,
which of course depletes the wear life.
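
A quick check of that "a bit more than 10%" figure, assuming a typical
page geometry (the sizes are illustrative, not from any particular
datasheet):

    # A NAND page is a main data area plus a spare area that mostly holds ECC parity.
    main_bytes = 16384       # assumed 16 KiB data area per page
    spare_bytes = 2048       # assumed 2 KiB spare area per page

    overhead = spare_bytes / (main_bytes + spare_bytes)
    print(f"~{overhead:.1%} of the cells in a page are spare/ECC")   # about 11%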

The topic is a lot like BEV (electric) cars :-) "Different,
in a bad way" :-) The populace will know, when everyone has
had the mechanic tell them "your battery pack needs to be
replaced".

Paul
  #8  
Old March 10th 21, 11:10 AM posted to alt.comp.os.windows-10,alt.comp.hardware
Carlos E.R.

On 10/03/2021 11.34, Paul wrote:
Carlos E.R. wrote:



Even leaving aside Jeff's point about bits versus bytes, speed
isn't the only important parameter for an SSD: there are
probably many, but the one that bugs me is the tolerated number
of writes - which for the same size SSD in the same machine/use,
more or less maps to lifetime. You also need to know how they
behave when they reach their end of life: do they continue trying
to work (I don't think any), switch to read-only, or just become
a brick (at least one make/range does).


Which one bricks? That's important to know.


Intel SSDs stop both reads and writes, when the wear life is
exceeded. Once the wear life hits, say, 3000 writes per location, the
drive stops responding. This makes it not possible to do a backup or
a clone.


Ok. I will have to check if I have any Intel; I don't remember.


Sure, of course one must have a backup, but even if one does a daily
backup (which most people don't), the incident can happen just after one
saves important files. And as the computer typically bricks, it is not
possible to save the file elsewhere. At best, the day's work is lost.



As a consequence, the user is advised to keep a Toolkit handy which
has an end-of-life predictor, for better quality handoff.


Sorry, Toolkit? What is that? Ah, you mean that one must have
"something" that predicts life. True.


Of course your drive is not near end of life. But, you only know the
wear rate, if you check the Toolkit occasionally for the projections
on life. And, you look pretty bad, if the topic slips your mind, and
you start asking for help with that "too crusty backup I made two
years ago". We don't want this topic to be handled by people losing
data.

It's a shame, that several of the toolkits, suck. I was not impressed
with a couple I checked. Hobbyists could write better code - code
that displayed the salient data to keep users informed.

And a drive I could not keep because the hardware sucked, the toolkit
was great. That's just how this computer stuff works.



On the Windows side of my laptops I don't have anything. On the Linux
side I have the smartctl daemon, but I don't know what it says about end
of life warnings. It might send an email. Otherwise, it will be in the
warning log.
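
For what it's worth, the wear numbers can be pulled on Linux without any
vendor toolkit - a sketch that assumes smartmontools is installed and
that /dev/sda is the SSD (attribute names vary by vendor):

    import subprocess

    # Ask smartctl for the SMART attribute table and pick out common wear indicators.
    out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                         capture_output=True, text=True).stdout

    wear_keys = ("Wear_Leveling_Count", "Media_Wearout_Indicator",
                 "Percent_Lifetime_Remain", "Total_LBAs_Written", "Percentage Used")

    for line in out.splitlines():
        if any(key in line for key in wear_keys):
            print(line)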



*******

The point of making an example out of Intel, is to make you aware of
what the most extreme policy is. And Intel wins the prize in this
case. Some products from competitors, will allow you to read, and
they stop writing. This allows you to make a backup using a Macrium
CD, and prepare a replacement SSD.


Right.

The reason Intel stops reading, is to guard against the possibility
that read errors are not getting detected properly. Intel
arbitrarily decided that only "perfect" data need apply. And they
weren't going to allow a certain BER to leak out and then customers
blame Intel for "accepting corrupt data".


heh.


One of the BER indicators in the SSD datasheets, is 10x less good
than a hard drive (one product might be 10^-15, the other 10^-14 kind
of thing). And you may find review articles making references to
this, that this difference is a bad thing.

The ECC on SSDs is already a pretty heavy weight item. A bit more
than 10% of flash cells, are likely being used just to hold the ECC.
And it's that ECC calc that keeps TLC flash from ruining our data.
One of the first TLC drives, every sector had errors, and it was the
ECC that transparently made the drive look "perfect" to the user.
When this happens, the drive can slow down (ECC done by ARM cores,
not hardware), and this makes the more aggressive storage techs (QLC
flash) look bad. It's the "stale slow" drive problem - one way to fix
it, is for the drive to re-write itself at intervals, which of course
depletes the wear life.

The topic is a lot like BEV (electric) cars :-) "Different, in a bad
way" :-) The populace will know, when everyone has had the mechanic
tell them "your battery pack needs to be replaced".

Paul


{chuckle}

--
Cheers, Carlos.
  #9  
Old March 10th 21, 01:00 PM posted to alt.comp.os.windows-10,alt.comp.hardware
Paul[_28_]

Carlos E.R. wrote:
On 10/03/2021 11.34, Paul wrote:


As a consequence, the user is advised to keep a Toolkit handy which
has an end-of-life predictor, for better quality handoff.


Sorry, Toolkit? What is that? Ah, you mean that one must have
"something" that predicts life. True.


Just about every brand of SSD has a software download for Windows.
In it is a tool for displaying SMART data, and also for
logging per-day consumption, in an effort to predict
what day in some future year the SSD will hit 3000 write cycles.
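
The prediction is essentially a straight-line extrapolation; a sketch
with made-up numbers, not any vendor's actual algorithm:

    from datetime import date, timedelta

    # Project the date the drive reaches its rated write cycles, from average use so far.
    rated_cycles = 3000      # rated program/erase cycles per cell
    cycles_used = 120        # hypothetical average cycles consumed so far
    days_in_service = 400    # hypothetical age of the drive, in days

    cycles_per_day = cycles_used / days_in_service
    days_remaining = (rated_cycles - cycles_used) / cycles_per_day
    print("Projected wear-out date:", date.today() + timedelta(days=round(days_remaining)))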

How important it is to use the Toolkit download software on
Windows depends on how bad the exit behavior of the drive is
when it hits 3000 writes per cell.

One of the enthusiast tech sites did a test involving different
brands of SSDs. And for the ones which don't stop working at
3000 writes, the devices lasted at least 50% longer than that.
The devices eventually brick when the "critical data" section
of the device gets corrupted. On one device, the triggering
event was a loss of power in the lab where the test was
being carried out. And a bit later, the device died and
could not continue.

But the test was good fun while it lasted. And it shows that the
policy of killing the drive at 3000 is a very conservative one.

But if you don't automate the termination process, people will
just ignore the SMART warning and keep using the device.

Paul
  #10  
Old March 10th 21, 02:55 PM posted to alt.comp.os.windows-10,alt.comp.hardware
VanguardLH[_2_]

"J. P. Gilliver (John)" wrote:

Even leaving aside Jeff's point about bits versus bytes, speed isn't
the only important parameter for an SSD: there are probably many,
but the one that bugs me is the tolerated number of writes - which
for the same size SSD in the same machine/use, more or less maps to
lifetime. You also need to know how they behave when they reach their
end of life: do they continue trying to work (I don't think any),
switch to read-only, or just become a brick (at least one make/range
does).


Overprovisioning affects how much reserve space there is for remapping:
the more you have, the more remapping space is available, and the longer
your SSD survives before its catastrophic failure. Consumer SSDs get
overprovisioned 10%. Server SSDs get overprovisioned 20%. However,
I've seen SSDs get an initial (factory) overprovisioning of only 6%.
For my SSDs in my home PC, I up the overprovisioning from 10% to 20%.
Because of increased overprovisioning, I've not had an SSD long enough
to brick it. Eventually prices come down enough that I can afford to
replace an HDD or SSD with a larger one.

https://www.seagate.com/tech-insight...its-master-ti/
https://www.youtube.com/watch?v=Q15wN8JC2L4 (skip promo ad at end)

I use Samsung's Magician to change overprovisioning on my Samsung SSDs
(https://www.samsung.com/semiconducto...mer/magician/).
However, any partition manager will work: use it to increase or
decrease the size of unallocated space on the SSD. Because consumer
SSDs typically ship with 10% or less by default, you steal
space from an allocated partition, if there is one, to leave it
unallocated. I keep the unallocated space in one block.
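
If you do it with a plain partition manager rather than Magician, the
arithmetic is just this (a sketch; the 240 GB and 20% figures match the
drives and percentages discussed above):

    # Space to leave unallocated to reach a target overprovisioning percentage.
    capacity_gb = 240
    target_op = 0.20                         # desired overprovisioning fraction

    unallocated_gb = capacity_gb * target_op
    print(f"Leave ~{unallocated_gb:.0f} GB unallocated; "
          f"partition the remaining ~{capacity_gb - unallocated_gb:.0f} GB")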

I'd rather give my SSDs longer potential lifespan than pinch on the
capacity of the [primary] partitions. If I eventually need more space,
I get a larger capacity drive. However, usually when I spec my fab, I
go a lot larger on drive space than expected over an 8-year lifespan or
than I've used before. I get fatter with every build.

Because of catastrophic failure of SSDs, I use those only for the OS,
apps, and temporary data. All those are either replaceable or
reproducible: I can reinstall the OS, reinstall the apps, or reproduce
the temporary data, or get them from backups. Any critical data goes on
a separate SSD (basically my reuse of a prior SSD), and gets backed up
to an internal HDD, which is in turn mirrored to an external HDD. This PC
is not used for business, so I don't bother with another mirror to
removable media for off-site storage. SSDs *will* fail. Under "normal"
use, most SSD makers give an estimate of 10 years. Consider this like a
MTBF spec: you might get that, or higher, or you might get less. If
you're doing video editing, animation creation, high-volume data
processing, or file services, you're not their normal user. SSDs are
self-destructive storage devices! Expect them to catastrophically fail,
so plan on it ahead of time. All SSDs will eventually brick unless you
write once to establish a state or image, treat the drive as read-only from
then on, and never write to it again. The same self-destruct occurs with USB
flash drives: great for short-term use, but don't use for long-time
storage unless you only read from it.

"The flame that burns twice as bright burns half as long" (Lao Tzu).

https://www.google.com/url?sa=t&rct=...hite-paper.pdf
Although there is no difference between the sequential and random
write performance for fresh-out-of-the-box (FOB) NAND,
the random write does not perform as well as the sequential write once
data has been written over the entire space of the NAND.
Random writes, smaller in size than sequential writes, mix valid and
invalid pages within blocks, which causes frequent GC and
results in decreased performance. If the OP is increased, more free
space that is inaccessible by the host can be secured, and the
resulting efficiency of GC contributes to improved performance. The
sustained performance is improved in the same manner.

Due to wear levelling to reduce writes on a particular block (to reduce
oxide stress in the junctions), new writes are positioned somewhere else
than where the data was read. Eventually all space within a partition
gets written. The drive itself can initiate a firmware-based garbage
collection (GC), or the OS can initiate via TRIM. Before a previously
used block can get reused for writes, it must be zeroed, and hopefully
that occurs as a background GC or TRIM. The waits, if any, like on a
super busy drive, affect write performance. More overprovisioning helps
maintain initial write performance; else, as noticed by many SSD users,
the SSD can get slower with age (volume of writes) until the next
firmware GC or OS TRIM (which are still keeping the drive busy).
Windows XP doesn't have TRIM built into it, which is why SSD users on that OS
have to use a utility to send a TRIM (GC) request to the drive to
restore write performance.
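
On anything newer than XP the OS issues TRIM on its own, but it can also
be requested manually; a sketch of the Linux way (assumes util-linux's
fstrim, a mounted filesystem at /, and root privileges):

    import subprocess

    # Ask the filesystem to discard (TRIM) all unused blocks on /,
    # the same request a modern OS otherwise issues periodically by itself.
    result = subprocess.run(["fstrim", "-v", "/"], capture_output=True, text=True)
    print(result.stdout or result.stderr)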

Unless you overly penny-pinched on an SSD to barely get enough space
over its expected lifetime (how long you intend to use it), increase the
overprovisioning to up its potential lifespan and maintain write
performance. The SSD will still be a lot faster than an HDD, but you
might wonder later why it isn't so speedy as before for writes, plus you
might want to increase potential lifespan if you plan on keeping your
computer for 6 years, or longer, without ever replacing the SSD, or
continuing to repurpose it after getting a larger one.

You can use an SSD as shipped and hope the MTBF of 10 years is
sufficient for you as a "normal" user. However, if you're buying SSDs
and doing your own builds, you're not normal. Consumers that buy
pre-built average-spec computers are normal users. Overprovisioning
lengthens lifespan and maintains write performance, and the more the
better. It does reduce capacity, but then you should be over-building
on the parts you install yourself in your computers, so you should be
excessive in storage capacity beyond your expectations. If you're willing
to toss more money to get an SSD over an HDD, you should also be willing
to toss more money to get over capacity.
 



