A computer components & hardware forum. HardwareBanter


SSD transfer speed



 
 
  #1  
Old October 19th 14, 12:02 PM posted to comp.sys.ibm.pc.hardware.storage
Jan Stożek
external usenet poster
 
Posts: 1
Default SSD transfer speed

Hi,

I measured the transfer speed of my disk, which I have been using
for some time already; the result is here: http://bit.ly/1uojTNe.

Interestingly, the size of the ragged part, which is about 60%
of the disk's capacity, is (almost) exactly the size of my system
partition, while the other part of the disk, with almost uniform
performance, has remained unused for some time - though it HAD been
used previously. What is even more interesting: when, some time ago, I
measured the disk's performance after the system installed on the
*second* partition of the very same disk had been used for a while,
the graph looked almost exactly mirrored: it started with degraded
performance on the initial sectors (as it does now), then there was a
large area of fairly uniform, high performance representing the then
unused partition, followed by a ragged area covering the partition in
use.

Unfortunately, I do not have a screenshot to demonstrate it.

The system in question happens to be openSUSE Linux, so most -
if not all - directories with a significant amount of writes (/home,
/var, /srv, swap) are located on magnetic drives or ramdisks. The
noatime option is turned on, so I believe that actual writes to the
SSD happen only during updates and upgrades. The disk is trimmed
automatically once a week.
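For reference, the relevant configuration looks roughly like this
(the device name, filesystem type and cron script name below are just
examples, not my exact setup):

  # /etc/fstab entry for the SSD root filesystem
  /dev/sda1   /    ext4   defaults,noatime   0  1

  # weekly TRIM, e.g. via a small script in /etc/cron.weekly/fstrim
  #!/bin/sh
  /usr/sbin/fstrim -v /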

The questions:
* Is this performance characteristic a significant symptom of wear?
* Is it dangerous to the data?
* Can it be reversed somehow?

The disk is a Kingston SSDNow 30GB. Performance was measured
using HD Tune run from Hiren's diagnostic CD, so the measurement had
nothing to do with the operating system installed on the drives. The
computer in question is my home PC - used on a daily basis, turned
off at night.

Thank you very much in advance for any hints.

--
Best regards,

(js).

PS. In case you prefer to respond directly, please remove the dash
and all subsequent letters from the email address.

  #2  
Old October 19th 14, 06:10 PM posted to comp.sys.ibm.pc.hardware.storage
Arno[_3_]
external usenet poster
 
Posts: 1,425
Default SSD transfer speed

This is a result of disk-internal fragmentation, and no, there
is nothing reasonable you can do about it. Unless you have an
actual speed problem, just ignore it.

Arno




  #3  
Old October 19th 14, 10:18 PM posted to comp.sys.ibm.pc.hardware.storage
Jan Stozek
external usenet poster
 
Posts: 2
Default SSD transfer speed

Hi Arno,

After deep thought, Arno wrote on Sunday, 19 October 2014, at
19:10:

This is a result of disk-internal fragmentation, and no, there
is nothing reasonable you can do about it. Unless you have an
actual speed problem, just ignore it.


The system actually runs like a dream, so my only concern was
possible wear.

Thank you very much for your help.

--
Best regards,

(jan).

  #4  
Old October 19th 14, 11:16 PM posted to comp.sys.ibm.pc.hardware.storage
Mark F[_2_]
external usenet poster
 
Posts: 164
Default SSD transfer speed

On Sun, 19 Oct 2014 13:02:26 +0200, Jan Stożek wrote:


The questions:
* Is this performance characteristic a significant symptom of wear?

It could be several things:
1. internal fragmentation (or something similar) inside the device
2. a high recoverable read error rate due to normal charge leakage
over time
3. a high recoverable read error rate due to cells wearing out, or
cells that were weak (high leakage?) to begin with. (Depending on
the hardware, unused cells might not show leakage; I forget what is
typical with current technologies.)

Backing up, initializing as appropriate for the device,
and restoring fixes most cases. (Do a real image backup
that doesn't attempt to move or defragment files if you want
to avoid system disk and licensing issues.)
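On Linux, a rough sketch of that cycle might look like the following
(the device name and backup path are just examples; blkdiscard needs
a reasonably recent util-linux and a drive that honours TRIM, and a
vendor secure-erase tool can be used for step 2 instead):

  # 1. raw image of the SSD to a file on another drive
  #    (no file moving or defragmenting involved)
  dd if=/dev/sdb of=/mnt/backup/ssd.img bs=4M

  # 2. "initialize": discard every block so the controller
  #    sees the drive as empty again
  blkdiscard /dev/sdb

  # 3. restore the image
  dd if=/mnt/backup/ssd.img of=/dev/sdb bs=4M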

Alternatively, buy the SpinRite program from www.grc.com
and have it refresh the data.

Many devices will rewrite the data when SpinRite scans
it if the data shows a high recoverable error rate.

I usually use the SpinRite level that does a read followed
by a write. This will get the drive to rewrite the data
on some types of drives. (Uses about 1 full erase cycle
of the device)

Using SpinRite at a higher level - read, write inverted, read,
write inverted again - will cause most drives to actually write the
data anew. (This uses about 2 full erase cycles of the device.)

It is good to do a backup first, but I have never
lost data with SpinRite.

Note:
Many devices, particularly "consumer" devices, will have longer
access times for most blocks (from the user's view) that have been
written 2 or more times since drive initialization. This means you
might see a 0.3 millisecond access time instead of the 0.2
millisecond access time that you get if a block has been written
only 0 or 1 times. (In your case you might then see "ragged"
performance everywhere.)

I prefer to avoid the risk of a restore failure and instead run
SpinRite to write or write/invert/write.

You don't have to run SpinRite over the entire (user view of the)
device, so you don't have to check it all in one session.

Note that as data ages, the number of errors recovered during reads
increases, and you will start getting delays due to error recovery
which will be larger than the ~0.1 ms access-time increase that the
first rewrite of the data may incur. (See the Samsung 840 EVO bug,
circa October 15th 2014, for an extreme case of long read delays due
to the drive's reluctance to rewrite data.)

I have a Dell Precision 380 system that I don't use very much.
I run SpinRite on it about 2 times/year at the highest "level"
(read / write inverted / read / write).

I run SpinRite on all drives, but am only showing the
HD Tune Pro 5.50 results for the SSD system disk.

The system disk has:
Firmware 2.15
(E7) SSD Life Remaining 0 (meaning 100%, I think)
(E9) Life Time Writes (NAND) 8385 GB
(EA) Life Time Writes (Host) 5905 GB
(F1) Life Time Writes 5905 GB
(F2) Life Time Reads 11436 GB

8385 GB / 240 GB is less than 40 full-drive writes, so, as I said,
little use.



My system disk is an OCZ-VERTEX3 MI 240 GB.
My system partition is 204800 MB, starting at 0;
the remaining part is unused from the user's view.
HD Tune Pro 5.50 {oops, I need to get the new version}
Benchmark with "Full test", most accurate, block size 8 MB, shows:
200 MB/s +/- about 10 MB/s for 0-72 and 192-240 GB
200 MB/s +/- about 20 MB/s for 72-192 GB

The times and percentages are my eyeballed guesses.
Access times (with "seeks") are clustered in 2 bands:
0.17 milliseconds for about 5% of the samples
0.22 milliseconds for about 95% of the samples

Random Access Read (512 bytes) shows 4031 IOPS, clustered at about
0.18 ms (5%) and 0.25 ms (95%).

I don't have the original benchmark numbers for the system disk,
but I have some numbers for another OCZ Vertex 3 MI 240 GB drive.

These tests used HD Tune Pro 4.60, Random Access, 512 bytes.
After a firmware upgrade (to 2.15) and a secure erase:
14386 IOPS, 0.069 milliseconds
(95%+ at about 0.07 milliseconds)
The above number is unrealistic because effectively nothing
is on the device, so no mapping overhead is incurred.

After writing a "bunch" of data:
8881 IOPS, 0.112 milliseconds
(80% at about 0.07 ms, 20% at 0.27 ms)
This is the fastest for real access, but much of the
device still doesn't have mapping overhead. (Also, "bunch" was less
than the user capacity (240 GB), and much less than the actual
capacity of the device, but more than 40 GB.)

I was a little surprised to not see much at 0.14 and 0.21 ms

I don't have any tests showing before and after for the
2 or 3 times that I have run SpinRite on the disk in the
last two years.

Today, I see 4041 IOPS, although the 1440 GB of writes from the
6 SpinRite passes is significant in the less than 30 since the SSD
was last initialized.

I haven't done anything to see if the "concentrated"
writes caused by SpinRite contributed significantly to
the fall from 8881 Read IOPS to 4041 Read IOPS.

I'll have to collect before and after numbers next time
I run. I don't expect to do an image to another device
and image back.


  #5  
Old October 20th 14, 12:05 PM posted to comp.sys.ibm.pc.hardware.storage
Yousuf Khan[_2_]
external usenet poster
 
Posts: 1,296
Default SSD transfer speed

On 19/10/2014 7:02 AM, Jan Stożek wrote:
Interestingly, the size of the ragged part, which is about 60%
of the disk's capacity, is (almost) exactly the size of my system
partition, while the other part of the disk, with almost uniform
performance, has remained unused for some time - though it HAD been
used previously. What is even more interesting: when, some time ago, I
measured the disk's performance after the system installed on the
*second* partition of the very same disk had been used for a while,
the graph looked almost exactly mirrored: it started with degraded
performance on the initial sectors (as it does now), then there was a
large area of fairly uniform, high performance representing the then
unused partition, followed by a ragged area covering the partition in
use.


I would guess that this performance degradation represents deferred
TRIM work that is no longer being deferred. In a working system there
is a normal amount of writing happening everywhere on the active part
of the disk. When an SSD is written to, it writes to unused flash
memory cells and marks the old cells as ready for reuse later, with
the help of the TRIM command. The idea is that once some idle time
comes around, the drive will begin erasing the old cells before
putting them back into the pool for reuse. During a benchmark you
might overload its ability to leisurely reset these cells in its own
time, and you end up with cell recycling happening in the middle of
actual operations.

Yousuf Khan
  #6  
Old October 22nd 14, 07:53 PM posted to comp.sys.ibm.pc.hardware.storage
Jan Stozek
external usenet poster
 
Posts: 2
Default SSD transfer speed

Hi,

After deep thought, Mark F wrote on Monday, 20 October 2014, at
00:16:

Backing up, initializing as appropriate for the device,
and restoring fixes most cases. (Do a real image backup
that doesn't attempt to move or defragment files if you want
to avoid system disk and licensing issues.)


Actually, I copied the partition to itself, sector by sector,
using Linux's dd command, then trimmed, and got back the full speed:
~188 MB/s +-1% across the whole drive. I didn't notice any impact on
the perceived speed of the computer, though.
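Roughly like this (the device name is just an example; a self-copy
like this is only safe with the filesystem unmounted, e.g. from a
rescue system):

  # rewrite the partition onto itself, block by block
  dd if=/dev/sda2 of=/dev/sda2 bs=1M

  # afterwards, with the filesystem mounted again, discard free space
  fstrim -v /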

I run SpinRite on all drives, but am only showing the
HD Tune Pro 5.50 results for the SSD system disk.

The system disk has:
Firmware 2.15
(E7) SSD Life Remaining 0 (meaning 100%, I think)
(E9) Life Time Writes (NAND) 8385 GB
(EA) Life Time Writes (Host) 5905 GB
(F1) Life Time Writes 5905 GB
(F2) Life Time Reads 11436 GB


It's in SMART, isn't it? Unfortunately, for Kingston those
parameters are not available.
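(On Linux the raw attribute table can be dumped with smartctl; the
device name below is just an example, and which attributes show up
varies by vendor and firmware.)

  # print the vendor-specific SMART attribute table
  smartctl -A /dev/sda

  # more verbose report, sometimes needed for SSD-specific fields
  smartctl -x /dev/sda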

Thank you very much for the very detailed info. I still have to
find the endurance figures for my drive somewhere.

--
Best regards,

(js).

PS. When responding directly, please remove the dash and all
subsequent letters from the email address.

  #7  
Old October 25th 14, 10:18 PM posted to comp.sys.ibm.pc.hardware.storage
Mark F[_2_]
external usenet poster
 
Posts: 164
Default SSD transfer speed

On Wed, 22 Oct 2014 20:53:52 +0200, Jan Stozek
wrote:

Hi,

After deep thought, Mark F wrote on Monday, 20 October 2014, at
00:16:

Backing up, initializing as appropriate for the device,
and restoring fixes most cases. (Do a real image backup
that doesn't attempt to move or defragment files if you want
to avoid system disk and licensing issues.)


Actually, I copied the partition to itself, sector by sector,

It is possible that some of the higher-end devices do things like:
.. have a RAM cache or flash cache
.. check whether they already have the same data around
(this would be like caching, except not in a dedicated or
temporary cache, but rather in data that just happens to be around)
and maybe even
.. check whether the same data is stored elsewhere and not in
use.

Thus, unless you take steps to make sure that you haven't
referenced the same data recently, the device may not write
the new data, but rather just set pointers to it.

So I recommend backing up elsewhere and copying back. And, if
you are backing the data up anyhow, you should do the extra
steps needed to "initialize" the device between the backup
and the restore.
using Linux's dd command, then trimmed, and got back the full speed:
~188 MB/s +-1% across the whole drive. I didn't notice any impact on
the perceived speed of the computer, though.
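One way to do the "initialize" step I mentioned, between the backup
and the restore, is an ATA secure erase, which typically resets the
controller's mapping tables so old copies cannot simply be pointed
to again. A rough outline (the device name is just an example; the
drive must not be in a "frozen" state, and a vendor tool is the
safer route):

  # check whether the drive is frozen (many BIOSes freeze it at boot)
  hdparm -I /dev/sdb | grep -i frozen

  # set a temporary password, then issue the secure erase
  hdparm --user-master u --security-set-pass p /dev/sdb
  hdparm --user-master u --security-erase p /dev/sdb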



It's in SMART, isn't it? Unfortunately, for Kingston those
parameters are not available.

Yes, from the SMART data.


 



