A computer components & hardware forum. HardwareBanter

Disc Wear Question

  #1  
Old December 10th 19, 07:07 PM posted to alt.comp.hardware
Trimble[_2_]

I can't decide the answer to this situation.

A disc, SSD or mechanical, is divided into partitions.
The operating system is installed on the first partition;
the rest of that disc is mostly data - games - etc.
Therefore the 1st partition is read and written very often,
while the rest of the disc is accessed infrequently.

To maximize life span, does it make sense to occasionally move
that OS partition to another part of that disc - perhaps the end -
to spread / even out the wear ?
(\__/)
(='.'=)
(")_(") mouse (Hmm.. a puzzle or a silly question ?)

  #2  
Old December 10th 19, 08:20 PM posted to alt.comp.hardware
Paul[_28_]

Trimble wrote:


For the SSD, the answer is that it definitely does not matter.

On write, the SSD draws on a common pool of empty sectors for
the clusters that actually store the write data. The
"heads" are virtual, and there is no wear sustained from
moving the heads between SSD partitions.

You have no need to move stuff on the SSD.
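
If you want to picture why, here is a toy sketch in Python (a made-up
model, nothing like any vendor's real firmware) of how a flash
translation layer remaps the same logical sector to a different
physical block on every rewrite:

# Toy flash-translation-layer model (illustrative only, not real firmware).
# Rewriting the same logical sector lands on a different physical block
# each time, so the logical position of a partition has no bearing on wear.
import random
from collections import defaultdict

class ToyFTL:
    def __init__(self, physical_blocks=1000):
        self.free = list(range(physical_blocks))   # common pool of empty blocks
        random.shuffle(self.free)
        self.map = {}                              # logical sector -> physical block
        self.erase_counts = defaultdict(int)       # wear per physical block

    def write(self, logical_sector):
        old = self.map.get(logical_sector)
        if old is not None:
            self.erase_counts[old] += 1            # old copy gets erased
            self.free.append(old)                  # and rejoins the pool at the back
        new = self.free.pop(0)                     # take the next block from the pool
        self.map[logical_sector] = new
        return new

ftl = ToyFTL()
# Hammer one "OS partition" sector five times: each write hits a new block.
print([ftl.write(42) for _ in range(5)])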

For best results on the SSD, do a TRIM per partition every
month or so. TRIM is how the OS tells the drive which parts of the
partition are no longer in use, and it gives the drive more
free blocks to use for wear leveling. (But this also means,
potentially, that "Undelete" programs on an SSD
may give worse results than on a hard drive. The free space
on the disk no longer belongs to you.)
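
(If you want to kick off a TRIM by hand rather than waiting for the
scheduled optimizer, something like this works. A rough sketch assuming
a Linux box with the standard fstrim utility; on Windows the built-in
"Optimize Drives" task or "defrag C: /L" does the same job.)

# Rough sketch: ask the kernel to TRIM every mounted filesystem that
# supports discard. Assumes Linux, util-linux's fstrim, and root privileges.
import subprocess

def trim_all():
    result = subprocess.run(["fstrim", "--all", "--verbose"],
                            capture_output=True, text=True, check=False)
    # fstrim prints one line per filesystem showing how much was trimmed.
    print(result.stdout or result.stderr)

if __name__ == "__main__":
    trim_all()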

*******

It is less clear whether, on a hard drive, we should be
moving stuff around all the time.

I would say "Move stuff if you detect trouble".

I have a hard drive (with an OS on it which I use
occasionally) that has 48,000 hours of usage, and
the drive shows no signs at all of a wear pattern.
And it isn't even one of the more expensive SKUs, either.

But not all my drives are like that.

I have had other drives, where the characteristics of
the drive were so bad... I would not dare to move the
OS partition closer to the hub. The experiment would end
in disaster.

Which means each drive family has a "personality". The
five drives I own that all show the same flaky behavior near
the hub, you don't mess with those. Those drives are
best left alone. They have exhibited bad health since
the day I bought them. However, none of the drives has
failed, and they are in my scratch drive pool, available
for experiments. The Reallocated sector counter hasn't increased
materially in the last three years or so, and I don't
particularly fear the drives. But I certainly would not
use the drives for my "daily driver" OS any more.
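
(If you want to watch that counter yourself, here is a rough sketch that
leans on smartmontools. It assumes smartctl is installed, a Linux-style
device path, and the usual attribute name, which varies a bit by vendor.)

# Rough sketch: read a drive's reallocated-sector count via smartctl.
# Assumes smartmontools is installed and the script has enough privilege.
import subprocess, sys

def reallocated_count(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    for line in out.splitlines():
        # Attribute rows normally end with the raw value, e.g.
        # "  5 Reallocated_Sector_Ct ... 0"
        if "Reallocated_Sector_Ct" in line:
            return int(line.split()[-1])
    return None

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
    print(dev, "reallocated sectors:", reallocated_count(dev))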

As for Seagate, they do occasionally make good drives.
Not everything they made was rubbish :-) The hard part is
predicting when it is safe to buy their stuff. I got a
couple 4TB drives of theirs which were excellent.
And the 48000 hour drive (500GB) is a Seagate.

And the question a customer has to ask is
"why do these things keep happening?". Why does
the quality of released designs vary so
much ? Don't they detect fatal flaws in a design
before release ? It's a real puzzle. I know
the engineering in these is first-class work,
and it's strange that lot testing, before a design
is released to the public, does not stop the release
of "loser" designs. I can tell they know how to test,
based on some of the equipment I've seen in pictures.

I don't think they have so many factories any more
that we can blame a particular factory for all
the flawed products.

On a "loser" drive, could you "delay death" forever
by moving the partition around ? I doubt it. I think
instead, you'd be in for a rude surprise when the
Service Area (SA) failed. In the case of my
48,000 hour drive, moving the partition around
would have made no difference at all. It never
presented symptoms of being a loser, so it needs
none of that sort of maintenance.

Paul
  #3  
Old December 10th 19, 09:14 PM posted to alt.comp.hardware
John McGaw

On 12/10/2019 3:20 PM, Paul wrote:


Rotating drives can be a bitch. I went through a "recovery" a few weeks
back as an experiment. One of the five 2TB data drives on an old Windows
Home Server went bad, and I mean _really_ bad. Given the way that the OS
"blesses" each of its known drives, I decided to try to clone it to a new
drive. Well, it took about five days of 24x7 thrashing with two external
docking stations connected to my old Linux notebook running Clonezilla, but
it finally did finish. The server more-or-less accepted the copy, not
throwing a fit as it usually does, but there was still so much data loss
(because the OS spreads data across all drives, with no allowance for
severe failures) that I pretty much had to start fresh backups for
everything the server covers.

I have Drobo NAS units which have pretty well replaced the function of the
old server, and they are so much more sane about failures. I could have one
or two drives fail out of five and still have use of all the data, although a
two-drive failure would be pretty much a total panic situation, and if
that happens you had better have a spare of the proper capacity to plug in
immediately. I've never had a drive failure on a Drobo, but I know from
experience that it is only a matter of time.
  #4  
Old December 10th 19, 10:15 PM posted to alt.comp.hardware
Paul[_28_]

John McGaw wrote:


Your Drobo sounds like it has RAID6, while the Windows Home Server
was just doing some variant of spanning.

Was Clonezilla using ddrescue or something else ?

With ddrescue (the gddrescue package), you get a .log file (the "mapfile")
when the multiple-pass job is finished, and from that (and nfi.exe)
you might be able to piece together which files are damaged.
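
For example, here is a rough sketch of pulling the unreadable spans out of
that mapfile. It assumes the usual GNU ddrescue layout: '#' comment lines,
one current-status line, then "pos size status" block lines in hex.

# Rough sketch: list the unreadable regions recorded in a ddrescue log/mapfile.
def bad_regions(mapfile_path):
    regions, status_line_seen = [], False
    with open(mapfile_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                        # skip comments and blanks
            if not status_line_seen:
                status_line_seen = True         # first non-comment line is the status line
                continue
            pos, size, status = line.split()[:3]
            if status in ("-", "*", "/", "?"):  # bad, non-trimmed, non-scraped, non-tried
                regions.append((int(pos, 16), int(size, 16)))
    return regions

if __name__ == "__main__":
    # "rescue.log" stands in for whatever name you gave the mapfile.
    for pos, size in bad_regions("rescue.log"):
        # Match these byte offsets against file extents (nfi.exe on NTFS)
        # to see which files were hit.
        print("bad span at offset", hex(pos), "length", hex(size))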

Paul
  #5  
Old December 11th 19, 12:11 AM posted to alt.comp.hardware
John McGaw

On 12/10/2019 5:15 PM, Paul wrote:

The Drobos use something they call "BeyondRAID", IIRC. They are very
tight-lipped about what it does internally, but it is the most forgiving
sort of NAS I've seen. There is really nothing to do when setting it up
other than plugging in anywhere from 2 to 5 drives and, where appropriate,
choosing how much redundancy you want. No other decisions need be made. If
you run short on space you can, without powering down, pull a drive and plug
in a larger one, and it will populate that drive, re-spread the data, and
keep on plugging without any loss of service.
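
My rough mental model of what that redundancy costs (my own guess at the
arithmetic, not anything Drobo publishes): usable space is about the total
minus the largest drive with single redundancy, or minus the two largest
with dual redundancy. Something like:

# Rough capacity guess for a BeyondRAID-style pool (my own approximation,
# not Drobo's actual algorithm): protection reserves roughly the largest
# drive (single redundancy) or the two largest drives (dual redundancy).
def usable_tb(drives_tb, redundancy=1):
    reserved = sum(sorted(drives_tb, reverse=True)[:redundancy])
    return max(sum(drives_tb) - reserved, 0)

pool = [2, 2, 2, 2, 2]                # five 2TB drives, like the old server
print(usable_tb(pool, redundancy=1))  # about 8 TB usable, survives one failure
print(usable_tb(pool, redundancy=2))  # about 6 TB usable, survives two failures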

Not sure what Clonezilla does internally. It provides a crude sort of
graphical interface, offers a few simple options, and just does its own
thing. Normally it would be quite quick, but in my case the source drive was
so trashed that the byte-by-byte copy became agonizing to watch. The
estimated run times in the GUI got so extreme that the extra digits
overwrote part of the screen, but it just kept on grinding. I still
haven't pulled that drive apart, but I expect to find one of the platters
has been pretty well plowed.