
Why is it not letting me extend the partition?



 
 
#11
March 24th 21, 01:45 PM, posted to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage
Yousuf Khan[_2_]

Why is it not letting me extend the partition?

On 3/23/2021 11:20 PM, Yousuf Khan wrote:
So one of my oldest SSDs finally had a bad misfire. One of its
memory cells seems to have gone bad, and it happened to be my boot
drive, so I had to restore to a new SSD from backups. The restore took
a fair bit of time. The new drive is twice as large as the old one,
but the restore created a partition the same size as the original. I
expected that, but I also expected to be able to extend the partition
afterwards to fill the new drive. However, going into Disk Management,
it doesn't allow me to fill up the entire drive. Any idea what's going
on here?

    Yousuf Khan


Okay, I figured it out; I was just fooled into thinking it wasn't
working. Because the new drive is exactly twice as big as the old one,
I thought Disk Management was telling me that the current size was the
maximum, and that it couldn't add any more space. In fact it was
telling me how much additional space it could add, and that amount
just happened to be numerically identical to the existing partition
size. So I was fooled into reading it the wrong way. I added the
additional space without a problem.

On a related note, the old drive now has one tiny bad-sector hole in
it, which I'm thinking the drive can deprovision and carry on without
in the future. Is there something that can make the drive electronics
run an internal test and retire the bad sectors?

Yousuf Khan
#12
March 24th 21, 02:31 PM, posted to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage
Paul[_28_]

Why is it not letting me extend the partition?

Yousuf Khan wrote:


On a related note, the old drive now has one tiny bad-sector hole in
it, which I'm thinking the drive can deprovision and carry on without
in the future. Is there something that can make the drive electronics
run an internal test and retire the bad sectors?

Yousuf Khan


Testing burns wear life.

*******

A sector has three states (for this discussion):

1) Error free (in the TLC/QLC era, highly unlikely)

2) Errors present, ECC can correct.

3) Errors present, ECC cannot correct. The "tiny little bad sector".

If (3) were marked with "write, but do immediate read verify",
this would allow evaluating the material in question after
it was put back in the free pool. The "questionable" status should
follow the block around until it can be ascertained that it
is (1) or (2) again. If it showed up as (3) on a retry, it should
be thrown into the old sock drawer. Any write attempt is
an excellent time to be checking the credentials of the block.
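
In toy form, the policy looks something like this Python sketch
(purely illustrative: the states, names, and probabilities are mine,
not any real controller's firmware):

import random
from enum import Enum

class State(Enum):
    ERROR_FREE = 1      # (1) no errors on read-back
    CORRECTABLE = 2     # (2) errors present, ECC can correct
    UNCORRECTABLE = 3   # (3) errors present, ECC cannot correct

class Block:
    def __init__(self):
        self.questionable = False   # set when the block last showed (3)
        self.retired = False        # thrown into the old sock drawer

def read_verify(block):
    # Stand-in for the controller's read + ECC check of one block;
    # real firmware reads the flash and runs ECC in hardware.
    return random.choices(
        [State.ERROR_FREE, State.CORRECTABLE, State.UNCORRECTABLE],
        weights=[10, 80, 10])[0]

def write_block(block, data):
    # Only questionable blocks pay for an immediate read-verify.
    if block.retired:
        raise ValueError("block is retired")
    # ... program the flash pages with `data` here ...
    if not block.questionable:
        return True                  # normal fast path, no extra wear
    state = read_verify(block)       # re-evaluate the suspect material
    if state is not State.UNCORRECTABLE:
        block.questionable = False   # back to (1) or (2): rehabilitated
        return True
    block.retired = True             # showed (3) again: sock drawer
    return False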

The procedure should be similar to hard drives: economical
in nature, yet not endangering user data. Doing walking-ones
or a GALPAT on the flash block would be seriously naughty
and pointless. You could burn out the block's entire wear life, then
conclude there is nothing wrong with it :-)

Seagate has a field on their hard drives, called "CurrentPending".
For the longest while, I took that at face value. However,
that field isn't what it appears to be. It only seems to increment
when the drive is in serious trouble and has run out of spares
at some level. It's unclear whether there is an "honest"
item in the SMART table keeping track of items like (3), so
a customer can judge how bad things are.
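
You can at least watch the raw attribute yourself. A small Python
sketch using smartmontools (assumes smartctl 7.0+ for the -j JSON
flag, root, and an ATA drive; attribute 197 is Current_Pending_Sector
on most of them, and NVMe drives report health differently):

import json
import subprocess

def pending_sectors(device="/dev/sda"):
    # Parse `smartctl -A -j` output for attribute 197.
    out = subprocess.run(["smartctl", "-A", "-j", device],
                         capture_output=True, text=True).stdout
    table = json.loads(out).get("ata_smart_attributes", {}).get("table", [])
    for attr in table:
        if attr["id"] == 197:        # Current_Pending_Sector
            return attr["raw"]["value"]
    return None                      # attribute not reported

print(pending_sectors())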

SMART is generally not completely honest anyway. There's some info,
but vendors are deliberately opaque so that users do not "cherry pick"
drives, and send back the ones that have a tiny blemish when purchased.

On hard drives, at one time it was considered OK for a
drive to leave the factory with 100,000 errored sectors on it.
That's because yields were bad, and the science could not
keep up. Now, if SMART were completely honest about your drive,
imagine how you'd freak out if you saw "100,000" in some table.
This is why the scheme is intentionally biased so drives
look "perfect" when they leave the factory, when we know there
is metadata inside indicating the drive is not perfect, especially
with TLC or QLC. SSDs do not leave the factory with
state (1) over 100% of the surface. There is lots of (2),
and more (2) the longer the new drive sits on the shelf. That's
why, if you want to bench a modern SSD, you should write it from
end to end first. This clears the accumulated errored-ness of
the surface before you do your read benchmark. If the drive
were SLC or MLC, I would not be doing this... it would not need it.
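
The refresh itself is nothing fancy. A hedged Python sketch of that
"write it end to end" step - DESTRUCTIVE, it erases the whole device,
so only aim it at a drive you are about to benchmark (needs root; the
device path is an example):

import os

def refresh_device(path="/dev/sdX", chunk_mib=16):
    # Sequentially overwrite the entire block device with zeros,
    # forcing every flash block to be reprogrammed with fresh charge.
    chunk = bytes(chunk_mib * 1024 * 1024)
    fd = os.open(path, os.O_WRONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)   # device size in bytes
        os.lseek(fd, 0, os.SEEK_SET)
        written = 0
        while written < size:
            written += os.write(fd, chunk[:min(len(chunk), size - written)])
        os.fsync(fd)
    finally:
        os.close(fd)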

With the Corsair Neutron I bought, on first test I was getting 125 to
130 MB/sec on reads. Dreadful. The performance popped back up after a
refresh. I still took it back to the store for a refund the next
morning, because (maybe) the manufacturer would like some feedback on
what I think of them.

Paul
#13
March 24th 21, 03:58 PM, posted to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage
Ken Blake[_4_]

Why is it not letting me extend the partition?

On 3/23/2021 8:20 PM, Yousuf Khan wrote:
So one of my oldest SSDs finally had a bad misfire. One of its
memory cells seems to have gone bad, and it happened to be my boot
drive, so I had to restore to a new SSD from backups. The restore took
a fair bit of time. The new drive is twice as large as the old one,
but the restore created a partition the same size as the original. I
expected that, but I also expected to be able to extend the partition
afterwards to fill the new drive. However, going into Disk Management,
it doesn't allow me to fill up the entire drive. Any idea what's going
on here?



It's probably because there's no free space contiguous to the partition
you want to expand. You need to use a third-party partition manager.


--
Ken
#14
March 24th 21, 05:28 PM, posted to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage
J. P. Gilliver (John)[_3_]

SSD "overprovisioning"

On Wed, 24 Mar 2021 at 08:24:36, Paul wrote (my
responses usually follow points raised):
J. P. Gilliver (John) wrote:

If, after some while using an SSD, it has used up some of the slack
because some cells have worn out, does the apparent total
size of the SSD - including unallocated space - appear (either in
the manufacturer's own or some third-party partitioning utility) smaller
than when that utility is run on it when nearly new?


The declared size of an SSD does not change.

The declared size of an HDD does not change.

What happens under the covers is not on display.


That's what I thought.

The reason you cannot arbitrarily move the end of a drive
is that some structures live up there which don't appear
in diagrams. This too is a secret.

Any time something under the covers breaks, the
storage device will say "I cannot perform my function,
therefore I will brick". That is preferable to moving
the end of the drive and damaging the backup GPT,
the RAID metadata, or the Dynamic Disk declaration.

Paul


So how come our colleague is telling us we can change the amount of
"overprovisioning", even using one of many partition managers _other_
than the one made by the SSD manufacturer? How does the drive firmware
(or whatever) _know_ that we've given it more to play with?
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

It's no good pointing out facts.
- John Samuel (@Puddle575 on Twitter), 2020-3-7
#15
March 24th 21, 05:44 PM, posted to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage
Paul[_28_]

SSD "overprovisioning"

J. P. Gilliver (John) wrote:

So how come our colleague is telling us we can change the amount of
"overprovisioning", even using one of many partition managers _other_
than the one made by the SSD manufacturer? How does the drive firmware
(or whatever) _know_ that we've given it more to play with?


Once you've set the size of the device, it's
not a good idea to change it. That's all I can
tell you.

If you don't want to *use* the whole device, that's your business.
I've set up SSDs this way before. As you write C: and materials
"recirculate" as part of wear leveling, the virtually unused
portion continues to float in the free pool, offering more
opportunities for wear leveling or consolidation. You don't
have to do anything. You could make a D: partition, keep it empty,
and issue a "TRIM" command to leave no uncertainty as to what your
intention is. Then delete D: once the "signaling" step is complete.

+-----+-----------------+--------------------+
| MBR | C: NTFS         | unallocated        |
+-----+-----------------+--------------------+
                        \__ this much extra __/
                              in free pool
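
On Linux, the "signaling" step is just fstrim run against the mounted,
still-empty partition; a minimal Python sketch (the mount point is an
example):

import subprocess

# Tell the SSD that every free block of the empty filesystem is
# genuinely unused, so the controller can return it to the free pool.
subprocess.run(["fstrim", "-v", "/mnt/spare"], check=True)

On Windows, a quick format of the temporary partition should send the
same TRIM down before you delete it.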

Paul
#16
March 25th 21, 12:15 AM, posted to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage
VanguardLH[_2_]

SSD "overprovisioning"

"J. P. Gilliver (John)" wrote:

How does the firmware (or whatever) in the SSD _know_ how much space
you've left unallocated, if you use any partitioning utility other
than one from the SSD maker (which presumably has some way of
"telling" the firmware)?


Changing the amount of unallocated space on the SSD is how the tools
from the SSD makers work, too. You can use their tool, or you can use a
partitioning tool.

If, after some while using an SSD, it has used up some of the slack
because some cells have worn out, does the apparent total
size of the SSD - including unallocated space - appear (either in
the manufacturer's own or some third-party partitioning utility) smaller
than when that utility is run on it when nearly new?


The amount of overprovisioning space set at the factory is never
available for you to change. If they set aside 7% for overprovisioning,
you'll never be able to allocate that space to any partition. That
space is not visible, is fixed, and is set at the factory. For example,
they might sell a 128GB SSD whose usable capacity is only 100GB. This
is the static overprovisioning set at the factory.
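
As arithmetic, OP is usually quoted as the hidden space over the
user-visible capacity. A quick Python check using the figures above
(the 128/120 pair is my own illustration of the more typical "7%"
class, not from this thread):

def op_percent(physical_gb, usable_gb):
    # Overprovisioning as a percentage of user-visible capacity.
    return 100.0 * (physical_gb - usable_gb) / usable_gb

print(op_percent(128, 100))            # 28.0 -> the heavy example above
print(round(op_percent(128, 120), 1))  # 6.7  -> the typical "7%" class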

From the usable capacity of the drive, unallocated space is used for
dynamic overprovisioning. Typically you find that you cannot use all
unallocated space for a partition; there's some that cannot be
partitioned. However, by making the partition(s) smaller, there is more
unallocated space available for use by dynamic overprovisioning. It's
dynamic because it changes with the amount of write delta (stored data
changes). The unallocated space is a reserve; not all of it may get
used.

Individual cells don't get remapped; blocks of cells get remapped. If
you were to reduce the dynamic OP by shrinking the unallocated space,
the blocks previously remapped there would have to get re-remapped
somewhere else. Those bad blocks are still marked as bad, so the
remapping has to go elsewhere. Might you lose the information in the
blocks in the dynamic OP space when you reduce it? That I don't know.
Partition managers don't know how the content of unallocated space is
used.

The SSD makers are so terse as to be sometimes unusably vague in their
responses. Samsung said "Over Provisioning can only be performed on the
last accessible partition." What does that mean? That unallocated space
must be located after the last partition? Well, although by accident,
that's how I (and Samsung Magician) have done it. The SSD shows up with
one partition consuming all usable capacity, and I or Samsung Magician
ended up shrinking that partition to make room for unallocated space at
the end. However, SSD makers seem to be alchemists or witches: once
they decide on their magic brew of ingredients, they keep it a secret.

I have increased OP using Samsung Magician, and decreased it, too. All
it did was change the size of the unallocated space by shrinking or
enlarging the last partition, so the change in unallocated space was
after the last partition. When shrinking the unallocated space, it was
not apparent in Samsung Magician whether any bad cell blocks that had
been remapped into unallocated space got re-remapped into the static OP
space, which would reduce endurance. Since the firmware has marked a
block as bad, it still gets remapped into static or dynamic OP. If
unallocated space were reduced to zero (no dynamic OP), static OP gets
used for the remappings. However, I haven't found anything that
discusses what happens to remappings into dynamic OP when the
unallocated space is shrunk. Samsung Magician's OP adjustment looks to
be nothing more than a limited partition manager that shrinks or
enlarges the last partition, the same as you could do with any
partition manager. I suspect any remap targets in the dynamic OP do not
get written into the static OP, so you could end up with data
corruption: a bad block got mapped into dynamic OP, you reduced the
size of dynamic OP, which means some of those mappings are gone, and
they are not written into static OP. Maybe Samsung's Magician is smart
enough to re-remap the dynamic OP remaps into static OP; I don't see
that happening, but it could keep that invisible to the user. Only if I
had a huge number of remappings stored in dynamic OP and then shrunk
the unallocated space might I see the extra time spent copying those
remappings into static OP, compared to using a partition tool to just
enlarge the last partition.

Since the information doesn't seem to be available, I err on the side
of caution: I only reduce dynamic OP immediately after enlarging it,
should I decide the extra OP consumed a bit more capacity in the last
partition than I want to lose. Once I've set dynamic OP and have used
the computer for a while, I don't reduce it. I have yet to find out
what happens to the remappings in dynamic OP when it is reduced. If I
later need more space in the partition, I get a bigger drive, clone to
it, and decide on dynamic OP at that time. With a bigger drive, I'll
probably reduce the percentage of dynamic OP, since it would otherwise
be a huge waste of space. For a drive clone, the static or dynamic
remappings from the old drive aren't copied to the new drive. The new
drive will have its own independent remappings, and the reads during
the clone copy from the remaps on the old drive into the new drive's
partition(s). Old remappings vaporize during the copy to a different
drive.

Unless reducing the dynamic OP size (unallocated space) is done very
early after creating it, to reduce the chance of new remappings
happening between defining the unallocated space and then reducing its
size, I would be leery of reducing unallocated space on an SSD after
lots of use over a long time. Cells will go bad in SSDs; that's why
remapping is needed. I don't see any tools that, when dynamic OP gets
reduced, move the remappings stored there into static OP. You can
decide not to use dynamic OP at all, and hope the factory-set static OP
works okay for you for however long you own the SSD. Or you can decide
to sacrifice some capacity to define dynamic OP, but I would recommend
only creating it, and perhaps later enlarging it, but never shrinking
it. I just can't find info on what happens to the remaps in dynamic OP
when it is shrunk.

Overprovisioning, whether fixed (static, set by the factory) or dynamic
(unallocated space within the usable space left after static OP),
always reduces the capacity of the drive. The reward is reduced write
amplification, increased performance (though not better than
factory-fresh performance), and better endurance. You trade some of one
for the other. It's like insurance: the more you buy, the less money
you have now, but you hope you won't be spending a lot more later.
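
To put rough numbers on the insurance: endurance in host writes is
approximately NAND capacity times rated P/E cycles divided by the write
amplification factor (WAF), and extra OP mostly shows up as a lower
WAF. A Python sketch with illustrative figures of my own, not any
vendor's spec:

def host_tbw(nand_gb, pe_cycles, waf):
    # Rough host terabytes written before the NAND wears out.
    return nand_gb * pe_cycles / waf / 1000.0

# Illustrative only: a 1 TB TLC drive rated around 1000 P/E cycles.
for waf in (4.0, 2.0, 1.2):
    print(f"WAF {waf}: ~{host_tbw(1000, 1000, waf):.0f} TB of host writes")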

If - assuming you _can_ - you reduce the space for overprovisioning to
zero (obviously unwise), will the SSD "brick" either immediately, or
very shortly afterwards (i. e. as soon as another cell fails)?


Since the cell block is still marked as bad, it still needs to get
remapped. With no dynamic OP, static OP gets used. If you create
dynamic OP (unallocated space) where some remaps could get stored, what
happens to the remaps there when you shrink the dynamic OP? Sure, the
bad blocks are still marked bad, so future writes will remap the bad
block into static OP, but what happened to the data in the remaps in
dynamic OP when it went away? I don't know. I don't see any SSD tool or
partition manager that will write the remaps from dynamic OP into
static OP before reducing dynamic OP. After defining dynamic OP,
reducing it could cause data loss.

If you must reduce dynamic OP because you need that unallocated
space allocated into a partition, your real need is a bigger
drive. When you clone (copy) the old SSD to a new SSD, none of the
remaps in the old SSD carry over to the new SSD. When you get the new
SSD, you could change the size (percentage) of unallocated space to
change the size of dynamic OP, but I would do that immediately after
the clone (or restore from a backup image). I'd want to reduce the
unallocated space on the new, bigger SSD as soon as possible, and might
even use a bootable partition manager to do that before the OS loads
the first time. I cannot find what happens to the remaps in dynamic OP
when it gets reduced.

Once an SSD _has_ "bricked" [and is one of the ones that goes
read-only rather than truly bricking], can you - obviously in a dock on
a different machine - change (increase) its overprovisioning allowance
and bring it back to life, at least temporarily?


Never tested that. Usually I replace drives with bigger drives before
they run out of free space (within a partition), or I figure out how to
move data off the old drive to make more free space. If I had an
SSD that catastrophically failed into read-only mode, I'd get a new
(and probably bigger) SSD, clone from old to new, then discard the old.

Besides my desire to up capacity with a new drive when an old drive
gets over around 80% full (if I don't want to move files off of it to
get back a huge chunk of free space), I know SSDs are self-destructive,
so I expect them to fail unless I replace them beforehand.
From my reading, and although they only give a 1-year warranty, most
SSD makers seem to plan on an MTBF of 10 years, but that's under a
write volume "typical" of consumer use (they have some spec that
simulates typical write volume, but I've not seen those docs). Under
business or server use, the expected MTBF is much lower. I doubt I
would keep any SSD for more than 5 years in my personal computers. I up
the dynamic OP to add insurance, because I size drives far beyond
expected usage. Doubling is usually my minimum upsize scale.

I wouldn't plan on getting my SSD anywhere near the maximum write-cycle
count that would read-only brick it. SMART does not report the number
of write cycles, but Samsung's Magician tool does. It must request info
from the firmware that is not part of the SMART table. My current 1 TB
NVMe M.2 SSD is about 25% full after a year's use of my latest build.
Consumption won't change much in the future (i.e., it pretty much
flattened after a few months), but if it gets to 80%, that's when I'd
consider getting another matching NVMe M.2 SSD, or replacing the old
1 TB one with 2 TB or larger; cloning would erase all those old remaps
on the old drive (the new drive won't have them). Based on my past
experience and usage, I expect my current build to last another 7 years
until the itch gets too unbearable to do a new build. 20% got used
for dynamic OP, just as insurance to get an 8-year lifespan, but I
doubt I will ever get close to bricking the SSD.

I could probably just use the 10% minimum for static OP, but I'm
willing to spend some capacity as insurance. More than for endurance, I
added dynamic OP to keep up the performance of the SSD. After a year or
more of use, lots of users have reported that their SSDs don't perform
like they did when new. The NVMe M.2 SSD is 5 times faster sequentially
(and more than 4 times faster at random) for both reads and writes than
my old SATA SSD, and I don't want to lose that joy of speed that I felt
at the start.

I might be getting older and slower, but that's not something I want
for my computer hardware as it ages.
#17
March 25th 21, 01:00 AM, posted to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage
VanguardLH[_2_]

SSD "overprovisioning"

"J. P. Gilliver (John)" wrote:

So how come our colleague is telling us we can change the amount of
"overprovisioning", even using one of many partition managers _other_
than the one made by the SSD manufacturer? How does the drive firmware
(or whatever) _know_ that we've given it more to play with?


Static OP: what the factory defines. Fixed. The OS, software, and you
have no access to it. Not part of usable space.

Dynamic OP: what you define via unallocated space on the drive. You can
shrink a partition to make more unallocated space, or expand a
partition to make less unallocated space (but that might cause data
loss for remaps stored within the dynamic OP). (*)

(*) I've not found info on what happens to remaps stored in the dynamic
OP when the unallocated space is reduced (and the reduction covers the
sectors holding the remaps).
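
The dynamic side is nothing more than partition arithmetic; a trivial
Python illustration, using the 20% figure from a few posts up:

def dynamic_op_percent(usable_gb, partitioned_gb):
    # Unallocated space as a share of user-visible capacity.
    return 100.0 * (usable_gb - partitioned_gb) / usable_gb

# e.g. 800 GB of partitions on a 1000 GB (usable) SSD leaves
# 200 GB unallocated -> 20% dynamic OP.
print(dynamic_op_percent(1000, 800))   # 20.0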
 



