March 24th 21, 10:39 AM posted to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage
Paul[_28_]
Why is it not letting me extend the partition?

Jeff Barnett wrote:
On 3/24/2021 12:14 AM, Paul wrote:
Yousuf Khan wrote:
So one of my oldest SSDs just finally had a bad misfire. One of its
memory cells seems to have gone bad, and it happened to be my boot
drive, so I had to restore to a new SSD from backups. The restore took
a fair bit of time, and the new drive is twice as large as the old
one, but the restore created a partition that is the same size as the
original. I expected that, but I also expected that I should be able
to extend the partition after the restore to fill the new drive's
size. However, going into Disk Management, it doesn't allow me to fill
up that entire drive. Any idea what's going on here?

Yousuf Khan


It's GPT and you need to find a utility that does a
better job of showing the partitions.

The Microsoft Reserved partition has no recognizable
file system inside, and the information I can find suggests
it is used as scratch space when the partition layout needs
to be adjusted. It is a tiny supply of "slack". But it might
also function as a "blocker" when Disk Management is at work.
And not every utility lists it properly: some utilities try
to "hide" things like this and only show data partitions.

Try Linux GDisk or GParted, and see if you can spot
the blocker there. The disktype utility might also work,
but the only Windows edition available is the Cygwin one.
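
If you go the GDisk route, the read-only listing is enough to
see everything, Microsoft Reserved included (the device name
below is a placeholder, pick the right one from lsblk):

   $ sudo gdisk -l /dev/sda

And for comparison, here is what the Cygwin disktype reports
on a GPT disk here: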

disktype.exe /dev/sda

--- /dev/sda
Block device, size 2.729 TiB (3000592982016 bytes)
DOS/MBR partition map
  Partition 1: 2.000 TiB (2199023255040 bytes, 4294967295 sectors from 1)
    Type 0xEE (EFI GPT protective)
GPT partition map, 128 entries
  Disk size 2.729 TiB (3000592982016 bytes, 5860533168 sectors)
  Disk GUID EE053214-E191-B343-A670-D3A712F353DB
  Partition 1: 512 MiB (536870912 bytes, 1048576 sectors from 2048)
    Type EFI System (FAT) (GUID 28732AC1-1FF8-D211-BA4B-00A0C93EC93B)
    Partition Name "EFI System Partition"
    Partition GUID 0CF3D241-6DA1-764C-AE0F-559E55314B8C
    FAT32 file system (hints score 5 of 5)
      Volume size 511.0 MiB (535805952 bytes, 130812 clusters of 4 KiB)
  Partition 2: 20 GiB (21474836480 bytes, 41943040 sectors from 1050624)
    Type Unknown (GUID AF3DC60F-8384-7247-8E79-3D69D8477DE4)
    Partition Name "MINT193"
    Partition GUID 0647492B-0C78-DC4E-914C-E210AB6FF5A5
    Ext3 file system
      Volume name "MINT193"
      UUID E96B501E-23B5-4F80-A41C-CEE6A5E1D59C (DCE, v4)
      Last mounted at "/media/bullwinkle/MINT193"
      Volume size 20 GiB (21474836480 bytes, 5242880 blocks of 4 KiB)
  Partition 3: 16 MiB (16777216 bytes, 32768 sectors from 123930624)        <=== not visible in diskmgmt.msc
    Type MS Reserved (GUID 16E3C9E3-5C0B-B84D-817D-F92DF00215AE)
    Partition Name "Microsoft reserved partition"
    Partition GUID 0C569E59-E917-AC40-B336-E7B2527D77AD
    Blank disk/medium
  Partition 4: 300.4 GiB (322502360576 bytes, 629887423 sectors from 123963392)
    Type Basic Data (GUID A2A0D0EB-E5B9-3344-87C0-68B6B72699C7)
    Partition Name "Basic data partition"                                   <=== actually "WIN10"
    Partition GUID 65A1A4E6-4F11-7944-874A-B3A515F131DE
    NTFS file system
      Volume size 300.4 GiB (322502360064 bytes, 629887422 sectors)
  Partition 5: 514 MiB (538968064 bytes, 1052672 sectors from 753854464)    <=== recovery partition
    Type Unknown (GUID A4BB94DE-D106-404D-A16A-BFD50179D6AC)
    Partition Name ""
    Partition GUID 99242951-459E-1144-BF88-61517A280CCA
    NTFS file system
      Volume size 514.0 MiB (538967552 bytes, 1052671 sectors)

HTH,
Paul


There may be another issue. I'm thinking of Samsung over-provisioning
(or is it called something else?), where about 10% of the disk's free
space is set aside for the disk firmware to shuffle blocks around in
order to level wear. If I wanted to change my SSD's partitions, I'd
probably need to use Samsung Magician to first undo that reserved
block; then I could do my partition management; then use Magician
again to re-enable it. I presume that vendors other than Samsung
implement such a scheme too.

This is not my area of expertise and I'm generalizing from my limited
experience using a few Samsung SSDs on my systems. Perhaps someone more
knowledgeable can either pooh-pooh my observation or, if it sounds
right, flesh out what is going on.


Wear leveling is done in the virtual to physical translation
inside the drive. Sector 1 is not stored in offset 1 of the
flash. Your data is "sprayed" all over the place in there.
If you lose the virtual to physical map inside the SSD, the
data recovery specialist will not be able to "put the
blocks back in order".

The drive declares a capacity. It's a call in the ATA/ATAPI
protocol. The sizing was settled in a lawsuit long ago, which
penalized a company for attempting to lie about the capacity.
The capacity of a 1TB drive will be some number of cylinders
larger than 1e12 bytes. The size is an odd number, so some CHS
habits of yore continue to work. It is not a rounded number
that customers would enjoy; it's a number chosen to keep
snotty software happy.
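
For the curious, the convention most vendors follow is the
IDEMA LBA1-03 capacity formula (a sketch, assuming a
512-byte-sector drive; the constants below are the published
ones, not measured from any particular SSD):

   # LBA count = 97,696,368 + (1,953,504 x (advertised GB - 50))
   $ echo $(( 97696368 + 1953504 * (1000 - 50) ))
   1953525168
   $ echo $(( 1953525168 * 512 ))
   1000204886016

So a "1TB" drive presents 1,953,525,168 LBAs, or about
1.0002e12 bytes, a hair over the advertised size.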

Any spares pool, and the spares management used for wear leveling,
is behind the scenes and does not change what the drive presents
to the host. The spares pool just means the physical flash inside
the drive is somewhat larger than the virtual surface presented
to the outside world.
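
A rough worked example (the raw flash figure is an assumption
for illustration, since vendors rarely publish it the same way):
a drive built from 512 GiB of NAND but sold as "500 GB" keeps
roughly 10% for itself:

   $ echo $(( 97696368 + 1953504 * (500 - 50) ))   # presented capacity in LBAs, same formula
   976773168
   $ echo $(( 976773168 * 512 ))                   # about 500.1 GB seen by the OS
   500107862016
   $ echo $(( 512 * 1024**3 ))                     # raw NAND: 512 GiB
   549755813888
   $ echo $(( 549755813888 - 500107862016 ))       # held back for spares / wear leveling
   49647951872

None of that ~49.6 GB shows up in Disk Management, and none of
it blocks a partition move.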

We can Secure Erase the drive. All this does is remove any
memory of what was stored there previously (which makes Secure
Erase suitable before selling the drive on).
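
On Linux, the usual way to issue the ATA Secure Erase is
hdparm; a sketch, with /dev/sdX and the temporary password "p"
as placeholders (the vendor ToolKit wraps the same command in
a GUI):

   # drive must report "not frozen" for the commands below to be accepted
   $ hdparm -I /dev/sdX | grep -i frozen
   $ hdparm --user-master u --security-set-pass p /dev/sdX
   $ hdparm --user-master u --security-erase p /dev/sdX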

We can TRIM a drive. This is an opportunity for the OS to
deliver a "hint" to the drive, as to which virtual areas of
the 1TB are not actually in use by the OS. If you've removed
the partition table from the drive, the OS could tell the
drive during TRIM that the entire surface is unused; then all
LBAs go into the spares pool, ready to be used on the next
write(s). You might be able to deliver this news from the
ToolKit software, if the GUI in the OS has no mechanism for
it. (Maybe you can do it from Diskpart, but I haven't checked.)
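
For reference, these are the usual ways to deliver that hint;
drive letters and device names are placeholders:

   # Windows: re-send TRIM for all space the filesystem considers free
   defrag C: /L
   # (PowerShell equivalent:  Optimize-Volume -DriveLetter C -ReTrim -Verbose)

   # Linux: same thing for a mounted filesystem
   fstrim -v /mnt/point

   # Linux: discard an entire device, e.g. after the partition table is gone
   blkdiscard /dev/sdX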

The SMART table gives information about Reallocations, which
are permanently spared-out blocks. As the drive gets older,
the controller may mark portions of it as unusable. But,
because there is virtual-to-physical translation, as long as
there are sufficient blocks to present a 1TB surface, we can't
tell from the outside that it's in trouble. However, if you
have the ToolKit for the drive installed, it can take a
reading every day and extrapolate remaining life (using either
the number of writes to cells, or the reallocation total, to
predict when the drive is in trouble). A drive can die before
the warranty period is up, or before the wear life has
expired. SMART allows this to be tracked.
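
If you'd rather not install the ToolKit, smartmontools reads
the same counters (attribute names and IDs vary by vendor, so
the ones in the comments are typical examples, not a promise):

   # dump the raw SMART attribute table
   $ smartctl -A /dev/sdX
   # typical attributes worth watching on an SSD:
   #   5    Reallocated_Sector_Ct  - permanently spared-out blocks
   #   177  Wear_Leveling_Count    - normalized value counts down as the cells wear
   #   241  Total_LBAs_Written     - feeds the "writes to cells" life estimate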

There is also a "critical data" storage area, which may
receive a lot more writes than the average cell. Perhaps it's
built from SLC cells. If this area is damaged, that can lead
to instant drive death, because the drive has lost its spares
table, its virtual-to-physical map, and so on. Some drives may
have plenty of wear life left, but a failure to record this
critical data means they poop out early. And maybe that isn't
covered all that well from a SMART perspective.

But generally, all corner cases ignored, you just use SSDs the
same way you'd use an HDD. You don't need to pamper them. The
ToolKit will tell you if your usage pattern is abusive, and
with any luck, warn you before the drive takes a dive. But
like any device, you should have backups for any eventuality.
Regular hard drives can die instantly if the power (say, the
+12V rail) rises above +15V or so. So if someone tells me they
have a 33TB array and no backups, all I have to do is warn
them that the ATX PSU is a liability and could, if it chose
to, ruin the entire array (redundancy and all) in one fell
swoop.

We had a server at work, providing licensed software to
500 engineers. One afternoon at 2 PM, the controller firmware
in the RAID card wrote zeros across the array, down low,
wiping out a critical structure for the file system.
Instantly, 500 engineers had no software. Most went home for
the day :-) Paid, of course, costing the company a fortune in
lost work. While RAIDs are nice and all, they do have some
(rather unfortunate) common-mode failure modes.

A second RAID controller of the same model did the same thing
to its RAID array. Nobody went home for that one, and at least
by then they suspected a firmware bug in the RAID card.

Summary - No, the SSD has no excuses. It's either ready
          for service, or it's not. There are no in-between
          states where a partition boundary cannot move.
          The ToolKit software each brand provides will
          have a rudimentary extrapolation of remaining life.
          As long as some life remains, you can move
          partition boundaries or do anything else involving
          writes.

Paul