#1
Why is it not letting me extend the partition?
So one of my oldest SSDs finally had a bad misfire. One of its memory cells seems to have gone bad, and it happened to be my boot drive, so I had to restore to a new SSD from backups. The restore took a fair bit of time. The new drive is twice as large as the old one, but the restore created a partition the same size as the original. I expected that, but I also expected to be able to extend the partition afterwards to fill the new drive. However, Disk Management doesn't let me fill up the entire drive. Any idea what's going on here?

Yousuf Khan
#2
Why is it not letting me extend the partition?
Yousuf Khan wrote:
> So one of my oldest SSD's just finally had a bad misfire. [...] However going into disk management it doesn't allow me to fill up that entire drive. Any idea what's going on here?

You mean Microsoft Disk Management? Use a real partitioning utility. I got a free one several years ago, downloaded from Amazon, that works: Partition Master Technician 13.0 Portable. See if it's still available. If you make Windows backups (as everybody should), you don't even need to keep it on your system; just don't reinstall it after the next restore.
#3
Why is it not letting me extend the partition?
Yousuf Khan wrote:
> So one of my oldest SSD's just finally had a bad misfire. [...] However going into disk management it doesn't allow me to fill up that entire drive. Any idea what's going on here?

There are a lot of partition manipulations that the Disk Manager in Windows won't do. You need to use a 3rd-party partition manager; there are lots of free ones. I use Easeus Partition Master, but there are lots of others.

You might want to investigate overprovisioning for SSDs. It prolongs the lifespan of SSDs by giving them more room for remapping bad blocks. SSDs are self-destructive: they have a maximum number of writes, and they will fail depending on the volume of writes you send to the SSD. The SSD will likely come with a preset 7% to 10% of its capacity reserved for overprovisioning. You can increase that: a tool might have come with the drive, or be available from the SSD maker. However, a contiguous span of unallocated space will also increase the overprovisioning space, and you can use a 3rd-party partition manager for that, too. You could expand the primary partition to occupy all of the unallocated space, or you could enlarge it to just shy of that, leaving however much unallocated space you want for extra overprovisioning.
#4
SSD "overprovisioning" (was: Why is it not letting me extend the partition?)
On Tue, 23 Mar 2021 at 23:25:49, VanguardLH wrote (my responses usually follow points raised):

> Yousuf Khan wrote:
> []
>> drive, so I had to restore to a new SSD from backups. [...] However going into disk management it doesn't allow me to fill up that entire drive. Any idea what's going on here?
>
> There are a lot of partition manipulations that the Disk Manager in Windows won't do. You need to use a 3rd party partition manager. There are lots of free ones. I use Easeus Partition Master, but there are lots of others.

(I use that one too. It was the first one I tried and it does what I want, so I haven't tried any others and can't say whether it's better or worse than any of them. The UI is similar to the Windows one - but then maybe they all are.)

> You might want to investigate overprovisioning for SSDs. It prolongs the lifespan of SSDs by giving them more room for remapping bad blocks. [...] You could expand the primary partition to occupy all of the unallocated space, or you could enlarge it just shy of how much unallocated space you want to leave to increase overprovisioning.

How does the firmware (or whatever) in the SSD _know_ how much space you've left unallocated, if you use any partitioning utility other than one from the SSD maker (which presumably has some way of "telling" the firmware)?

If, after some while using an SSD, it has used up some of the slack because of some cells having worn out, does the apparent total size of the SSD - including unallocated space - appear smaller (either in the manufacturer's own or some third-party partitioning utility) than when that utility is run on it when nearly new?

If - assuming you _can_ - you reduce the space for overprovisioning to zero (obviously unwise), will the SSD "brick" either immediately, or very shortly afterwards (i.e. as soon as another cell fails)?

If, once an SSD _has_ "bricked" [and is one of the ones that goes read-only rather than truly bricking], can you - obviously in a dock on a different machine - change (increase) its overprovisioning allowance and bring it back to life, at least temporarily?
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

"I'm tired of all this nonsense about beauty being only skin-deep. That's deep enough. What do you want, an adorable pancreas?" - Jean Kerr
#5
SSD "overprovisioning"
J. P. Gilliver (John) wrote:
> If, after some while using an SSD, it has used up some of the slack, because of some cells having been worn out, does the apparent total size of the SSD - including unallocated space - appear [...] smaller than when that utility is run on it when nearly new?

The declared size of an SSD does not change. The declared size of an HDD does not change. What happens under the covers is not on display.

The reason you cannot arbitrarily move the end of a drive is that some structures live up there which don't appear in diagrams. This too is a secret. Any time something under the covers breaks, the storage device will say "I cannot perform my function, therefore I will brick". That is preferable to moving the end of the drive and damaging the backup GPT partition table, the RAID metadata, or the Dynamic Disk declaration.

Paul
#6
SSD "overprovisioning"
On Wed, 24 Mar 2021 at 08:24:36, Paul wrote (my responses usually follow points raised):

> J. P. Gilliver (John) wrote:
>> If, after some while using an SSD, it has used up some of the slack [...] smaller than when that utility is run on it when nearly new?
> The declared size of an SSD does not change. The declared size of an HDD does not change. What happens under the covers, is not on display.

That's what I thought.

> The reason you cannot arbitrarily move the end of a drive, is because some structures are up there, which don't appear in diagrams. [...] That is preferable to moving the end of the drive and damaging the backup GPT partition, the RAID metadata, or the Dynamic Disk declaration.

So how come our colleague is telling us we can change the amount of "overprovisioning", even using one of many partition managers _other_ than one made by the SSD manufacturer? How does the drive firmware (or whatever) _know_ that we've given it more to play with?
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

It's no good pointing out facts. - John Samuel (@Puddle575 on Twitter), 2020-3-7
#7
SSD "overprovisioning"
J. P. Gilliver (John) wrote:
> So how come our colleague is telling us we can change the amount of "overprovisioning", even using one of many partition managers _other_ than one made by the SSD manufacturer? How does the drive firmware (or whatever) _know_ that we've given it more to play with?

Once you've set the size of the device, it's not a good idea to change it. That's all I can tell you.

If you don't want to *use* the whole device, that's your business. I've set up SSDs this way before. As you write C: and materials "recirculate" as part of wear leveling, the virtually unused portion continues to float in the free pool, offering more opportunities for wear leveling or consolidation. You don't have to do anything.

You could make a D: partition, keep it empty, and issue a "TRIM" command, to leave no uncertainty as to what your intention is. Then delete D: once the "signaling" step is complete.

   +-----+-----------------+--------------------+
   | MBR | C: NTFS         | unallocated        |
   +-----+-----------------+--------------------+
                           \__ this much extra _/
                                in free pool

Paul
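Paul's "free pool" description can be sketched with a toy model (purely my own illustration; real flash translation layers are proprietary and far more complex than this):

```python
# Toy model of why unallocated space helps wear leveling (illustrative only;
# real FTL behavior is firmware-specific and undocumented).
# Under ideal wear leveling, the write workload rotates through every block
# that is not pinned down by live data, so the fewer blocks holding live
# data, the fewer program/erase cycles each physical block absorbs.

def avg_pe_cycles(total_block_writes: int, physical_blocks: int,
                  live_data_blocks: int) -> float:
    """Average program/erase cycles per rotating block: the workload is
    spread across every block not occupied by live data."""
    return total_block_writes / (physical_blocks - live_data_blocks)

# Same drive (1024 physical blocks), same workload, two usage patterns:
full_drive = avg_pe_cycles(1_000_000, physical_blocks=1024, live_data_blocks=900)
with_op    = avg_pe_cycles(1_000_000, physical_blocks=1024, live_data_blocks=700)

print(with_op < full_drive)  # leaving more space free lowers per-block wear
```

The model ignores write amplification, garbage collection, and static-data rotation, but it captures the direction of the effect Paul describes: a trimmed, unused span keeps more blocks circulating in the free pool.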
#8
SSD "overprovisioning"
"J. P. Gilliver (John)" wrote:
So how come our colleague is telling us we can change the amount of "overprovisioning", even using one of many partition managers _other_ that one made by the SSD manufacturer? How does the drive firmware (or whatever) _know_ that we've given it more to play with? Static OP: What the factory defines. Fixed. The OS, software, and you have no access. Not part of usable space. Dynamic OP: You define unallocated space on the drive. You can shrink a partition to make more unallocated space, or expand a partition to make less unallocated space (but might cause data loss for remaps stored within the dynamic OP). (*) (*) I've not found info on what happens to remaps stored in the dynamic OP when the unallocated space is reduced (and the reduction covers the sectors for the remaps). |
#9
SSD "overprovisioning"
"J. P. Gilliver (John)" wrote:
How does the firmware (or whatever) in the SSD _know_ how much space you've left unallocated, if you use any partitioning utility other than one from the SSD maker (which presumably has some way of "telling" the firmware)? Changing the amount of unallocated space on the SSD is how the tools from the SSD makers work, too. You can use their tool, or you can use a partitioning tool. If, after some while using an SSD, it has used up some of the slack, because of some cells having been worn out, does the apparent total size of the SSD - including unallocated space - appear (either in manufacturer's own or some third-party partitioning utility) smaller than when that utility is run on it when nearly new? The amount of overprovisioning space set at the factory is never available for you to change. If they set 7% space for overprovisioning, you'll never be able to allocate that space to any partition. That space is not visible, fixed, and set at the factory. For example, they might sell a 128GB SSD, but usuable capacity is only 100GB. This is the static overprovisioning set at the factory. From the usable capacity of the drive, unallocated space is used for dynamic overprovisioning. Typically you find that you cannot use all unallocated space for a partition. There's some that cannot be partitioned; however, by making partition(s) smaller then there is more unallocated space available for use by dynamic overprovisioning. It's dynamic because it changes with the amount of write delta (stored data changes). The unallocated space is a reserve. Not all of it may get used. Individual cells don't get remapped. Blocks of cells get remapped. If you were to reduce the OP using unallocated space, the previously marked bad blocks would have to get re-remapped to blocks within the partition. Those bad blocks are still marked as bad, so remapping has to be elsewhere. Might you lose information in the blocks in the dynamic OP space when you reduce it? That I don't know. 
Partition managers don't know about how the content of unallocated space is used. The SSD makers are so terse as to be sometimes unusably vague in their responses. Samsung said "Over Provisioning can only be performed on the last accessible partition." What does that mean? Unallocated space must be located after the last partition? Well, although by accident, that's how I (and Samsung Magician) have done it. The SSD shows up with 1 partition consuming all usuable capacity, and I or Samsung Magician ended up shrinking the partition to make room for unallocated space at the end. However, SSD makers seem to be alchemists or witches: once they decide on their magic brew of ingredients, they keep it a secret. I have increased OP using Samsung Magician, and decreased it, too. All that it did was change the size of the unallocated space by shrinking or enlarging the last partition, so the unallocated space change was after the last partition. When shrinking the unallocated space, it was not apparent in Samsung Magician that any bad cell blocks that got remapped to unallocated space either got re-remapped into the static OP space which would reduce endurance. Since the firmware had marked a block as bad, it still gets remapped into static or dynamic OP. If unallocated space were reduced to zero (no dynamic OP), static OP gets used for the remappings. However, I haven't found anything that discusses for remappings into dynamic OP when the unallocated space is shrunk. Samsung Magician's OP adjustment looks to be nothing more than a limited partition manager to shrink or enlarge the last partition, which is the same you could do using a partition manager. I suspect any remap targets in the dynamic OP do not get written into the static OP, so you could end up with data corruption. A bad block got mapped into dynamic OP, you reduced the size of dynamic OP which means some of those mappings there are gone, and they are not written into static OP. 
Maybe Samsung's Magician is smart enough to remap the dynamic OP remaps into static OP, but I don't see that happening yet it could keep that invisible to the user. Only if I had a huge number of remappings stored in dynamic OP and then shrunk the unallocated space might I see the extra time spent to copy those remappings into static OP when compared to using a partition tool just just enlarge the last partition. Since the information doesn't seem available, I err on the side of caution: I only reduce dynamic OP immediately after enlarging it should I decide the extra OP consumed a bit more than I want to lose in capacity in the last partition. Once I set dynamic OP and have used the computer for a while, I don't reduce dynamic OP. I have yet to find out what happens to the remappings in dynamic OP when it is reduced. If I later need more space in the partition, I get a bigger drive, clone to it, and decide on dynamic OP at that time. With a bigger drive, I probably will reduce the percentage of dynamic OP since it would be a huge waste of space. For a drive clone, the static or dynamic remappings from the old drive aren't copied to the new drive. The new drive will have its own independent remappings, and the reads during the clone are going to copy from the remaps from the old drive into the the new drive's partition(s). Old remappings vaporize during the copy to a different drive. Unless reducing the dynamic OP size (unallocated space) is done very early after creating it to reduce the chance of new remappings happening between defining the unallocated space and then reducing its size, I would be leery of reducing unallocated space on an SSD after lots of use for a long time. Cells will go bad in SSDs, and why remapping is needed. I don't see any tools that move remappings from dynamic OP when it gets reduced, and the sectors where were the remapping get moved to static OP. 
You can decide not to use dynamic OP at all, and hope the factory-set static OP works okay for you for however long you own the SSD. You can decide to sacrifice some capacity to define dynamic OP, but I would recommend only creating it, perhaps later enlarging it, but not to shrink it. I just can't find info on what happens to the remaps in dynamic OP when it is shrunk. Overprovisioning, whether fixed (static, set by factory) or dynamic (unallocated space within the usuable space after static OP) always reduces capacity of the drive. The reward is reducing write amplication, increased performance (but not better than factory-time performance), and endurance. You trade some of one for the other. It's like insurance: the more you buy, the less money you have now, but you hope you won't be spending a lot more later. If - assuming you _can_ - you reduce the space for overprovisioning to zero (obviously unwise), will the SSD "brick" either immediately, or very shortly afterwards (i. e. as soon as another cell fails)? Since the cell block is still marked as bad, it still needs to get remapped. With no dynamic OP, static OP gets used. If you create dynamic OP (unallocated space) where some remaps could get stored, what happens to the remaps there when you shrink the dynamic OP? Sure, the bad blocks are still marked bad, so future writes will remap the bad block into static OP, but happened to the data in the remaps in dynamic OP when it went away? Don't know. I don't see any SSD tool or partition manager will write the remaps from dynamic OP into static OP before reducing dynamic OP. After defining dynamic OP, reducing it could cause data loss. If you just must reduce dynamic OP because you need that unallocated space to get allocated into a partition, your real need is a bigger drive. When you clone (copy) the old SSD to a new SSD, none of the remaps in the old SSD carry to the new SSD. 
When you get the new SSD, you could change the size (percentage) of unallocated space to change the size of dynamic OP, but I would do that immediately after the clone (or restore from backup image). I'd want to reduce the unallocated space on the new bigger SSD as soon as possible, and might even use a bootable partition manager to do that before the OS loads the first time. I cannot find what happens to the remaps in dynamic OP when it gets reduced. If, once an SSD _has_ "bricked" [and is one of the ones that goes to read-only rather than truly bricking], can you - obviously in a dock on a different machine - change (increase) its overprovisioning allowance and bring it back to life, at least temporarily? Never tested that. Usually I replace drives before they run out of free space (within a partition) with bigger drives, or I figure out how to move data off the old drive to make for more free space. If I had an SSD that catastrophically failed into read-only mode, I'd get a new (and probably bigger) SSD and clone from old to new, then discard the old. Besides my desire to up capacity with a new drive when an old drive gets over around 80% full, and if I don't want to move files off of it to get back a huge chunk to become free space, I know SSDs are self destructive, so I expect them to fail unless I replace them beforehand. From my readings, and although they only give a 1-year warranty, most SSD makers seem to plan on a MTBF of 10 years, but that's under a write volume "typical" of consumer use (they have some spec that simulates typical write volume, but I've not seen those docs). Under business or server use, MTBF is expected to be much lower. I doubt that I would keep any SSD for more than 5 years in my personal computers. I up the dynamic OP to add insurance, because I size drives far beyond expected usage. Doubling is usually my minimum upsize scale. 
I wouldn't plan on getting my SSD anywhere near its maximum write cycle count that would read-only brick it. SMART does not report the number of write cycles, but Samsung's Magician tool does. It must request info from firmware that is not part of the SMART table. My current 1 TB NVMe m.2 SSD is about 25% full after a year's use of my latest build. Consumption won't change as much in the future (i.e., it pretty much flattened after a few months), but if it gets to 80% would then be when I consider getting another matching NVMe m.2 SSD, or replace the old 1 TB one with 2TB, or larger, and cloning would erase all those old remaps in the old drive (the new drive won't have those). Based on my past experience and usage, I expect my current build to last another 7 years until I the itch gets too unbearable to do a new build. 20% got used for dynamic OP just as insurance to get an 8-year lifespan, but I doubt I will ever get close to bricking the SSD. I could probably just use the 10% minimum for static OP, but I'm willing to spend some capacity as insurance. More than for endurance, I added dynamic OP to keep up the performance of the SSD. After a year, or more, of use, lots of users have reported their SSDs don't perform like when new. The NVMe m.2 SSD is a 5 times faster (sequential, and more than 4 times for random) for both reads and writes than my old SATA SSD drive, and I don't want to lose that joy of speed that I felt at the start. I might be getting older and slower, but not something I want for my computer hardware as it ages. |
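The lifespan reasoning above can be put into numbers with a back-of-the-envelope endurance estimate (the TBW rating and daily write volume here are made-up illustrative figures, not from any datasheet):

```python
# Back-of-the-envelope SSD endurance estimate (illustrative numbers only;
# a real drive's rated TBW comes from its datasheet).

def years_until_tbw(rated_tbw_tb: float, gb_written_per_day: float) -> float:
    """Years until the drive's rated total-bytes-written is consumed
    at a constant daily write volume."""
    tb_per_year = gb_written_per_day * 365 / 1000
    return rated_tbw_tb / tb_per_year

# e.g. a hypothetical 1 TB drive rated for 600 TBW, writing 40 GB/day:
print(round(years_until_tbw(600, 40), 1))  # 41.1 years at this rate
```

Numbers like these are why a typical consumer workload is unlikely to wear out the flash before the drive is retired for other reasons; heavy server write volumes change the picture dramatically.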
#10
Why is it not letting me extend the partition?
Yousuf Khan wrote:
> So one of my oldest SSD's just finally had a bad misfire. [...] However going into disk management it doesn't allow me to fill up that entire drive. Any idea what's going on here?

It's GPT, and you need to find a utility that does a better job of showing the partitions. The Microsoft Reserved partition has no recognizable file system inside, and the information I can find suggests it is used as space when something needs to be adjusted - a tiny supply of "slack". But it might also function as a "blocker" when Disk Management is at work. And not every utility lists it properly: some utilities try to "hide" things like this and only show data partitions.

Try Linux GDisk or GParted, and see if you can spot the blocker there. The disktype utility might work too, but the only edition available there is the Cygwin one.

disktype.exe /dev/sda

--- /dev/sda
Block device, size 2.729 TiB (3000592982016 bytes)
DOS/MBR partition map
Partition 1: 2.000 TiB (2199023255040 bytes, 4294967295 sectors from 1)
  Type 0xEE (EFI GPT protective)
GPT partition map, 128 entries
  Disk size 2.729 TiB (3000592982016 bytes, 5860533168 sectors)
  Disk GUID EE053214-E191-B343-A670-D3A712F353DB
Partition 1: 512 MiB (536870912 bytes, 1048576 sectors from 2048)
  Type EFI System (FAT) (GUID 28732AC1-1FF8-D211-BA4B-00A0C93EC93B)
  Partition Name "EFI System Partition"
  Partition GUID 0CF3D241-6DA1-764C-AE0F-559E55314B8C
  FAT32 file system (hints score 5 of 5)
  Volume size 511.0 MiB (535805952 bytes, 130812 clusters of 4 KiB)
Partition 2: 20 GiB (21474836480 bytes, 41943040 sectors from 1050624)
  Type Unknown (GUID AF3DC60F-8384-7247-8E79-3D69D8477DE4)
  Partition Name "MINT193"
  Partition GUID 0647492B-0C78-DC4E-914C-E210AB6FF5A5
  Ext3 file system
  Volume name "MINT193"
  UUID E96B501E-23B5-4F80-A41C-CEE6A5E1D59C (DCE, v4)
  Last mounted at "/media/bullwinkle/MINT193"
  Volume size 20 GiB (21474836480 bytes, 5242880 blocks of 4 KiB)
Partition 3: 16 MiB (16777216 bytes, 32768 sectors from 123930624)   === not visible in diskmgmt.msc
  Type MS Reserved (GUID 16E3C9E3-5C0B-B84D-817D-F92DF00215AE)
  Partition Name "Microsoft reserved partition"
  Partition GUID 0C569E59-E917-AC40-B336-E7B2527D77AD
  Blank disk/medium
Partition 4: 300.4 GiB (322502360576 bytes, 629887423 sectors from 123963392)
  Type Basic Data (GUID A2A0D0EB-E5B9-3344-87C0-68B6B72699C7)
  Partition Name "Basic data partition"   === actually "WIN10"
  Partition GUID 65A1A4E6-4F11-7944-874A-B3A515F131DE
  NTFS file system
  Volume size 300.4 GiB (322502360064 bytes, 629887422 sectors)
Partition 5: 514 MiB (538968064 bytes, 1052672 sectors from 753854464)   === recovery NTFS file system partition
  Type Unknown (GUID A4BB94DE-D106-404D-A16A-BFD50179D6AC)
  Partition Name ""
  Partition GUID 99242951-459E-1144-BF88-61517A280CCA
  NTFS file system
  Volume size 514.0 MiB (538967552 bytes, 1052671 sectors)

HTH,
Paul
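The sector offsets in that listing can be sanity-checked: the Microsoft Reserved partition ends exactly where the WIN10 partition begins, which is consistent with it acting as a "blocker" immediately in front of the Windows partition. A quick check of the arithmetic (the start/length figures below are copied straight from the disktype output above):

```python
# Sanity-check the sector arithmetic from the disktype listing:
# each (name, start_sector, sector_count) comes straight from that output.
partitions = [
    ("EFI System",  2048,      1048576),
    ("MINT193",     1050624,   41943040),
    ("MS Reserved", 123930624, 32768),      # the hidden "blocker"
    ("WIN10",       123963392, 629887423),
]

def gaps(parts):
    """Sectors of slack between each partition's end and the next one's start."""
    return [s2 - (s1 + n1)
            for (_, s1, n1), (_, s2, _) in zip(parts, parts[1:])]

print(gaps(partitions))  # [0, 80936960, 0]
```

The two zeros show that EFI→MINT193 and MS Reserved→WIN10 are back-to-back, while the ~38.6 GiB gap between MINT193 and the Reserved partition is space not accounted for by the listed partitions.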