A computer components & hardware forum. HardwareBanter

128 bit addressing for harddisk/block addressing.



 
 
  #1
February 10th 20, 12:21 AM, posted to alt.comp.hardware.pc-homebuilt
[email protected]

(Prepare yourself; I don't know how to achieve this yet, except by demanding it =D)

Title: 128 bit addressing for harddisk/block addressing.

I will share with you what I have learned so far from my excursion into the MS-DOS era, "retro-gaming", and such:

Addressing limitations are super annoying in all kinds of ways:

1. 2 GB harddisk/partition size limitations.
2. 16-bit memory limitations.
3. 64 MB RAM limitations (though I have not yet run into this one for real).

The most annoying one was VMware Workstation 8 not supporting USB mass storage devices larger than 2 terabytes.
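For what it's worth, that 2 TB ceiling usually comes from a 32-bit sector count combined with 512-byte sectors (as in MBR partition tables and older USB storage stacks) — a quick sanity check:

```python
# A 32-bit LBA with 512-byte sectors tops out at exactly 2 TiB:
SECTOR_SIZE = 512
max_bytes = 2**32 * SECTOR_SIZE

print(max_bytes)                  # 2199023255552 bytes
print(max_bytes == 2 * 2**40)     # True: exactly 2 TiB
```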

My conclusion for now is we must not repeat the mistakes of the past and prepare for the future.

Forty years from now, harddisks bigger than 16 exabytes might exist for consumers, and then, when trying to virtualize 64-bit Windows, it will become annoying again if it can't be done because the software does not support 128-bit block addressing.

For this reason, my recommendation to VMware is to create a 128-bit API, possibly a plugin system, so that in the future a driver/plugin can be written to make VMware work with 128-bit block addressing, letting it read/write/find/store more data on the harddisks of the future.

To test this API, random data can be generated for specific offsets. The random seed can be set to the offset/block number, so that the same data can be regenerated later for verification purposes, for example.
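A minimal sketch of what such a seeded self-verifying test could look like (the 4096-byte block size and the function names here are just illustrative, and Python's `random` stands in for whatever generator the real API would use):

```python
import random

BLOCK_SIZE = 4096  # hypothetical block size for this sketch

def block_data(block_number: int) -> bytes:
    """Deterministic pseudo-random contents for one block,
    using the block number itself as the seed."""
    rng = random.Random(block_number)
    return bytes(rng.randrange(256) for _ in range(BLOCK_SIZE))

def verify(block_number: int, data: bytes) -> bool:
    """Regenerate from the same seed and compare."""
    return data == block_data(block_number)

# Works even for offsets far beyond the 64-bit range:
blk = 2**100 + 7
assert verify(blk, block_data(blk))
```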

This plugin should be built into VMware sooner rather than later.

Another one of my concerns is Microsoft Windows' NTFS. As far as I know, it is limited to 64 bits.

My drives are already 4 TB; if I add them up, it could be as much as 10 TB or even 16 TB.

That means at least 42 bits, or 44 bits or so.

So NTFS is about 20 bits away from running into limitations.

Harddisks may grow at a different rate than Moore's Law.

However, there is a new kid in town, the "SSD". SSDs are built from transistors, and their growth rate may be much higher than that of harddisks in the coming years.

I don't know the exact data, but I will use both, based on my own experience:

1986: C64, 64 KB of data on floppy
1992: MS-DOS/PC 80486, 2.0 MB on floppy; Seagate harddisk, capacity 120 MB
1996: Pentium 166, Quantum Fireball 3.5 TM, capacity 2 GB harddisk
1999: Pentium III 450 MHz, harddisk 16 GB (small, for speed) and later 120 GB (2004)
2006: DreamPC AMD X2 3800+, harddisks 512 GB (2006), 2 TB (2011), 2 TB (2019), 2 TB (2020)

Now: 2020, 4 TB on a USB mass storage device.

So there has been significant stagnation on the harddisk front over the last 14 years.

However, there is also talk of a 100 TB SSD coming from Samsung, around 2018 or so.

Let me put this roughly in an ASCII chart:

100 TB
10 TB AMD X2 AMD X2
1 TB AMD X2
100 GB PIII
10 GB PIII
1 GB P166
100 MB 80486
10 MB
1 MB FLOPPY
100 KB C64
1985 - 1990 - 1995 - 2000 - 2005 - 2010 - 2015 - 2020

I think this chart shows that harddisk capacity growth is slowing down.

While flash/SD/SSD capacity is not shown in this chart, it is probably ramping up.

One of the reasons for the slowdown in harddisk capacity might also be BIOS limitations of old systems. I purchased a 2 TB harddisk in 2020 just so it would fit into the DreamPC/AMD X2 3800+, in case I need that capacity to re-install Windows 7 x64 edition with the platform update. So far I have not done that, but I did install it on a USB stick to see what it would be like, and it's quite pleasing. GeForce Now also works on it, and it is quite weird to game on the cloud.

Anyway, it's hard to say what the exact growth rate is; there are probably two:

Harddisk growth rate: I think the chart shows quite nicely that it's roughly 10x every 5 years.

SSD growth rate: for now my guess is that this will follow Moore's Law, roughly doubling capacity every 1.5 years.

So let's calculate when NTFS will run out of bits for consumers/home users like me!

It's either the SSD rate, which would mean 20 bits x 1.5 years = 30 years,

or

the harddisk rate, where the 20 bits left mean a factor of roughly 1,000,000 (2^20 is about 10^6).

So log10(1,000,000) = 6 decades of capacity growth, at 5 years each = 30 years.

So either way, in roughly 30 years NTFS will not be sufficient anymore.
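Both estimates can be double-checked in a couple of lines (assuming the ~20 bits of headroom from above):

```python
import math

bits_headroom = 20                # ~64 bits minus the ~44 bits consumers use today

# SSD rate: one extra bit (a capacity doubling) every ~1.5 years
ssd_years = bits_headroom * 1.5   # 30.0

# Harddisk rate: ~10x every 5 years; 2**20 is about a factor of a million
hdd_years = math.log10(2**bits_headroom) * 5   # ~30.1

print(ssd_years, round(hdd_years, 1))
```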

CPU/Software/Compiler/Assembler/Linker/Debugger/Drivers/Operating System/GUI/File System/BIOS/UEFI/Protocols/SATA?

A lot of the stack will need to change to support 128-bit harddisks if capacity growth continues.

Also, hundreds of years from now, space will become very important for IT, to store data in space =D

And then, thousands of years from now, we will be placing computers in space that are not dragged along by the sun, and we will communicate with them via quantum communications =D

So some investment into space elevators, spaceships and space technology will also become important for the IT sector =D

I feel slightly uneasy that Windows 10 does not yet have a 128-bit NTFS file system.

Bye,
Skybuck.
  #2
February 10th 20, 02:03 AM, posted to alt.comp.hardware.pc-homebuilt
Paul

Skybuck wrote:
[...]

40 Years from now harddisks bigger than 16 exabytes might exist for consumers


We can stop there.

There is a relationship between capacity and speed.

The people who use the drives, even at their current size (16TB),
find the speed of 250MB/sec too slow.

Seagate plans to fix this, by giving some hard drives two arms,
doubling the I/O rate.

But such a scheme is not scalable.

What kind of speed would a 16 exabyte drive need ?

Even if based on flash, the electronics in the CPU are
saturating too. Lots of stuff, can't go that much faster.

We will run out of the speed needed, to efficiently use
your oversized storage device.

We can't wait four months for the "erase" command on
your device to complete. We can't wait four months
for one (defective) 16 exabyte disk to be transferred
to a brand new 16 exabyte device. The defective device
will likely die before the transfer is completed.

My newest drive must have taken around 12 hours
to erase. Pathetic. And not really the fault of the
company making the drive. Moore's law never said that
every parameter in physics would "increase forever".

And if we made a hard drive that cost $1,000,000, how
many would you be buying ? Can I put you down for an
advance order ? You'll notice that the price of the
new drives is roughly proportional to capacity.
They don't give these drives away for free. You don't
get a 16TB drive for the price of a 4TB drive, the
day the 16TB drive is released.

Paul




  #3
February 10th 20, 04:41 AM, posted to alt.comp.hardware.pc-homebuilt
[email protected]

On Monday, February 10, 2020 at 2:03:16 AM UTC+1, Paul wrote:
[...]

We can stop there.


Nope.

There is a relationship between capacity and speed.


Yup.

The people who use the drives, even at their current size (16TB),
find the speed of 250MB/sec is too slow.


Somewhat yup.

Seagate plans to fix this, by giving some hard drives two arms,
doubling the I/O rate.


Interesting.

But such a scheme is not scalable.


Hmmm, maybe have heads across the entire diameter.

What kind of speed would a 16 exabyte drive need ?


Somewhat irrelevant question, because the number of cores will then be:

4,294,967,296

Multiply this by 2.0 GHz at the very least:

roughly 7.45 exbibytes/sec.

Plenty to read that drive in about 2 seconds, if enough parallelization is added.

However, this is not necessarily about harddisk technology; most likely SSD technology will overtake it. Then again, magnetics are also quite nice.
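Checking that arithmetic (the assumption here is that each of the 2^32 cores streams about 2 GB/sec, i.e. one byte per cycle at 2 GHz):

```python
cores = 2**32                     # 4,294,967,296 cores
bytes_per_core = 2.0e9            # assumed: ~2 GB/sec per core (1 byte/cycle at 2 GHz)
total = cores * bytes_per_core    # aggregate bytes per second

EiB = 2**60                       # one exbibyte
print(total / EiB)                # ~7.45 EiB/sec

drive = 16 * EiB                  # a 16 EiB (2**64 byte) drive
print(drive / total)              # ~2.15 seconds to stream the whole drive once
```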

Even if based on flash, the electronics in the CPU are
saturating too. Lots of stuff, can't go that much faster.


It will go smaller for a while longer. But indeed, it might reach limitations before the 64-bit frontier is reached.

We will run out of the speed needed, to efficiently use
your oversized storage device.


Not if Moore's law continues.

We can't wait four months for the "erase" command on
your device to complete. We can't wait four months
for one (defective) 16 exabyte disk to be transferred
to a brand new 16 exabyte device. The defective device
will likely die before the transfer is completed.

My newest drive, it must have taken around 12 hours
to erase it. Pathetic. And not really the fault of the
company making the drive. Moores law never said that
every parameter in physics would "increase forever".

And if we made a hard drive that cost $1,000,000, how
many will you be buying ? Can I put you down for an
advanced order ? You'll notice the price of the
new drives, there's a proportionality to capacity.
They don't give these drives away for free. You don't
get a 16TB drive for the price of a 4TB drive, the
day that the 16TB drive is released.


Time will tell, but I know Moore's law will continue one way or the other.

If necessary humanity will go into space and build big harddisks there =D

Bye,
Skybuck =D
  #4
February 12th 20, 09:30 AM, posted to alt.comp.hardware.pc-homebuilt
Bill

Paul wrote:
[...]
There is a relationship between capacity and speed.

The people who use the drives, even at their current size (16TB),
find the speed of 250MB/sec is too slow.

Seagate plans to fix this, by giving some hard drives two arms,
doubling the I/O rate.

But such a scheme is not scalable.


I thought I would mention RAID (in the unlikely event that it might
possibly be informative to you).

Bill
  #5
February 12th 20, 05:51 PM, posted to alt.comp.hardware.pc-homebuilt
Paul

Bill wrote:
[...]

I thought I would mention RAID (in the unlikely even that it might
possibly be informative to you).

Bill


I think you're saying I have to get out my calculator now ?

Let's work with a 16TB drive running at 250MB/sec, as the
benchmark of "we won't accept bigger drives unless they go faster".

16TB = 16,000,000,000,000 bytes; log10 of that is 13.204, and
13.204 / 0.3010 (log10 of 2) gives ~43.9 bits, so call it 2^44.

In seven computer slots, place a Marvell 8-port SATA controller,
and on each port a 1:5 SATA multiplexer. 7*8*5 = 280 drives,
or ~2^8. Or, I could take a Biostar 21-slot coin miner
and put a 24-port Areca RAID controller in each slot: 21*24 = 504
drives, or ~2^9. 2^44 * 2^9 = 2^53, and I'm still a factor of 2^11 away
from 2^64. This assumes that the 16TB drive running at 250MB/sec
preserves the desired ratio that the datacenter people are
complaining about, in terms of increasing drive sizes with
constant I/O capability.

And there are some plumbing errors in the straw man in the previous
paragraph, so those don't actually meet the requirements. SATA
multiplexers can't run at 250MB/sec on all ports at the same time.
An Areca RAID controller, fed from a PCI Express Rev.2 x1 port, would
starve, because the port can't carry all the bandwidth an Areca
has to offer. You can't raise the port speed too much, because
the RAID cards are on "cable extenders" so they'll all fit into
a packaging scheme. I don't think coaxial-style cabling to extend
PCI Express would work at PCI Express Rev.4. Too much attenuation.

As for computers themselves: since I/O dumps to memory, you can
only use a fraction of the memory bandwidth. (Memory bandwidth
could be a limit.) Your RAID, at around 32 drives of 250MB/sec,
would be 8GB/sec, and that's probably a comfortable limit for
leaving enough remaining bandwidth for the computer to do anything
with the data. Even though my four-channel, eight-slot DDR3 computer
has high theoretical bandwidth numbers, the best measurement I can
get from it is around 7.5GB/sec (TMPFS in Linux). That's where the
8GB/sec estimate comes from.

I'm still about 2000 times short of exhausting the 64 bit space,
plus the small factors in my plumbing errors.
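For anyone wanting to reproduce the back-of-envelope numbers above, a sketch in Python (the slot and port counts are the same straw men as in the text):

```python
import math

# A 16 TB drive, in bits of byte-addressing:
drive_bits = math.log2(16e12)          # ~43.9, call it 2**44

# Straw-man fan-out schemes from the text:
marvell = 7 * 8 * 5   # 7 slots * 8-port SATA card * 1:5 multiplexer = 280, ~2**8
areca   = 21 * 24     # 21 mining slots * 24-port RAID card          = 504, ~2**9

# Best case: ~2**44 bytes/drive * ~2**9 drives = ~2**53 bytes total,
# still a factor of 2**11 away from the 64-bit space.
total_bits = 44 + round(math.log2(areca))
shortfall = 2**(64 - total_bits)
print(round(drive_bits, 1), total_bits, shortfall)   # 43.9 53 2048
```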

Paul
  #6
February 15th 20, 08:57 AM, posted to alt.comp.hardware.pc-homebuilt
Bill

Paul wrote:

I'm still about 2000 times short of exhausting the 64 bit space,
plus the small factors in my plumbing errors.

Paul



Okay, Thanks. It was just a thought! : )

Bill
  #7
February 17th 20, 02:55 AM, posted to alt.comp.hardware.pc-homebuilt
[email protected]

On Wednesday, February 12, 2020 at 5:51:22 PM UTC+1, Paul wrote:
[...]
I'm still about 2000 times short of exhausting the 64 bit space,
plus the small factors in my plumbing errors.

Paul


Ok, you look at it from a hardware perspective.

Let's do that and watch the CERN guys go at it =D:

https://home.cern/science/computing/storage

Apparently they use magnetic tapes for long-term storage.

Roughly 100 petabytes of data each year.

So that's 100,000 terabytes of data each year.

In bits (byte addressing):

10 bits = 1 KB
20 bits = 1 MB
30 bits = 1 GB
40 bits = 1 TB
50 bits = 1 PB

A factor of 100 is about 6.6 bits more, so they are at roughly 57 bits already.

So they are possibly only about 7 bits away from a problem! LOL.
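Checking that estimate (100 PB per year, byte-addressed):

```python
import math

per_year = 100e15                 # ~100 PB of new data per year
bits = math.log2(per_year)        # bits needed to byte-address one year's archive

print(round(bits, 1))             # 56.5
print(64 - math.ceil(bits))       # 7 bits of headroom before the 2**64-byte mark
```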

Now, when they do run into a problem, will Microsoft have a solution for them? And attract their business?! =D How about Linux? LOL.

Bye for now,
Skybuck ! =D
  #8
February 20th 20, 11:29 AM, posted to alt.comp.hardware.pc-homebuilt
[email protected]

Unfortunately, even 128 bits will eventually be surpassed once 3D chips are achieved and they become much bigger than just 40x40x40 centimeters.

But implementing dynamic indexing and universal codes might be a little slow and hard; maybe I will try it in the future, but then it might be too late =D

I just had to talk some sense into the stupid programmers at comp.lang.c and this is what I wrote, enjoy lol:

This will be a very simple calculation, assuming 3D chips will be constructed next.

Let's take something nice and "big", like 40 centimeters or so.

Now 40 centimeters is 400 millimeters, which is 400,000 micrometers, which is 400,000,000 nanometers.

Now let's calculate the volume that this describes, in cubic nanometers:

400,000,000 * 400,000,000 * 400,000,000 =

64,000,000,000,000,000,000,000,000

That is about 2^86, so roughly 86 address bits, already well past 64 bits.
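As a sanity check on the unit conversion and the resulting bit count:

```python
import math

side_nm = 400_000_000        # 40 cm expressed in nanometers (4 * 10**8 nm)
cells = side_nm ** 3         # number of 1 nm^3 cells in a 40 cm cube

print(cells == 64 * 10**24)  # True: 6.4 * 10**25 cells
print(math.log2(cells))      # ~85.7, so about 86 address bits
```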

1x1x1 nanometer transistors like this will become possible.

So unfortunately you are all a bunch of ****ing ******s with limited minds lol.

And we will all suffer stupid limitations once again very soon ! LOL.

You could have known this by reading about the brain and biology... the brain contains lots and lots of neurons, hard to simulate and such.

Bye,
Skybuck.
 





