  #9  
Old May 18th 20, 11:11 PM posted to comp.sys.ibm.pc.hardware.storage
Mark Perkins
Subject: Do you think the days of the hard drive is finally over?

On Sun, 17 May 2020 22:06:35 -0400, Yousuf Khan wrote:

> On 5/16/2020 5:02 PM, Mark Perkins wrote:
>> On Sat, 16 May 2020 07:50:32 -0400, Yousuf Khan wrote:
>>> On 5/16/2020 5:31 AM, wrote:
>>>> 20 terrorbites would be an "archive" drive with shingles.
>>>> The sensible drives go up to 16 TB? Even that is going to take ages for a scandisk.


>> I haven't done a scandisk in quite a few years, and prior to that it was
>> another few years since the previous one. It's not something I worry about,
>> nor do I worry about how long it takes to fill a drive with data. My
>> primary concerns are how many SATA ports and drive bays I have on hand.
>> Those are the limiting factors.


> Well, nobody runs Scandisk more than once in several years. I'm sure
> Pedro meant that as an extreme example, but it's not an unreasonable
> thing to expect to do occasionally.
>
> I think even 16 TB is way too large, shingles or not. It would still
> take nearly 18 hours.
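As a sanity check, that 18-hour figure is roughly what a full sequential pass over 16 TB works out to, assuming an average of about 250 MB/s sustained throughput (an illustrative figure for a large HDD, not a measurement; real drives read faster on outer tracks and slower on inner ones):

```python
# Back-of-envelope check of the ~18-hour full-scan figure.
# The 250 MB/s average is an assumed, not measured, throughput.
capacity = 16e12        # 16 TB in decimal bytes, as drives are marketed
throughput = 250e6      # assumed average bytes/second
hours = capacity / throughput / 3600
print(f"{hours:.1f}")   # -> 17.8
```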


>> We all have different needs. My server has 16 SATA ports and 15 drive bays,
>> so the OS lives on an SSD that lies on the floor of the case. The data
>> drives are 4TB x5 and 2TB x10, for a raw capacity of 40TB, formatted to
>> 36.3TB. I use DriveBender to pool all of the drives into a single volume.
>> Windows is happy with that. Since there are no SATA ports or drive bays
>> available, upgrading for more storage means replacing one or more of the
>> current drives. External drives aren't a serious long-term option.
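Incidentally, the raw-vs-formatted gap described there is mostly a units mismatch rather than lost space: drives are marketed in decimal terabytes (10^12 bytes), while Windows reports capacity in binary units (2^40 bytes) but still labels them "TB". Checking with the drive counts given above:

```python
# 4 TB x5 + 2 TB x10 = 40 TB of marketed (decimal) capacity.
raw_bytes = 5 * 4e12 + 10 * 2e12
# Windows reports capacity in 2^40-byte units but labels them "TB".
reported = raw_bytes / 2**40
print(f"{reported:.1f}")   # -> 36.4
```

Filesystem overhead accounts for the small remaining difference down to the 36.3TB reported.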


> But the point is, neither are internal ones these days, it seems.


I don't follow what you're saying. To me, internal drives are the primary
data storage option.

> Even assuming these are mainly used in enterprise settings, they
> would likely be part of a RAID array. Now if the RAID array is new and
> all of these drives were put in new as part of the initial setup,

[snip]

No, I'm not assuming that (Enterprise and RAID) at all. I'm assuming use in
the home market, and specifically the subset of the home market where
people want to keep large amounts of data accessible. RAID is relatively
rare in that setting, isn't it? I don't know anyone who uses it, but that
doesn't mean much.

> Now, looking up what Drive Bender is, it seems to be a virtual volume
> concatenator. So it's not really RAID; individual drives die and only
> the data on them is lost, unless it's backed up. So even in that
> case, if one of these massive drives is part of your DB setup, replacing
> that drive will be a major pain in the butt even while restoring from


Restoring just the missing files is a major pain? Why does that have to be
the case? FWIW, I haven't found that to be true. It's much faster than
doing a full restore, for example.

> backups. It really raises the question of how long you are willing to wait
> for a drive to get repopulated, knowing that while this is happening
> it's also going to be maxing out the rest of your system for however many
> hours the restore operation takes.


If there's something you need right away, you prioritize that. Otherwise,
let the restore run and do its thing. It's not like disk access brings a
modern system to its knees, right? Performance-wise, you wouldn't even know
it's happening. So in general, there's no significant waiting, and remember
that failed drives are not an every day/week/month/year occurrence. Most
drives last longer than I'm willing to use them, getting replaced when the
data has outgrown their capacity.

> My point is that I think people will only be willing to wait a few
> hours, perhaps 4 or 5 hours at most, before they say it's not worth it,
> in a home environment.


I don't follow that at all.

> In an enterprise environment, that tolerance may
> get extended out to 8 or 10 hours. So at some point, all of this
> capacity is useless, because it's impractical to manage with the current
> drive and interface speeds.
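The tolerance argument can be made concrete with rough fill times at assumed sustained speeds (~250 MB/s for an HDD, ~500 MB/s for a SATA SSD; both are illustrative round numbers, not measurements):

```python
# Hours to sequentially fill a drive, under assumed sustained speeds.
SPEEDS = {"HDD": 250e6, "SATA SSD": 500e6}   # bytes/second, assumed
for tb in (4, 16, 20):
    for name, speed in SPEEDS.items():
        hours = tb * 1e12 / speed / 3600
        print(f"{tb:>2} TB {name:8s} ~{hours:4.1f} h")
```

By that arithmetic, a 4-5 hour tolerance corresponds to roughly a 4 TB HDD, while a SATA SSD halves the wait for a given capacity rather than eliminating it.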


??? How often are you clearing and refilling an entire drive?

> If SSD's were cheaper per byte, then even SSD's running on a SATA
> interface would still be viable at the same capacities we see HDD's at
> right now. So 16 or 20 TB SSD's would be usable devices, but 16 or 20 TB
> HDD's aren't.


That sounds like nonsense. If 100TB HDD's were available at a reasonable
price and reasonably reliable, many people would find them to be perfectly
usable. I'd love to replace all of my smaller drives with fewer larger
drives and in fact that's exactly what I've been doing since the
mid-1980's.