Long term archival storage



 
 
  #11
March 30th 05, 11:27 PM
Faeandar

On Wed, 30 Mar 2005 18:12:54 -0000, wrote:

Imagine that today (2004) you would need to read 20-year old data.
Say it is the content of a hierarchical database (not a relational
database). The source code of the database still exists, but it is
written in IBM 360 assembly, and only runs under OS/VSE, being run 20
years ago on a 3081 under VM. The last guy who maintained it died of
cancer 10 years back; his widow threw out his files.

Or the data was written 20 years ago with a CP/M machine, in binary
format using Borland dBase. Say for grins the CP/M program was doing
data acquisition on custom-built hardware (this was very common back
then), and requires special hardware interfaces to external sensors
and actuators to run.

In the former case, you have to deal with a huge problem: The data is
probably not in 512-byte blocks, but is written in IBM CKD
(count-key-data) format, on special disks (probably 3350 or 3380); a
sensible database application on the 370 would use CKD search
instructions for performance. Fortunately, IBM will today still sell
you a mainframe that is instruction-set compatible with the 360, and
disk arrays that can still execute CKD searches. And you can still
get an OS that somewhat resembles OS/VSE and VM. So for the next few
years, a few million $ and several months of hard work would recover
the information.
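For a flavour of what such a converter has to deal with, here is a minimal
sketch of walking a hypothetical flat CKD dump. The 8-byte count area
(cylinder, head, record number, key length, data length) is real CKD, but
everything else about the layout - no home address, no gaps, no check
bytes - is a simplification, not the actual 3350/3380 track format:

    import struct

    def walk_ckd_records(raw: bytes):
        """Yield (record_no, key, data) from a hypothetical flat CKD dump.

        Assumes each record is an 8-byte count area -- cylinder (2 bytes),
        head (2), record number (1), key length (1), data length (2) --
        followed immediately by its key and data. A dump of a real
        3350/3380 track also carries home addresses, gaps and check bytes
        that a real converter would have to skip."""
        pos = 0
        while pos + 8 <= len(raw):
            cyl, head, rec, klen, dlen = struct.unpack(">HHBBH", raw[pos:pos + 8])
            pos += 8
            if klen == 0 and dlen == 0:      # end-of-file record, by convention
                break
            key = raw[pos:pos + klen]
            data = raw[pos + klen:pos + klen + dlen]
            pos += klen + dlen
            yield rec, key, data

Even with the records in hand, you still have to know what the key and
data bytes mean - which is the 50000-lines-of-assembly problem mentioned
further down.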


Well, for the most part we're discussing open systems, so this would
not be an issue, I think. These data sets all follow a standard; the
biggest problem I see is the application needed to read them, as well
as something it can run on. Some apps that we packaged with the data
can only run on (get this) NT4 SP3. Nothing higher. We did not
package an OS with the data, so this could be a serious problem in
about 5 more years. Right now it can still be acquired with minimal
effort.


Or you could read 50000 lines of IBM assembly code to determine what
the exact data format really is, and write a converter. Enjoy.

The second case is even worse. Most likely, the old CP/M hardware
(even if you have managed to preserve it) will no longer boot, because
the EPROMs and boot floppies have decayed. You can no longer buy a
legal copy of CP/M or dBase. Running an illegal copy on a CP/M
emulator on a modern computer won't work, because the program requires
custom-built hardware sensors and actuators (I carefully constructed
the problem to maximally inconvenience you). Finding dBase manuals
today to decode what the dBase code was doing and understand the data
format will be very time consuming.

What I'm trying to say: The problem of preserving the bit pattern of
the raw data is the absolute least of the issues. It can be solved
trivially: Write the data to good-quality rewriteable CDs, make a few
copies of each, and every few years read all of them back, and write
them to newly current media. Done. The real problem is documenting
the semantics of the bits.
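That read-back-and-rewrite cycle is easy to script once the copies are
on a filesystem. A minimal sketch, assuming the archive is a directory
of ordinary files and the manifest filename is made up:

    import hashlib, json, pathlib

    def refresh_check(archive_dir):
        """Re-read every file, compare against the stored SHA-256 sums,
        and return the files that no longer match (i.e. the ones to
        restore from one of the other copies before re-writing)."""
        root = pathlib.Path(archive_dir)
        manifest = root / "checksums.json"          # hypothetical manifest name
        old = json.loads(manifest.read_text()) if manifest.exists() else {}
        current, damaged = {}, []
        for f in sorted(p for p in root.rglob("*") if p.is_file() and p != manifest):
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            rel = str(f.relative_to(root))
            current[rel] = digest
            if rel in old and old[rel] != digest:
                damaged.append(rel)
        manifest.write_text(json.dumps(current, indent=2, sort_keys=True))
        return damaged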


I'm not sure I understand what you're saying here; "the semantics of
the bits"? If you can elaborate a little I would appreciate it.
As to your trivial solution, good luck. If it were that trivial,
everyone would be doing it. The problem is people: no one wants to
recall 4000 CDs from cold storage so they can convert them to DVD or
Blu-ray or whatever. Of course this is a procedural problem, not a
technical one, but you have to plan for those as well.
And like you said, if someone can't preserve data for that long, how in
hell are they going to preserve the entire environment? Some people
do, but those are rare.

After thinking about the semantics a bit (pun intended) I finally got
it. Forget I asked for the elaboration. And as I said initially,
package the app with it. As long as you can get the app to run you
can access the data. I think we're saying the same thing, but my way
is a lot easier IMO. And cheaper in the long run too. Tried to get
support for a System/34 lately?
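Packaging the app with the data works best when the bundle also says
what the app needs. A sketch of a self-describing manifest written next
to the data (the field names and the example app path are made up):

    import datetime, hashlib, json, pathlib

    def write_bundle_manifest(bundle_dir):
        """Drop a MANIFEST.json next to the archived data recording the
        bundled reader application, the environment it needs, and a
        checksum for every file in the bundle."""
        root = pathlib.Path(bundle_dir)
        manifest = {
            "created": datetime.date.today().isoformat(),
            "application": "bin/reader.exe",            # hypothetical bundled app
            "requires": "Windows NT 4.0 SP3, x86",      # environment it runs on
            "format_doc": "docs/record-layout.txt",     # where the semantics live
            "files": {
                str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
                for p in sorted(root.rglob("*")) if p.is_file()
            },
        }
        (root / "MANIFEST.json").write_text(json.dumps(manifest, indent=2))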

The easy way out is to preserve the
complete computing environment used to read the data (including all
hardware, documentation, and wetware that is required to operate it).
This is hard, because hardware, paperware and wetware don't preserve
very well.


So is it easy or hard?


I'm not saying that preserving the raw bits should be abandoned. This
is absolutely the most important step; if you fail at that, all other
problems become irrelevant. But please don't believe that it solves
the problem.


Keep the app with the data. That should solve 90% of the potential
problems, minus things like OS or hardware platform.


The long-term preservation of data is a huge research topic. Please
read the abundant literature on it, to get a flavor of the difficulty.
The real issue you need to think about is this: How valuable is this
data really? How valuable will it be in 20 years? What is the
expected cost of recovering it in 20 years (above I budgeted a few
million $ for buying the hardware for reading CKD data)? How much do
you need to invest now to minimize the expected data recovery cost in
20 years? Is your CEO cool with investing this much money now, given
that in 20 years he will no longer be the CEO? Will it be economically
viable to
use the old data in 20 years? Wouldn't it be easier to print it all
on acid-free paper now, store it in a mine or an old railroad tunnel,
and scan the printout in 20 years?


Most data loses any real value after 10 years. Some much sooner. The
most common value for data 10+ years old is patent defense. And we've
had to pull data from 13 years ago for just that, so it's a reality,
not just a possibility. It cost a bundle, but the alternative would
have cost infinitely more.


As an example: I used to be an astrophysicist. I happen to have at home
the original data tape of the 8 neutrinos from the 1987 supernova that
hit the particle detector in the northern US. The tape is 6250 bpi
open reel, with a highly complex data format on it; fortunately, the
data format was described on paper in people's PhD theses, but finding
the old decoding software and getting it to run would be very hard
(anyone got a VAX with VMS 4?). Reading it and decoding the data would
take several months of work. At this point, the tape has only
emotional value.


This is why I say keep it on disk and migrate it with the rest of your
data. It's extremely hard to find a Kennedy reel drive as well, but we
have data on those tapes too. Now if it were on disk with the
application that wrote it, I might have a 40% chance of getting to it
rather than a 0.05% chance, hardware and OS availability being key at
that point.

~F
  #12
March 31st 05, 03:50 AM
boatgeek

Check out this article.
http://www.infoconomy.com/pages/stor...roup101451.adp

Basically: keep digital information for present-term access,
understanding that it will need a migration of the back-end storage
platform and a translation of the front-end software and data into
whatever is the current technical lingua franca.

For long-term storage and DR, use microfilm rated at 250 years.

There was a law enacted by the legislatures of the UK and US for
long-term census records.

So, there is your answer: everyone is right.

Want to get really cool? Ideas are floating around for laser lithographs
on ceramic disks at the microscopic level, storage which would last
thousands of years. I personally like that, but I'm a geek with an
interest in history.

  #13
March 31st 05, 04:08 AM
Al Dykes

In article . com, boatgeek wrote:

Want to get really cool? Ideas are floating around for laser lithographs
on ceramic disks at the microscopic level, storage which would last
thousands of years. I personally like that, but I'm a geek with an
interest in history.



microscopic laser pits on nickel sheets.

But seriously, folks....


Emulation and virtual machines are going to be the salvation for
recovering ancient applications and data. As it has been pointed out,
having a database dump does you no good unless you can run the
application. Now it can be done in emulation. Current computers are
sooo much faster than the machines we emulate that performance can be
decent even if you have to emulate the entire instruction set.
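The core of any instruction-set emulator is just a fetch-decode-execute
loop, and a modern CPU spins such a loop far faster than the original
machine ever cycled. A toy illustration - a made-up accumulator machine,
not any real architecture:

    def emulate(memory, pc=0):
        """Interpret a made-up accumulator machine stored as (opcode, operand)
        pairs: 0 = halt, 1 = load immediate, 2 = add from address,
        3 = store to address."""
        acc = 0
        while True:
            op, arg = memory[pc], memory[pc + 1]    # fetch
            pc += 2
            if op == 0:                             # HALT
                return acc
            elif op == 1:                           # LOAD #arg
                acc = arg
            elif op == 2:                           # ADD mem[arg]
                acc += memory[arg]
            elif op == 3:                           # STORE mem[arg]
                memory[arg] = acc

    # emulate([1, 40, 2, 6, 0, 0, 2]) loads 40, adds memory[6] (= 2), halts with 42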

There will always be some service shop that will read your old media
(if it's readable) and burn it onto a CD-R (or whatever the media is
years from now), and as long as you have the OS, application and data
you'll be good to go.


Take a look at this for a list of emulators for machines that haven't
existed outside museums for decades.

http://simh.trailing-edge.com/


I've owned (as a corporate manager) several of the machines on this
list and played with an emulator. A machine that cost close to a
million bucks in 1978 and sucked about 30 kW runs slower than its
emulator does on my PC, at least for a single user.

The Hercules PC emulator of the IBM 370 (http://www.conmicro.cx/hercules/)
was used big-time by corporations in 1999 for testing Y2K conversions.




--
a d y k e s @ p a n i x . c o m

Don't blame me. I voted for Gore.
  #14
April 4th 05, 12:42 AM
dgm


Paul Rubin wrote:
[...]

Sounds kind of complicated. Where's this data now, how is it stored,
and how fast are you adding to it and through what kind of system?

20 TB isn't really big storage these days. You could have a small tape
library online and move incoming raw data to tape immediately while
also making the online viewing copies on disk. HSM systems with
automatic migration and retrieval are probably overkill.
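A sketch of the ingest step being described - raw data goes straight to
the archive tier, a viewing copy stays online - with plain directories
standing in for the tape library and the online storage (a real setup
would transcode the viewing copy and talk to the HSM or tape API
instead):

    import pathlib, shutil

    def ingest(incoming_dir, archive_dir, online_dir):
        """For each incoming raw file, keep an untouched master in the
        archive tier and leave a copy in the online tier for viewing."""
        for raw in sorted(pathlib.Path(incoming_dir).iterdir()):
            if not raw.is_file():
                continue
            shutil.copy2(raw, pathlib.Path(online_dir) / raw.name)            # viewing copy
            shutil.move(str(raw), str(pathlib.Path(archive_dir) / raw.name))  # master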


It is kind of complicated. Currently we have 6 TB digitised and are
adding 0.1 TB/week.

Now, this is data that needs to be kept forever - the audio material is
world-heritage stuff. The driver for using HSM is twofold:

1) keeping multiple copies securely, including offsite
2) we know we have a 900 kg gorilla called video waiting in the wings
...

  #15
April 4th 05, 12:47 AM
dgm

The point about file formats is well made, but we've been through the
same argument in detail already. We're choosing file formats which are
publicly described and for which there are multiple (open-source)
clients.

The idea is to be able to ensure that we have the format description
and enough example code to be able to recreate viewers in the future.
That's why we're using TIFF and BWF as the archival masters. I don't
care about the MP3s as they are *derived* copies - we can as easily
use Ogg Vorbis, or whatever we're using in 2055, as long as we can
parse the original compression-free datastream.
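Since BWF is RIFF/WAVE with an extra 'bext' chunk, verifying that an
archival master really is an uncompressed PCM datastream only takes a
header walk. A minimal sketch - it checks only the 'fmt ' chunk's format
code, and real BWF files that use WAVE_FORMAT_EXTENSIBLE would need
extra handling that this treats as a failure:

    import struct

    def is_uncompressed_wav(path):
        """Walk the RIFF chunks of a .wav/.bwf file and return True if the
        'fmt ' chunk declares linear PCM (format code 1)."""
        with open(path, "rb") as f:
            riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
            if riff != b"RIFF" or wave != b"WAVE":
                return False
            while True:
                header = f.read(8)
                if len(header) < 8:                   # ran out of chunks
                    return False
                chunk_id, chunk_size = struct.unpack("<4sI", header)
                if chunk_id == b"fmt ":
                    fmt_code = struct.unpack("<H", f.read(2))[0]
                    return fmt_code == 1              # 1 = linear PCM
                f.seek(chunk_size + (chunk_size & 1), 1)   # chunks are word-aligned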

 



