1000 year data storage for autonomous robotic facility



 
 
  #81  
Old May 11th 13, 08:07 AM posted to comp.sys.ibm.pc.hardware.storage,sci.physics,sci.electronics.design
[email protected]

On Sat, 11 May 2013 12:27:10 +1000, "Rod Speed"
wrote:

Jeff Liebermann wrote
Jeroen wrote
Jeff Liebermann wrote


Mother nature, Microsoft, and satellite technology have provided
examples of long-term survivability that work. Mother nature
offers evolution, where a species adapts to changing conditions.
Microsoft has Windoze updates, which similarly adapt a known buggy
operating system into a somewhat less buggy operating system.


I beg to differ! Somewhat different bugs, sure. Somewhat less buggy,
surely not!


Ok, not the best example.


Yeah, it's nothing like evolution in fact.


If we are using the evolutionary model, several sites with different
technologies must be used.

Some of these sites will be successful, some will not, but of course
we do not know in advance which system will survive and which will fail.
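
One way to put numbers on the multiple-site idea: if each site
independently survives the millennium with probability p, the chance
that at least one survives is 1 - (1-p)^N. A minimal Python sketch
with illustrative probabilities; independence is a strong assumption,
and a common-mode event (war, climate) would break it:

# Probability that at least one of N independent sites survives,
# given each site's individual survival probability p.
# The probabilities here are illustrative assumptions, not data.

def at_least_one_survives(p: float, n: int) -> float:
    """P(at least one site survives) = 1 - P(all sites fail)."""
    return 1.0 - (1.0 - p) ** n

for p in (0.10, 0.30, 0.50):
    for n in (1, 3, 5, 10):
        print(f"p={p:.2f}, sites={n:2d}: "
              f"P(survival) = {at_least_one_survives(p, n):.3f}")

Even sites with only a 10% individual chance give roughly a 65%
chance that one survives once you build ten of them.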

  #82  
Old May 11th 13, 08:09 AM posted to comp.sys.ibm.pc.hardware.storage,sci.physics,sci.electronics.design
Rod Speed



"Bernhard Kuemel" wrote in message
...
On 05/10/2013 07:44 AM, Jeff Liebermann wrote:
On Fri, 10 May 2013 04:53:40 +1000, "Rod Speed"
wrote:

"Jeff Liebermann" wrote in message
...


Actually, the biggest problem is the human operators. The Three Mile
Island and Chernobyl reactor meltdowns come to mind, where the humans
involved made things worse by their attempts to fix things. Yeah,
maybe autonomous would be better than human maintained.

I don't believe it's possible to achieve 1000 year reliability
for electronics and mechanisms. If it moves, it breaks...
unless something extraordinary (and expensive) is employed.

He did say that cost was no object.


I once worked on a cost-plus project, which is essentially an
unlimited cost system. They pulled the plug before we even got
started because we had exceeded some unstated limit. There's no such
thing as "cost is no object".


I said: "Price is not a big issue, if necessary." I know it's gonna be
expensive and we certainly need custom designed parts, but a whole
semiconductor fab and developing radically new semiconductors are
probably beyond our limits.

The list of probable hazards is just too great for such a device.

That stuff doesn't matter if it can repair what breaks.


Repair how and using what materials?


Have the robots fetch a spare part from the storage and replace it.
Circuit boards, CPUs, connectors, cameras, motors, gears, galvanic
cells/membranes of the vanadium redox flow batteries, thermocouples,
etc. They need to be designed and arranged so the robots can replace them.

Ok, let's see if that works. The typical small-signal transistor has
an MTBF of 300,000 to 600,000 hours, or roughly 34 to 68 years. I'll
call it 50 years so I can do the math without finding my calculator.
MTBF (mean time between failures) does not predict the life of the
device, but merely the interval at which failures can be expected.
So, over the 1000 year life of this device, a single common signal
transistor would be expected to blow up about 20 times. Assuming the
robot has about 1000 such transistors, you would need some 20,000
spares to make this work. You can increase the MTBF using design
methods common in satellite work, but at best, you might be able to
increase it to a few million hours.
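
A minimal sketch of that spares arithmetic under a constant-failure-rate
(exponential) model; the MTBF and part count are the assumptions from
the paragraph above:

# Expected failures and spare-part budget under a constant-failure-rate
# model, as sketched above. All figures are the assumptions from the
# surrounding discussion, not measured data.

HOURS_PER_YEAR = 8766              # average, including leap years

mtbf_hours = 50 * HOURS_PER_YEAR   # ~50-year MTBF per transistor
mission_years = 1000
parts_in_service = 1000            # transistors in the robot

mission_hours = mission_years * HOURS_PER_YEAR
failures_per_part = mission_hours / mtbf_hours        # ~20
spares_needed = int(failures_per_part * parts_in_service)

print(f"Expected failures per part: {failures_per_part:.0f}")
print(f"Spares for {parts_in_service} parts: {spares_needed}")
# -> 20 failures per part, 20,000 spares over the millennium

Under this model the spare count scales linearly with both part count
and mission length, which is why raising the MTBF (the satellite
approach) pays off so quickly.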


It's quite common for normal computer parts to work 10 years. High
reliability parts probably last 50 years. Keep 100 spare parts of each
and they last 1000 years, if they don't deteriorate in storage.


Yeah, that should be doable.

Also robots are usually idle and only active when there's something to
replace. The power supply, LN2 generator and sensors are more active.


I wonder how reliable rails or overhead cranes
that carry robots and parts around are.


Those can certainly be designed to last 1000 years.

If replacing rails or overhead crane beams is necessary
but unfeasible, the robots will probably drive on wheels.


Yeah, I don't see any need for overhead crane beams.

If you can get the electronics that drives everything to last
1000 years by replacement of what fails, the mechanical stuff
they need to move parts around should be easy enough.

Obviously with multiple devices that move parts around
so when one fails you just stop using that one etc.

Geosynchronous satellites are unlikely to suffer from serious orbital
decay. However, they have been known to drift out of their assigned
orbital slot due to various failures. Unlike LEO and MEO, their
useful life is not dictated by orbital decay. So, why are they not
designed to last more than about 30 years?


Because we evolve. We update TV systems, switch from analog to
digital, etc. My cryo store just needs to do the same thing for a long time.


It doesn't, actually. The approach the Egyptians took lasted fine,
even when the locals chose to strip off the best of the decoration
to use in their houses etc.

Of course it's unlikely that you could actually afford something
that big and hard to destroy.

At the risk of being repetitive, the reason that one needs
to improve firmware over a 1000 year time span is to allow
it to adapt to unpredictable and changing conditions.


Initially there will be humans verifying how the cryo store is doing
and improving the soft/firmware and probably some hardware too,
but there may well be a point where they are no longer available.
Then it shall continue autonomously.


That conflicts with your other proposal of a tomb-like thing
in the Australian desert. It's going to be hard to stop those
involved in checking that it's working from telling anyone about it.

There is going to be one hell of a temptation for
one of them to spill the beans to 60 Minutes etc.

It would be a hell of a lot safer to not even attempt
any improvements, just replace what dies.


True. However, not providing a means of improving or adapting the
system to changing conditions will relegate this machine to the
junkyard in a fairly short time. All it takes is one hiccup or
environmental "leak" that wasn't considered by the designers, and it's dead.


Yes. We need to consider very thoroughly every failure
mode. And when something unexpected happens, the
cryo facility will call for help via radio/internet.


At which time you have just blown your disguise as a tomb.

I even thought of serving live video of the facility so it remains
popular and people might call the cops if someone tries to harm it.


It's more likely to just attract vandals who watch the video.

Volunteers could fix bugs or implement hardware/software
for failure modes that weren't considered.


Or they might just point and laugh instead.

  #83  
Old May 11th 13, 08:29 AM posted to comp.sys.ibm.pc.hardware.storage,sci.physics,sci.electronics.design
benj

On Sat, 11 May 2013 09:56:24 +0300, upsidedown wrote:

On Fri, 10 May 2013 21:59:18 +0200, Bernhard Kuemel
wrote:


It's quite common for normal computer parts to work 10 years. High
reliability parts probably last 50 years. Keep 100 spare parts of each
and they last 1000 years, if they don't deteriorate in storage.


Did you see the pictures of the Fukushima reactor control room?

So 1970s :-)

But generally, in many other heavy-industry sectors too, with the
actual industrial hardware in use for 50-200 years, you might still
keep sensors, actuators, field cables and I/O cards that are over 30
years old, while upgrading higher-level functions, such as control
rooms, to modern technology.


Oddly, I've got some radios from the 1920s that are still working fine
(one Atwater Kent had the pot-metal tuning mechanism disintegrate, but
if you tuned each capacitor by hand it still worked fine). But radios
of essentially the same technology from the 30s and 40s are all dead.
Parts like electrolytic capacitors do not have long lives. The
"improvement" of tubes with cathode coatings also limited their useful
life. Today, since short-lifetime parts are just too convenient to
ignore, nobody builds for any extended life. Electronic lifetimes just
keep getting shorter and shorter.

Some years ago I started a project for an electronic grandfather
"superclock". The idea was not simply to build an accurate clock, but
to build one that several hundred years from now would still be
running as accurately. (Same idea as a mechanical grandfather
clock... ever notice the similarity of a tall grandfather clock to a
relay rack? Get the picture?)

But I soon discovered that building electronics with a several-hundred-year
life is not so simple. Making sure all your capacitors are of materials
that don't degrade, that active parts have a decent lifetime, and all
the rest takes some careful consideration, even if the electronics ends
up shielded in air-tight containers. Sure, you can pick out things like
ceramic and glass capacitors and other items that will work for hundreds
of years, but using ONLY those items to build a complex device takes
some serious design thought.

  #84  
Old May 11th 13, 04:56 PM posted to comp.sys.ibm.pc.hardware.storage,sci.physics,sci.electronics.design
Jeff Liebermann[_2_]

On Fri, 10 May 2013 21:59:18 +0200, Bernhard Kuemel
wrote:

That stuff doesn't matter if it can repair what breaks.


Repair how and using what materials?


Have the robots fetch a spare part from the storage and replace it.
Circuit boards, CPUs, connectors, cameras, motors, gears, galvanic
cells/membranes of the vanadium redox flow batteries, thermocouples,
etc. They need to be designed and arranged so the robots can replace them.


I'm thinking there may be a different way to do this. The basic
problem is that an electronic system can currently be built to last
about 50 years before MTBF says problems will begin. With redundancy
and spares, this might be extended to 100 years. The building will
last somewhat longer, but probably no more than 100 years before
maintenance problems arrive.

Rather than replace all the individual components, I suggest you
consider replacing the entire building and all the machinery every
50-100 years. Instead of one building, you build two buildings, in
alternation. When the first facility approaches the end of its
design life, construction of a second facility begins adjacent to
the first. It would be an all-new design, building on the lessons
learned from its predecessor, but also taking advantage of any
technological progress from the previous 100 years. Instead of
perpetually cloning obsolete technology, this method allows you to
benefit from progress. When the new facility is finished, the severed
heads are moved from the old facility to the new. The old equipment
can then be scrapped, and the building torn down to await the next
reconstruction in 100 years.

Note: The 100 year interval is arbitrary, my guess(tm), and probably
wrong. The MTBF may also increase with technical progress over time.

Yes. We need to consider very thoroughly every failure mode.


It's called a finite state machine.
https://en.wikipedia.org/wiki/Finite-state_machine
Every state, including failure modes, must have a clearly defined
output state, which in this case defines the appropriate action. These
are very efficient and quite reliable, but require that all possible
states be considered. That's not easy. A friend of mine used to do
medical electronics and used finite state machines. Every possible
combination of front-panel control and input was considered before the
machine's servos would move. Well, that was the plan, but some clueless
operator, who couldn't be bothered to read the instructions, found a
sequence of front-panel button pushes that put the machine into an
undefined and out-of-control state. You'll have the same problem.
Some unlikely combination of inputs, completely impossible according
to even the worst-case operating conditions, will happen and ruin
everything. I've seen state diagrams and tables, for fairly simple
machines, cover a wall of an office.
https://en.wikipedia.org/wiki/State_diagram
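
For illustration, a minimal table-driven state machine in Python that
encodes the lesson of the anecdote: any (state, event) pair missing
from the transition table routes to an explicit FAULT state instead of
leaving the machine undefined. The states and events are hypothetical,
invented for the example:

# Minimal table-driven finite state machine. Any (state, event) pair
# not listed in TRANSITIONS drops to an explicit FAULT state rather
# than leaving the machine in an undefined condition.
# States and events are hypothetical, for illustration only.

TRANSITIONS = {
    ("IDLE",      "start_refill"):   "REFILLING",
    ("REFILLING", "tank_full"):      "IDLE",
    ("REFILLING", "leak_detected"):  "FAULT",
    ("FAULT",     "operator_reset"): "IDLE",
}

def step(state: str, event: str) -> str:
    """Return the next state; undefined pairs go to FAULT."""
    return TRANSITIONS.get((state, event), "FAULT")

state = "IDLE"
for event in ("start_refill", "tank_full", "unexpected_button_mash"):
    state = step(state, event)
    print(f"after {event!r}: {state}")
# The last, unanticipated event lands in FAULT, not in limbo.

The table makes the coverage problem explicit: every pair you haven't
enumerated is, by construction, a FAULT rather than an undefined state.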

And when
something unexpected happens, the cryo facility will call for help via
radio/internet.


Maybe. I have some good stories of alarms going awry. The short
version is that too few and too many alarms are both a problem. Too
few, and there's not enough warning to prevent a failure. Too many,
and the humans who are expected to fix the problem treat it as
"normal" and, over time, ignore the alarm (Chicken Little effect).
https://en.wikipedia.org/wiki/Chicken_little
I've seen it happen. I did some process control work at a local
cannery. Plenty of sensors and alarms everywhere. Because
maintenance was overextended, the sensors were constantly getting
clogged with food residue. Rather than keep them clean, someone
simply increased the sensor sensitivity so that they would work
through the encrusted food-residue layers. The result was constant
false alarms, as the overly sensitive sensors failed to distinguish
between a line stoppage and another layer of filth. The false alarms
were far worse when the sensors were cleaned, which served as a good
excuse to never clean them. I managed to fix the problem just before
the cannery closed and moved to Mexico. Hint: building and planning
alarm systems is not easy.
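
One standard defence against this kind of alarm chatter is hysteresis:
separate trip and clear thresholds, so a sensor hovering near the
limit doesn't flap. A minimal sketch; the thresholds and readings are
invented for illustration:

# Alarm with hysteresis: trips above TRIP, clears only below CLEAR.
# Threshold values and readings are invented for illustration.

TRIP, CLEAR = 80.0, 60.0

def update(alarm_on: bool, reading: float) -> bool:
    """Advance the alarm state for one sensor reading."""
    if not alarm_on and reading > TRIP:
        return True            # trip
    if alarm_on and reading < CLEAR:
        return False           # clear
    return alarm_on            # hold state in the dead band

alarm = False
for r in (55, 79, 81, 78, 70, 59):
    alarm = update(alarm, r)
    print(f"reading={r:5.1f} -> alarm={'ON' if alarm else 'off'}")
# Readings bouncing between 60 and 80 no longer toggle the alarm.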

Volunteers could fix bugs or implement hard/software for not
considered failure modes.


I suspect that you are not involved in running a volunteer
organization. In terms of reliability, volunteers can be anything
from totally wonderful to an absolute disaster. Because the usual
"carrot and stick" financial incentives are lacking with volunteers,
there's very little you can do to motivate or control them. If
you demand that they do something they consider personally
repulsive, they'll just walk away. Please talk with someone who runs
a volunteer organization for additional clues.

--
Jeff Liebermann
150 Felker St #D
http://www.LearnByDestroying.com
Santa Cruz CA 95060 http://802.11junk.com
Skype: JeffLiebermann AE6KS 831-336-2558
  #85  
Old May 11th 13, 08:45 PM posted to comp.sys.ibm.pc.hardware.storage,sci.physics,sci.electronics.design
Rod Speed



"Jeff Liebermann" wrote in message
...
On Fri, 10 May 2013 21:59:18 +0200, Bernhard Kuemel
wrote:

That stuff doesn't matter if it can repair what breaks.

Repair how and using what materials?


Have the robots fetch a spare part from the storage and replace it.
Circuit boards, CPUs, connectors, cameras, motors, gears, galvanic
cells/membranes of the vanadium redox flow batteries, thermocouples,
etc. They need to be designed and arranged so the robots can replace them.


I'm thinking there may be a different way to do this. The basic
problem is that an electronic system can currently be built to last
about 50 years before MTBF says problems will begin. With redundancy
and spares, this might be extended to 100 years.


The building will last somewhat longer, but probably no
more than 100 years before maintenance problems arrive.


That's just plain wrong when it's designed to last 1000
years in the first place without any maintenance.

Rather than replace all the individual components,
I suggest you consider replacing the entire building
and all the machinery every 50-100 years.


That's much harder to achieve with an autonomous
system with no humans involved.

Instead of one building, you build two buildings, in alternation.
When the first facility approaches the end of its design life,
construction of a second facility begins adjacent to the first.
It would be an all-new design, building on the lessons learned
from its predecessor, but also taking advantage of any
technological progress from the previous 100 years.


Impossible with an autonomous system with no humans involved.

Instead of perpetually cloning obsolete technology,
this method allows you to benefit from progress.


But it does necessarily involve keeping humans involved
in doing that for 1000 years, just to keep your head.
Good luck with that.

When the new facility is finished, the severed heads
are moved from the old facility to the new. The old
equipment can then be scrapped, and the building
torn down to await the next reconstruction in 100 years.


And how do you propose to recruit a new crew of humans
the next time you need to replace everything except the heads?

Note: The 100 year interval is arbitrary, my guess(tm), and probably
wrong. The MTBF may also increase with technical progress over time.


Yes. We need to consider very thoroughly every failure mode.


It's called a finite state machine.
https://en.wikipedia.org/wiki/Finite-state_machine
Every state, including failure modes, must have a clearly defined
output state, which in this case defines the appropriate action.
These are very efficient and quite reliable, but require that all
possible states be considered. That's not easy. A friend of mine used
to do medical electronics and used finite state machines. Every
possible combination of front-panel control and input was considered
before the machine's servos would move. Well, that was the plan, but
some clueless operator, who couldn't be bothered to read the
instructions, found a sequence of front-panel button pushes that put
the machine into an undefined and out-of-control state. You'll have
the same problem.


Not if there are no humans involved.




  #86  
Old May 12th 13, 03:49 PM posted to comp.sys.ibm.pc.hardware.storage,sci.physics,sci.electronics.design
Mark F[_2_]

Re 1000 year data storage:
Could Intel or some other company use modern equipment but old design
rules to make integrated circuits with a much longer expected
lifetime?

It seems possible that if the dimensions of the devices were made
larger, then things would last longer.

I know that making flash memory cells just a few times larger and
using only single-level cells increases the number of reliable write
cycles hundreds of times (from about 1,000 to hundreds of thousands)
while at the same time raising the data-decay time from about a year
to about 10 years. Refreshing every year would only require about
1,000 write cycles, well within the hundreds of thousands possible.
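
A sketch of that write-budget arithmetic, using the figures quoted
above (yearly refresh over a 1000-year mission against SLC endurance
in the hundreds of thousands of cycles; these are rough, assumed
vendor-class numbers):

# Refresh write budget for SLC flash over the mission, using the
# figures quoted above. These are rough, assumed numbers.

mission_years = 1000
refresh_interval_years = 1        # rewrite every year
endurance_cycles = 100_000        # conservative SLC endurance

refresh_writes = mission_years // refresh_interval_years   # 1000
margin = endurance_cycles / refresh_writes

print(f"Refresh writes needed: {refresh_writes}")
print(f"Endurance margin: {margin:.0f}x")
# 1000 writes against ~100,000-cycle endurance: two orders of
# magnitude of headroom, so wear is not the limiting factor.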

I think the functions besides memory storage last a couple of tens
of years now, but I don't know if making things a few times larger
and tuning the manufacturing process would get to 1000 years. (For
example, I don't know if the memory cells themselves would last 1000
years, but data decay would not be a problem, since only hundreds of
rewrites per cell would be needed for refresh and hundreds of
thousands are possible. Actually, millions of rewrite cycles are
likely to be possible.)

Changing the designed circuit speed, the actual clock rate,
and operating voltage can also improve expected lifetime.

A long-term power source would still be an issue unless things can
be made to not need refresh. I don't know how things scale, so I
used the numbers for actual products to get back to a 10-year decay
time. I don't know whether you would have to make things
logarithmically bigger in 1 dimension (or perhaps 2 or 3), or
linearly bigger in 1 dimension (or perhaps 2 or 3), or whether
making things much bigger than the old stuff would increase the
expected lifetime at all.
  #87  
Old May 12th 13, 07:24 PM posted to comp.sys.ibm.pc.hardware.storage,sci.physics,sci.electronics.design
Rod Speed

Mark F wrote

Re 1000 year data storage:
Could Intel or some other company use modern equipment but old design
rules to make integrated circuits with a much longer expected
lifetime?


Yes, but how much longer is less clear.

It seems possible that if the dimensions of the
devices were made larger, then things would last longer.


And particularly if the design was to minimise diffusion somehow.

I guess that since it's a cryo facility, one obvious way to get
a longer life is to run the ICs at that very low temp too etc.
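
Running the silicon cold buys lifetime in a way that can be estimated:
many semiconductor wear-out mechanisms follow an Arrhenius temperature
dependence, so the lifetime ratio between two temperatures is
exp((Ea/k)(1/T_cold - 1/T_hot)). A sketch assuming an activation
energy of 0.7 eV, a commonly used figure; the real value depends on
the failure mechanism:

# Arrhenius lifetime acceleration between two operating temperatures.
# Ea = 0.7 eV is an assumed activation energy; real values depend on
# the specific failure mechanism.

import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV per kelvin
EA_EV = 0.7                 # assumed activation energy

def lifetime_ratio(t_cold_k: float, t_hot_k: float) -> float:
    """How many times longer a part lasts at t_cold_k vs t_hot_k."""
    return math.exp((EA_EV / K_BOLTZMANN_EV)
                    * (1.0 / t_cold_k - 1.0 / t_hot_k))

print(f"0 C vs 50 C:   {lifetime_ratio(273.15, 323.15):.0f}x")
print(f"-40 C vs 50 C: {lifetime_ratio(233.15, 323.15):.0f}x")
# Diffusion-driven wear-out slows dramatically as parts get cold,
# though cryogenic temps bring their own failure modes (solder, seals).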

I know that making flash memory cells just a few times larger and
using only single-level cells increases the number of reliable write
cycles hundreds of times (from about 1,000 to hundreds of thousands)
while at the same time raising the data-decay time from about a year
to about 10 years. Refreshing every year would only require about
1,000 write cycles, well within the hundreds of thousands possible.


You'd be better off with some form of ROM instead, life-wise.

I think the functions besides memory storage last a couple of tens of years now,


Much longer than that with core.

but I don't know if making things a few times larger and tuning the
manufacturing process would get to 1000 years. (For example, I don't
know if the memory cells themselves would last 1000 years, but data
decay would not be a problem, since only hundreds of rewrites per
cell would be needed for refresh and hundreds of thousands are
possible. Actually, millions of rewrite cycles are likely to be
possible.)


Like I said, ROM is more viable for very long term storage.

Changing the designed circuit speed, the actual clock rate,
and operating voltage can also improve expected lifetime.


A long term power source would still be an issue
unless things can be made to not need refresh.


Yes, that's the big advantage of ROM and core.

I don't know how things scale, so I used the numbers for actual
products to get back to a 10-year decay time. I don't know whether
you would have to make things logarithmically bigger in 1 dimension
(or perhaps 2 or 3), or linearly bigger in 1 dimension (or perhaps
2 or 3), or whether making things much bigger than the old stuff
would increase the expected lifetime.


  #88  
Old May 16th 13, 02:03 PM posted to comp.sys.ibm.pc.hardware.storage,sci.physics,sci.electronics.design
[email protected]

On May 6, 3:40 pm, Clifford Heath wrote:
On 05/06/13 12:16, wrote:

On May 5, 6:33 pm, wrote:
In comp.sys.ibm.pc.hardware.storage wrote:
For some reason, there are a lot of people with big egos and
low intelligence who want to believe these marketing lies.
Never ceases to amaze me. It is also fascinating that the
most significant (and well-known in the data archival community)
problem is blatantly ignored: the equipment for reading the
storage devices needs to survive as well, and so do the software
for processing it and the hardware it runs on. That means this
hardware has to stay in production, as these components will
have a shelf life well below 30 years.


I made the assumption that the robots themselves would be the "read
hardware"


All this prompts the question of whether human culture will last, to
the point that anyone will care about decoding 1s and 0s in 1000 years.


The OP said nothing about humans (the *robots* use the software
during the 1000 yrs), or why the facility needed to be autonomous for
1000yr.

If it does, one might assume that there are times during that period
where interest is sufficient to copy to new or better media.


If the facility's tech can be modified per outside developments,
does it still qualify as autonomous?

I still have files that have survived five generations of media tech.


Did you keep the machinery to read them, too?


Mark L. Fergerson
  #89  
Old May 16th 13, 10:28 PM posted to comp.sys.ibm.pc.hardware.storage,sci.physics,sci.electronics.design
Rod Speed



" wrote in message
...
On May 6, 3:40 pm, Clifford Heath wrote:
On 05/06/13 12:16, wrote:

On May 5, 6:33 pm, wrote:
In comp.sys.ibm.pc.hardware.storage
wrote:
For some reason, there are a lot of people with big egos and
low intelligence who want to believe these marketing lies.
Never ceases to amaze me. It is also fascinating that the
most significant (and well-known in the data archival community)
problem is blatantly ignored: the equipment for reading the
storage devices needs to survive as well, and so do the software
for processing it and the hardware it runs on. That means this
hardware has to stay in production, as these components will
have a shelf life well below 30 years.


I made the assumption that the robots themselves would be the "read
hardware"


All this prompts the question of whether human culture will last, to
the point that anyone will care about decoding 1s and 0s in 1000 years.


The OP said nothing about humans


He did, however, imply that there would be humans around in
the future to thaw him out and upload the contents of his head.

He wasn't proposing that his robots do that.

(the *robots* use the software during the 1000 yrs),
or why the facility needed to be autonomous for 1000yr.


He did say that later; essentially, he believes that's the most
likely way to ensure that his frozen head will still be around
in 1000 years for the humans who have worked out how to upload
its contents.

If it does, one might assume that there are times during that period
where interest is sufficient to copy to new or better media.


If the facility's tech can be modified per outside
developments, does it still qualify as autonomous?


Yes, if it can operate by itself.

I still have files that have survived five generations of media tech.


Did you keep the machinery to read them, too?


You don't need to if you have multiple generations; you
only need to keep the machinery for the latest generation.

 



