
I am glad progress is being made here, but until we can avoid the destruction that occurs in the last 24 hours of an expected death (e.g. cancer), or the damage that an unexpected death (e.g. heart attack or trauma) incurs from the body sitting at room temperature for hours afterwards, all we are going to be preserving is grey mush.



There has to be enough information left over to reconstruct a person in "high enough fidelity" (whatever we may decide that means in the future).

A brain just sitting there for hours after death at room temperature isn't ideal - however, there is some good news in this area. It turns out that most of the destruction a brain suffers after an ischemic episode is actually due to a cascade triggered by eventual re-perfusion. Since a dead brain is never re-perfused, this cascade is never triggered. Cellular decay after death makes biological re-animation infeasible, but speaking as someone who prepared a lot of neuro slides at uni, it takes more than a few hours for the structure itself to decay heavily, so we should be fine for a scan/upload scenario.

Pertaining to the article: the first hours after death are nowhere near as problematic for the brain's information content as the plastination procedure they're using!

Brain trauma before death is another matter. Since we don't have the capability to create backups or checkpoints of our neuronal structure, what's physically destroyed is simply lost beyond recovery. However, for example in aggressive brain cancers, a functional copy of the person might still be recovered in principle even if their neocortex was severely compromised, as long as the actual data is still there.


If people learned to take regular backups, this wouldn't be a problem.


If only this were possible :(


Will those backups be you? What if one of those backups were activated by accident? Would you kill it? Or should it kill you?


Depending on your outlook on life, if the backup is you, don't you already know the answer?


I agree. But I also think this work (and future work like it) will help with that; having convincing proof that a preservation process works will make it easier to get obstacles to prompt preservation out of the way.


You're assuming this "grey mush" can't be recovered. We don't actually know that. A sufficiently advanced AI should be able to recover a person from way less.


You're assuming the phrase "a sufficiently advanced AI" answers anything at all.

Presumably you're assuming that if the information is there at all - if the necessary data hasn't been scrambled beyond the noise floor of the scrambling process - then there's something for magic (because you're really talking about magic here) to work with.

So, please: (a) set out your claim with precision, and (b) back it up.

* What is the information you need to recover?

* To what degree is it scrambled?

* What of it is scrambled below the noise floor of the process?

* How do you know all this? (wrong answer: "here's a LessWrong/Alcor page." right answer: "here's something from a relevant neuroscientist.")

For comparison: even a nigh-magical superintelligent AI can't recover an ice sculpture from the bucket of water it's melted into. It is in fact possible to just lose information. So, since you're making this claim, I'd like you to quantify just what you think the damage actually is.


I'm quite sure I've read somewhere that information cannot be lost in the absolute sense: lost to us, yes; lost irrevocably and irretrievably, no.

In that sense, 'a sufficiently advanced AI' is not magic, because when people say that, they definitely have something in mind - at least the people I often discuss this with do.

In short: if you're smart (fast, precise, determined) enough to look at the individual molecules of a puddle of brain-goo, and if you can infer the way it collapsed by tracing those molecules back through how they collided with each other and with the walls of your mold, then it should be possible to reconstruct the spatial form of the brain, at least. That's a pretty big IF, obviously, but equally obviously not impossible - if only you can look deep/far/fast enough.

If you want to be theoretical about it, then yes: there is probably an upper bound on how smart/big an AI mind can possibly be, and thus a limit on how much information it can extract from arbitrary systems. So I agree with your assertion that there is information that even the smartest of all AIs cannot possibly reconstruct; I'm just not sure that the brain is such a structure.

Any justification about why/how 'a sufficiently advanced AI' could come about is more questionable.

Many knowledgeable people are making guesses based on our current understanding of intelligence/computation/AI, and then extrapolating. The paradoxical thing is that on the one hand AI-doomsday speakers tell us not to anthropomorphise the motives of an AI (for good reasons), but on the other hand apply human reasoning/understanding to predict such machines/patterns.


> I'm quite sure I've read somewhere that information cannot be lost in the absolute sense: lost to us, yes; lost irrevocably and irretrievably, no.

This is probably not quite at the requested standard of backing up a claim, and sounds very like "but you can't prove it isn't true!" But I'm not the one making a claim.

In any case, please back up your claim. What is "the absolute sense"? How does it differ from "in a practical sense", with examples?

> In short: if you're smart (fast, precise, determined) enough to look at the individual molecules of a puddle of brain-goo, and if you can infer the way it collapsed by tracing those molecules back through how they collided with each other and with the walls of your mold, then it should be possible to reconstruct the spatial form of the brain, at least. That's a pretty big IF, obviously, but equally obviously not impossible - if only you can look deep/far/fast enough.

Noise floor. In this case, thermal noise.

Also, you literally can't know that much about all the molecules in your puddle of goo. (Heisenberg.) We do not live in a Newtonian universe.
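
To make the "noise floor" point concrete, here is a minimal Python sketch. It uses the logistic map purely as a stand-in for chaotic dynamics (not as a model of a brain or of thermodynamics): a perturbation far below any plausible measurement precision still swamps the state within a few dozen steps, so running the dynamics backwards from a measured state tells you essentially nothing about the distant past.

    # Toy illustration of a noise floor in chaotic dynamics.
    # The logistic map at r = 4.0 is chaotic: tiny differences in state
    # roughly double every step, so they quickly grow to order 1.

    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    x_true = 0.4
    x_noisy = 0.4 + 1e-15   # perturbation far below any realistic measurement precision

    for step in range(1, 61):
        x_true, x_noisy = logistic(x_true), logistic(x_noisy)
        if step % 10 == 0:
            # difference grows from ~1e-12 at step 10 to order 0.1-1 by steps 50-60
            print(step, abs(x_true - x_noisy))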


http://en.wikipedia.org/wiki/Entropy_in_thermodynamics_and_i...

http://phys.org/news/2014-09-entropy-black-holes.html

http://phys.org/news/2014-09-black-hole-thermodynamics.html

Ben Crowell, PhD in physics:

http://physics.stackexchange.com/questions/83731/entropy-inc...

The reason I didn't/don't back up those claims is that I'm really not knowledgeable about these subjects. I'm not sure which sources are good or how legitimate they are, but I did read this somewhere, even if I can't interpret the technical jargon behind it or give more nuance to my claim, given my limited understanding of the subject.

With a bit of googling for "can information be lost" or "conservation of information", one finds the articles I linked to above.

But you have dodged my rebuttal of your initial claim, the one I was really responding to: that 'sufficiently advanced AI' is not just a stop-gap word for magic. In this case it doesn't stand for "I don't know how or why, but this and this"; it stands for "I don't know why (in the motivational sense), but given a bigger brain one can use and interpret finer instruments, which in turn lets us extrapolate further back in time".


None of your links support your claim that winding the clock back is even theoretically possible, and the stackexchange link seems to say it isn't: "The resolution is that entropy isn't a measure of the total information content of a system, it's a measure of the amount of hidden information, i.e., information that is inaccessible to macroscopic measurements." Even if you're assuming a physical God, that physical God can't get good enough measurements.


I think that perhaps our views of the world are slightly off-kilter/incompatible.

I agree with you that even a godlike AI must have an upper bound on what it can extract from a 'puddle of atoms'. It's obvious that, given a handful of atoms, it's not possible to predict what happened to a completely different bunch of atoms 5 billion years ago at the other side of the (observable) universe. That's also not what I'm claiming.

What I do claim is that, given enough smarts, it's possible to do this to a bunch of molecules present in the brain-goo.

I'm assuming here that whatever it is that makes the brain 'tick' is located on the molecular level, and not a lower level.

As to the point about being able to 'turn back time': don't we do this all the time?

If we look at the link we've both referenced: say we had two pictures of the last milliseconds of the book falling, and we knew the exact time between when those pictures were taken, then we could turn back time, right? We'd know exactly how/when/where the book was, if we can interpret those pictures.
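
For what it's worth, the two-pictures version of the argument does hold in an idealized setting. Here is a minimal Python sketch, assuming perfectly noise-free positions, simple free-fall kinematics, and made-up numbers: two samples at known times pin down the whole trajectory, which can then be run backwards.

    # Idealized "two pictures of a falling book" reconstruction.
    # With noise-free samples of y(t) = y0 + v0*t - 0.5*G*t^2 at two known
    # times, the two unknowns (y0, v0) follow from two linear equations.

    G = 9.81  # m/s^2

    def height(y0, v0, t):
        return y0 + v0 * t - 0.5 * G * t**2

    def fit_fall(t1, y1, t2, y2):
        v0 = ((y2 + 0.5 * G * t2**2) - (y1 + 0.5 * G * t1**2)) / (t2 - t1)
        y0 = y1 + 0.5 * G * t1**2 - v0 * t1
        return y0, v0

    # "Pictures" taken in the last milliseconds of a 1.5 m drop:
    t1, t2 = 0.550, 0.552
    y1, y2 = height(1.5, 0.0, t1), height(1.5, 0.0, t2)

    print(fit_fall(t1, y1, t2, y2))   # recovers (1.5, 0.0) up to floating-point error

Note that the recovery divides by the tiny gap t2 - t1, so any real measurement noise in y1 and y2 is amplified enormously; the noise-free idealization is doing all the work here.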

In a similar way, the information about the locations of the molecules in the 'brain goo' is available to a 'sufficiently advanced AI'. Thus what I'm arguing is that this is not information that is 'lost' in the way that we've been discussing so far.

Therefore it's also not 'magic' when people refer to such an AI, because when they do, they have this in mind: not some law-bending/breaking super godlike AI, but rather a system with the resources needed to stitch together the complete video from the last two images.


> I think that perhaps our views of the world are slightly off-kilter/incompatible.

Yeah, possibly. I blame LessWrong fatigue. It's an entire site made of handwavy claims that, no matter how far you trace back through the links, never quite actually get backed up. So I tend to be harsh on similar claims, particularly when they appear to be from that sphere (judging by the buzzword "sufficiently advanced AI", which is in practice used to put forward outlandish claims and then try to reverse the burden of proof).

I actually started reading the site because of a friend who was getting into cryonics. I'd hitherto been neutral-to-positive on the idea, but the more I investigated it the more I went "what the hell is this rubbish." (Writeup is at http://rationalwiki.org/wiki/Cryonics which is a very middling article, and is still about the best critical article available on the subject ...) The handwavy claims are endemic, quite a few rely on effective magic (actual answers from cryonicist: "But, nanobots!" or "sufficiently advanced AI") and it really is largely just ill-supported guff, even if I'm being super-charitable to the arguments. Extracting a disprovable claim is nearly bloody impossible itself.

> As to the point about being able to 'turn back time': don't we do this all the time?
> If we look at the link we've both referenced: say we had two pictures of the last milliseconds of the book falling, and we knew the exact time between when those pictures were taken, then we could turn back time, right? We'd know exactly how/when/where the book was, if we can interpret those pictures.

But we couldn't do that if the data had been destroyed. That's the claim way up there: that the information is recoverable from the mashed-up goo. The two pictures have been destroyed; we have only the book sitting on the floor, and there's nothing from which to reconstruct the fall in sufficient detail.

I say this because whenever I've seen an actual neuroscientist who's been asked this sort of question (can we recover the information with a magic AI or whatever), they answer "wtf, no, it's been utterly trashed. No, not even in theory. You can't even measure it. It's been trashed utterly." The questioner usually comes back with "but if we use a SUFFICIENTLY ADVANCED AI ..." i.e., if we let them assert their conclusion. And first they'd have to show you could measure stuff on the nanometre scale without messing it up. Let alone, e.g., reconstructing the precise locations of proteins in a cell after they've been denatured by cryoprotectant. Remember that it's a claim about physical reality that's being made here.

(A couple of examples, from scientists who would LOVE to be able to preserve and get back this information: http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson... http://freethoughtblogs.com/pharyngula/2012/07/14/and-everyo... )

>In a similar way, the information about the locations of the molecules in the 'brain goo' is available to a 'sufficiently advanced AI'.

Remember that there is no way to distinguish two molecules of the same substance. You're requiring more information than can actually be measured (Heisenberg).


>I'm quite sure I've read somewhere that information cannot be lost in the absolute sense,

Quantum information is, in some decently well-regarded theories, a conserved quantity. Classical information is not, in any major theory.

>In short: if you're smart (fast, precise, determined) enough to look at the individual molecules of a puddle of brain-goo, and if you can infer the way it collapsed by tracing those molecules back through how they collided with each other and with the walls of your mold, then it should be possible to reconstruct the spatial form of the brain, at least. That's a pretty big IF, obviously, but equally obviously not impossible - if only you can look deep/far/fast enough.

Again: classical information is not a conserved quantity. A puddle of brain-goo will likely tell you more than a puddle of non-brain goo about the person who used to be that brain, but there is a very strong limit to what it can tell you. You cannot, so to speak, extrapolate the universe from one small piece of fairy cake.
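
A minimal Python sketch of the "classical information is not conserved" point, using a deliberately silly many-to-one update rule (purely illustrative, not a model of any physical process): once distinct initial states map to the same final state, no amount of cleverness applied to the final state alone can tell them apart.

    # Irreversible ("many-to-one") step: forget the arrangement, keep only aggregates.
    def melt(state):
        return (sum(state), len(state))

    a = (3, 1, 4, 1, 5)
    b = (5, 1, 1, 4, 3)
    c = (2, 2, 2, 4, 4)

    print(melt(a), melt(b), melt(c))   # all three print (14, 5): indistinguishable afterwards

    # Contrast with a reversible step, which loses nothing:
    def shift(state, k=1):
        return tuple(x + k for x in state)

    def unshift(state, k=1):
        return tuple(x - k for x in state)

    print(unshift(shift(a)) == a)      # True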

(Disclaimer: I have previously donated to the Brain Preservation Foundation precisely because I think the issue deserves investigation by mainstream, non-wishful scientists so that people who want to... whatever it is they're planning on, can do it.)

>Many knowledgeable people are making guesses based on our current understanding of intelligence/computation/AI, and then extrapolating. The paradoxical thing is that on the one hand AI-doomsday speakers tell us not to anthropomorphise the motives of an AI (for good reasons), but on the other hand apply human reasoning/understanding to predict such machines/patterns.

The thing about "sufficiently advanced AI" is that it dodges the basic issues. A sufficiently advanced AI is just a machine for crunching data into generalized theories. It can only learn theories in the presence of data. Admittedly, the more data it gets from a broader variety of domains, the more it can form abstract theories that give usefully informed prior knowledge about what it can expect to find in new domains. But if it can use detailed knowledge about brains-in-general to reconstruct a puddle of brain-goo into a solid model of a human brain, solid enough to "make it live", that's not because of some ontologically basic "smartness" about the AI; it's because the AI has the right kind of learning machinery for crunching data about specific and general things together, allowing it to learn and utilize very large sums of ___domain knowledge. These sums could possibly be larger than any individual human might obtain in the course of a single 20-year education, from kindergarten to PhD, but the key factor in an AI's understanding of the natural sciences will ultimately be experimental data, and ___domain knowledge derived from experimental data.


Sure it can. Look at a photograph, make a mould and find a freezer :P


If it's a godlike AI, it can simulate humankind in reverse and resurrect everyone's consciousness as it was just before each person's death.


That is absolute bunk that defies the basic principles of computational complexity and information theory in almost every possible way.


The problem is that the information is encoded in the arrangement of the neural connections. Once your brain has reached the grey-mush stage, all those connections are lost.


Aren't neurons relatively sizable? I don't see why they would totally "decay" in a short time frame, but maybe it's because I'm ignorant.


Such an AI will only be able to recover a gradient of possible personalities, depending on the percentage of missing data - something like 1,000,000 slightly different permutations of you.
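
As a rough back-of-envelope on how that number scales (a hypothetical sketch: the synapse count is only a commonly cited order-of-magnitude figure, and treating each lost synapse as an independent yes/no choice is a crude simplification), even a tiny fraction of unrecoverable data leaves far more than a million equally consistent reconstructions:

    # If k independent binary features of the connectome are unrecoverable,
    # any reconstruction has to guess them, giving 2**k consistent candidates.
    SYNAPSES = 10**14   # rough order-of-magnitude estimate for a human brain

    for missing_fraction in (1e-12, 1e-9, 1e-6):
        k = int(SYNAPSES * missing_fraction)
        print(f"fraction lost {missing_fraction:g} -> 2**{k} candidate reconstructions")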



