Aubrey de Grey Discussion, 6

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
In a nutshell, I think most of what you’ve written here comes down to something I already entirely agree with, namely that any kind of ranking of competing ideas is inferior to the identification of a win-win. You don’t need to persuade me of that.
ok but i have stronger claims:

1) All human choices can and should be made using the win-win arbitration approach. It is the only method of rational thinking.

2) Justificationism doesn't work at all, and has zero value as an alternative method
My preferred way of looking at this is that identifying a win/win is the extreme case of choosing by ranking, in rather the same sense that Popperian decision-making is the limiting case of Bayesian decision-making. But I mention that only for clarification; if you think it’s wrong, do tell me, but let’s not spend too much time on that (not yet anyway) because I don’t think it affects the rest of what I want to say.
I don't agree that Bayesian epistemology has any value, but OK, I won't argue that now. Though FYI, DD's latest blog post is "Simple refutation of the ‘Bayesian’ philosophy of science":

http://www.daviddeutsch.org.uk/2014/08/simple-refutation-of-the-bayesian-philosophy-of-science/
My problem comes down to the impracticality of the arbitration approach. I can certainly believe that all conflicts can reliably be resolved in bounded time, but as you say, we have the problem of needing to make a decision now (or soon), not in 10000 years.
I'm glad to hear that. It's a big point of agreement. Most people think some problems aren't solvable, and some human conflicts don't have any possible win/win outcomes.

I meant the arbitration approach can always be done within real-life time limits. Or at least in scenarios where you have some time to think. For a starting point, let's limit discussion to cases where the time limit is at least an hour. And definitely not worry about the 5 millisecond case.

In Oct 2002, I made a similar objection to yours. DD answered why common preference finding (a.k.a. win/win arbitration) doesn't require infinite creativity.
... the finding of a common preference does not entail finding the solution to any particular problem.

The economy does not require infinite creativity to grow. Particular enterprises fail all the time. Particular inefficiencies may remain unimproved for long periods. The economy as a whole may have brief hitches where mistakes have been made and have to be undone; but if it stagnates to the extent of failing to innovate, there is a reason. It's not just 'one of those things'. The reason has nothing to do with there being a glut of nautiluses on the market, but is invariably caused by someone (usually governments, but in primitive societies also parents) forcibly preventing people from responding to market forces. Stagnation is not a natural state in a capitalist economy; it has to be caused by force.

Science does not require infinite creativity to make new discoveries. Particular lines of research fail all the time but where science as a whole has ceased to innovate it is never because the whole scientific community has turned its attention to the nautilus but invariably because someone (governments and/or parents) has forcibly prevented people from behaving according to the canons of scientific rationality.

An individual personality does not require infinite creativity to grow. Particular a priori wants go unmet all the time, and large projects also fail and sometimes a person has a major life setback. But if they get stuck to the extent of failing to innovate it is not because they have spontaneously wandered into a state where their head resembles a nautilus but because someone has forcibly thwarted them once (or usually a thousand times) too often.

...

... problems can be continually solved without infinite creativity, without perfect rationality, and without relying on any particular problem being solved by any particular time. And that is sufficient for -- in fact it is what *constitutes* -- economic growth, scientific progress, and human happiness.
Why DD equates innovation with win/win arbitration isn't explained here. One way to understand it is that we consider win/win arbitration to be the only epistemological method capable of creating knowledge, solving problems, making progress/innovation, etc.

The point that problem solving (or conflict resolution) in general doesn't require solving any particular problem is very important. That's what allows fast solutions.

What you can do is ask questions in arbitration like, "Given we think we won't solve problems X, Y and Z within our resource constraints, what should we do?" That question can be answered without solving problems X, Y or Z, and its answer can be a successful win/win arbitration outcome.

As with everything, it's open to criticism, e.g. a side might think X actually can be solved within the resource constraints. Then all sides might be able to agree, for example, to try to solve X, but also to set up a backup plan in case that doesn't work.

If an arbitration seems particularly hard relative to the resources available, a longer exclusion list can be proposed. By setting things aside as necessary, arbitration can succeed in the short term.
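The exclusion-list idea can be sketched as a loop (this is my own rough formalization, not anything from the discussion; `try_resolve` is a hypothetical stand-in for the actual creative work of finding a win/win outcome):

```python
# Sketch of resource-bounded arbitration: if arbitration seems too hard
# with the resources available, set aside one more hard problem and retry,
# until the remaining conflict can be resolved within the time available.

def arbitrate(problems, try_resolve):
    """Seek a win/win outcome, excluding hard problems as needed.

    try_resolve(excluded) attempts arbitration with the listed problems
    set aside, returning an outcome or None. Hypothetical interface.
    """
    excluded = []
    remaining = list(problems)
    while True:
        outcome = try_resolve(excluded)
        if outcome is not None:
            return outcome, excluded
        if not remaining:
            return None, excluded  # nothing left to set aside
        # propose a longer exclusion list and retry
        excluded.append(remaining.pop())

def demo_resolve(excluded):
    # hypothetical: arbitration succeeds once two hard problems are set aside
    return "backup plan" if len(excluded) >= 2 else None

outcome, set_aside = arbitrate(["X", "Y", "Z"], demo_resolve)
```

The point the sketch captures is only that success doesn't depend on solving any particular problem: each question like "given we won't solve X, Y and Z, what should we do?" can itself be answered within the resource constraints.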


I also have a bunch of writing on this topic. E.g.:

http://fallibleideas.com/avoiding-coercion

And I gathered multiple links at:

http://curi.us/1595-rationally-resolving-conflicts-of-ideas

There's a lot. One reasonable way to approach this is read things until you find a specific point of disagreement or two, then comment. Maybe just the material in this email is enough.

I'm providing the links partly so if you like reading it, it's available to you. But if you prefer a more back-and-forth approach, that's fine with me. I like writing.
So we should absolutely put some effort into looking for a resolution, but the amount of effort we should put in before we throw in the towel and retreat to ranking is a trade-off between our commitment to making the right choice and our urgency to make any choice. Just as in life generally, in fact! - and that’s no accident, because even though it seems much more informal, all the decisions we make in life are subject to the same epistemic logic concerning science that you are setting out. The perfect is the enemy of the good, and all that.
I didn't intend to limit stuff to science. Yes, epistemology applies to the whole of life.

I would say more like, "wanting the impossible is an enemy of the good". But I'd be cautious because people often underestimate what's possible (e.g. with SENS).
Moreover, it turns out that the arbitration approach is considerably more impractical in some areas than in others, and biology is a particularly impractical one - basically because the complexity of the system under discussion and the depth of our ignorance of its details lead to the arising of lots of very similarly-ranked (by Occam’s razor, for example) conflicting ideas.

In a way, it seems to me that you’re describing arbitration rather in the way that mathematics works. A mathematical proof is (so I’m told, and it makes sense to me) no more nor less than an argument that other mathematicians find persuasive.
I agree about math being fallible and thinking of math proofs as arguments.

Lots of people think math proofs are infallible. DD criticized that in The Fabric of Reality.
So the discussion of a proposed proof is a process of arbitration between the belief that the conjecture is open and the belief that it is resolved (say, that it is true). And we find that mathematics lies at the opposite extreme from biology in terms of practicality: mathematicians tend to be able to agree really quite quickly whether a candidate proof holds water.
To be clear about my stronger claims above: I don't think the field affects arbitration practicality, since arbitration always works.
So, let’s look at your cryonics proposal:
Here's an example of how I might argue for cryonics using scientific research.

Come up with a measure of brain damage (hard) which can be measured for both living and dead people. Come up with a measure of functionality or intelligence for living people with brain damage (hard). Find living brain damaged people and measure them. Try to work out a bound, e.g. people with X or less brain damage (according to this measure of damage) can still think OK, remember who they are, etc.

Vitrify some brains or substitutes and measure damage after a suitable time period. Compare the damage to X.

Measure damage numbers for freezing, burial and cremation too, for comparison. Show how those methods cause more than X damage, but vitrification causes less than X damage. Or maybe the empirical results come out a different way.
I would assert that you make my case extremely well. Consider your first two steps, coming up with these measures. It’s actually really easy to come up with such measures - lots and lots of alternative ones.
It's easy to come up with bad measures. For good measures, I'm not convinced.

Part of my perspective on this has to do with how bad IQ tests and school tests are, and the great difficulty of doing better.
The only way to decide which to use is to (gasp) rank them, according to your third step, testing their correlation with function.
That isn't the only way. You could come up with an explanation of what measure you should use, and why, and expose it to criticism.

It's very important to consider explanations. E.g. percentage of undamaged brain cells could be tried in a measure because we have an explanatory understanding that more undamaged cells is better. And we might modify the measure due to the locations of damaged cells, because we have some explanatory understanding about what different regions of the brain do and which regions are most important. It'd be a mistake to try arbitrary things as a measure and then look for correlations.


Typical correlation approaches are bad science because they are explanationless. If one does have explanation, that explanation should be primary. An explanation can reference a correlation and explain why it matters, and only then would a correlation matter.

Correlation is a big topic. I think we should focus more on arbitration. But here's an initial explanation of correlation-related problems.

Summary: explanationless correlation approaches to science are the same kind of thing as induction.

There are infinitely many correlations out there. What people do is find and focus on a small number of correlations, and pay selective attention to those.

The only thing that can make this selective attention reasonable is an explanation. And it should be a clear, explicit explanation that's exposed to criticism, not an unstated one that secretly governs which correlations get attention.

I think about this in a more general way which might be helpful. A correlation is a type of pattern. There are infinitely many patterns in the world you could find, most meaningless, and they only matter when there's an explanation that they do.

And there's also the problem that if you find a sequence, e.g. "2,2,2,2,2" and you think it's a pattern, you actually have no knowledge of how it will continue unless you have an explanation. Which brings us to induction, because dealing with sequences like this and say "oh it's going to be 2 next" – without an explanation – is a major inductivist activity. If the sequence is over time, the inductivist might add, "the future is likely to resemble the past".
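The "2,2,2,2,2" point can be made concrete (this is my own illustration, not from the original): infinitely many rules fit the observed data equally well yet disagree about the next term, so the data alone, without an explanation, can't tell you how the sequence continues.

```python
# Two rules that agree on every observation of the sequence 2,2,2,2,2
# (n = 0..4) but predict different continuations at n = 5.

def rule_constant(n):
    # "it's always 2"
    return 2

def rule_polynomial(n):
    # f(n) = 2 + n(n-1)(n-2)(n-3)(n-4): equals 2 at n = 0..4, then diverges
    return 2 + n * (n - 1) * (n - 2) * (n - 3) * (n - 4)

# both rules match all five observations...
assert all(rule_constant(n) == rule_polynomial(n) == 2 for n in range(5))
# ...but disagree about what comes next:
print(rule_constant(5), rule_polynomial(5))  # 2 vs 122
```

And these two are just the start: for any predicted next value whatsoever, one can construct a rule matching all the observations so far, which is why "the future is likely to resemble the past" does no work here.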

Similarly if you find X correlates with Y during a particular time period, the assumption they will continue to correlate in a different future time period – without explanation – is basically "the future is likely to resemble the past", a.k.a. induction.

Selective attention is also a feature of induction. Inductivists look at evidence and notice it's consistent with several ideas of interest to them. But they don't pay serious attention to the infinitely many other ideas that evidence is equally consistent with. And some of those ignored ideas, which are equally "supported" by the evidence, contradict the ideas getting their selective attention.


A further issue is that context matters. You can only understand what would be a significant change in circumstances (such that one wouldn't expect a correlation or pattern to continue) via an explanatory understanding of what context is relevant and what would be a significant change.

On a related note, suppose a ranking system is developed for something, and even assume it's good. How do you know if it's still applicable when dealing with anything that isn't absolutely literally 100% identical to the original context? How do you know which changes matter? How do you know if which country you're in is part of the relevant context that can't be changed? How do you know if the calendar year is part of the relevant context that can't be changed? Only by explanation. Only by understanding why the ranking system works can you tell what changes would mess that up and what changes wouldn't.

And how can you judge explanations and decide which ones are good? The win/win arbitration method.
Er, but there are loads of ways to test function too, so any such ranking (even setting aside the precision of measurement and such like) is only finitely reliable. The rest of what you say would be fine if we really could come up with a way to define and then measure brain damage that was unequivocally 100% reliable - but unfortunately, in the real world with the time we have, we can’t do that. So, we have no choice but to survey our various options for the measure of damage and function and the measurability of those measures, rank them according to something or other, and make our decision as to whether cryonics is worth doing on that basis - but, do so using some probability threshold of how likely we need it to be to work in order to justify the expense, so as to incorporate our uncertainty as to whether we have measured the brain damage correctly and accurately. If we can’t successfully perform your first steps, we have no right to proceed as if we had performed all steps - which is precisely what you’re doing by rejecting the (admittedly inferior, but doable) ranking approach and just subjectively saying you don’t think the available data justify spending that much money.

Tell me what’s wrong with the above.
Regarding rankings, they are OK when you have an explanation of why a particular ranking system will get you a good answer for a particular problem. In other words, deciding to use that ranking system for that purpose is the outcome of a win/win arbitration. If you don't have that, rankings are arbitrary.

The rankings could be fully arbitrary. Or they could have some reasons, but arbitrarily ignore some criticism or problem. (If no criticism or problem were being irrationally ignored, then it would be a win/win arbitration outcome.) Another common approach to rankings is to intentionally design the ranking system so it reaches a predetermined conclusion which people already think is plausible rather than arbitrary.

My main point here is that if they haven't done my proposal, they should have done something else with an explanation of why it makes sense. They have to do something, have some explanation, some knowledge.

They actually do have basic explanations, e.g. I've read one of them saying that vitrified brains look pretty OK, not badly damaged, to the unaided human eye. The implication is damage that's hard to see is small, so cryopreservation works well. This is a bad argument, but it's the right type of thing. They need this type of thing, but better, before anyone should sign up.

I think you have in your mind some explanations of the right type, but haven't said them because of your methodology that doesn't emphasize explanation as I do. So I don't know how good they are.

In footnote [1], I comment on a couple of cryo papers and information about fracturing.


I also have a second way for judging Alcor and CI specifically. Consider the explanation, "Preserving people for much later revival is a very hard problem. Hard problems like this don't get solved by accident by irrational and incompetent methods, they require things like scientific or intellectual rigor."

As usual, one can't explain everything at once. This explanation leads to further questions like why people don't accidentally solve hard problems. An important thing about explanation, persuasion and win/win arbitration is you only have to satisfy objections that any side cares to make, not all possible objections. If no one thinks an objection is good, don't worry about it. Yes you could miss something important, but there are always infinitely many possible objections and you can't answer all of them, you have to go by the best knowledge anyone has of which are important, and if mistakes are made due to ignorance, so be it, that's not always avoidable.

(Explanations sometimes answer infinite categories of objections. But to answer literally all possible objections would basically require omniscience.)

Another aspect I didn't explain here is how incompetent and irrational Alcor and CI are. But I did give an initial explanation of that previously. And I have in my mind a more extensive explanation of it, if you raise objections to my initial explanation.

A reason bad people don't solve hard problems is because mistakes and problems are inevitable, so there has to be rational problem solving and mistake-correcting taking place or else advanced stuff will never work. Since I don't see Alcor and CI doing a decent job with that, I don't think their service works.

[1]

http://198.170.115.106/reports/Scientific_Justification.pdf
A rabbit kidney has been vitrified, cooled to -135C, re-warmed and transplanted into a rabbit.
Rabbit was fine. Cool.
When cooling from -130C to -196C thermal stress on large solid vitrified samples can cause cracking and fracturing.
But rabbit kidney was not cooled to the relevant colder temperatures. This has footnote 27.
Due to its more well-defined nature, cracking damage may be much easier to repair than freezing damage.
This is too vague, plus doesn't say anything about how much damage there is. It has no footnote. Paper lacks better information than this about fracturing damage issues.

Footnote 27:

http://www.sciencedirect.com/science/article/pii/0011224090900386

One of their main conclusions given in the abstract:
fracturing depends strongly on cooling rate and thermal uniformity
So one question one might have is: what cooling rates do Alcor and CI use? How much thermal uniformity do they achieve? But to my knowledge they don't carefully measure that kind of information, or even use sufficiently standardized procedures to get consistent results.

Also kind of scary, the 2008 paper is citing information from 1989, rather than more recent information.

Another paper: http://www.lorentzcenter.nl/lc/web/2012/512/problems/4/Long-term%20storage%20of%20tissues%20by%20cryopreservation.pdf

This one has lots of interesting information about why cryonics is hard, and ends by saying, "In summary, we hope to have demonstrated that tissue cryopreservation is a complex problem..." The article can give one a sense of how hard these problems are, and therefore why it takes scientific rigor, top quality knowledge and rational problem-solving ability to succeed at human cryonics. Which Alcor and CI lack.


There's also some information about how bad vitrification damage is here:

http://lesswrong.com/lw/343/suspended_animation_inc_accused_of_incompetence/32pv

It's from an expert and I've found no contrary information. Example statements:
There is no present technology for preserving people in a "fairly pristine state" at cryogenic temperatures. Present cryopreservation technology even under perfect conditions causes biological effects such as toxicity and fracturing that are far more damaging than the types of problems you've expressed concern about.

...

Most cryobiologists would regard the idea of repairing organs that had cracked along fracture planes as preposterous, as I'm sure you do if you believe that 300 mmHg arterial pressure or one hour of ischemia is fatal to a cryonics patient.
In that first quote, we get an actual comparison of vitrification damage to something else. That something else is, "the types of problems you've expressed concern about". Those problems are, from the parent comment:
a bunch of unqualified, overgrown adolescents, who want to play doctor with dead people, while pretending to be surgeons and perfusionists
In summary, Brian Wowk (an expert on Alcor's board of directors) is saying that damage from vitrification, without any errors by cryo personnel, is "far more damaging" than the various horror stories of gross error by cryo personnel. And far more damaging than, e.g., an hour of ischemia.

I'm no expert on this, but trying to look it up, it seems a few minutes of ischemia causes brain damage. And there are explanations for this, e.g. "central neurons have a near-exclusive dependence on glucose as an energy substrate, and brain stores of glucose or glycogen are limited" [2]. Damage far worse than an hour of ischemia sounds to me like cryo's not going to work yet, and I haven't found information to the contrary.

[2] http://www.ncbi.nlm.nih.gov/pmc/articles/PMC381398/

Continue reading the next part of the discussion.

Elliot Temple on October 5, 2014
