Discussion with Aubrey de Grey

I discussed epistemology with Aubrey de Grey via email. The discussion focused on cryonics initially, but the majority of it is about epistemology. Epistemology is the field of philosophy that covers knowledge and learning.

Aubrey de Grey is the driving force behind SENS – Strategies for Engineered Negligible Senescence. What that means is organized and comprehensive medical research to deal with the problems caused by aging. If you donate money to any kind of charity, consider SENS.

If you're interested in SENS, read Aubrey de Grey's book Ending Aging. I read it and think it's a good book with good arguments (something I don't say lightly, as you can see from the critical scrutiny I've subjected Ann Coulter and others to).

Click here to read the whole discussion. I made minor edits to remove a few irrelevant personal remarks and fix typos. Or click below for individual parts.

This discussion is now complete.

Like this? Want to read more philosophical discussions? Join the Fallible Ideas email list.


Aubrey de Grey Discussion, 1

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow-highlighted quotes are from Aubrey de Grey, with permission. Bluegreen highlights are me, red highlights are other quotes. The text with no highlights is me talking. I began the discussion like this:

You endorse Alcor and CI:

http://www.reddit.com/r/Futurology/comments/28e4v3/aubrey_de_grey_ama/cia3xn1?context=5
For the millionth time let me stress that referring to "getting older without getting sicker" as "becoming immortal" is not only inaccurate but actively counterproductive to this mission, because it entrenches the view of skeptics that the mission is quixotic. To answer the question you should have asked: obviously it depends on your age, but absolutely, everyone should have a life insurance policy with Alcor or Cryonics Institute, for exactly the same reason that they should have any other kind of health insurance.
Take a close look at Alcor and CI. While cryonics is a good idea in principle, Alcor and CI have lots of big problems (including that current cryonics technology isn't really good enough).

One big problem is not freezing people quickly. Max More, President and CEO of Alcor, writes:

http://lesswrong.com/lw/bk6/alcor_vs_cryonics_institute/69z7
You mention Mike Darwin, yet note that in Figure 11 of a recent analysis by him, he says that 48 percent of patients in Alcor's present population experienced "minimal ischemia." Of CI, Mike writes, "While this number is discouraging, it is spectacular when compared to the Cryonics Institute, where it is somewhere in the low single digits."
The Alcor CEO brings up, favorably, a statistic meaning that Alcor does a bad job at least 52% of the time. Because, hey, CI does much worse, and the discussion topic is a comparison.

So I don't think you should tell people to sign up for CI and suggest it's the same quality as regular medicine.


You can find lots more information:

http://lesswrong.com/lw/bk6/alcor_vs_cryonics_institute/
http://lesswrong.com/lw/343/suspended_animation_inc_accused_of_incompetence/

(Comments include discussion from people like former Alcor President Mike Darwin.)

http://www.alcor.org/cases.html
http://www.cryonics.org/case-reports/

See e.g. the most recent CI case:

http://www.cryonics.org/case-reports/the-cryonics-institutes-123rd-patient
CI patient #123 was a 71 year old male from England. Due to the uncontrollable circumstances of this case, the patient was straight frozen without being perfused with cryoprotective solutions and was sent to the Cryonics Institute for long-term storage in liquid nitrogen.
They failed. As they often do. No cryoprotectants! And they don't care to provide details. And they indicate they won't do anything different in the future, since they consider whatever happened "uncontrollable".

The latest Alcor case is very problematic too:

http://www.alcor.org/Library/html/casesummary2680.html

They argued with a Medical Examiner for a while, then managed to get hold of the body and began cooling it down 2.5 days after death. The delay sounds very worrisome to me, but the case report doesn't address this problem at all. No medical details are provided about how the cooldown went. And there's no explanation of what temperature the body was at during the 2.5 day delay, the resulting damage, or whether this person could reasonably be expected to ever be revived.

I like SENS. I like life. I like the idea of cryonics. But I wouldn't pay a bunch of money for the bad patient outcomes which CI and Alcor routinely provide (according even to their own claims on their websites).

Continue reading the next part of the discussion.


Aubrey de Grey Discussion, 2

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
I don’t understand your logic here. I’m well aware of the issues you mention regarding the quality of Alcor’s and CI’s preservations, and I’ve never suggested that any current cryonics service is the same quality as regular medicine. Why do you think it would need to be that good to justify signing up?
I don't think it would have to equal regular medicine to be worthwhile. But the gap is big, and cryonics is expensive.

You said everyone should sign up for cryonics, for the same reason they have regular health insurance. This suggests that cryonics shares traits with regular medicine, like being run pretty competently, providing value for cost, routinely providing good outcomes, and making your life better. Cryonics currently provides none of those.


To answer your question about what would justify signing up: First, I'd want cryonics organizations to be run in a competent and responsible way. Second, I'd want cryonics technology to improve to the point where brains are preserved well enough that one can optimistically expect the relevant information (about one's mind and ideas) to be preserved, and I would want cryonics organizations to provide quality, persuasive intellectual explanations on this point. I think those two problems are deal breakers.

Regarding preservation, even without staff errors, one big problem is fracturing – meaning breaks in the brain. Alcor's attitude seems to be that fracturing doesn't destroy information and that nanotech could theoretically fix it because the breaks are smooth and the separated parts of the brain do not end up far apart. I'm not convinced; I think they'd need much better reasons to say this physical brain damage is OK and the relevant information is still preserved. (I also think the idea of nanotech repairs is misguided. The focus should be on one day getting the information from the brain into a computer, not on fixing and reviving the original organic brain.) Fracturing is not the only serious technological problem.


If those two issues were fixed, I still would not recommend cryonics to "everyone", or most people, because it'd be a large financial burden for most people on Earth, in return for a long shot. Unless cryonics improved SPECTACULARLY, it wouldn't be worth signing up at a big cost to one's standard of living now. There's also the issue that the majority of people don't value life and don't want to live, in some pretty fundamental philosophical ways, as explained e.g. in Atlas Shrugged. Cryonics, like SENS, doesn't fit everyone's values and preferences.


It would also help if societal institutions handled cryonics better, e.g. if you could conveniently go to a cryonics facility and kill yourself on site with staff present, rather than having them wait around for you to die (possibly suffering increasing brain damage from your disease in the meantime), wait for you to be pronounced legally dead, and perhaps deal with days of interference from regular medical personnel. Similarly, sometimes courts order people removed from cryo facilities. These things lower the chance of getting a good patient outcome, but I don't see fixing this as a strict requirement to sign up.

It would also be nice if I was a lot more convinced that Alcor and CI won't go out of business within the next 50 years, let alone 1000 years. Cryo preservation requires frequent maintenance and upkeep costs.
Two more points:

- A key feature that you don’t mention is that the poor preservations you list are cases where the individual did not do what I also strongly recommend, namely get themselves to the vicinity of their provider while their heart is still beating. Other cryonicists’ self-neglect isn’t a very good basis for one’s own decisions.
I don't think you read the cases closely. The Alcor case said he was in the Phoenix area, which is around 12 miles from Scottsdale, where Alcor is. It is the vicinity. Alcor refers to the "Scottsdale/Phoenix metropolitan area" on their website when explaining why they chose their location.

That bad outcome, and the bad case report writing, were not due to location. For the CI case, the report doesn't say what the reason for the bad outcome was, so we don't know whether it had to do with location or not.

There are plenty of cases where people did everything right and got bad outcomes. There are even plenty of cases where cryo personnel irresponsibly caused bad outcomes. I include an example at the bottom of this email. There are, unfortunately, more examples available at the links I provided.
- As you say, current cryonics technology has a ways to go; but that’s another reason to sign up, since the more members Alcor and CI have, the more they can work to improve the technology.
Signing up for medical purposes, and for donation purposes, are different.

You said that, "... everyone should have a life insurance policy with Alcor or Cryonics Institute, for exactly the same reason that they should have any other kind of health insurance."

Signing up because you want to donate is not signing up for "exactly the same reason" as one has regular health insurance.

And I do not think everyone is in a financial position where they should donate money to cryonics research (or to anything).

For a younger American signing up for Alcor, the rough ballpark cost is 35 minutes of minimum wage work, 365 days a year. That's a big deal. That is a lot of one's life! Cost increases with age, so that's a minimum. (CI costs less than half that, which is still a lot of money for most people, and the quality drops along with the price.)
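To put a dollar figure on that, here's a rough back-of-the-envelope conversion. This is a sketch only: the 35 minutes per day comes from the estimate above, the $7.25/hour figure assumes the 2014 US federal minimum wage, and Alcor's actual pricing (dues plus life insurance premiums) varies with age and policy.

```python
# Rough conversion of "35 minutes of minimum wage work, 365 days a year" into
# an annual dollar figure. The wage rate is an assumption (2014 US federal
# minimum wage); the resulting number is only as good as that assumption.
minutes_per_day = 35
days_per_year = 365
minimum_wage = 7.25  # USD per hour, assumed

hours_per_year = minutes_per_day * days_per_year / 60
annual_cost = hours_per_year * minimum_wage
print(f"~{hours_per_year:.0f} hours/year, ~${annual_cost:,.0f}/year")
# -> ~213 hours/year, ~$1,544/year
```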

And I think if people have the means to make medical donations, SENS is a better option than cryonics. The SENS project you explain very well in Ending Aging, and elsewhere, makes a lot of sense and is a great idea, and you're working on it in a reasonable, competent, and effective way. Cryonics is an in-principle good idea, but unfortunately it doesn't go much further than that today. And I don't think throwing money at the issue will fix problems like some of the bad ideas of the people involved with Alcor and CI.

Example of what can happen with cryonics, not the patient's fault:

http://www.cryonics.org/case-reports/the-cryonics-institutes-95th-patient
Curtis deanimated under as favorable a set of circumstances as any of us could have hoped-for.
A number of CI Directors have become concerned that I have been modifying the cryoprotectant carrier solutions without adequate testing ... In response to concerns by CI Directors (and my own concerns) I will not make more modifications to the carrier solutions, and I believe we should return to using the traditional VM−1 carrier for the time being
Ben Best, CI president (at that time), was experimenting on people who paid to be preserved. The result was failure to perfuse with cryoprotectants. And this is written by the guilty party. For an outside perspective, Mike Darwin comments:

https://web.archive.org/web/20120406161301/http://chronopause.com/index.php/2011/02/23/does-personal-identity-survive-cryopreservation/
Even in cases that CI perfuses, things go horribly wrong – often – and usually for to me bizarre and unfathomable (and careless) reasons. My dear friend and mentor Curtis Henderson was little more than straight frozen because CI President Ben Best had this idea that adding polyethylene glycol to the CPA solution would inhibit edema. Now the thing is, Ben had been told by his own researchers that PEG was incompatible with DMSO containing solutions, and resulted in gel formation. Nevertheless, he decided he would try this out on Curtis Henderson. He did NOT do any bench experiments, or do test mixes of solutions, let alone any animal studies to validate that this approach would in fact help reduce edema (it doesn’t). Instead, he prepared a batch of this untested mixture, and AFTER it gelled, he tried to perfuse Curtis with it. ... Needless to say, as soon as he tried to perfuse this goop, perfusion came to a screeching halt. [In other CI cases,] They have pumped air into patient’s circulatory systems…
Ben Best and Mike Darwin discuss the matter further here:

http://lesswrong.com/lw/bk6/alcor_vs_cryonics_institute/6c35

Continue reading the next part of the discussion.


Aubrey de Grey Discussion, 3

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
I merely claim that even today we are good enough at it that those who help the providers to help them have a good enough chance of revival that it makes sense to sign up, even if the cost compares with that of traditional health insurance.
Can you point me to writing which you think makes a correct, reasonably complete (across multiple sources is fine), and persuasive case for this reasonable chance of revival?

If I'm mistaken about this I'd like to find out (and sign up for cryonics), and I am willing to put in the effort to find out.

I don't agree it's a matter of "personal evaluation". There's an objective, impersonal truth of the matter about the current state of cryonics. Just like whether SENS is currently a good idea is a matter of objective truth, not of personal evaluation. And various people who disagree with SENS are wrong.

I think people should only sign up for cryonics if adequate, objective, pro-cryonics arguments/explanations exist, which they can read and see why it makes sense, and which include answers to all important criticisms. And if that does exist, then it'd be a mistake to disagree anyway as some kind of personal matter. I (like Popper, Deutsch and Rand, who have explained some of the reasons) don't go for that "agree to disagree" and "personal evaluation" type stuff, which can be a way to dodge the rational pursuit of truth.
Let me conclude, however, by thanking you for your support of SENS and agreeing with you that SENS is plan A! It’s no accident that I work on SENS rather than on cryonics.

Cheers, Aubrey
Yeah. Best wishes.

Continue reading the next part of the discussion.


Aubrey de Grey Discussion, 4

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
I can’t point you to anything better than what is posted at Alcor’s and CI’s sites, no. Instead let’s look at what you say below. Sure there is an objective, impersonal truth of the matter about the current state of any particular technology. The question is whether what we do with that truth should be similarly objective and impersonal, and I don’t think it should. I believe it is OK for people to have different values and priorities, whether it’s concerning the merits of tomato ketchup or the value of life. Therefore, I believe there is a range of legitimate opinions about the justifiability of a given course of action. For sure that range will be finite, i.e. there will be cases where people are not adopting a policy that is consistent with their other beliefs and will be resistant to recognising that fact, but that doesn’t change the fact that there is still that (finite) room for legitimate agreement to disagree. Cryonics is a rather extreme case, because its basis in the prospect of revival in the rather distant future entails so much uncertainty as to the pros and cons. I value my and others’ lives very highly, and I consider it quite likely that the future will be a progressively more fulfilling place to be, so I think signing up for cryopreservation makes sense even if one evaluates the chance of being revived and being glad one had been is quite low (I would probably go as low as 1%, and I definitely think we’re up at at least 10% today, even taking into account the issues we’ve been discussing). But I don’t claim to have an objective, impersonal argument for that 1% - rather, if someone else values life less than I do and/or they are more pessimistic about human progress, and they conclude that their cutoff is 50%, they’re welcome to their opinion. No?
I agree about some scope for people to differ, though I don't think the reasonable range extends to not signing up for cryonics that is 50% likely to work, for people who can afford it.

I, too, value life very highly and expect the future to be dramatically better. I think concerns about e.g. overpopulation and running out of jobs are bad philosophy, both generally (problems are soluble, and we don't have to and shouldn't expect to know all future solutions today) and specifically – I could give arguments on those two issues today. And I'm not worried that I might not be glad to be revived.

But we have a disagreement about methodology and epistemology, which comes up with your comments on percentages.

If I believed cryonics had even 1% odds in a meaningful sense, I'd sign up too. I value my life more than 100x the price. That's easy. An example of meaningful odds would be that for every 1000 people who sign up, 10 will be revived. But it doesn't work that way.

Explanations don't have percentage odds. It's important to look at issues in terms of good and bad explanations, and criticisms, not odds. (You may have some familiarity with this view from David Deutsch, including his criticisms of weighing ideas and of Bayesian epistemology.)

In The Fabric of Reality (FoR), DD uses the example idea that eating grass will cure a cold. Because there's no explanation of how grass does that, he explains that this empirically-scientifically testable idea isn't worth testing. It should be rejected just from the philosophical criticism that it lacks a good explanation.

It shouldn't be assigned a probability either. It's bad thinking, to be rejected as such, unless and until a new idea changes things.

Odds apply to issues like physical events. Odds are a reasonable way to think about the possibility of dying in a plane crash, or other cryo-incompatible deaths. Odds can even somewhat model problems like whether the cryo staff will make a mistake, or whether Alcor stays in business, though there are some problems there.

You could die in a plane crash, or not. It could go either way, so odds make some sense. But either current cryo methods (assume perfusion etc go well) preserve the necessary information, or they don't. That can't go either way, there's a fact of reality one way or the other.

The basic way odds are misused is this: there are multiple rival ideas, and rationally resolving the conflicts between them turns out to be difficult. So people seek ways to retreat from critical discussion and find a shortcut to a conclusion. E.g. a person favors an idea, and there is some idea which contradicts it that he can't objectively refute. Rather than say "I don't know", or figure out how to know, he assigns some odds to his idea, then lowers the odds for each criticism he doesn't refute. But the odds are completely arbitrary numbers and have no bearing on which ideas are correct.

Fundamentally, he's mistaken to take sides when two ideas contradict and he can't refute either one. Often this is done by bias, e.g. favoring the idea he thought of himself, or spent the last five years working on.

A starting point for a cryo explanation is that digging up graves to revive people won't work, due to brain damage (this could be explained in more detail, which I won't go into). There is no good explanation of how it could ever work. This bad explanation isn't worth scientific testing, and should not be assigned any odds.

Freezing people is better than coffins because it preserves more brain matter and prevents a lot of decay, but there's no good explanation that it would work either, because there's so much brain damage. All claims that it would work can be refuted by criticism (in the context of present knowledge). But vice versa doesn't apply: one could write an explanation of why straight freezing won't work for cryo, which would stand up to criticism. (Today. All these things are always tentative, and can be rethought if someone has a new idea.)

That is how issues should be resolved rationally. Get a situation with one explanation that survives criticism, and no rivals that do. Then, while one could still be mistaken, there is a non-arbitrary opportunity to accept the best current knowledge.

This is a Popperian view, which many people disagree with. They're wrong. And all of their arguments have known answers. I can answer any points you're interested in.

Changing subjects briefly, let's apply this to SENS. SENS is the best available knowledge on the issues it addresses, and it should not be dismissed by arbitrarily assigning it odds. Odds are a semi-OK approximation for whether specific already-understood SENS milestones will be done by a particular date, but are not an OK way to judge the truth of the core explanatory ideas of SENS. It's very important to look at SENS in terms of the proposed explanations and criticisms, and actually resolve the conflicts between different ideas (e.g. go through the criticisms of SENS and figure out concretely why each criticism is wrong, rather than be unable to objectively and persuasively answer some criticism but continue anyway. Note that you are able to address EVERY criticism, which makes SENS good, as opposed to other ideas which don't live up to that important standard.)


Finally, today's vitrification processes cause less brain damage than freezing. But still lots of brain damage. So for the same main reason as before (lots of brain damage prevents reviving), cryonics won't work (until there's better technology).

Either this is the best available explanation, or there is information somewhere refuting it, or there is a rival for the best explanation that's also unrefuted. In each case, it's not a matter of odds, and this initial skeptical explanation regarding cryo I've given should stand as the best view on the matter unless there are certain kinds of specific relevant ideas (rivals, criticisms).


Behind statements about odds, there usually are some explanations, but it'd be better to critically discuss them directly.

I'm guessing you may have in mind an explanation something like, "We don't know how much brain damage is too much, and can model this uncertainty with odds." But someone could say the same thing to defend straight freezing or coffins, as methods for later revival, so that can't be a good argument by itself.

To make a rational case for today's cryonics, there has to be some explanation about how much brain damage is too much, why that much, and how vitrification gets over the line (while, presumably, freezing and grave digging don't – though Alcor and CI don't seem to take that seriously, e.g. Alcor has dug up a corpse from a grave and stored it). Well, either there should be an explanation like I said above, or one explaining why that's the wrong way to look at it, and explaining something even better. Without a good explanation, it's the grass cure for the cold again. You may also have in mind some further answers to these issues, but I can't guess them, and if they are good points, then that good content was omitted from the statement of odds.


Finally to put it another way: I don't think people should donate to SENS if the explanations in Ending Aging didn't exist (or equivalent prior material). Those good ideas make all the difference. Without those ideas, a claim that SENS might work (even with only 10% odds) would not suffice. And I don't think cryonics has the equivalent good explanations like SENS. (Though I'd be happy to be corrected if it does have that somewhere.)


If you are interested, I will write more explaining the philosophy here. Actually I did write more and deleted it, to keep things briefer. Epistemology, btw, is my chosen specialty. (I don't want any authority, I just think it's relevant to mention.)

Continue reading the next part of the discussion.


Aubrey de Grey Discussion, 5

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
I’ve been completely unable to get my head around what [David Deutsch] says about explanations, and you’ve reawakened my confusion.

Essentially, I think I agree that there are no probabilities in the past, which I think is your epistemological point, but I don’t see how that matters in practice - in other words, how we can go wrong by treating levels of confidence as if they were probabilities.
That thing about the past isn't my point. My point is there are probabilities of events (in physics), but there are no probabilities that ideas are true (in epistemology). E.g. there is a probability a dice roll comes up 4, but there isn't a probability that the Many-Worlds Interpretation in physics is true – we either do or don't live in a multiverse.

So a reference to "probability" in epistemology is actually a metaphor for something else, such as my confidence level that the Many-Worlds Interpretation is true. This kind of metaphorical communication has caused confusion, but isn't a fundamental problem. It can be understood.

The bigger problem is that using confidence levels is also a mistake.

Below I write brief replies, then discuss epistemology fundamentals after.
The ultimate purpose of any analysis of this kind - whether phrased in terms of probabilities, parsimony of hypotheses, quality of explanations, whatever - is surely to determine what one should actually do in the face of incomplete information.
I agree with decision making as a goal, including decisions about mental actions (e.g. deciding what to think about a topic).
So, when you say this:
I'm guessing you may have in mind an explanation something like, "We don't know how much brain damage is too much, and can model this uncertainty with odds." But someone could say the same thing to defend straight freezing or coffins, as methods for later revival, so that can't be a good argument by itself.
I don’t get it. The amount of damage is less for vitrification than for freezing and less for freezing than for burial. So, the prospect of revival by a given method is less plausible (why not less “probable”?) for burial than freezing than vitrification.
I explain more about my intended point here at footnote [1] below.

I agree that changing "probable" to "plausible" doesn't change much. My position is a different epistemology, not a terminology adjustment.
But, when we look at a specific case (e.g. reviving a vitrified person by melting, or a frozen person by uploading), we need to look at all the evidence that we may think bears on it - the damage caused by fracturing, for example, and on the other side the lack of symptoms exhibited by people whose brain has been electrically inactive for over an hour due to low temperature. Since we know we’re working in the context of incomplete information, and since we need to make a decision, our only recourse is to an evaluation of the quality of the explanations (as you would say it - I rather prefer parsimony of hypotheses but I think that’s pretty nearly the same thing).
I actually wouldn't say that.

My approach is to evaluate explanations (or more generally ideas) as non-refuted or refuted. One or the other. This is a boolean (two-valued) evaluation, not a quantity on a continuum. Examples of continuums would be amount of quality, amount of parsimony, confidence level, or probability.

These boolean evaluations, while absolute (or "black and white") in one sense, are tentative and open to revision.

In short: either there is (currently known) a criticism of an idea, or there isn't. This categorizes ideas as refuted or not.

Criticisms are explanations of flaws ideas have – explanations of why the idea is wrong and not true. (The truth is flawless.)

Issues like confidence level aren't relevant. If you can't refute (explain a problem with) either of two conflicting ideas, why would you be more confident about one than the other?

When dealing with a problem, the goal is to get exactly one non-refuted idea about what to do. Then it's clear how to act. Act on the idea with no known flaws (criticisms) or alternatives.

Since this idea has no rivals, amount of confidence in it is irrelevant. There's nothing else to act on.

There are complications. One is that criticisms can be criticized, and ideas are only refuted by criticisms which are, themselves, non-refuted. Another is how to deal with the cases of having multiple or zero non-refuted ideas. Another is that parsimony or anything else is relevant again if you figure out how to use it in a criticism in order to refute something in a boolean way.
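To make the boolean approach more concrete, here is a minimal sketch (an illustration only, not a formal definition – the idea names are made up). The key point it shows: an idea counts as refuted only if at least one criticism of it is itself non-refuted, and since criticisms are ideas too, the check is recursive.

```python
# Minimal sketch of boolean idea evaluation: an idea is refuted if and only if
# it has at least one criticism which is itself currently non-refuted.
# Criticisms are ideas too, so evaluation is recursive. (Cycles of mutual
# criticism aren't handled in this toy version.)

class Idea:
    def __init__(self, statement):
        self.statement = statement
        self.criticisms = []  # list of Idea objects criticizing this one

    def is_refuted(self):
        return any(not c.is_refuted() for c in self.criticisms)

grass_cure = Idea("Eating grass cures the common cold")
no_explanation = Idea("There is no explanation of how grass could cure a cold")
grass_cure.criticisms.append(no_explanation)

print(grass_cure.is_refuted())      # True: one non-refuted criticism exists
print(no_explanation.is_refuted())  # False: no criticisms of the criticism
```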
And the thing is, you haven’t proposed a way to rank that quality precisely, and I don’t think there is one. I think it is fine to assign probabilities, because that’s a reflection of our humility as regards the fidelity with which we can rank one explanation as better than another.
I think there's no way to rank this, precisely or non-precisely. Non-refuted or refuted is not a ranking system.

I don't think rankings work in epistemology. The kind of rankings you're talking about would use a continuum, not a boolean approach.

I provide an explanation about rankings at footnote [2], with cryonics examples.

The fundamental problem in epistemology is: ideas conflict with each other. How should people resolve these conflicts? How should people differentiate and choose between ideas?

One answer would be: whenever two ideas conflict, at least one of them is false. So resolve conflicts by rejecting all false ideas. But humans are fallible and have incomplete information. We don't have direct access to the truth. So we can't solve epistemology this way.

The standard answer today, accepted by approximately everyone, is so popular it doesn't even have a name. People think of it as epistemology, rather than as a particular school of epistemology. It involves things like confidence levels, parsimony, or other ranking on continuums. I call it "justificationism", because Popper did, and because of the mistaken but widespread idea that "knowledge is justified, true belief".

Non-justificationist epistemology involves differentiating ideas with criticism (a type of explanation) and choosing non-refuted ideas over refuted ideas. Conflicts are resolved by creating new ideas which are win/win from the perspectives of all sides in the conflict.

Standard "Justificationism" Epistemology

This approach involves choosing some criteria for amount of goodness (on a continuum) of ideas. Then resolving conflicts by favoring ideas with more goodness (a.k.a. justification).

Example criteria of idea goodness: reasonableness, logicalness, how much sense an idea makes, Occam's Razor, parsimony, amount and quality of supporting evidence, amount and quality of supporting arguments, amount and quality of experts who agree, degree of adherence to scientific method, how well it fits with the Bible.

The better an idea does on whichever criteria a particular person accepts, the higher goodness he scores (a.k.a. ranks) that idea as having. If he's a fallibilist, this scoring is his best but fallible judgment using what he knows today; it can be revised in the future.

There are also infallibilists who think some arbitrary quantity of goodness (justification) irreversibly changes an idea from non-good (non-justified) to good (justified). In other words, once you prove something, it's proven, the end. Then they say it's impossible for it to ever be refuted. Then when it's refuted, they make excuses about how it was never really proven in the first place, but their other ideas still really are proven. I won't talk about infallibilism further.

This goodness scoring is discussed in many ways like: justification, probability, confidence, plausibility, status, authority, support, verification, confirmation, proof, rationality and weight of the evidence.

Individual justificationists vary in which of these they see as good. Some reject the words "authority" or even "justification".

So both the criteria of goodness, and what they think goodness is, vary (which is why I use the very generic term "goodness"). And justificationists can be fallibilists or infallibilists. They can also be inductivists or not, and empiricists or not. For example, they could think inductive support should raise our opinion of how good (justified) ideas are, but alternatively they could think induction is a myth and only other methods work.

So what's the same about all justificationists? What are the common points?

Justificationists, in some way, try to score how good ideas are. That is their method of differentiating ideas and choosing between ideas.

One more variation: justificationists don't all use numerical scores. Some prefer to say e.g. "pretty confident" instead of "60% confident", perhaps because they think 60% is an arbitrary number. If someone thought the 60% was literal and exact, that'd be a mistake. But if it's understood to be approximate, then using an approximate number makes no fundamental difference over an approximate phrase. Using a number can be a different way to communicate "pretty confident".

Popper refuted justificationism. This has been mostly misunderstood or ignored. And even most Popperians don't understand it very well. It's a big topic. I'll briefly indicate why justificationism is a mistake, and can explain more if you ask.

Justificationism is a mistake because it fundamentally does not solve the epistemology problem of conflicts between ideas. If two ideas conflict, and one is assigned a higher score, they still conflict.

Other Justificationism Problems

Justificationism is anti-critical because instead of answering a criticism, a justificationist can too easily say, "OK, good point. I've lowered my goodness (justification) score for this idea. But it had a lead. It's still winning." (People actually say it less clearly.) In this way, many criticisms aren't taken seriously enough. A justificationist may have no counter-argument, but still not change his mind.

Justificationism is anti-explanatory, because scores aren't explanations.

Another issue is combining scores from multiple factors (such as parsimony and scientific evidence, or evidence from two different kinds of experiments) to reach a single final overall score. This doesn't work. A lot about why it doesn't work is explained here: http://www.newyorker.com/magazine/2011/02/14/the-order-of-things
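As a toy illustration of the score-combining problem (the numbers and idea names below are made up, not from the article): the "overall" winner is determined by the factor weights, and nothing about the ideas themselves tells you what the weights should be.

```python
# Two ideas scored on two factors. Which one "wins" depends entirely on the
# arbitrary choice of weights used to combine the factor scores.
ideas = {
    "idea A": {"parsimony": 9, "evidence": 4},
    "idea B": {"parsimony": 3, "evidence": 8},
}

def overall(scores, weights):
    return sum(scores[factor] * weight for factor, weight in weights.items())

for weights in ({"parsimony": 0.7, "evidence": 0.3},
                {"parsimony": 0.3, "evidence": 0.7}):
    winner = max(ideas, key=lambda name: overall(ideas[name], weights))
    print(weights, "->", winner)
# The first weighting picks "idea A"; the second picks "idea B".
```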

One might try using only one criterion to avoid combining scores. But that's too limited. And then you have to ignore criticism. For example, if the one single criterion is parsimony, the score can't be changed just because someone points out a logical contradiction, since that isn't a parsimony issue. This single criterion approach isn't popular.

There are more problems; I just wanted to indicate a couple.

Popper Misunderstandings

A common misunderstanding is that Popper was proposing new criteria for goodness (justification) such as (amount of) testability, severity of tests passed, how well an idea stands up to criticism, (amount of) corroboration, and (amount of) explanatory power. This is then dismissed as not making a big difference over the older criteria. DD's (David Deutsch's) "hard to vary" can also be misinterpreted as a criterion of goodness (justification).

That's not what Popper was proposing.

Another misunderstanding is that Popper proposed replacing positive justifying criteria with a negative approach. In this view, instead of figuring out which ideas are good by justifying, we figure out which ideas are bad by criticizing (anti-justifying).

This would not be a breakthrough. Some justificationists already viewed justification scores as going both up and down. There can be criteria for badness in addition to goodness. And it makes more sense to have both types of criteria than to choose one exclusively.

This wasn't Popper's point either.

Non-Justificationist Epistemology

This is very hard to explain.

Fundamentally, the way to (re)solve a conflict between ideas is to explain a (win/win) (re)solution.

This may sound vacuous or trivial. But it isn't what justificationism tries to do.

It's similar to the point in BoI (The Beginning of Infinity) that what you need to solve a problem is knowledge of how to solve it.

How are (re)solutions found? There's many ways to approach this which look very different but end up equivalent. I'm going to focus on an arbitration model.

Think of yourself as the arbiter, and the conflicting ideas as the different sides in the arbitration. Your goal is not to pick a winner. That's what justificationism does. Your goal as arbiter, instead, is to resolve the conflict – help the sides figure out a win/win outcome.

This arbitration can involve any number of sides. Let's focus on two for simplicity.

Both sides in the conflict want some things. Try to figure out a new idea so that they both get what they want. E.g. take one side's idea and modify it according to some concerns of the other side. If you can do this so everyone is happy, you have a non-refuted idea and you're done.

This can be hard. But there are techniques which make solutions always possible using bounded resources.

DD would call this arbitration "common preference finding", and has written a lot about it in the context of his Taking Children Seriously. He's long said and argued e.g. that "common preferences are always possible". A common preference is an outcome which all sides prefer to their initial preference – wholeheartedly with no regrets, downsides, compromises or sacrifices. It's strictly better than alternatives, not better on balance.

In BoI, DD writes about problems being soluble – and what he means by solutions is strictly win/win solutions which satisfy all sides in this sort of arbitration.

An arbitration tool is new ideas (which are usually small modifications of previous ideas). For example, take one side's idea but modify a few parts to no longer conflict with what the other side wants.

As long as every side wants good things, there is a solution like this to be found. Good things don't inherently conflict.

Sometimes sides want bad things. This can either be an honest mistake, or they can be evil or irrational.

If it's an honest mistake, the solution is criticism. Point out why it seems good but is actually bad. Point out how they misunderstood the implications and it won't work as intended. Or point out a contradiction between it and something good they value. Or point out an internal contradiction. Analyze it in pieces and explain why some parts are bad, but how the legitimate good parts can be saved. When people make honest mistakes, and the mistake is pointed out, they can change their mind (usually only partially, in cases where only part of what they were saying was mistaken).

How can a side be satisfied by a criticism/refutation? Why would a side want to change its mind? Because of explanations. A good criticism points out a mistake of some kind and explains what's bad about it. So the side can be like, "Oh, I understand why that's bad now, I don't want that anymore." Good arguments offer something better and make it accessible to the other side, so they can see it's (strictly) better and change their mind with zero regrets (conflict actually resolved).

If there is an evil or irrational mistake, things can go wrong. Short answer: you can't arbitrate for sides which don't want solutions. You can't resolve conflicts with people who want conflict. Rational epistemology doesn't work for people/sides/ideas who don't want to think rationally. But one must be very careful to avoid declaring one's opponents irrational and becoming an authoritarian. This is a big issue, but I won't discuss it here.

Arbitration ends when there's exactly one win/win idea which all sides prefer over any other options. There are then no (relevant to the issue) conflicts of ideas. (DD would say no "active" conflicts). Put another way, there's one non-refuted idea.

Arbitration is a creative process. It involves things like brainstorming new ideas and criticizing mistakes. Creative processes are unpredictable. A solution could take a while. While a solution is possible, what if you don't think of it?

Reasonable sides in the arbitration can understand resource limits and lower expectations when arbitration resources (like time and creative energy) run low. They can prefer this, because it's the objectively best thing to do. No reasonable party to an arbitration wants it to take forever or past some deadline (like if you're deciding what to do on Friday, you have to decide by Friday).

When the sides in a conflict are different people, the basic answer is the more arbitration gets stuck, the less they should try to interact. If you can't figure out how to interact for mutual benefit, go your separate ways and leave each other alone.

With a conflict between ideas in one person, it's trickier because they can't disengage. One basic fact is it's a mistake to prefer anything that would prevent a solution (within available resources) – kind of like wanting the impossible. The full details of always succeeding in these arbitrations, within resource limits, are a big topic that I won't include here.

How do justificationists handle arbitrations? They hear each side and add and subtract points. They tally up the final scores and then declare a winner. The primary reason the loser gets for losing is "because you scored fewer points in the discussion". The loser is unsatisfied, still disagrees, and there's still a conflict, so the arbitration failed.

Here's a different way to look at it. Each side in arbitration tries to explain why its proposal is ideal. If it can persuade the other side, the conflict is resolved, we're done. If it can't, the rational approach is to treat this failure to persuade as "huh, I guess I need better ideas/explanations" not as "I have the truth, but the other guy just won't listen!"

In other words, if either side has enough knowledge to resolve the conflict, then the conflict can be resolved with that knowledge. If neither side has that, then both sides should recognize their ideas aren't good enough. Both sides are refuted and a new idea is needed. (And while brilliant new ideas to solve things are hard to come by, ideas meeting lowered expectations related to resource limits are easier to create. And it gets easier in proportion to how limited resources are, basically because it's a mistake to want the impossible.)

Justificationism sees this differently. It will try to pick a winner from the existing sides, even when (as I see it) they aren't good enough. As I see it, if the existing sides don't already offer a solution (and only a fully win/win outcome is a solution), then the only possible way to get a solution is to create a new idea. And if any side doesn't like it (setting aside evil, irrationality, not wanting a solution, etc), then it isn't a solution, and no amount of justifying how great it is could change that.


To relate this back to some of the original topics:

The arbitration model doesn't involve confidence levels or probabilities. Ideas have boolean status as either win/win solutions (non-refuted), or not (refuted), rather than a score or rank on a continuum. Solutions are explanations – they explain what the solution is, how it solves the problem(s), what mistakes are in all attempted criticisms of this solution, why it's a mistake to want anything (relevant) that this solution doesn't offer, why the things the solution does offer should be wanted, and so on. Explanation is what makes everything work and be appealing and allows conflicts to be resolved.

Final Comments

I don't expect you to understand or agree with all of this. Perhaps not much, I don't know. To discuss hard issues well requires a lot of back-and-forth to clear up misunderstandings, answer questions and objections, etc. Understanding has to be created iteratively (Popper would say "gradually" or "piecemeal").

I am open to discussing these topics. I am open to considering that I may be wrong. I wouldn't want a discussion to assume a conclusion from the start. I tried to explain enough to give some initial indication of what my epistemology is like, and some perspective about where I'm coming from.

Footnotes

[1]

My point was, whatever your method for preserving bodies, you could assign it some odds, arbitrarily. You could say cremation causes less damage than shooting bodies into the sun, so it has better revival odds. And then pick a small number for a probability. You need to have an argument regarding vitrification that couldn't be said by someone arguing for cremation, burial or freezing.

There should be something to clearly, qualitatively differentiate cryonics from alternatives like cremation. Like it should differentiate vitrification not as better than cremation to some vague degree, but as actually on a different side of a reasonably explained might-work/doesn't-work line.

Here's an example of how I might argue for cryonics using scientific research.

Come up with a measure of brain damage (hard) which can be measured for both living and dead people. Come up with a measure of functionality or intelligence for living people with brain damage (hard). Find living brain damaged people and measure them. Try to work out a bound, e.g. people with X or less brain damage (according to this measure of damage) can still think OK, remember who they are, etc.

Vitrify some brains or substitutes and measure damage after a suitable time period. Compare the damage to X.

Measure damage numbers for freezing, burial and cremation too, for comparison. Show how those methods cause more than X damage, but vitrification causes less than X damage. Or maybe the empirical results come out a different way.

Be aware that when doing all this, one would be using many explanations as unconscious assumptions, background knowledge, explicit premises, and so on. Expose every part of this to criticism, and for each criticism write an explanation addressing it or modify the view.

Then someone would be in a position to make a non-arbitrary claim favorable to cryonics.

This is not the only acceptable method, it's one example. If you could come up with some other method to get some useful answers, that's fine. You can try whatever method you want, and the only judge is criticism.
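As a toy illustration of the comparison step in the example method above (all the numbers and the damage measure are hypothetical; this sketches the kind of qualitative, non-arbitrary comparison meant, not real data):

```python
# Hypothetical comparison: given a measured bound X below which brain-damaged
# people demonstrably still think OK, check which preservation methods stay
# under it. The point is a qualitative might-work/doesn't-work line, not a
# ranking. All numbers below are invented for illustration.
X = 0.30  # hypothetical maximum tolerable damage on some agreed measure

measured_damage = {
    "vitrification": 0.22,
    "freezing": 0.55,
    "burial": 0.90,
    "cremation": 1.00,
}

for method, damage in measured_damage.items():
    verdict = "under the bound (might work)" if damage < X else "over the bound (doesn't)"
    print(f"{method}: {damage:.2f} vs {X:.2f} -> {verdict}")
```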

But something I object to is assigning probabilities, or any kind of evaluations, without a clear method and explanation of it. (E.g. where does your 10% for cryo come from? Where does anyone's positive evaluation come from?)

I don't think it's reasonable for Alcor or CI to ask people to pay 5-6 figures without first having a good idea about how to judge today's cryonics (like my example method). And from a decision making perspective, I expect people asking for lots of money – and saying they can perform a long term service for me in a reliable way – should have some basic competence and reasonable explanations about their stuff. But instead they put this on their website:

http://www.alcor.org/Library/html/CaseForWholeBody.html

It offers a variation on Pascal's Wager to argue for full-body cryo over neuro (basically, get full body just in case it's necessary for cryo to work). No comment is made on whether we should also believe in God due to Pascal's Wager. And it states:
Now, what if we would relax our assumptions a little and allow for some degree of ischemia or brain damage during cryopreservation? It strikes us that this further strengthens the case for whole body cryopreservation because the rest of the body could be used to infer information about the non-damaged state of the brain, an option not available to neuropatients.
No. I'm guessing you also disagree with this quote, so I won't argue unless you ask.

There are some complications like maybe Alcor is confused but today's cryonics works anyway. I won't go into that now.


[2]

We can, whenever we want, create ranking systems which we think will be useful for some purpose (somewhat like defining new units of measurement, or defining new categories to categorize stuff with).

The judge of these inventions is criticism. E.g. someone might criticize a ranking system by pointing out why it isn't effective for its intended purpose.

Concretely, we could rank body preservation methods by the amount of brain damage after 10 years. Then, in that system, we'd rank vitrification > freezing > burial > cremation.

Whether this is useful depends on context (which Popper calls the problem situation). What problem(s) are we trying to solve? Do we have a non-refuted idea for how to use the ranking in any solutions?

Our example ranking system has some relevance to people who consider brain damage important, but not to people who believe the goal should be to preserve the soul by using the most holy methods. They'd want to rank by holiness, and might rank vitrification last.

This is important because the rankings only matter in the context of some explanations of how they matter and for what (which must deal with criticism).

So ranking is secondary to explanation. It can't come first. This makes ranking unsuited for dealing with epistemology issues such as how to decide which explanations to accept in the first place.

In summary, we can make something up, argue why it's effective for a purpose, and if our argument is successful then we can use it for that purpose. This works with rankings and many other things.

But this is different than epistemology rankings, like trying to rank how good ideas are, or how probable, or how high quality of explanations they are.

Or put another way: to rank those things, you would have to specify how that ranking system worked, and explain why the results are useful for what. That's been tried a lot. I don't think those attempts have succeeded, or can succeed.

Continue reading the next part of the discussion.


Aubrey de Grey Discussion, 6

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
In a nutshell, I think most of what you’ve written here comes down to something I already entirely agree with, namely that any kind of ranking of competing ideas is inferior to the identification of a win-win. You don’t need to persuade me of that.
OK, but I have stronger claims:

1) All human choices can and should be made using the win/win arbitration approach. It is the only method of rational thinking.

2) Justificationism doesn't work at all, and has zero value as an alternative method.
My preferred way of looking at this is that identifying a win/win is the extreme case of choosing by ranking, in rather the same sense that Popperian decision-making is the limiting case of Bayesian decision-making. But I mention that only for clarification; if you think it’s wrong, do tell me, but let’s not spend too much time on that (not yet anyway) because I don’t think it affects the rest of what I want to say.
I don't agree that Bayesian epistemology has any value. OK I won't argue that now. Though FYI DD's latest blog post is "Simple refutation of the ‘Bayesian’ philosophy of science":

http://www.daviddeutsch.org.uk/2014/08/simple-refutation-of-the-bayesian-philosophy-of-science/
My problem comes down to the impracticality of the arbitration approach. I can certainly believe that all conflicts can reliably be resolved in bounded time, but as you say, we have the problem of needing to make a decision now (or soon), not in 10000 years.
I'm glad to hear that. It's a big point of agreement. Most people think some problems aren't solvable, and some human conflicts don't have any possible win/win outcomes.

I meant the arbitration approach can always be done within real life time limits. Or at least in scenarios where you have some time to think. For a starting point, let's limit discussion to cases where the time limit is at least an hour. And definitely not worry about the 5 millisecond case.

In Oct 2002, I made a similar objection to yours. DD answered why common preference finding (a.k.a. win/win arbitration) doesn't require infinite creativity.
... the finding of a common preference does not entail finding the solution to any particular problem.

The economy does not require infinite creativity to grow. Particular enterprises fail all the time. Particular inefficiencies may remain unimproved for long periods. The economy as a whole may have brief hitches where mistakes have been made and have to be undone; but if it stagnates to the extent of failing to innovate, there is a reason. It's not just 'one of those things'. The reason has nothing to do with there being a glut of nautiluses on the market, but is invariably caused by someone (usually governments, but in primitive societies also parents) forcibly preventing people from responding to market forces. Stagnation is not a natural state in a capitalist economy; it has to be caused by force.

Science does not require infinite creativity to make new discoveries. Particular lines of research fail all the time but where science as a whole has ceased to innovate it is never because the whole scientific community has turned its attention to the nautilus but invariably because someone (governments and/or parents) has forcibly prevented people from behaving according to the canons of scientific rationality.

An individual personality does not require infinite creativity to grow. Particular a priori wants go unmet all the time, and large projects also fail and sometimes a person has a major life setback. But if they get stuck to the extent of failing to innovate it is not because they have spontaneously wandered into a state where their head resembles a nautilus but because someone has forcibly thwarted them once (or usually a thousand times) too often.

...

... problems can be continually solved without infinite creativity, without perfect rationality, and without relying on any particular problem being solved by any particular time. And that is sufficient for -- in fact it is what *constitutes* -- economic growth, scientific progress, and human happiness.
Why DD equates innovation with win/win arbitration isn't explained here. One way to understand it is that we consider win/win arbitration to be the only epistemological method capable of creating knowledge, solving problems, making progress/innovation, etc.

The point that problem solving (or conflict resolution) in general doesn't require solving any particular problem is very important. That's what allows fast solutions.

What you can do is ask questions in arbitration like, "Given we think we won't solve problems X, Y and Z within our resource constraints, what should we do?" That question can be answered without solving problems X, Y or Z, and its answer can be a successful win/win arbitration outcome.

As with everything, it's open to criticism, e.g. a side might think X actually can be solved within the resource constraints. Then all sides might be able to agree, for example, to try to solve X, but also to set up a backup plan in case that doesn't work.

If an arbitration seems particularly hard relative to the resources available, a longer exclusion list can be proposed. By setting things aside as necessary, arbitration can succeed in the short term.
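Here is a rough sketch of the shape of such an arbitration (my own rendering; the helper functions are hypothetical stand-ins for the creative and critical work involved): keep generating candidate ideas, set problems aside as resources shrink, and stop only when some candidate is a win/win for every side.

```python
# Rough sketch of win/win arbitration under resource limits. brainstorm and
# prefers are hypothetical stand-ins: brainstorm creates new candidate ideas
# (usually small modifications of existing ones); prefers(side, idea, excluded)
# says whether a side has no remaining criticism of the idea, given the
# problems everyone has agreed to set aside for now.

def arbitrate(sides, initial_ideas, brainstorm, prefers, budget):
    candidates = list(initial_ideas)
    excluded = []  # problems set aside as resources run low
    for step in range(budget):
        for idea in candidates:
            if all(prefers(side, idea, excluded) for side in sides):
                return idea  # win/win: no side has an outstanding criticism
        if step >= budget // 2:
            excluded.append(f"hardest remaining problem at step {step}")
        candidates = brainstorm(candidates, sides, excluded)
    return None  # not expected if the sides want a solution and lower expectations
```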


I also have a bunch of writing on this topic. E.g.:

http://fallibleideas.com/avoiding-coercion

And I gathered multiple links at:

http://curi.us/1595-rationally-resolving-conflicts-of-ideas

There's a lot. One reasonable way to approach this is to read things until you find a specific point of disagreement or two, then comment. Maybe just the material in this email is enough.

I'm providing the links partly so that, if you like reading this material, it's available to you. But if you prefer a more back-and-forth approach, that's fine with me. I like writing.
So we should absolutely put some effort into looking for a resolution, but the amount of effort we should put in before we throw in the towel and retreat to ranking is a trade-off between our commitment to making the right choice and our urgency to make any choice. Just as in life generally, in fact! - and that’s no accident, because even though it seems much more informal, all the decisions we make in life are subject to the same epistemic logic concerning science that you are setting out. The perfect is the enemy of the good, and all that.
I didn't intend to limit stuff to science. Yes, epistemology applies to the whole of life.

I would say more like, "wanting the impossible is an enemy of the good". But I'd be cautious because people often underestimate what's possible (e.g. with SENS).
Moreover, it turns out that the arbitration approach is considerably more impractical in some areas than in others, and biology is a particularly impractical one - basically because the complexity of the system under discussion and the depth of our ignorance of its details lead to the arising of lots of very similarly-ranked (by Occam’s razor, for example) conflicting ideas.

In a way, it seems to me that you’re describing arbitration rather in the way that mathematics works. A mathematical proof is (so I’m told, and it makes sense to me) no more nor less than an argument that other mathematicians find persuasive.
I agree about math being fallible and thinking of math proofs as arguments.

Lots of people think math proofs are infallible. DD criticized that in The Fabric of Reality.
So the discussion of a proposed proof is a process of arbitration between the belief that the conjecture is open and the belief that it is resolved (say, that it is true). And we find that mathematics lies at the opposite extreme from biology in terms of practicality: mathematicians tend to be able to agree really quite quickly whether a candidate proof holds water.
To be clear about my stronger claims above: I don't think the field affects arbitration practicality, since arbitration always works.
So, let’s look at your cryonics proposal:
Here's an example of how I might argue for cryonics using scientific research.

Come up with a measure of brain damage (hard) which can be measured for both living and dead people. Come up with a measure of functionality or intelligence for living people with brain damage (hard). Find living brain damaged people and measure them. Try to work out a bound, e.g. people with X or less brain damage (according to this measure of damage) can still think OK, remember who they are, etc.

Vitrify some brains or substitutes and measure damage after a suitable time period. Compare the damage to X.

Measure damage numbers for freezing, burial and cremation too, for comparison. Show how those methods cause more than X damage, but vitrification causes less than X damage. Or maybe the empirical results come out a different way.
I would assert that you make my case extremely well. Consider your first two steps, coming up with these measures. It’s actually really easy to come up with such measures - lots and lots of alternative ones.
It's easy to come up with bad measures. For good measures, I'm not convinced.

Part of my perspective on this has to do with how bad IQ tests and school tests are, and the great difficulty of doing better.
The only way to decide which to use is to (gasp) rank them, according to your third step, testing their correlation with function.
That isn't the only way. You could come up with an explanation of what measure you should use, and why, and expose it to criticism.

It's very important to consider explanations. E.g. percentage of undamaged brain cells could be tried in a measure because we have an explanatory understanding that more undamaged cells is better. And we might modify the measure due to the locations of damaged cells, because we have some explanatory understanding about what different regions of the brain do and which regions are most important. It'd be a mistake to try arbitrary things as a measure and then look for correlations.


Typical correlation approaches are bad science because they are explanationless. If one does have an explanation, that explanation should be primary. An explanation can reference a correlation and explain why it matters, and only then would a correlation matter.

Correlation is a big topic. I think we should focus more on arbitration. But here's an initial explanation of correlation related problems.

Summary: explanationless correlation approaches to science are the same kind of thing as induction.

There are infinitely many correlations out there. What people do is find and focus on a small number of correlations, and pay selective attention to those.

The only thing that can make this selective attention reasonable is an explanation. And it should be a clear, explicit explanation that's exposed to criticism, not an unstated one that secretly governs which correlations get attention.

I think about this in a more general way which might be helpful. A correlation is a type of pattern. There are infinitely many patterns in the world you could find, most meaningless, and they only matter when there's an explanation that they do.

And there's also the problem that if you find a sequence, e.g. "2,2,2,2,2" and you think it's a pattern, you actually have no knowledge of how it will continue unless you have an explanation. Which brings us to induction, because dealing with sequences like this and saying "oh it's going to be 2 next" – without an explanation – is a major inductivist activity. If the sequence is over time, the inductivist might add, "the future is likely to resemble the past".

Similarly if you find X correlates with Y during a particular time period, the assumption they will continue to correlate in a different future time period – without explanation – is basically "the future is likely to resemble the past", a.k.a. induction.

Selective attention is also a feature of induction. Inductivists look at evidence and notice it's consistent with several ideas of interest to them. But they don't pay serious attention to the infinitely many other ideas that evidence is equally consistent with. And some of those ignored ideas, which are equally "supported" by the evidence, contradict the ideas getting their selective attention.
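To make the "infinitely many correlations" point concrete, here's a minimal Python sketch (purely illustrative; the data is made-up random noise, not anything real). If you search enough candidate data series, some will correlate strongly with your target by pure coincidence, which is why a correlation only matters when an explanation says it does.

    import random

    def corr(xs, ys):
        # Pearson correlation of two equal-length lists
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx ** 0.5 * vy ** 0.5)

    # one "target" series and thousands of unrelated candidates, all pure noise
    target = [random.gauss(0, 1) for _ in range(20)]
    candidates = [[random.gauss(0, 1) for _ in range(20)] for _ in range(10000)]

    best = max(abs(corr(target, c)) for c in candidates)
    print(best)  # usually above 0.7, despite the data being pure noise

Picking out whichever correlation happens to come up strongest, with no explanation of why it should matter, is exactly the selective attention described above.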


A further issue is that context matters. You can only understand what would be a significant change in circumstances (such that one wouldn't expect a correlation or pattern to continue) via an explanatory understanding of what context is relevant and what would be a significant change.

On a related note, suppose a ranking system is developed for something, and even assume it's good. How do you know if it's still applicable when dealing with anything that isn't absolutely literally 100% identical to the original context? How do you know which changes matter? How do you know if which country you're in is part of the relevant context that can't be changed? How do you know if the calendar year is part of the relevant context that can't be changed? Only by explanation. Only by understanding why the ranking system works can you tell what changes would mess that up and what changes wouldn't.

And how can you judge explanations and decide which ones are good? The win/win arbitration method.
Er, but there are loads of ways to test function too, so any such ranking (even setting aside the precision of measurement and such like) is only finitely reliable. The rest of what you say would be fine if we really could come up with a way to define and then measure brain damage that was unequivocally 100% reliable - but unfortunately, in the real world with the time we have, we can’t do that. So, we have no choice but to survey our various options for the measure of damage and function and the measurability of those measures, rank them according to something or other, and make our decision as to whether cryonics is worth doing on that basis - but, do so using some probability threshold of how likely we need it to be to work in order to justify the expense, so as to incorporate our uncertainty as to whether we have measured the brain damage correctly and accurately. If we can’t successfully perform your first steps, we have no right to proceed as if we had performed all steps - which is precisely what you’re doing by rejecting the (admittedly inferior, but doable) ranking approach and just subjectively saying you don’t think the available data justify spending that much money.

Tell me what’s wrong with the above.
Regarding rankings, they are OK when you have an explanation of why a particular ranking system will get you a good answer for a particular problem. In other words, deciding to use that ranking system for that purpose is the outcome of a win/win arbitration. If you don't have that, rankings are arbitrary.

The rankings could be fully arbitrary. Or they could have some reasons, but arbitrarily ignore some criticism or problem. (If no criticism or problem was being irrationally ignored, then it would be a win/win arbitration outcome). Another common approach to rankings is to intentionally design the ranking system so it reaches a predetermined conclusion which people already think is plausible, not arbitrary.

My main point here is that if they haven't done my proposal, they should have done something else with an explanation of why it makes sense. They have to do something, have some explanation, some knowledge.

They actually do have basic explanations, e.g. I've read one of them saying that vitrified brains look pretty OK, not badly damaged, to the unaided human eye. The implication is damage that's hard to see is small, so cryopreservation works well. This is a bad argument, but it's the right type of thing. They need this type of thing, but better, before anyone should sign up.

I think you have in your mind some explanations of the right type, but haven't said them because of your methodology that doesn't emphasize explanation as I do. So I don't know how good they are.

In footnote [1], I comment on a couple of cryo papers and information about fracturing.


I also have a second way for judging Alcor and CI specifically. Consider the explanation, "Preserving people for much later revival is a very hard problem. Hard problems like this don't get solved by accident by irrational and incompetent methods, they require things like scientific or intellectual rigor."

As usual, one can't explain everything at once. This explanation leads to further questions like why people don't accidentally solve hard problems. An important thing about explanation, persuasion and win/win arbitration is you only have to satisfy objections that any side cares to make, not all possible objections. If no one thinks an objection is good, don't worry about it. Yes you could miss something important, but there are always infinitely many possible objections and you can't answer all of them, you have to go by the best knowledge anyone has of which are important, and if mistakes are made due to ignorance, so be it, that's not always avoidable.

(Explanations sometimes answer infinite categories of objections. But to answer literally all possible objections would basically require omniscience.)

Another aspect I didn't explain here is how incompetent and irrational Alcor and CI are. But I did give an initial explanation of that previously. And I have in my mind more extensive explanation of it, if you raised objections to my initial explanation.

A reason bad people don't solve hard problems is because mistakes and problems are inevitable, so there has to be rational problem solving and mistake-correcting taking place or else advanced stuff will never work. Since I don't see Alcor and CI doing a decent job with that, I don't think their service works.

[1]

http://198.170.115.106/reports/Scientific_Justification.pdf
A rabbit kidney has been vitrified, cooled to -135C, re-warmed and transplanted into a rabbit.
Rabbit was fine. Cool.
When cooling from -130C to -196C thermal stress on large solid vitrified samples can cause cracking and fracturing.
But rabbit kidney was not cooled to the relevant colder temperatures. This has footnote 27.
Due to its more well-defined nature, cracking damage may be much easier to repair than freezing damage.
This is too vague, plus doesn't say anything about how much damage there is. It has no footnote. Paper lacks better information than this about fracturing damage issues.

Footnote 27:

http://www.sciencedirect.com/science/article/pii/0011224090900386

One of their main conclusions given in the abstract:
fracturing depends strongly on cooling rate and thermal uniformity
So one question one might have is: what cooling rates do Alcor and CI use? How much thermal uniformity do they achieve? But to my knowledge they don't carefully measure that kind of information, or even use sufficiently standardized procedures to get consistent results.

Also kind of scary, the 2008 paper is citing information from 1989, rather than more recent information.

Another paper: http://www.lorentzcenter.nl/lc/web/2012/512/problems/4/Long-term%20storage%20of%20tissues%20by%20cryopreservation.pdf

This one has lots of interesting information about why cryonics is hard, and ends by saying, "In summary, we hope to have demonstrated that tissue cryopreservation is a complex problem..." The article can give one a sense of how hard these problems are, and therefore why it takes scientific rigor, top quality knowledge and rational problem-solving ability to succeed at human cryonics. Which Alcor and CI lack.


There's also some information about how bad vitrification damage is here:

http://lesswrong.com/lw/343/suspended_animation_inc_accused_of_incompetence/32pv

It's from an expert and I've found no contrary information. Example statements:
There is no present technology for preserving people in a "fairly pristine state" at cryogenic temperatures. Present cryopreservation technology even under perfect conditions causes biological effects such as toxicity and fracturing that are far more damaging than the types of problems you've expressed concern about.

...

Most cryobiologists would regard the idea of repairing organs that had cracked along fracture planes as preposterous, as I'm sure you do if you believe that 300 mmHg arterial pressure or one hour of ischemia is fatal to a cryonics patient.
In that first quote, we get an actual comparison of vitrification damage to something else. That something else is, "the types of problems you've expressed concern about". Those problems are, from the parent comment:
a bunch of unqualified, overgrown adolescents, who want to play doctor with dead people, while pretending to be surgeons and perfusionists
In summary, Brian Wowk (an expert on Alcor's board of directors) is saying that damage from vitrification, without any errors by cryo personnel, is "far more damaging" than the various horror stories of gross error by cryo personnel. And far more damaging than, e.g., an hour of ischemia.

I'm no expert on this, but trying to look it up, it seems a few minutes of ischemia causes brain damage. And there are explanations for this, e.g. "central neurons have a near-exclusive dependence on glucose as an energy substrate, and brain stores of glucose or glycogen are limited" [2]. Damage far worse than an hour of ischemia sounds to me like cryo's not going to work yet, and I haven't found information to the contrary.

[2] http://www.ncbi.nlm.nih.gov/pmc/articles/PMC381398/

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)

Aubrey de Grey Discussion, 7

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
You’re telling me that that’s not the right way to make a decision, but I’m still not seeing the details of the alternative approach you recommend. Can you please spell it out in similar terms - specifically, in terms that make clear how it can be performed in a chosen amount of time (say, a week)?
This can't be answered completely directly because part of the point is to think about epistemology in a different way. Creative thinking does not follow a specific formula. (Or at least, the formula is complicated enough we don't know all the exact details – or we'd have AGI already.)

Making decisions requires creative thought. The structure of creative thought is: solve problems using the method of guesses and criticism, which leads to a new situation with new problems.

(Guesses and criticism is the only method which creates knowledge. It's literally evolution, which is the only solution ever figured out to the problem of creating knowledge. I'm hoping you have some familiarity with this already from Popper and DD, or I could go into more detail.)

This structure is not a series of steps to be done in order. For example, guesses come before criticism to have something to criticize, but also afterwards to figure out how to deal with the criticism. And criticisms are themselves guesses. And criticisms need their own criticism to find and improve mistakes, or they'll be dumb.

And as one works on this, his understanding of the problem may improve. At which point he's in a new situation which may raise new problems already, before the original problem is resolved.

One can list options like, in response to criticism of a guess: revise understanding of that guess, make brand new alternative guesses, adjust the existing guess not to be refuted, criticize the criticism, or revise understanding of the problem.

But there's no flowchart saying which to do, when. One does one's best. One thinks and uses judgment. But some methods are bad and there's criticisms of them.

The important thing, like Popper explained about democracy, is not so much what one is doing right now, but if and how effectively mistakes are being found and improved.

Everyone has to start where they are. Use the best judgment one has. But improve it, and keep improving it. It's progress that's key. Methods shouldn't be static. Keep a lookout for problems, anything unsatisfactory, and then make adjustments. If that's hard, it's OK, exchange criticism with others whose set of blind spots and mistakes doesn't exactly overlap with one's own.

What if one misses something? That's why it's important to be open to discussion and to have some ways for ideas from the public to reach you. So if anyone doesn't miss it, you can find out. (http://fallibleideas.com/paths-forward) What if everyone misses something? It can happen. Actually it does happen, routinely. There's nothing to be done but accept one's fallibility and keep trying to improve. Continual progress, forever, is the only good lifestyle.


While there isn't a rigid structure or flowchart to epistemology, there is some structure. And there are some good tips. And there are a bunch of criticisms that one should be familiar with and then not do anything they refute.


The win/win arbitration model provides a starting point with some structure. People have an idea of how arbitration works. And they have an idea of how a win/win outcome differs from a compromise or win/lose outcome.

Internal to the arbitration, creative thought (which means guesses and criticism) must be used. How do arbitrations end in time? Participants identify the problem that it might not, guess how to finish in time, and improve those ideas with criticism. That is, in a pretty fundamental way, the basic answer to everything. Whatever the problem is, guess at the solution and improve the guesses with criticism.


This raises questions like:

- what if one can't think of any guesses for something?

- what if one has some bad guesses, but can't think of any criticisms?

- what if one has several guesses and gets stuck deciding between them?

- what if different sides in an arbitration disagree strongly and get stuck?

- what if no one has any ideas for what would be a win/win solution?

- what if the sides in the arbitration keep fighting instead of discussing rationally?

- what if the arbitration runs into resource limits?

- what if there are one or more issues no one has an answer to? How can arbitration work around those?


Rather than a flowchart, epistemology offers answers to all of these questions. Does that make sense? Would you agree that the loose method above, plus answers to all questions like this (and all criticisms) would be sufficient and satisfactory?

If you agree with the approach of addressing those questions (plus you can add some), and it would persuade you, then I'll do that next. Part of the reason the discussion is tricky is because we're starting with different ideas of what the goalposts should be.


I would also like to give more in the way of concrete examples but that's very hard. I can tell you why it's hard and try some examples.

People use these methods, successfully, hundreds of times per day. They get win/win solutions in mental arbitrations, routinely. Most of these are individual, and some are in small groups, and it isn't routine in large groups.

Examples of these come off as trivial. I'll give some soon.

People also get stuck sometimes. And what they really want are examples of how to solve the problems they find hard, get stuck on, and are irrational about. But I can't provide one-size-fits-all generic examples that address whatever individual readers are stuck on. And even if only talking to one person, I'd have to find out what their problems are, and solve them, to provide the desired examples.

If I wasn't concerned about privacy, I could give examples of problems that I had a hard time with, and solved. But it wouldn't do any good. People will predictably react by thinking my solution wouldn't work for them because they are different (true), or that problem I struggled with was always easy for them (common), or knowing my solution to my problem won't solve their problems (true).


Here are some examples of routine win/win arbitrations:

Guy is hungry but doesn't want to miss TV show. Decides to hit pause. Solved. (Other people would grab some food during a commercial. The important thing is the person doing it fully prefers it for their life.)

People want to eat together, but want different types of food. Go to a food court with multiple restaurants. Solved.

Person wants to buy something but hesitates to part with their money. Thinks about how awesome it would be, changes mind, happily buys. Solved.

Person wants to buy something but hesitates to part with their money. Estimates the value and decides it's not actually worth it. Changing mind about wanting it, happily doesn't buy. Solved.

Person wants to find their keys so they can leave the house, but doesn't feel like searching. Thinks about how great the sushi will be, finds he now wants to search for the keys, does so happily. Solved.

Person wants to get somewhere in car but is in unwanted traffic, some part of his personality wants to get mad. He thinks about how getting mad won't help, doesn't get mad.


All life is creative problem solving, and people do it routinely. And people change their mind about things, even emotions, routinely, in a win/win way without regrets or compromise. But people don't find these examples convincing, because they see these examples as unlike whatever they find hard and therefore notable. Or they find some of these hard, e.g. they hate looking for their keys, or have "road rage" problems.


Here's a more complex hypothetical example.

I want to borrow my child's book, which is in the living room, but he's not home. I have conflicting ideas about wanting the book now, but not wanting to disturb his things. While I want to respect his property, that doesn't feel concretely important, so I'm not immediately satisfied. I resolve this by remembering he specifically asked me never to disturb his things after a previous mistake. I don't want to violate that, so I change my attitude and am concretely satisfied that I shouldn't borrow his book, and I'm happy with this result.

I go on to brainstorm what to do instead. I could read a different book. I could buy the ebook from Amazon instantly (many people would consider this absurd, but books are very very cheap compared to the value of getting along slightly more smoothly with one's family). I could write an email instead of reading. I could phone my kid and ask permission.

Here is where examples can get tricky. Which of those solutions do I do? Whichever one I'm happy with. It depends on the exact details of my ideas and preferences. But whichever option works for me might not work so well for a reader imagining themselves in a similar situation. Their problem situation is different than mine, and needs its own creative problem solving applied to it.

And what if I don't like any of these options, can't think of more, and get stuck? Well, WHY? There is some reason I'm getting stuck, and there is information about what the problem is and why I'm stuck. What I should do depends on why I'm stuck. And why you would be stuck in a similar situation won't be the same as why I got stuck. You won't identify with my way of getting stuck, nor with what solutions work to get me unstuck.

So, I decide that phoning is easy, and I don't like giving up without trying when trying is cheap. So I phone.

9/10 times in similar situations with similarly reasonable requests, kid says yes. This time, kid says no.

9/10 scenarios kinda like this where kid says no, I HAPPILY accept this and move on to figuring out what else to do. This is easy to be happy to go along with because I respect (classical) liberal values, and I know there are great options available in life which don't violate them, so I'm not losing out.

1/10 times, I tell my kid how I'm really eager to read the book, and there's no electronic version for sale.

Then, 9/10 times, kid says "oh ok, then go ahead". 1/10 times he still says no.

If he still says no, 9/10 I accept it because I care about respecting his preferences for his property, and I have plenty of alternative ways to have a good day. I want both a good day and to respect his property, and I can have both. And I don't want to be pushy and intrude on his life over something minor – it's not even worth the transaction costs of making a big deal out of – so I won't.

And 1/10 times I say "i'm sorry to bug you about this, but i ran out of stuff to do and was actually kinda sad, and then i thought of this one thing i wanted to do, which is read this book, and i got excited, and i'm really dreading going back to my problem of being bored and sad. so, please? what's the big downside to you?"

And then 9/10 times kid agrees, but 1/10 times he says "still no, sorry, but i wrote private notes in the margins of that book, do not open it".

And the pattern continues, but additional steps get exponentially rarer. The pattern is that at each step, usually one finds a way to prefer that outcome, and sometimes one doesn't and continues. Note how, at each step, it's harder to continue asking; it takes more unusual reasons.

DD persuaded me of the rule of thumb that approximately 90% of interpersonal conflicts, dealt with rationally, get resolved per step trying to resolve. I know this isn't intuitive in a world where people routinely fight with their families.
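As a rough arithmetic illustration of that rule of thumb (a minimal sketch; the 90% figure is simply assumed per the rule of thumb above, not measured data), the chance that a conflict is still unresolved shrinks geometrically with each rational step:

    # assumed per-step resolution rate, per the rule of thumb above
    p_resolve = 0.9

    for k in range(1, 6):
        still_stuck = (1 - p_resolve) ** k
        print(f"still unresolved after {k} step(s): {still_stuck:.3%}")
    # 10%, 1%, 0.1%, 0.01%, 0.001%: each additional step is ten times rarer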

If you disagree, it's not so important. If someone's methods are wrong, and it causes any problems, and someone else knows better, that's no big deal. Methods can be criticized and changed. Correct or not, the approach in the example is – like many others – just fine as a starting point.



All of life can and should go smoothly with problem solving and progress. It often doesn't because of irrationality, because of not understanding the right epistemology, because of bad values, because of anti-rational memes, because of deeply destructive parenting and education practices. All of those are solvable problems which change people's intuitions about what lifestyles work, but which do not change what epistemology is true.



As a final example, let's take cryonics. Here is something I can say about it: I have given some arguments which you have not criticized and I have not found refutations for anywhere else. On the other hand, if you tell me any arguments against my position, I will either refute ALL of them or change my mind in some way to reach an uncriticized position. (Note refuting includes not just saying why the argument is false, but also for example why it's true but doesn't actually contradict my position.)

You create a 10% estimate in a vague way, which you describe as a subjective estimate of a feeling. This hides your actual reasoning, whatever it is, from criticism – not just criticism by me but also by yourself.

You gather arguments on all sides, but you don't analyze them individually and judge what's true or not and why. I do. That is a very key thing – to actually go through the arguments and sort out what's right and wrong, to learn things, to figure the subject out. It's only by doing that, not just kinda making up an intuitive conclusion, that progress and problem solving happen.

You see the situation as many arguments on both sides and want a method for how to turn those many arguments into one conclusion.

I see the situation as many arguments, which can be analyzed and dealt with. Many are false, and one can look through them and figure things out. My current position is that literally every known pro-cryonics-signup argument is false in the context of my situation, and most people's situations.

(Context is always a big deal. People in different situations can correctly reach different conclusions specific to their situation. For example a rich person with a strongly pro-cryonics wife might find signing up increases marital harmony, and has no downsides that bother him, even though he doesn't believe it can work.)

It's this critical analysis of the specific arguments by which one learns, by which progress happens, etc. It always comes down to critical challenges: no matter how great some side seems, if there is a criticism of it, that criticism is a challenge that must be answered, not in any way glossed over.

If the criticism cannot be refuted (today), one must change his mind to something no longer incompatible with the point (pending potential new ideas). It's completely irrational and destructive of problem solving to carry on with any idea which has any criticism one can't address.

There are many ways to deal with criticisms one can't directly refute. And these methods are themselves open to criticism. We could talk more about how to do this. But the key point is, any method which doesn't do this is very bad. Such as justificationism, and the specific version of it you outlined, which allow for acting contrary to outstanding unanswered criticisms.
The first may be only a point of clarification. While I certainly agree that we rationally choose which correlations to pay attention to on the basis of explanations, I think we have a problem that those explanations themselves emerge from analysis of other correlations, which were paid attention to because of other explanations, and so on, right back to correlations that we arbitrarily decide we don’t need to explain, such as that every time we measure the fundamental physical constants we get the same answers. This seems to me to tell us that explanations can’t be viewed as inherently better than correlations - they are part and parcel of a single process, just as science proceeds by an alternation between hypothesis formation and hypothesis testing. What am I missing?
Explanations come from brainstormed guesses in relation to problems. (And are improved with criticism for error-correction, or else the quality will be awful.)

There is no process which starts with correlations and outputs explanations (or more generally, knowledge).

Most correlations are due to coincidence. They are not important.

A correlation matters when referred to in an explanation. It has no special interest otherwise. Just like dust particles, blades of grass, mosquitos, copper atoms. There's dust all over the place, most is not important, but some can be when mentioned in an explanation.

The issue of getting started with learning is not serious, because it doesn't really matter where one starts. Start somewhere and then make improvements. The important thing is the process of improvement, not the starting point. One can start with bad guesses, which are not hard to come by.


Also we do have an explanation of why different experiments measuring the speed of light in a vacuum get the same answer. Because they measure the same thing. Just like different experiments measuring the size of my hand get the same answer. No big deal. The very concepts of different photons all being light, and of them all having the same speed, are explanatory ideas which make better sense out of the underlying reality.
The second one is possibly also just something I’m misunderstanding. For any pioneering technology that we have not yet perfected - SENS, cryonics, whatever - there are always explanations for why it is feasible (or, in the case of cryonics, why part of it has already been achieved even though we won’t know that for sure until the rest of it also has) and other explanations for why it isn’t. I think what you’re saying is that the correct thing to do is to debate these explanations and eventually come up with an agreed winner, and that in the meantime the correct thing to do is to triage, by debating explanations for what we should do in the absence of an agreed winner between the first set of explanations, and act on the basis of an agreed winner between that second set of explanations. But I don’t see how that can work in practice, because the second debate will typically come down to the same issues as the first debate, so it will take just as long. No?
A second debate on the topic, "given the context of issues X, Y, Z being unresolved, now what?" cannot come down to the same issues as the first debate, because they're specifically excluded.

It may be helpful to look at it in terms of what IS known. Part of the context is people do know some things about SENS, cryo, or whatever topic. So there is an issue of, given that known stuff, what does it make sense to do about it?


When discussions get stuck in practice, it's not because of ignorance. If no one knows X yet, that doesn't make two people disagree, since that's the same for both of them, it's a point in common. The causes of disagreements between people are things like irrationality or different background knowledge like values or goals; perhaps someone has a lifetime of tangled thinking that's hard to sort out. The solution to those things is (classical) liberal values like tolerance, individualism, leaving people alone, and only interacting for mutual (self-perceived) benefit.

Take for example:

http://www2.technologyreview.com/sens/

The reason those debates didn't resolve your differences is because those people directed their creativity towards attacking SENS, not truth-seeking. Rational epistemology only works for people who choose to use it. The debate format was also deeply unsuited to making progress because it allowed very little back-and-forth to ask questions and clear up misunderstandings. It wasn't set up for creating mutual understanding, none of your opponents wanted to understand SENS, the results were predictable, but that has nothing to do with what's possible. (BTW, awful as this sounds, it isn't such a big deal, since they aren't going to use violence against you. Not even close. So you can just go on with SENS and work together with some better people.)

BTW notice the key thing about that debate: you could answer all of their criticisms. ALL. Specifically, not vaguely.

And I think you know that if you couldn't, that'd be a serious problem for SENS.

Take the claim, "even though these [SENS] categories are sometimes so general as to be almost meaningless, they still omit many age-related changes that contribute to senescence, including age-related increases in oxidative damage and changes in gene expression."

If you had no answer to that, SENS would be in trouble. It only takes one criticism to refute something. But you had the answer. And not in some vague way like, "I feel SENS is 10% likely to work, down from 20% before hearing that argument". But specifically you had an actual answer that makes the entire difference between SENS being refuted and SENS coming out completely fine.

This is a good example of how things can actually get resolved in debates. Like the claim about oxidative damage, that can be resolved, you knew how to resolve it. Progress can be made, things can be figured out. (Though not for those who aren't doing truth-seeking.)

Challenges like the oxidative damage argument can routinely be answered and discussions can resolve things. What you said should have worked. It only didn't because the other guy was not using anything resembling a rational epistemology, and did not want progress in the discussion.
The third one is where I’m really hanging up, though. You say a lot about good and bad explanations, but for the life of me I can’t find anything in what you’ve said that explains how you’re deciding (or are claiming people should decide) HOW good an explanation needs to be to justify a particular course of action.
Answer: that is the wrong question.

There is no such thing as how epistemologically good an explanation is.

The way to judge explanations I'm proposing is: refuted or non-refuted. Is there a criticism pointing out any flaw whatsoever? Yes or no?

No criticism doesn't justify anything. It just makes more sense to act on ideas with no known flaws (non-refuted) over ideas with known flaws (refuted).


One common concern is criticisms pointing out minor flaws, e.g. a typo, or that a wording is unclear. The answer is: if the criticism really is minor, then it will be easy to fix, so fix it. Create a new idea (a slight modification of the old idea) to which the criticism doesn't apply.

Or explain why a particular thing that seems like a flaw in some vague general way is not a flaw in this specific context (problem situation). Meaning: it seems "bad" in some way, but it won't prevent this approach from working and solving the problem in question.

For example, someone might say, "It'd be nice if the instruments on the space shuttle were 1000x more accurate. It's bad to have inaccurate instruments. That's my criticism." But a space shuttle has limited finite goals, it's not supposed to be perfect and do everything, it's only supposed to do specific things such as bring supplies to the space station, deploy a satellite, or complete specific experiments. Whatever the particular mission is, if it can be completed with the less accurate instruments, then the "inaccurate instruments are bad" criticism doesn't apply.
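If it helps to see the contrast with ranking stated mechanically, here is a minimal Python sketch of the boolean approach (the ideas and criticisms are hypothetical placeholders, not a real analysis): act when exactly one option has no outstanding criticism, rather than scoring how solid each option feels.

    # map each candidate idea to its outstanding (unanswered) criticisms
    ideas = {
        "act on plan A": ["a criticism no one has answered yet"],
        "act on plan B": [],
    }

    non_refuted = [idea for idea, criticisms in ideas.items() if not criticisms]

    if len(non_refuted) == 1:
        print("act on:", non_refuted[0])
    else:
        # zero or several survivors: more guesses and criticism are needed,
        # e.g. a new idea about what to do given the unresolved conflict
        print("keep thinking:", non_refuted)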
In the case of cryonics, you’ve read a bit about where the practice of cryonics is today and you’ve come to the conclusion that it doesn’t currently justify signing up, because you prefer the arguments that say the preservation isn’t good enough to the ones that say it is. But you don’t say where the analysis process should stop.
Stop when there is exactly one non-refuted idea. I am unaware of any non-refuted criticisms of my position on the matter.

This has nothing to do with preferring some arguments. I am literally unaware (despite looking) of any argument to sign up with Alcor or CI, that I can't refute right now today. (Though as I mentioned above, I have in mind my situation or most situations, but not all people's situations. In unusual situations, unusual actions can make sense.)

In your method you talk about gathering arguments for both sides. I have tried to do that for cryonics, but I've been unable to find any arguments on the pro-cryonics side that survive criticism. Why do you give it a 10% chance of working? What are any arguments? And meanwhile I've given arguments against signing up which you have not individually, specifically refuted. E.g. the one about how organizations that are bad at things don't solve hard problems, because problems are inevitable, so without ongoing problem solving it won't work.


I think a lot of the reason debates get stuck is specifically because of justificationist epistemology. People don't feel the need to give specific arguments and criticisms. Instead they do things like create arbitrary justification/solidity/goodness scores that are incapable of resolving the disagreements between the ideas.
For example, you say:
percentage of undamaged brain cells could be tried in a measure because we have an explanatory understanding that more undamaged cells is better. And we might modify the measure due to the locations of damaged cells, because we have some explanatory understanding about what different region of the brain do and which regions are most important.
We might, yes, or we might not. How do you decide whether to do so?
Creative thinking. Guess whether it's a good idea and why. Improve this understanding with criticism.
And if you decide that we should take account of location, why stop there? Suppose that someone has proposed a reason why neurons with more synaptic connections to other neurons matter more. It might be a really really hand-wavey explanation, something totally abstract concerning the holographic nature of memory for instance, but it might be consistent with available data and it might also be really hard to falsify by experiment.
Almost all refutation is by argument, not experiment. (See the section about the grass cure for the cold in FoR, where DD explains that even most empirical ideas, which could be dealt with by experiment, still aren't.)

Since you call it "hand-wavey", what you mean is you have a criticism of it. The thing to do is state the criticism more clearly, and challenge the idea: either it answers the criticism or it gets thrown out.
So, should we take it into account and modify our measure of damage accordingly? What’s worse, we don’t even know whether we have even heard all the relevant explanations that have been proposed, even ignoring all the ones that will be proposed in the future. There might be ones that we don’t know that conflict with the ones we do know, and that we might eventually decide are better than the ones we do know. Shouldn’t we be taking account of that possibility somehow?
Yes. One should make reasonable efforts to find out about more ideas, and not to block off other people telling one ideas (http://fallibleideas.com/paths-forward).

You will ask what's reasonable, how much is enough. Answer: creative thinking on that point. Guess what's the right amount of effort to put into these things (given limits like resource constraints) and refine the guess with some critical thinking until it seems unproblematic to one. Then, be open to criticism about this guess from others, and try to notice if things aren't going well and one should reconsider.
This seems to bring one inexorably back to the probabilistic approach. Spelling it out in more detail, the probabilistic approach seems to me to consist of the following steps:

- Gather, as best one can in the time one has decided to spend, all the arguments recommending either of the alternative courses of action (such as, sign up with Alcor or don’t);

- Subjectively estimate how solid the two sets of arguments feel;
How? This vague step hides a thousand problems in its details.
- Estimate how often scientific consensus has, in the past, changed its mind between explanations that initially were felt to differ in solidity by that kind of amount, and how often it hasn’t (with some kind of weighting for how long the prevailing has been around);
This has a "future will resemble the past" element without a clear explanation of what will be the same and what context it depends on.

And it glosses over the details of what happened in the various cases, and the explanations of why.

It also gives far too much attention to majority opinion rather than substantive arguments.

It's also deeply hostile to large innovations in early stages. Those frequently start with a large majority disagreeing and feeling the case for the innovation has very low solidity.

If you look at the raw odds that a new idea is a brilliant innovation, they suck. There are more ways to be wrong than right. You need more specific categories like, "new ideas which no one has any non-refuted criticism of" – those turn out valuable at much higher rates.
- Use that as one’s estimate of one’s likelihood of being right that the seemingly more solid of the two sets of explanations is indeed the correct set, hence that the course of action that that set recommends is the correct course;

- decide what probability cutoffs motivate each of the three possible ways forward (sign up and focus on something else until some new item of data is brought to one’s attention, don’t sign up and focus on something else until some new item of data is brought to one’s attention, or decide to spend more time now on the question than one previously wanted to), and act accordingly.
This approach involves no open-ended creative thinking and not actually answering many specific criticisms and arguments. Nor does it come up with an explanation of the best way to proceed. It does not create knowledge.

This proposed justificationist method does not even try to resolve conflicts between ideas. It doesn't try to figure out what's right, what's wrong, or why. There's no part where anything gets figured out, anything gets solved, anyone learns anything about reality. It's kind of like a backup plan, "What if rational thinking fails? What if progress halts? Under that constraint, what could we do?" Which is a bad question. It's never a good idea to use irrational methods as a plan B when rational methods struggle.

One of the weirder things about discussing justificationism is, I know you frequently don't use the method you propose. It's only to the extent that you don't use this method that you get anywhere. Like at http://www2.technologyreview.com/sens/

You didn't present your subjective feeling of the solidity of SENS, or estimates about how often a scientific consensus has been right, or anything like that. You did not gather all the anti-SENS arguments and then estimate their solidity and give them undeserved partial credit without figuring out which are true and which false. Instead, you gave specific and meaningful arguments, including refuting ALL their criticisms of SENS. Then you concluded in favor of SENS not on balance – you didn't approach it that way – but because the pro-SENS view is the one and only non-refuted option available for answering the debate topic.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)

Aubrey de Grey Discussion, 8

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
Thanks again Elliot. I have several issues below, but they have a single common theme.
This approach involves no open-ended creative thinking and not actually answering many specific criticisms and arguments. Nor does it come up with an explanation of the best way to proceed. It does not create knowledge.
I was probably unclear on that: that’s part (most, in fact, for interesting cases) of step 1, i.e. "Gather, as best one can in the time one has decided to spend, all the arguments recommending either of the alternative courses of action.” I didn’t mean to imply that this would be restricted to pre-existing arguments. So in other words, yes actually, I did use exactly this method in my evaluation of Estep’s criticism of SENS, and in my reply I articulated some of the results of that evaluation, namely some refutations of elements of the criticism. Consider your position as a reader: why did you accept my rebuttal as the last word? Why didn’t you write to Estep to ask him for a more thorough re-rebuttal than TR gave him the option of? Answer (I claim): because you subjectively decided that my rebuttal was impressive ENOUGH that Estep PROBABLY wouldn’t have a persuasive re-rebuttal, so you chose not to allocate time to contacting him. Note the quantitative, as well as subjective, elements of what I claim was your process (and I claim it confidently, because I can’t think of any other process you could have used for deciding not to write to Estep).
It's interesting you specifically express confidence, and can't think of any other process. This description isn't close to how I approached the Estep debate.


First, your rebuttal wasn't important here. I had already decided Estep was wrong before reading your rebuttal. That was easy. His position was largely philosophy, rather than being about detailed scientific points that I might have difficulty evaluating. While reading his text, I thought of criticisms of his arguments.


Actually, rather than being particularly impressed, I disliked three aspects of your rebuttal. But these criticisms were tangents, and are standard parts of academic culture. If I'm right about them, they don't make SENS wrong or Estep right. 1) Complaining about Estep's invective and saying you'd take the high road, but then returning some invective. 2) What I consider an overly prestigious writing style, partly intended to impress. 3) Arguing some over who has how much scientific authority and what they think (rather than only discussing substantive issues directly).

My interest in your rebuttal wasn't to learn why Estep was wrong – which I already knew. Note I say why he was wrong (explanation) rather than considering who is more impressive (ugh). Instead, I read to see how closely your thinking and approach matched my own (if I found important differences, I'd be interested in why, at least one of us would have to be wrong in an important way), to see what passes for debate in these kinds of papers in your field, and to see if you'd say an important point I'd missed or a mistake.


The main reason I didn't write to Estep is because I don't think he wants to have a discussion with me. My usual policy is not to write to paper authors who don't include contact information in their papers.

Now that you brought it up, I tried google and didn't find contact info there either. I think discussion is unwelcome. I did find his email in the GRG archives, but that's no invitation.

I actually would be happy to talk to him, if he wanted to have a discussion. Like if Estep volunteered to answer questions and criticisms from me, I'd participate. I like to talk to a variety of people, even ones I consider very bad. I want to understand irrationality and psychology better. And it helps keep my ideas exposed to all kinds of criticism. And I don't get myself stuck in unwanted polite or boring conversation.


You're right that I wouldn't expect Estep to change my mind if we talked. This is because I guessed an understanding of what he's like, which I have no criticisms of and no non-refuted alternatives to. Not probability. But this is minor. I'd talk to him anyway, the issue is he doesn't want to.

And I didn't just leave this to my judgment. I exposed my view on this matter to criticism. I wrote about it in public and invited criticism from the best thinkers I've been able to gather (or anyone else). (BTW you'd be welcome to join my Fallible Ideas discussion group and my private group.)

I don't do more than this because I have explanations of why other activities are better to spend my time on, and I don't know a problem/criticism with my approach or an explanation of a better approach. And all of this is open to public criticism. And I've made a large ongoing effort to have ready access to high quality criticism.
There is no such thing as how epistemologically good an explanation is.
I don’t get this. You’ve been referring to good and bad explanations throughout this exchange. What have you been meaning by that, if not epistemologically good and bad? I know you are saying that there are only refuted or non-refuted explanations, but you must have been meaning something else by good and bad, since you’ve definitely been using those adjectives - and other ones, like “clear”, “explicit” etc - in an unambiguously quantitative rather than binary/boolean sense, e.g.:
I can see how that'd be confusing. It's an imprecise but convenient way to speak. Depending what you're doing, you only need limited precision, so it can be OK. And it'd take forever to elaborate on every point, it's better only to go into detail on points where someone thinks it's worthwhile to, for some reason.

My position is that all correct arguments can be converted or translated into more precise statements that strictly adhere to the boolean epistemology approach.

Speaking of amount of clarity is a high level concept that's sometimes precise enough. You can, when you want to, get into more precise lower level details like pointing out specific ambiguous phrases or unanswered questions about the writer's position.

Saying an explanation is good or bad (in some amount) can quickly communicate an approximate evaluation without covering the details. It's loose speaking rather than epistemology.
They actually do have basic explanations, e.g. I've read one of them saying that vitrified brains look pretty OK, not badly damaged, to the unaided human eye. The implication is damage that's hard to see is small, so cryopreservation works well. This is a bad argument, but it's the right type of thing. They need this type of thing, but better, before anyone should sign up.
If it’s the right type of thing, what’s “bad" about it?
It is the right type of thing, meaning: it involves explanation and argument.

"Bad" here was an imprecise way to refer to some arguments I didn't write out upfront.

Damage that's hard to see with the naked human eye is not "small" in the relevant sense. The argument is a trick: it gets people to accept that the damage is small (in physical size, an irrelevant everyday-life sense), and then implies the damage is small (in the sense that the brain still works well).

Why use the unaided human eye instead of a microscope? It's a parochial approach going after the emotional appeal of what people can see at a scale they are used to. Rather than note appearances can be deceiving and try to help the reader understand the underlying reality, it tries to exploit the deceptiveness of appearances.

And it doesn't attempt to explore issues like how much damage would have what consequences. But with no concept of what damage has what consequences, even a correct statement of the damage wouldn't get you anywhere in terms of understanding the consequences. (And it's the consequences like having one's mind still revivable, or being dead, that people care about.)
- and more to the point, how bad?
Refuted.
What is your argument for saying "They need this type of thing, but BETTER (quantitative…), before anyone should sign up”? How much better, and why?
It needs to be better to the point it isn't refuted. Because it's a bad idea to act on ideas with known flaws.

(There are some complications here like they don't actually know my criticism, the flaws aren't known to them. What is "refuted" in each person's judgment depends on their individual knowledge. That's a tangent I won't write about now.)
You can’t just say “non-refuted”, because you know as well as I do that any argument about anything interesting can be met with a counter-argument, which itself can be met, etc., unless one has decided in advance how to terminate the exchange.
No, I disagree!

It's hard to keep up meaningful criticism for long.

Yes someone can repeat "That's dumb, I disagree" forever. But a criticism, as I mean it, is an explanation of a flaw/mistake with something, and this kind of bad repetitive objection doesn't explain any mistakes.

I don't think you had this kind of repetition in mind, or you wouldn't have specified "about anything interesting". "That's dumb, I disagree" can be used on trivial topics just as well as interesting topics.

I think you're saying that substantive critical discussion doesn't terminate and keeps having good points indefinitely. Until you terminate it arbitrarily.

I think good points are hard to come by. What are "good" points here, specifically? Ones which aren't already refuted by pre-existing criticism.

As you go along in productive discussions, you build up criticisms of many things. Not just of specific points, but of whole categories of points. Some of the criticisms have "reach" as DD calls it. They have some level of generality, they apply to many things. As criticism builds up, it gets progressively harder to come up with new ideas which aren't already refuted by existing criticism.

Many discussions don't look like this in practice because of irrationality and bad methods, not because discussions have to be that way.
My fundamental problem remains: you haven’t given me a decision-making algorithm that terminates, or even usually terminates, in an amount of time that I can specify in advance.
It's a mistake to 100% rigidly specify time limits in advance. Reasoning for time limits should be open to criticism.

The closest to a flowchart I can give you is something like:

- think creatively etc, as discussed previously

- when nearing a resource limit (like time), start referring to this limit in arguments, to bring arbitration to a close. e.g. instead of "I disagree with that, and here's why in detail", a side might say, "I disagree with that, but we don't have time to get into it. Instead, here is what I propose that we may both find acceptable."

- as resources get tighter, it gets easier to please all sides. like, they may agree it's better to flip a coin than not to reach a decision by a certain deadline.

- reasonable sides understand their fallibility and don't want anyone to go along with something without persuasion. and they understand persuasion on some point can exceed a resource limit. so they actively PREFER to find mutually agreeable temporary measures for now, when appropriate, while working on persuasion more in the longer term as more resources are available

- sometimes things go smoothly. no problem. sometimes they don't. when they don't, there are specific techniques which can be used.

- specifically, one considers questions of the form, "Given the context - and specifically not reaching agreement on points X, Y and Z, but having agreement on A, B and C - what can be done that's mutually agreeable? What can be done on this issue with the limited agreement?"

- while working on this new question, if there are any sticking points, then a similar question can be asked adding those sticking points to the exclusion list.

- these questions reduce the complexity and difficulty of the arbitration as low as needed.

- the more you use questions like this and temporarily exclude things due to resource limits, the easier it is to reach agreement. if it's different people, it goes to "since we disagree so much, let's go our separate ways". the harder case is either when a person has conflicting ideas or two people are entangled (e.g. parent and child). but that still reaches outcomes like, "given we disagree so much, and we need a decision now, let's flip a coin". both sides can prefer that to any known alternatives, in which case it's a win/win outcome.

- but what if they don't agree to flip a coin over it? well, why not? this is fundamentally why a flowchart doesn't work. because people disagree about things for reasons, and you can't flowchart answers to those reasons.

- but basically sides will either agree to a coin flip (or some better default they know of), or else they will propose something they consider a better idea. a better idea while being reasonable – so like, something they think the other side could agree with, not something that'd take a great deal of persuasion involving currently-unavailable resources.

- if sides are unreasonable – e.g. try to sabotage things, or just want their initial preference no matter what – then any conflict resolution procedure can stall or fail. that's unavoidable.

- this doesn't terminate in predictable-in-advance time because sometimes everyone agrees that the deadline is less important than further arbitration, and prefers to allocate more resources. i don't think this is a problem. it can terminate quickly when that's a good idea. the only reason it won't terminate quickly is specifically because a side disagrees that terminating quickly is a good idea in this case. (and if that happens, there will be a reason in context, which may be right or wrong, and there is no one-size-fits-all flowchart answer to it, it matters what the reason is)
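
To gather the steps above in one place, here's a rough sketch in Python. It's only an illustration with made-up names (arbitrate, propose, criticize); the creative steps can't actually be reduced to code, which is part of why a real flowchart doesn't work:

import time

def arbitrate(sides, propose, criticize, deadline, excluded=()):
    # Seek a proposal that no side has a criticism of, narrowing the question
    # as the resource limit approaches.
    while True:
        proposal = propose(sides, excluded)      # creative step, not reducible to code
        criticisms = criticize(sides, proposal)  # only criticisms the sides actually know of
        if not criticisms:
            return proposal                      # win/win: no one has an objection
        if time.time() < deadline:
            continue                             # resources remain, so keep thinking
        # Nearing the limit: set the sticking points aside and ask the smaller question,
        # "given we don't agree on these, what can we both accept for now?"
        excluded = tuple(excluded) + tuple(criticisms)
        # With enough excluded, sides typically converge on a default like flipping a
        # coin or going their separate ways. If someone objects even to that, they have
        # a reason, and no flowchart can answer reasons in advance.

The point of the sketch is just the shape: agreement ends it immediately, resource limits shrink the question rather than forcing a verdict, and there's no guaranteed-in-advance stopping time.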
I have one. It’s not perfect - I accept all your criticisms of it, I think - but the single feature that it terminates in a reasonably predictable time (just how predictable is determined, of course, by how close together one chooses the two cutoff probabilities to be) is so important that I think the method is better than any alternative that doesn’t reliably terminate.

The thing is, I think you DO have an algorithm that reliably terminates, and that despite your protestations it is pretty much identical to mine. Look at this example for illustration:
Also we do have an explanation of why different experiments measuring the speed of light in a vacuum get the same answer. Because they measure the same thing. Just like different experiments measuring the size of my hand get the same answer. No big deal. The very concepts of different photons all being light, and of them all having the same speed, are explanatory ideas which make better sense out of the underlying reality.
Nonsense, because each measurement measures different photons, and we have no better explanation for all photons having the same speed than for all pigeons having the same mass. This is not trivial: indeed, I recall that Wheeler made quite a big deal out of the awfully similar question of the mass of the electron and proposed that there is in fact only one electron in the Universe. We have explicitly made the choice not to enquire further on the question.
If you go deeper, then yes I don't know everything about physics. There's some initial explanations about this stuff, but it's limited.

I'm unclear on why this is important. I don't study physics more because I prefer to do other things and I don't know of any criticisms/problems with my approach. Even if I did study physics all day, I still wouldn't know everything about it and would make choices about which things to enquire further about, because I couldn't do everything at once. I would think of an explanation for how I should approach the matter, adjust or rethink until no criticism, and do that.
Or this one:
Person wants to buy something but hesitates to part with their money. Thinks about how awesome it would be, changes mind, happily buys. Solved.
That only works with an additional step that comes just before “happily buys”, namely “switches brain off before remembering that one might soon change one’s mind back”. And, actually, another step that says “remembers that one is really good at not crying over spilt milk, i.e. once the money is spent one is happy to live with whatever regret one might later have”. And so on. I know you know this.
But I don't know it. I deny it.

I think switching off the brain and trying not to think of some issues, because one couldn't deal with the issues if he paid attention to them, is a really bad approach. It's choosing winners in an irrational way – instead of resolving the conflict of ideas, you're playing the role of an arbiter who only lets one side speak, then declares them the winner.

About spilt milk: Sometimes people think of that and it helps them happily buy something. But sometimes people don't. It's not required. There are many optional steps that people find useful, or not, depending on their specific circumstances.
But, yet, you were fine with just writing “Solved”! I conclude that you DO have a termination procedure in your algorithm, and moreover that it’s an indisputably vague and subjective and probabilistic and epistemologically hole-riddled one just like mine, and I don’t know why you’re having such trouble admitting it.
I don't concede because I disagree.

I think a rational non-hole-riddled epistemology is possible, and that I understand it.
Let’s get back to cryonics - largely because I am now somewhat invested in the goal of changing your mind about signing up, coupled of course with the equally legitimate converse goal of giving you a fair shot at changing mine.

Let’s start with the specific question I already referred to above:
They actually do have basic explanations, e.g. I've read one of them saying that vitrified brains look pretty OK, not badly damaged, to the unaided human eye. The implication is damage that's hard to see is small, so cryopreservation works well. This is a bad argument, but it's the right type of thing. They need this type of thing, but better, before anyone should sign up.
As this stands, as I just said, it is too vague to be amenable to refutation even in principle, i.e. it doesn’t meet your own epistemological standards, because it doesn’t incorporate any statement of (let alone any argument for) your criterion for how good that explanation needs to become.
My standard is: is there a criticism of it? Not some criterion for how good it is.
As above, “non-refuted” doesn’t work, because that relies on consideration of (for example) how much time I choose to allocate to giving you refutations and how much you choose to allocate to giving me refutations, and I sense that that’s a decidedly non-level playing field.
You mean, it's not a level playing field because I allocate more time to trying to get this issue right? Or at least to writing down my thinking, so that if I'm mistaken someone could tell me?

BTW, what is your explanation of why no one has written good explanations of why to sign up for cryonics anywhere? Why have they left it to you to write it, instead of merely linking things?

(Good explanations to what standard? Your own. If stuff met your standards you'd link it instead of writing your own.)
My (unashamedly justificationist) starting-point is that the absence of gross damage feels like enough evidence for revivability to satisfy me that people should sign up.
The evidence you refer to is consistent with infinitely many positions, including ones that conclude not to sign up for cryo. Considering it evidence for a specific conclusion, instead of the others it's equally consistent with, is some mix of 1) arbitrary and 2) relying on unstated reasons.

Why should a fact fully compatible with non-revivability be counted as "evidence for revivability"?
So let’s start with you amplifying your above statement, with a sense of what you WOULD view as a good enough (yes I said it) argument, to give me some goalposts to aim for.
The goalposts fundamentally are: I don't have further criticism.

This is hard because I have many criticisms. But there really have to be ways for me to get answers to all of them (though not all from you personally). Or else you'd be asking me to do something I have a reason not to do; you'd be asking me to just ignore my own judgment arbitrarily for no reason.

I also think you overestimate how problematic this is because you're used to debates that don't go anywhere, don't resolve anything, because of how terribly irrational most people are.

Another big factor is people who don't want to be persuaded. Rational persuasion is impossible with unwilling subjects. People always have to persuade themselves and fill in lots of details, you can't tell them everything and perfectly customize it all to their context and integrate it with all their other ideas. They have to play an active role, or any persuasion will be superficial.


Something that I'd see as a good starting place is explanations connecting different amounts of damage to consequences like being fine or dead, and quantifying the amount of damage Alcor and CI cause today.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)

Aubrey de Grey Discussion, 9

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
Thanks. Hm. I’m sincerely trying my very hardest to understand what you’re saying about your own thought processes, but I’m not making much progress.
I understand. It's very hard. Neither DD nor Popper had much success explaining these things in their books. I mean the books are great, but hardly anyone has thoroughly been persuaded by those books that e.g. justificationism is false.

I'm trying to explain better than they did, but that's tough. It's something I've been working on for a long time, but I haven't yet figured out a way to do it dramatically more effectively than DD and Popper. I think correct epistemology is very important, so I keep working at it. But I'm not blaming you or losing patience or anything like that.
At this point I think where I’m getting stuck is that the differences between your and my descriptions of how you make decisions (and of how one ought to make decisions) mainly hinge on the distinction between (a) not having any further criticisms and (b) not choosing to spend further time coming up with further criticisms,
I think there's a misunderstanding here.

I wouldn't draw a distinction there. If you don't know of more criticisms, and you've resolved all the conflicts of ideas you know about, you're done; you've resolved things. Whether you could potentially create more criticisms doesn't change that.

The important thing is not to ignore (or act against) any criticisms (or ideas) that you do know about. Either ones you came up with, or someone told you.

If you do know about a conflict between two ideas, don't arbitrarily pick a side. Rationality requires you either resolve the conflict, or proceed in a way that's neutral regarding the unresolved conflict. This is always possible.

Does that summarize one of my big points more clearly?


In other words, when there's a disagreement, either figure out how to resolve it or how to work around it, but don't assume a conclusion while the debate is ongoing. (The relevant ongoing debate typically being the one in your own mind. This isn't a formula to let irrational confused people hold you up indefinitely. But details of how to deal with this aspect are complex and tricky.)



Secondarily it's also important to be open to criticism and new ideas. If the reason you don't know about a criticism is you buried your head in the sand, that's not OK. (This part is pretty uncontroversial as an ideal, though people often don't live up to it very well.)
and I claim that for most interesting questions that is a distinction that is very hard to make, because it’s almost always fairly easy to come up with a new criticism (and I don’t mean a content-free one like “that’s dumb”, I mean a substantive one). Now, you disagree - you say "It's hard to keep up meaningful criticism for long”. That’s absolutely not my experience. In fact I would go further: I think that the way our brains work is that exhaustion or distraction from what we objectively know we’d like to do is a phenomenon that we generally like to put out of our minds, because we wish it weren’t so, so it’s virtually impossible to know whether we have truly exhausted our potential supply of criticisms. I really, really like to know why I think what I think, so I feel I go further down these rabbit-holes than most people, but they’re still rabbit-holes.
I'm mainly concerned with actual criticisms and conflicts of ideas, not potential.

Apart from the issue of willfully not thinking of arguments you couldn't answer, or choosing not to hear them, it's only the actual ideas you have that matter and need conflict resolution now.
I think the only promising-sounding way to resolve this (i.e. to determine how difficult it really is to keep up meaningful criticism - which will very probably entail gaining a better understanding of each other’s threshold of “meaningful”) is for us to work through a concrete example. Naturally I suggest we continue with cryonics.
I disagree with "only". But that's fine, sure.

Though, actually, I don't think cryonics is ideally suited because on cryonics I'm more in the role of critic, and you more in the role of defending against criticism.

But our epistemology disagreement is kind of along the lines of: I have higher standards. So when I'm in the role of critic, this will come off as: my criticism is picky and demands standards you think can't be met.

If we used a different topic where I have a lot of knowledge and positive claims exposed to criticism, it could more easily be you making criticisms as picky as you want – trying to demonstrate such picky criticisms can't be answered – and then me showing how to answer them.

What do you think?

I reply about cryonics below anyway.
Before that, though, I have a new issue with some of what you said in this latest reply. You seem to have created a massive loophole in your approach here:
- the more you use questions like this and temporarily exclude things due to resource limits, the easier it is to reach agreement. if it's different people, it goes to "since we disagree so much, let's go our separate ways".
I can’t for the life of me see how you can seriously view that as an epistemologically acceptable outcome. And yet, I claim that it is indeed necessary to say that in order to reach your claim that resource limitations are not fatal to the epistemologically respectable method you advocate. Agreeing to disagree is no different from saying “that’s dumb”, except insofar as the participants may have gained a better understanding of the issues (negligibly better, in most cases, I claim). This is particularly important because of the non-level-playing field issue - much more often than not, the two participants in a debate will have unequal resource limits, so one of them will need to quit before the other feels ready to quit, so going separate ways ends up as the only option.
I'm unclear on the problem. If people AGREE to leave each other alone, and act accordingly, then they have a mutually agreeable win/win outcome that neither of them has a criticism of. This resolves the conflict between them that they were trying to sort out.

This doesn't resolve the tough problems in the field – but they know that and aren't claiming otherwise. What their agreement resolves is the problems surrounding their immediate decision making about how to deal with each other.
OK, let’s get back to cryonics.
BTW, what is your explanation of why no one has written good explanations of why to sign up for cryonics anywhere? Why have they left it to you to write it, instead of merely linking things?
I think what’s been written by Alcor is (in aggregate) a good explanation, and you’ve read it already, so I didn’t suggest you read it.
In aggregate, I think you will agree it contains flaws. I've pointed some out.

So what's needed to save it is some modifications. Some way to have a position similar to it, without the flaws.

But I've been unable to figure out a position like that. And I haven't found Alcor's material to be much help for doing this.


I'm also unclear on what you think the gist of Alcor's case is. What primary claims make up their argument that you think is good? I actually have very little concept of what you think their website says.

Do you think their website presents something like your argument below? That's not what I got from it.
The evidence you refer to is consistent with infinitely many positions, including ones that conclude not to sign up for cryo. Considering it evidence for a specific conclusion, instead of the others it's equally consistent with, is some mix of 1) arbitrary and 2) relying on unstated reasons.

Why should a fact fully compatible with non-revivability be counted as "evidence for revivability"?
In most scientific fields, and certainly in almost all of biology, the totality of available evidence is consistent with infinitely many positions, including the position that eating grass cures the common cold.
yes
Thus, one doesn’t reject the position that eating grass cures the common cold on the basis of a boolean approach to available evidence - one does so on the basis, as you said, that the quality of explanations for why eating grass cures the common cold (i.e. refutations of the position that eating grass does not cure the common cold) is inadequate - there are no “meaningful” such explanations.
I disagree and think one should approach the grass-cures-cold idea with specific criticisms, not vague quality/justification judgments. Examples below.
Let’s have a go. Grass contains huge numbers of phytochemicals that we have identified, and the limitations of breadth and depth of our investigations are such that we can be quite sure it also contains lots that we have not identified. Phytochemicals have many diverse properties, such as antioxidant properties, that are shared with compounds that are known to have therapeutic effects on the common cold. Kids occasionally eat grass, and they occasionally recover faster than average from the common cold, so in order to know whether grass cures the common cold we would need to survey the cases of this to determine whether the two were positively correlated, and no one has done this. I don’t claim that this is a meaningful refutation of the position that eating grass doesn’t cure the common cold, but I do claim that it is a meaningful refutation of the position that it’s not worth doing the experiment to determine whether eating grass cures the common cold. I don’t claim that it’s a persuasive refutation, but the only reason I have for distinguishing between persuasive and meaningful is probabilistic/justificationist: based on my subjective intuition, I think the chances of the experiment coming out on the side that grass indeed cures the common cold are too low to justify the resources needed to do the experiment. What am I missing?
This argument is fine in the sense of being unlike "that's dumb" with no reason given. It's "meaningful". To put it approximately but perhaps communicate effectively: I wasn't trying to exclude anything even 1% as reasonable as this.

But this passage makes several mistakes. Here are some criticisms:

It's suggesting resources be allocated to this. But it doesn't compare the value it thinks can be gained by this change in resource allocation to the value gained from the current allocation. So it doesn't really argue its case, and it's vague about what specifically should be done.

It's too much of a "try this, it might work" approach. There are more promising leads. One way (of many) to get more promising leads is to think of a specific mechanism by which something could work which you don't know how to rule out given current evidence and arguments, and then test that.

Another mistake is looking for correlation itself, when the thing we actually care about is causation (we care whether eating grass CAUSES recovery from colds). A good project would try to determine causation. This could maybe involve looking at correlations, but there'd have to be an idea about what to usefully do with the correlation information if found.


Note BTW that all three of these criticisms use fairly general purpose ideas. They're mildly adapted from previous discussions of other topics. For that reason, it doesn't take much work to create them. And as one builds up a greater knowledge of general purpose criticisms, it gets harder to propose any ideas that pass initial criticism using already-known criticism techniques.
Back to cryonics.
Damage that's hard to see with the naked human eye is not "small" in the relevant sense. The argument is a trick: it gets people to accept that the damage is small (in physical size, an irrelevant everyday context), and then implies the damage is small (in the sense that the brain still works well).

Why use the unaided human eye instead of a microscope? It's a parochial approach going after the emotional appeal of what people can see at a scale they are used to. Rather than note that appearances can be deceiving and try to help the reader understand the underlying reality, it tries to exploit the deceptiveness of appearances.

And it doesn't attempt to explore issues like how much damage would have what consequences. But with no concept of what damage has what consequences, even a correct statement of the damage wouldn't get you anywhere in terms of understanding the consequences. (And it's the consequences like having one's mind still revivable, or being dead, that people care about.)
Sure, all agreed - but they are not making that mistake. It’s known that living systems have pretty impressive self-repair machinery, and that it tends to work better to repair physically smaller damage than physically larger damage. Therefore, even though we know perfectly well that damage too physically small to be seen with the naked eye could still be too much for revivability, we know that there is a whole category of damage that would indeed (probably) be too much and is absent,
ok
and that’s meaningful evidence.
Meaningful evidence – meaning what?

This evidence is consistent with many things, so if you want to bring it up you should give an explanation about what it means. It doesn't speak for itself.

Do you mean that, of the infinitely many cryo-doesn't-work possibilities, an infinite subset has been ruled out? Yes. Do you mean that this raises the proportion of remaining cryo-does-work possibilities relative to cryo-doesn't-work possibilities? No, infinity doesn't work that way.
Plus, of course Alcor (and more importantly 21CM) have looked at vitrified tissue with microscopes and not seen appreciable damage
What do you mean "appreciable" and where do they provide this information? Aren't fractures appreciable damage?

How does this fit with Brian Wowk's comments, brought up earlier, about lots of damage? Do you think he was mistaken, or is this somehow compatible?
- but how much magnification is enough? If they were basing everything on 100X microscopic images, what would be your procedure for deciding whether or not to complain that they hadn’t looked at the EM level?
I'd ask WHY they didn't use EM level and see if I see something wrong with their answer. There ought to be an explanation, presumably already written down.

I'd hope the answer wasn't "lack of funds even though it's very important". That'd be a plausible but disappointing answer I could imagine getting.

Not using the best microscopes around would strike me as suspicious enough to ask a question about. But in that scenario, I wouldn't be surprised to find they had a reason I have no criticism of, and then I'd drop it. Advanced technology sometimes has drawbacks in some cases, rather than being universally the best option.
I can certainly provide (as Alcor do) positive evidence for how much damage is tolerable - but of course there are ways to refute it, but only if one views one’s refutations as meaningful. For example, we can look at the amount of variability in structure of the brain in non-demented elderly, and we can see big differences between people who are equally cognitively healthy - easily big enough to be seen without a microscope.
Damage and non-damage variation are different things. What is this comparison supposed to accomplish?

People have different ideas. It would be unsurprising if this has significant physical consequences, since ideas have to have physical form. Though we can also see non-microscopic differences in healthy hearts, lungs, skin, etc, so the easily visible brain differences don't necessarily mean more than those other differences.
You could say, ah, but all one is doing there is identifying changes that are not harmful - but that’s circular, in the absence of direct evidence as to whether the damage done by vitrification is harmful.
I'm unclear what you're saying would be circular, or how you'd answer my comments in the section right above. I think I didn't quite get your point here, unless my comments above address it.

To phrase this as a direct criticism: in the context of persuading me, the issues have to be clear to me, so things I find unclear won't work.

To succeed in this context, they have to be either modified to be clear to me (which I always try to do myself before objecting), or else there'd have to be auxiliary explanations, either about the specific subject, or about how to read and think better, so that I could then get the point.
Is that a refutation that you would view as meaningful? If so, what’s your re-refutation of it? And if not, why not?
Yes, meaningful. I think the bar there is real low. I just wanted to exclude complete non-engagement like a tape recorder could accomplish.

Some answers above. Plus this doesn't address some points I raised previously, but we can set those aside for now.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)