
Academia's Inadequacy

TheCriticalRat posted my article Animal Rights Issues Regarding Software and AGI on the Debate A Vegan subreddit.

This post shares a discussion highlight where I wrote something I consider interesting and important. The main issue is about the inadequacy of academia.

The two large blockquotes are messages from pdxthehunted and the non-quotes are me. I made judgment calls about newline spacing for reproducing pdxthehunted's messages and I changed the format for the two footnotes.

This came up a few weeks ago, when u/curi was posing questions on this subreddit. I looked through some of Elliot's work then and did so again just now. I'm not accusing them of being here in bad faith--they seem like they are legitimately interested in thinking about this topic and are asking interesting questions/making interesting claims.

That being said, they also seem to have little or no formal education in philosophy of mind or AGI. All of their links to who they are circle back to their own commercial website/blog, where they sell their services as a rationalist philosopher/consultant. It appears that they are (mostly) self-taught. Their (supposed)[1] connection to David Deutsch is why I bothered even to look further.

I don't think you need to have a degree to understand even advanced tenets in philosophy of mind or artificial intelligence. The problem here is that Elliot seems to have written an enormous amount--possibly thousands of pages--but has never been published in any peer-reviewed journal (at least none that I have access to through my community college) and so their credibility is questionable. Judging from their previous interactions on this sub, Elliot seems to have created their own curriculum and field of expertise.

I was impressed by the scope and seriousness of their work (the little I took the time to read). Still, it's very problematic for debate: they seem to be looking for someone who has the exact same intellectual background as they do--but without any kind of standardization, it's very hard to know what that is without investing possibly hundreds of hours into reading his corpus. This is the benefit of academic credentials; we can engage with someone under the assumption that they know what they're talking about. Most of Elliot's citations and links are to their own blog--not to any peer-reviewed, actual science. I suspect that's why they've left the caveat that "blog posts are okay."

A very quick browse through Academic Search Premier found over 100 published peer-reviewed journal articles on nonhuman animals and general intelligence. I browsed the abstracts of the first three, all of which discuss general intelligence in nonhuman animals. General intelligence is hard to define--especially in a way that doesn't immediately bias it in favor of humans--but even looking at the usual suspects in cognition demonstrates that many animals possess it, unless we move the goalposts to human-specific achievements like writing symphonies or building spacecraft (which of course leaves the vast majority of all humans who've ever existed out in the cold).

In short--not to be rude or dismissive--but the reason that animal rights activists aren't concerned about the "algorithms" that animals have that "give them the capacity to suffer" (forgive me if I'm misquoting) is that it is a non-issue. No serious biologists doubt that nonhuman animals (at least mammals and birds) can have preferences for or against different mental states and that those preferences can be frustrated or thwarted. Pain and suffering are fitness-selecting traits that allowed animals to avoid danger and seek nourishment and mates. I'm not an expert in any of your claimed domains; that being said, to believe that consciousness and the capacity to suffer evolved only in one species of primate demonstrates a shockingly naive understanding of evolution, philosophy of mind, cognitive science/neuroscience, and biology.

Similar questions can be asked about general intelligence. My answer to that is we don’t entirely know. We haven’t yet written an AGI. So what should we think in the meantime? We can look at whether all animal behavior is consistent with non-AGI, non-conscious, non-suffering robots with the same sorts of features and design as present day software and robots that we have created and do understand. Is there any evidence to differentiate an animal from non-AGI software? I’m not aware of any, although I’ve had many people point me to examples of animal behavior that are blatantly compatible with non-AGI programming algorithms.

There is no "scoop" here. There are a few serious philosophers I've read--Daniel Dennett, for instance--who I think make similar arguments as you're making here, which we can call the "animals as automata" meme. The very fact that you believe that cows show no more intelligence than a self-driving car makes me feel very suspicious that you don't know what you're talking about. Nick Bostrom basically states in his AI opus Superintelligence that if humans managed to emulate a rodent mind, we would have mostly solved human-level AGI.

To claim that there are "no examples" of an animal doing something that a non-AGI robot couldn't[2] do discredits your entire thesis--you're either woefully misinformed or disingenuous. Again, I'm very impressed by your (Elliot's) obvious dedication to learning and thinking. Still, I don't think this argument is even at the point where it's refined enough to take seriously. There's so much wrong with it--betraying not just a lack of competence in adjacent disciplines but also an arrogance about the author's imagined brilliance--that it feels awkward and unrewarding to engage with.

EDIT 12/2: [1] Connection to Deutsch--though not necessarily relevant to this argument--is not overstated.

[2] Changed would to couldn't

Suppose I'm right about ~everything. What should I do that would fix these problems?

Thanks for the response. Also, I checked the Beginning of Infinity and saw that you don't seem to be exaggerating your claim (obviously you know this--I'm mentioning it for any skeptics). Elliot Temple is not only listed in the acknowledgments of BOI, but they are given special thanks from the author. That's very cool, regardless of anything else. Congratulations. I'm hesitant to do too much cognitive work for you on how to fix your problems--it sounds like you're used to charging people a fair amount of money to do the same. Still, I engaged with you here, so I'll let you know what I think.

Read More

You need to become better read in adjacent fields--cognitive neuroscience, ethology, evolutionary biology, ethics--these are just a few that come up off the top of my head. If you're right about more or less everything, peer-reviewed research done by actual scientists in most of these fields should agree with your thesis. If it doesn't, make a weaker claim.

Publish

Right now, your argument is formatted as a blog post. Anyone with access to a computer is technically capable of self-publishing thousands of pages of their thoughts. Write an article and submit it to an academic journal for peer review. Any publication that survives the peer-review process will give you more credibility. I'm not saying that's fair, but it is a useful heuristic for nonexperts to decide whether or not you are worth their time. An alternative would be to see your blog posts cited in books by experts (for instance, Eliezer Yudkowsky has no formal secondary education, but his ideas are good enough that he is credited by other experts in his field).

Empiricism/Falsifiability

As it currently stands, you're essentially making a claim and insisting that others disprove it. This, of course, is acceptable as a Reddit discussion or a blog post--but is not suitable for uncovering the truth. I can insist that my pet rock has a subjective experience and refuse to believe otherwise unless someone can prove it to me, but I won't be taken seriously (nor should I be). Could you design an experiment that tests a falsifiable claim about nonhuman animal general intelligence? (Or, alternatively, find one that has already been published demonstrating that only humans possess it?) What would it look like?

What computations, what information processing, what inputs or outputs to what algorithms, what physical states of computer systems like brains indicate or are consciousness? I have the same question for suffering too.

We don't know the answer to these questions. Staking your thesis on possible answers to open questions might be a way to stalemate internet debates, but won't deepen your or anyone else's understanding.

Gatekeeping

You're widely read and the depth of your knowledge/understanding in some areas is significant. You need to recognize that some people will have different foundations than yours--they might be very well-read on evolutionary biology--but have less of an understanding of Turing computability. Instead of rudely dismissing arguments that are outside of the disciplines you're most comfortable with, try to meet these people on their level. What do they have to teach you? What thinkers can they expose you to? Your self-curated curriculum is impressive but uneven and far from comprehensive. Try a little humility. Assuming you're right about everything, you should be able to communicate it to experts outside of your field.

Closing

I think that advice is good whether or not you're correct; if you are, people far more intelligent than I should start to recognize it. If you aren't, you might be able to clarify where you went wrong and either abandon your claim or reformulate it to make a weaker--but possibly true--version.

Lastly, I encourage anyone observing from the sidelines to use Google Scholar or similar if you have an interest in animal general intelligence. I linked an article above; here it is again. The article references 60 others and has been cited by 14. This does not mean that the authors' findings are replicable or ironclad, but again--it is a useful heuristic in deciding what kind of probability we want to assign to the likelihood it is on the right track, especially when the alternative is trying to read through hundreds of pages of random blog posts so that we can meet an interlocutor on their level.

To find that article, I searched for "general intelligence in animals" using Academic Search Premier. PubMed and Google Scholar might find similar results. I filtered out all articles that were not subject to peer review or were published before 2012. It was the 4th search result out of over 50 published in the last seven years. Science may never be finished or solvable, but nonhuman animals' capacities to learn, to have intentional states and preferences, and to experience pain are not really still open questions in relevant disciplines.

If I'm right about ~everything, that includes my views of the broad irrationality of academia and the negative value of current published research in many of the fields in question.

For example, David Deutsch's static meme idea, available in BoI, was rejected for academic publication ~20 years earlier. Academia gatekeeps to keep out ideas they don't want to hear, and they don't really debate what's true much in journals. It's like a highly moderated forum with biased moderators following unwritten and inconsistent rules (like reddit but stricter!).

My arguments re animals are largely Deutsch's. He taught me his worldview. The reason he doesn't write it up and publish it in a journal is because (he believes that) it either wouldn't be published or wouldn't be listened to (and it would alienate people who will listen to his physics papers). The same goes for many other important ideas he has. Being in the Royal Society, etc., is inadequate to effectively get past the academic gatekeeping (to get both published and seriously, productively engaged with). I don't think a PhD and 20 published papers would help either (especially not with issues involving many fields at once). I don't think people would, at that point, start considering and learning different ideas than what they already have, e.g. learning Critical Rationalism so they could apply that framework to animal rights to reach a conclusion like "If Critical Rationalism is true, then animal rights is wrong." (And CR is not the only controversial premise I use that people are broadly ignorant of, so it's harder than that.) People commonly dismiss others, despite many credentials, if they don't like the message. I don't think playing the game of authority and credentials – an irrational game – will solve the problem of people's disinterest in truth-seeking. This view of academia is, again, one Deutsch taught me.

Karl Popper published a ton but was largely ignored. Thomas Szasz too. There are many other examples. Even if I got published, I could easily be treated like e.g. Richard Lindzen who has published articles doubting some claims about global warming.

Instead of rudely dismissing arguments that are outside of the disciplines you're most comfortable with, try to meet these people on their level.

If I'm right about ~everything (premise), that includes that I'm right about my understanding of evolutionary biology, which is an area I've studied a lot (as has Deutsch). That's not outside my comfort zone.

I think that advice is good whether or not you're correct; if you are, people far more intelligent than I should start to recognize it.

We disagree about the current state of the world. How many smart people exist, how many competent people exist in what fields, how reasonable are intellectuals, what sort of things do they do, etc. You mention Eliezer Yudkowsky, who FYI agrees with me on something like this particular issue, e.g. he denies "civilizational adequacy", and says the world is on fire, in Hero Licensing. OTOH, he's also the same guy who took moderator action to suppress discussion of Critical Rationalism on his site because – according to him – it was downvoted a lot (factually there were lots of downvotes, but I mean he actually said that was his reason for taking moderator action – so basically just suppressing unpopular ideas on the basis that they are unpopular). He has publicly claimed Critical Rationalism is crap but has never written anything substantive about that and won't debate, answer counter-arguments, or endorse any criticism of Critical Rationalism written by someone else (and I'm pretty confident there is no public evidence that he knows much about CR).

The reason I asked about how to fix this is I think your side of the debate, including academic institutions and their alleged adequacy, is blocking error correction. They don't allow any reasonable or realistic way that, if I'm right, it gets fixed. FYI I've written about the general topic of how intellectuals are closed to ideas and what rational methods of truth seeking look like, e.g. Paths Forward. The basic theme of that article is about doing intellectual activities in such a way that, if you're wrong, and someone knows you're wrong, and they're willing to tell you, you don't prevent them from correcting you. Currently ~everyone is doing that wrong. (Of course there are difficulties like how to do this in a time-efficient manner, which I go into. It's not an easy problem to solve but I think it is solvable.)

Lastly, I encourage anyone observing from the sidelines to use Google Scholar or similar if you have an interest in animal general intelligence. I linked an article above; here it is again.

PS, FYI it's readily apparent from the first sentence of the abstract of that article that it's based on an intellectual framework which contradicts the one in The Beginning of Infinity. It views intelligence in a different way than we do, which must be partly due to some epistemology ideas which are not stated or cited in the paper. And it doesn't contain the string "compu" so it isn't engaging with our framework re computation either (instead it's apparently making unstated, uncited background assumptions again, which I fear may not even be thought through).

I guess you'll think that, in that case, I should debate epistemologists, not animal rights advocates. Approach one of the biggest points of disagreements more directly. I don't object to that. I do focus a lot on epistemology and issues closer to it. The animal welfare thing is a side project. But the situation in academic epistemology has the same problems I talked about in my sibling post and is, overall, IMO, worse. Also, even if I convinced many epistemologists, that might not help much, considering lots of what I was saying about computation is already a standard (sorta, see quote) view among experts. Deutsch actually complains about that last issue in The Fabric of Reality (bold text emphasized by me):

The Turing principle, for instance, has hardly ever been seriously doubted as a pragmatic truth, at least in its weak forms (for example, that a universal computer could render any physically possible environment). Roger Penrose's criticisms are a rare exception, for he understands that contradicting the Turing principle involves contemplating radically new theories in both physics and epistemology, and some interesting new assumptions about biology too. Neither Penrose nor anyone else has yet actually proposed any viable rival to the Turing principle, so it remains the prevailing fundamental theory of computation. Yet the proposition that artificial intelligence is possible in principle, which follows by simple logic from this prevailing theory, is by no means taken for granted. (An artificial intelligence is a computer program that possesses properties of the human mind including intelligence, consciousness, free will and emotions, but runs on hardware other than the human brain.) The possibility of artificial intelligence is bitterly contested by eminent philosophers (including, alas, Popper), scientists and mathematicians, and by at least one prominent computer scientist. But few of these opponents seem to understand that they are contradicting the acknowledged fundamental principle of a fundamental discipline. They contemplate no alternative foundations for the discipline, as Penrose does. It is as if they were denying the possibility that we could travel to Mars, without noticing that our best theories of engineering and physics say that we can. Thus they violate a basic tenet of rationality — that good explanations are not to be discarded lightly.

But it is not only the opponents of artificial intelligence who have failed to incorporate the Turing principle into their paradigm. Very few others have done so either. The fact that four decades passed after the principle was proposed before anyone investigated its implications for physics, and a further decade passed before quantum computation was discovered, bears witness to this. People were accepting and using the principle pragmatically within computer science, but it was not integrated with their overall world-view.

I think we live in a world where you can be as famous as Turing, have ~everyone agree you're right, and still have many implications of your main idea substantively ignored for decades (or forever. Applying Turing to physics is a better result than has happened with many other ideas, and Turing still isn't being applied to AI adequately). As Yudkowsky says, it's not an adequate world.


Update: Read more of this discussion at Discussing Animal Intelligence


Elliot Temple on December 2, 2019

Messages (9)

For documentation in case the link ever breaks, the first sentence of the paper's abstract reads:

> Contemporary descriptions of human intelligence hold that this trait influences a broad range of cognitive abilities, including learning, attention, and reasoning.

The paper is: Wass, C., Denman-Brice, A., Rios, C., Light, K., Kolata, S., Smith, A., & Matzel, L. (2012). Covariation of Learning and "Reasoning" Abilities in Mice: Evolutionary Conservation of the Operations of Intelligence. Journal of Experimental Psychology: Animal Behavior Processes, 38, 109–124. doi:10.1037/a0027355


curi at 5:23 PM on December 2, 2019 | #14660

Oh God I hadn't actually read the rest of the abstract. Just looked. It's openly inductivist except it defines induction as a present-day non-AGI algorithm. This is ridiculous. Though it's unclear if that's what they think induction is or they think that's an instance of induction (with deduction, I think they intend it to be an instance, not the whole thing). Here's the abstract, emphasis added:

> Contemporary descriptions of human intelligence hold that this trait influences a broad range of cognitive abilities, including learning, attention, and reasoning. Like humans, individual genetically heterogeneous mice express a "general" *cognitive trait that influences* performance across a diverse array of learning and attentional tasks, and it has been suggested that this trait is qualitatively and structurally analogous to general intelligence in humans. However, the hallmark of human intelligence is the ability to use various forms of "reasoning" to support solutions to novel problems. Here, we find that genetically heterogeneous mice are capable of solving problems that are nominally indicative of *inductive and deductive forms of reasoning*, and that individuals' capacity for reasoning covaries with more general learning abilities. Mice were characterized for their general learning ability as determined by their aggregate performance (derived from principal component analysis) across a battery of *five diverse learning tasks*. These animals were then assessed on prototypic tests indicative of *deductive reasoning (inferring the meaning of a novel item by exclusion, i.e., "fast mapping")* and *inductive reasoning (execution of an efficient search strategy in a binary decision tree)*. The animals exhibited systematic abilities on each of these nominal reasoning tasks that were predicted by their aggregate performance on the battery of learning tasks. These results suggest that the coregulation of reasoning and general learning performance in genetically heterogeneous mice form a core cognitive trait that is analogous to human intelligence.

Re "fast mapping", what they wrote there is misleading, but what they mean is: you have 5 pairs of things. You know 4 of the pairings. You can deduce the fifth pairing.

Anyway, *we have binary tree search algorithms and they are not AGIs*. And they aren't what induction is about ... as presented in the philosophy literature by pro-inductivist thinkers.
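
To make that concrete, here's a minimal sketch (hypothetical code, my illustration, not from the paper) of both "reasoning" tasks as routine algorithms. The names and data are made up; the point is just that exclusion and binary-tree search are a few lines of ordinary, non-AGI code:

```python
# Minimal sketch (illustrative, not from the paper): the two "reasoning"
# tasks as routine, non-AGI algorithms.

from dataclasses import dataclass
from typing import Any, Callable, Optional

def fast_map(items, labels, known_pairs):
    """Deduce the one unknown pairing by exclusion (the "fast mapping"
    task): 4 of 5 pairings are known, so the 5th follows."""
    unmatched_item = (set(items) - {i for i, _ in known_pairs}).pop()
    unmatched_label = (set(labels) - {l for _, l in known_pairs}).pop()
    return (unmatched_item, unmatched_label)

@dataclass
class Node:
    value: Any
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def search(node: Optional[Node], is_goal: Callable[[Any], bool]) -> Optional[Node]:
    """Depth-first search of a binary decision tree. An "efficient search
    strategy in a binary decision tree" is textbook tree search."""
    if node is None:
        return None
    if is_goal(node.value):
        return node
    return search(node.left, is_goal) or search(node.right, is_goal)

# Example: with 4 of 5 pairings known, the 5th follows by exclusion.
known = [("A", 1), ("B", 2), ("C", 3), ("D", 4)]
print(fast_map(["A", "B", "C", "D", "E"], [1, 2, 3, 4, 5], known))  # ('E', 5)
```

Neither function models anything about a mind; this is intro-programming material.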

Anyway another reason academic publication would be a hard route for me is because what I want to publish is rather close to claims that the large majority of academics are grossly incompetent and should be fired (not fired overnight but as an urgent goal). Even if I avoided all direct criticism of things I disagree with, it'd still be the case that anyone who actually understood my ideas would find it pretty easy to get from them to the conclusion that people like these paper authors are both grossly incompetent and, in many ways, actively harmful. If my ideas were understood, people could apply them critically and see the implications even if I didn't say it myself.

Here are the five learning tasks:

> Lashley III maze, passive avoidance, odor guided discrimination, Morris water maze, and associative fear conditioning

The first maze task just means the intentionally-food-deprived mouse found food in a maze, and was able to find it again with fewer wrong turns on additional visits to the maze.

A Roomba or self-driving car could do that. Doesn't mean they learn. Or if you really want to call that learning, then you need a different word for stuff humans can do that is unlike that; you don't just get to say that since you used the word "learn" it must be equivalent to human learning.
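
Here's a minimal sketch (hypothetical, illustrative) of how little machinery "fewer wrong turns on revisits" requires: just remember which junctions turned out to be dead ends and avoid them next time.

```python
# Minimal sketch (illustrative): "maze learning" as remembering dead ends.
# Wrong turns decrease across visits with no AGI-style learning involved.

def run_trial(maze, start, food, known_dead_ends):
    """One pass through the maze; returns the number of wrong turns.

    maze: dict mapping each junction to its adjacent junctions.
    known_dead_ends: junctions remembered from earlier trials; updated in place.
    """
    wrong_turns = 0
    path, visited = [start], {start}
    while path and path[-1] != food:
        here = path[-1]
        options = [n for n in maze[here]
                   if n not in visited and n not in known_dead_ends]
        if options:
            visited.add(options[0])
            path.append(options[0])
        else:
            known_dead_ends.add(here)  # remember this junction is a dead end
            path.pop()                 # backtracking counts as a wrong turn
            wrong_turns += 1
    return wrong_turns

maze = {"start": ["a", "b"], "a": ["start"], "b": ["start", "c"],
        "c": ["b", "food"], "food": ["c"]}
dead_ends = set()
print(run_trial(maze, "start", "food", dead_ends))  # first visit: 1 wrong turn
print(run_trial(maze, "start", "food", dead_ends))  # second visit: 0 wrong turns
```

Behavior with this shape doesn't distinguish a mouse from a trivial program.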


curi at 5:59 PM on December 2, 2019 | #14661

If you click "edit" on your reddit comments, then copy/paste from there, you'll get the markdown version which has blockquote markers in it. If you could repost that way the formatting should come out mostly the same and I'll delete the plain text comment which is hard to read due to missing formatting.

Also, wtf, the mods deleted the first half of what you said in reply to me!?

https://www.reddit.com/r/DebateAVegan/comments/e4ul7v/the_programmers_challenge_to_animal_rights/f9iaauu/


curi at 2:12 PM on December 3, 2019 | #14669

I just got banned from the Debate A Vegan subreddit for posting the following comment:

> For anyone interested in reading more of this discussion, we're going to continue at https://curi.us/2253-academias-inadequacy


curi at 2:29 PM on December 3, 2019 | #14670

[While writing this response, the original post was removed. I think that’s unfortunate, but what’s done is done. I’d still love a quick response—just to see if I understand you correctly.]

Hi, Elliot. Thanks for your response. I want to say off the bat that I don’t think I’m equipped to debate the issue at hand with you past this point. (Mostly based off your sibling post; I’m not claiming you’re wrong, but just that I think I—finally—realize that I *don’t understand* where you’re coming from, entirely (or possibly at all).) I’m willing to concede that—if you’re right about everything—you probably *do* need to have this conversation with programmers or physicists. If the general intelligence on display in the article I cited is categorically different from what you’re talking about when you talk about G.I., then I’m out of my depth.

That being said, I’d love to continue the conversation for a little while, if you’re up for it, either here or possibly on your blog if that works better for you. I have some questions and would like to try and understand your perspective.

> If I'm right about ~everything, that includes my views of the broad irrationality of academia and the negative value of current published research in many of the fields in question.

> For example, David Deutsch's static meme idea, available in BoI, was rejected for academic publication ~20 years earlier. Academia gatekeeps to keep out ideas they don't want to hear, and they don't really debate what's true much in journals. It's like a highly moderated forum with biased moderators following unwritten and inconsistent rules (like reddit but stricter!).

> My arguments re animals are largely Deutsch's. He taught me his worldview. The reason he doesn't write it up and publish it in a journal is because (he believes that) it either wouldn't be published or wouldn't be listened to (and it would alienate people who will listen to his physics papers). The same goes for many other important ideas he has. Being in the Royal Society, etc., is inadequate to effectively get past the academic gatekeeping (to get both published and seriously, productively engaged with). I don't think a PhD and 20 published papers would help either (especially not with issues involving many fields at once).

For what it’s worth, I think this is a fair criticism and concern, especially for someone—like you—who is trying to distill specific truths out of many fields at once. If your (and Deutsch’s) worldview conflicts with the prevailing academic worldview, I concede that publishing might be difficult or impossible and not the best use of your energy.

> I don't think people would, at that point, start considering and learning different ideas than what they already have, e.g. learning Critical Rationalism so they could apply that framework to animal rights to reach a conclusion like "If Critical Rationalism is true, then animal rights is wrong." (And CR is not the only controversial premise I use that people are broadly ignorant of, so it's harder than that.) People commonly dismiss others, despite many credentials, if they don't like the message. I don't think playing the game of authority and credentials – an irrational game – will solve the problem of people's disinterest in truth-seeking. This view of academia is, again, one Deutsch taught me.

> Karl Popper published a ton but was largely ignored. Thomas Szasz too. There are many other examples. Even if I got published, I could easily be treated like e.g. Richard Lindzen who has published articles doubting some claims about global warming.

Fair enough.

I’m not going to respond to the rest of your posts line-by-line because I think most of what you’re saying is uncontroversial or is not relevant to the OP (it was relevant to my posts; thank you for the substantial, patient responses).

For any bystanders who are interested and have made it this far, I think that this conversation between OP and Elliot is helpful in understanding their argument (at least it was for me).

Without the relevant CS or critical rationality background, I can attempt to restate their argument in a way that seems coherent (to me). Elliot or OP can correct me if I’m way off base.

The capacity for an organism to suffer may be binary; essentially, at a certain level of general intelligence, the capacity to suffer may turn on.

(I imagine suffering to exist on a spectrum; a human’s suffering may be “worse” than a cow’s or a chicken’s because we have the ability to reflect on our suffering and amplify it by imagining better outcomes, but I’m not convinced that—if I experienced life from the perspective of a cow—I wouldn’t recognize the negative hallmarks of suffering, and prefer it to end. My thinking is that a sow in a gestation crate could never *articulate* to herself “I’m uncomfortable and in pain; I wish I were comfortable and pain-free,” but that doesn’t preclude a conscious preference for circumstances to be otherwise, accompanied by suffering or its nonhuman analog.)

Back to my interpretation of the argument: Beneath a certain threshold of general intelligence, pain—or the experience of having any genetically preprogrammed preference frustrated—may not be interpreted as suffering in the way humans understand it and may not constitute suffering in any meaningful or morally relevant way (even if you otherwise think we have a moral obligation to prevent suffering where we can).

It’s possible that suffering requires uniquely human metacognition; without the ability to think about pain and preference frustration *abstractly*, animals might not suffer in any meaningful sense.

So far (I hope) all I’ve done is restate what’s already been claimed by Elliot in his original post. Whether I’ve helped make it any clearer is probably an open question. Hopefully, Elliot can correct me if I’ve misinterpreted anything or if I’ve dumbed it down to a level where it’s fundamentally different from the original argument.

This is where I think it gets tricky and where a lot of miscommunication and misunderstanding has been going on. Here is a snippet of the conversation I linked earlier:

>**curi**: my position on animals is awkward to use in debates because it's over 80% background knowledge rather than topical stuff.

>**curi**: that's part of why i wanted to question their position and ask for literature that i could respond to and criticize, rather than focusing on trying to lay out my position which would require e.g. explaining KP and DD which is hard and indirect.

>**curi**: if they'll admit they have no literature which addresses even basic non-CR issues about computer stuff, i'd at that point be more interested in trying to explain CR to them.

I’m willing to accept that Elliot is here in good faith; nothing I’ve read on their blog thus far looks like an attempt to “own the soyboys” or “DESTROY vegan arguments.” They’re reading Singer (and Korsgaard) and are legitimately looking for literature that compares or contrasts nonhuman animals with AI.

The problem is—whether they’re right or not—it seems like the foundation of their argument requires a background in CR and theoretical computer science.

From my POV, **(a)** the argument that suffering may be binary vs. occurring on a spectrum is possible but far from settled and might be unfalsifiable. From my POV, it’s far more likely that animals *do* suffer in a way that is very different from human suffering but still ethically and categorically relevant.

new_grass made a few posts that more eloquently describe that perspective; humans, yelping dogs, and so on evolved from a common ancestor and it seems unlikely that suffering is a uniquely human feature when so many of our other cognitive skills seem to be continuous with other animals.

New_grass says:

> But this isn't the relevant proposition, unless you think the probability that general intelligence (however you are defining it) is required for the ability to suffer or be conscious is one. And that is absurd, given our current meager understanding of consciousness.

> The relevant question is what the probability is that other animals are conscious, or, if you are a welfarist, whether they can suffer. And that probability is way higher than zero, for the naturalistic reasons I have cited.

But according to Elliot, our judgment of the conservatism argument hinges on our understanding of CR and Turing computability.

Does the following sound fair?

*If pdxthehunted had an adequate understanding of the Turing principle and CR and their implications for intelligence and suffering, their opinion on **(a)** would change; they would understand why suffering certainly does occur as a binary off/on feature of sufficiently intelligent life.*

Please let me know if I’ve managed to at least get a clearer view of the state of the debate and where communication issues are popping up.

Frankly, I’ve enjoyed this thread. I’ve learned a lot. I bought DD’s BOI a couple of years ago after listening to his two podcasts with Sam Harris, but never got around to reading it. I’ve bumped it up to next on my reading list and am hoping that I’m in a better position to understand your argument afterward.

Finally--if capacity for suffering hinges on general intelligence, is consciousness relevant to the argument at all?

Thanks again.


pdxthehunted at 2:30 PM on December 3, 2019 | #14671

#14670

That's really frustrating. The ban seems wildly inappropriate.

I tried copying my response from Reddit's editor. It looks pretty ugly--I used markdown mode. Thanks for your patience. If I end up in more conversations here, I'll work on mastering the formatting. Feel free to make any cosmetic/aesthetic changes to the above if it's worth your time; alternatively, I can try to repost a cleaned-up version later today or tomorrow.

I'm interested in seeing whether or not I'm on the right track for understanding where you're coming from. Thanks again, sorry about the reddit drama.


pdxthehunted at 2:41 PM on December 3, 2019 | #14672

Reddit drama is not your fault! I'll fix your comment and reply today. I removed the backslashes but I see there's a few more things to do.


curi at 2:45 PM on December 3, 2019 | #14673

#14671 I wrote a long reply so I posted it to https://curi.us/2256-discussing-animal-intelligence


curi at 5:29 PM on December 3, 2019 | #14676

Comments on Yudkowsky's Hero Licensing (linked in the OP, talks about civilizational inadequacy):

http://curi.us/2065-open-letter-to-machine-intelligence-research-institute#9282


Anonymous at 12:45 AM on December 5, 2019 | #14696
