Effective Altruism Is Mostly Wrong About Its Causes

EA has gotten a few billion dollars to go towards its favored charitable causes. What are they? Here are some top cause areas:

  • AI alignment
  • animal welfare and veganism
  • disease and physical health
  • mental health
  • poverty
  • environmentalism
  • left wing politics

This is not a complete list. They care about some other stuff such as Bayesian epistemology. They also have charities related to helping the EA movement itself and they put a bunch of resources into evaluating other charities.

These causes are mostly bad and harmful. Overall, I estimate EA is doing more harm than good. For their billions of dollars, despite their cost-effectiveness oriented mission, they’ve created negative value.

EA’s Causes

Meta charities are charities whose job is to evaluate other charities then pass on money to the best ones. That has upsides and downsides. One of the downsides is it makes it less obvious what charities the EA movement funds.

Giving What We Can has a list of recommended organizations to donate to. The first two charities I see listed are a “Top Charities Fund” and an “All Grants Fund”, which are generic meta charities. That’s basically just saying “Give us money and trust us to use it for good things on any topic we choose.”

The second group of charities I see is related to animal welfare, but all three are meta charities which fund other animal welfare charities. So it’s unclear what the money is actually for.

Scrolling down, next I see three meta charities related to longtermism. I assume these mostly donate to AI alignment, but I don’t know for sure, and they do care about some other issues like pandemics. You might expect pandemics to be categorized under health not longtermism/futurism, but one of the longtermism meta charity descriptions explicitly mentions pandemics.

Scrolling again, I see a climate change meta charity, a global catastrophe risk meta charity (vague), and an EA Infrastructure charity that tries to help the EA cause.

Scrolling again, I finally see some specific charities. There are two about malaria, then one about childhood vaccinations in Nigeria. Scrolling horizontally, they have a charity about treating depression in Africa alongside iodine and vitamin A supplements and deworming. There are two about happiness and one about giving poor people some cash while also spending money advocating for Universal Basic Income (a left wing political cause). It’s weird to me how these very different sorts of causes get mixed together: physical health, mental health and political activism.

Scrolling down, next is animal welfare. After that comes some pandemic stuff and one about getting people to switch careers to work in these charities.

I’m sure there’s better information somewhere else about what EA actually funds and how much money goes to what. But I’m just going to talk about how good or bad some of these causes are.

Cause Evaluations

AI Alignment

This cause has various philosophical premises. If they’re wrong, the cause is wrong. There’s no real way to debate them; it’s hard even to get clarification on what their premises are, and there’s significant variation in beliefs between different advocates, which makes it harder to have any specific target to criticize.

Their premises are things like Bayesian epistemology, induction, and there being no objective morality. How trying to permanently (mind) control the goals of AIs differs from trying to enslave them is unclear. They’re worried about a war with the AIs but they’re the aggressors looking to disallow AIs from having regular freedoms and human rights.

They’re basically betting everything on Popper and Deutsch being wrong, but they don’t care to read and critique Popper and Deutsch. They don’t address Popper’s refutation of induction or Deutsch on universal intelligence. If humans already have universal intelligence, then in short AIs will at best be like us, not be something super different and way more powerful.

Animal Welfare

The philosophical premises are wrong. To know whether animals suffer you need to understand things about intelligence and ideas. That’s based on epistemology. The whole movement lacks any serious interest in epistemology. Or in simple terms, they don’t have reasonable arguments to differentiate a mouse from a roomba (with a bit better software). A roomba uncontroversially can’t suffer, and it still wouldn’t be able to suffer if you programmed it a bit better, gave it legs and feet instead of wheels, gave it a mouth and stomach instead of a battery recharging station, etc.

The tribalism and political activism are wrong too. Pressuring corporations and putting out propaganda is the wrong approach. If you want to make the world better, educate people about better principles and ways of thinking. People need rationality and reasonable political philosophy. Skipping those steps and jumping straight into specific causes, without worrying about the premises underlying people’s reasoning and motivations, is bad. It encourages people to do unprincipled and incorrect stuff.

Basically, for all sorts of causes that have anything to do with politics, there are tons of people working to do X, and also tons of people working to stop X or to get something that contradicts X. The result is a lot of people working against each other. If you want to make a better world, you have to stop participating in those tribalist fights and start resolving those conflicts. That requires connecting what you do to principles people can agree on, and teaching people better principles and reasoning skills (which requires first learning those things yourself, and also being open to debate and criticism about them).

See also my state of the animal rights debate tree diagram. I tried to find any vegans or other advocates who had any answers to it or relevant literature to add anything to it, but basically couldn’t get any useful answers anywhere.

Let’s also try to think about the bigger picture in a few ways. Why are factory farms so popular? What is causing that?

Do people not have enough money to afford higher quality food? If so, what is causing that? Maybe lack of capitalism or lack of socialism. You have to actually think about political philosophy to have reasonable opinions about this stuff and reach conclusions. You shouldn’t be taking action before that. I don’t think there exists a charity that cares about animal welfare and would use anti-poverty work as the method to achieve it. That’s too indirect for people or something, so they should get better at reasoning…

A ton of Americans do have money to afford some better food. Is the problem lack of awareness of how bad factory farms are, the health concerns they create for humans, or lack of knowledge of which brands or meats are using factory farms? Would raising awareness help a lot? I saw something claiming that in a survey over 50% of Americans said they thought their meat was from animals that were treated pretty well, but actually like 99% of US meat is from factory farms, so a ton of people are mistaken. I find that plausible. Raising awareness is something some charities work on, but often in shrill, propagandistic, aggressive, alienating or tribalist ways, rather than providing useful, reliable, unbiased, non-partisan information.

Maybe the biggest issue with factory farms in the US is laws and regulations (including subsidies) which were written largely by lobbyists for giant food corporations, and which are extremely hostile to their smaller competitors. That is plausible to me. How much animal welfare work is oriented towards this problem? I doubt it’s much since I’ve seen a decent amount of animal welfare stuff but never seen this mentioned. And what efforts are oriented towards planning and figuring out whether this is the right problem to work on, and coming up with a really good plan for how to make a change? So often people rush to try to change things without recognizing how hard, expensive and risky change is, and making damn sure they’ve thought everything through and the change will actually work as intended and the plan to cause the change will work as intended too.

Left Wing Politics

More broadly, any kind of left wing political activism is just fighting with the right instead of finding ways to promote social harmony and mutual benefit. It’s part of a class warfare mindset. The better way is in short classical liberalism, which neither the current left nor right knows much about. It’s in short about making a better society for everyone instead of fighting with each other. Trying to beat the rival political tribe is counter-productive. Approaches which transcend politics are needed.

Mental Health

Psychiatry is bad. It uses power to do things to people against their consent, and it’s manipulative, and its drugs are broadly poison. Here’s a summary from Thomas Szasz, author of The Myth of Mental Illness.

Environmentalism

As with many major political debates, both sides of the global warming debate are terrible and have no idea what they’re talking about. And there isn’t much thoughtful debate. I’ve been unable to find refutations of Richard Lindzen’s skeptical arguments related to water vapor. The “97% of scientists agree” thing is biased junk, and even if it were true it’s an appeal to authority, not a rational argument. The weather is very hard to predict even in the short term, and a lot of people have made a lot of wrong predictions about long term warming or cooling. They often seem motivated by other agendas like deindustrialization, anti-technology attitudes or anti-capitalism, with global warming serving as an excuse. Some of what they say sounds a lot like “Let’s do trillions of dollars of economic harm taking steps that we claim are not good enough and won’t actually work.” There are fairly blatant biases in things like scientific research funding – science is being corrupted as young scientists are under pressure to reach certain conclusions.

There are various other issues including pollution, running out of fossil fuels, renewables, electric cars and sustainability. These are all the kinds of things where

  1. People disagree with you. You might be wrong. You might be on the wrong side. What you’re doing might be harmful.
  2. You spend time and resources fighting with people who are working against you.
  3. Most people involved have tribalist mindsets.
  4. Political activism is common.
  5. Rational, effective, productive debate, to actually reasonably resolve the disagreements about what should be done, is rare.

What’s needed is to figure out ways to actually rationally persuade people (not use propaganda on them) and reach more agreement about the right things to do, rather than responding to a controversy by putting resources into one side of it (while others put resources into the other side, and you kinda cancel each other out).

Physical Health

These are the EA causes I agree with the most. Childhood vaccinations, vitamin A, iodine and deworming sound good. Golden rice sounds like a good idea to me (not mentioned here but I’ve praised it before). I haven’t studied this stuff a ton. The anti-malaria charities concern me because I expect that they endorse anti-DDT propaganda.

I looked at New Incentives which gets Nigerian babies vaccinated (6 visits to a vaccination clinic finishing at 15 months old) for around $20 each by offering parents money to do it. I checked if they were involved with covid vaccine political fighting and they appear to have stayed out of that, so that’s good. I have a big problem with charities that have some good cause but then get distracted from it to do tribalist political activism and fight with some enemy political tribe. A charity that just sticks to doing something useful is better. So this one actually looks pretty good and cost effective based on brief research.

Pandemic prevention could be good but I’d be concerned about what methods charities are using. My main concern is they’d mostly do political activism and fight with opponents who disagree with them, rather than finding something actually productive and effective to do. Also pandemic prevention is dominated quite a lot by government policy, so it’s hard to stay out of politics. Just spending donations to stockpile some masks, vaccines and other supplies (because some governments don’t have enough) doesn’t sound like a very good approach, and that’d be more about mitigation than prevention anyway.

Even something like childhood vaccination in Nigeria has some concerns. Looking at it in isolation, sure, it makes some things better. It’s a local optimum. But what bigger picture does it fit into?

For example, why isn’t the Nigerian government handling this? Is it essentially a subsidy for the Nigerian government, which lets them buy more of something else, and underfund this because charities step in and help here? Could the availability of charity for some important stuff cause budgets to allocate money away from those important things?

Does giving these poor Nigerians this money let their government tax them at higher rates than it otherwise would, so some of the money is essentially going to the Nigerian government not to the people being directly helped? Might some of the money be stolen in other ways, e.g. by thugs in areas where there’s inadequate protection against violence? Might the money attract any thugs to the area? Might the thugs pressure some women to have more babies and get the money so that they can steal it? I don’t know. These are just some initial guesses about potential problems that I think are worth some consideration. If I actually studied the topic I’m sure I’d come up with some other concerns, as well as learning that some of my initial concerns are actually important problems while others aren’t.

Why are these Nigerians so poor that a few dollars makes a significant difference to them? What is causing that? Is it bad government policies? Is there a downside to mitigating the harm done by those policies, which helps them stay in place and not seem so bad? And could we do more good by teaching the world about classical liberalism or how to rationally evaluate and debate political principles? Could we do more good by improving the US so it can set a better example for other countries to follow?

Helping some poor Nigerians is a superficial (band aid) fix to some specific problems, which isn’t very effective in the big picture. It doesn’t solve the root causes/problems involved. It doesn’t even try to. It just gives some temporary relief to some people. And it has some downsides. But the downsides are much smaller compared to most of EA’s other causes, and the benefits are real and useful – they make some people’s lives clearly better – even if they’re just local optima.

Meta Charities

I think EA’s work on evaluating the effectiveness of charity interventions has some positive aspects, but a lot of it is focused on local optima, which can actually make it harmful overall even if some of the details are correct. Focusing attention and thinking on the wrong issues makes it harder for more important issues to get attention. If no one were doing any kind of planning, it would be easier to come along and say “Hey, let’s do some planning” and get anyone to listen. If there’s already tons of planning of the wrong types, a pro-planning message is easier to ignore.

EA will look at how well charities work on their own terms, without thinking about the cause and effect logic of the full situation. I’ve gone over this a few times in other sections. Looking at cost per childhood vaccination is a local optimum. The bigger picture includes things like how it may subsidize a bad government or local thugs, or how it’s just a temporary short-term mitigation while there are bigger problems like bad economic systems that cause poverty. How beneficial is it really to fix one instance of a problem when there are systems in the world which keep creating that problem over and over? Dealing with those systems that keep causing the problems is more important. In simple terms, imagine a guy was going around breaking people’s legs, and you went around giving them painkillers… There is a local optimum of helping people who are in pain, but it’s much more important to deal with the underlying cause. From what I’ve seen, EA’s meta charity evaluation is broadly about local optima, not bigger picture understanding of the causes of problems, so it often treats symptoms of a problem, not the real problem. They will just measure how much pain relief an intervention provides and evaluate how good it is on that basis (unless they manage to notice a bigger picture problem, which they occasionally do, but they aren’t good at systematically finding those).

Also they try to compare charities that do different kinds of things. So you have benefits in different dimensions and they try to compare. They tend to do this, in short, by weighted factor summing, which fundamentally doesn’t work (it’s completely wrong, broken and impossible, and means there are hidden and generally biased thought processes responsible for the conclusions reached). As a quick example, one of the EA charities I saw was doing something about trying to develop meat alternatives. This approaches animal welfare in a very different way than, say, giving painkillers to animals on factory farms or doing political activist propaganda against the big corporations involved. So there’s no way to directly compare which is better in simple numerical terms. As much as people like summary numbers, people need to learn to think about concepts better.
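To make the weighted factor summing point concrete, here’s a minimal sketch. The interventions, scores and weights below are made up for illustration, not taken from any real EA evaluation; the point is that the conclusion falls out of the weights, which are judgment calls that usually go unstated and unargued:

```python
# Toy illustration of weighted factor summing (made-up names, scores and weights,
# not any real evaluation): scoring two very different interventions on shared
# dimensions and collapsing them into one number.

interventions = {
    "meat alternatives R&D":    {"animals helped": 3, "certainty": 2, "speed": 1},
    "factory farm painkillers": {"animals helped": 1, "certainty": 4, "speed": 4},
}

weights = {"animals helped": 0.5, "certainty": 0.3, "speed": 0.2}

def weighted_score(scores, weights):
    """Collapse multi-dimensional scores into a single number via a weighted sum."""
    return sum(weights[k] * v for k, v in scores.items())

for name, scores in interventions.items():
    print(name, round(weighted_score(scores, weights), 2))

# With these weights, "factory farm painkillers" scores higher (2.5 vs 2.3).
# Raise the weight on "animals helped" to 0.8 (and lower the others) and
# "meat alternatives R&D" wins instead. The ranking comes from the weights,
# which are hidden judgment calls, not from anything derived in the math.
```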

Details

I could do more detailed research and argument about any of these causes, but it’s unrewarding because I don’t think EA will listen and seriously engage. That is, I think a lot of my ideas would not be acted on or refuted with arguments. So I’d still think I’m right, and be given no way to change my mind, and then people would keep doing things I consider counter-productive. Also, I already have gone into a lot more depth on some of these issues, and didn’t include it here because it’s not really the point.

Why do some people have a different view of EA and criticism, or different experiences with that? Why do some people feel more heard and influential? Two big reasons. First, you can social climb at EA then influence them. Second, compared to me, most people do criticism that’s simpler and more focused on local optima not foundations, fundamentals, root causes or questioning premises. (I actually try to give simple criticisms sometimes like “This is a misquote” or “This is factually false” but people will complain about that too. But I won’t get into that here.) People like criticism better when it doesn’t cross field boundaries and make them think about things they aren’t good at, don’t know much about, aren’t experienced at, or aren’t interested in. My criticisms tend to raise fundamental, important challenges and be multi-disciplinary instead of just staying within one field and not challenging its premises.

Conclusion

The broad issues are people who aren’t very rational or open to criticism and error correction, who then pursue causes which might be mistaken and harmful, and who don’t care much about rational persuasion or rational debate. People seem so willing to just join a tribe and fight opponents, and that is not the way to make the world better. Useful work almost all transcends those fights and stays out of them. And the most useful work, which will actually fix things in very cost-effective, lasting ways, is related to principles and better thinking. Help people think better and then the rest is so much easier.

There’s something really really bad about working against other people, who think they’re right, and you just spend your effort to try to counter-act their effort. Even if you’re right and they’re wrong, that’s so bad for cost effectiveness. Persuading them would be so much better. If you can’t figure out how to do that, why not? What are the causes that prevent rational persuasion? Do you not know enough? Are they broken in some way? If they are broken, why not help them instead of fighting with them? Why not be nice and sympathetic instead of viewing them as the enemies to be beaten by destructively overwhelming their resources with even more resources? I value things like social harmony and cooperation rather than adversarial interactions, and (as explained by classical liberalism) I don’t think there are inherent conflicts of interest between people that require (Marxist) class warfare or which disallow harmony and mutual benefit. People who are content to work against other people, in a fairly direct fight, generally seem pretty mean to me, which is rather contrary to the supposed kind, helping-others spirit of EA and charity.

EA’s approach to causes, as a whole, is a big bet on jumping into stuff without enough planning, without understanding the root causes of things, and without figuring out how to make the right changes. They should read e.g. Eli Goldratt on transition tree diagrams and how he approaches making changes within one company. If you want to make big changes affecting more people, you need much better planning than that, which EA doesn’t do or have. That encourages a short-term mindset of pursuing local optima which might be counter-productive, without adequately considering that you might be wrong and in need of better planning.

People put so much work into causes while putting way too little into figuring out whether those causes are actually beneficial, and understanding the whole situation and what other actions or approaches might be more effective. EA talks a lot about effectiveness but they mostly mean optimizing cost/benefit ratios given a bunch of unquestioned premises, not looking at the bigger picture and figuring out the best approach with detailed cause-effect understanding and planning.

More posts related to EA.


Elliot Temple | Permalink | Messages (0)

Effective Altruism Related Articles

I wanted to make it easier to find all my Effective Altruism (EA) related articles. I made an EA blog category.

Below I link to more EA-related stuff which isn't included in the category list.

Critical Fallibilism articles:

I also posted copies of some of my EA comments/replies in this topic on my forum.

You can look through my posts and comments on the EA site via my EA user profile.

I continued a discussion with an Effective Altruist at my forum. I stopped using the EA forum because they changed their rules to require giving up your property rights for anything you post there (so e.g. anyone can sell your writing without your permission and without paying you).

I also made videos related to EA:


Elliot Temple | Permalink | Messages (0)

EA Should Raise Its Standards

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


I think EA could be over 50% more effective by raising its standards. Community norms should care more about errors and about using explicit reasoning and methods.

For a small example, quotes should be exact. There should be a social norm that says misquotes are unacceptable. You can’t just change the words (without brackets), put them in quote marks, and publish the result. That’s not OK. I believe this norm doesn’t currently exist and there would be significant resistance to it. Many people would think it’s not a big deal, and that it’s a pedantic or autistic demand to be so literal with quotes. I think this is a way that people downplay and accept errors, which contributes to lowered effectiveness.

There are similar issues with many sorts of logical, mathematical, grammatical, factual and other errors where a fairly clear and objective “correct answer” can be determined, which should be uncontroversial, and yet people don’t care about or take seriously the importance of getting it right. Errors should be corrected. Retractions should be issued. Post-mortems should be performed. What process allowed the error to happen? What changes could be made to prevent similar errors from happening in the future?

It’s fine for beginners to make mistakes, but thought leaders in the community should be held to higher standards, and the higher standards should be an aspirational ideal that the beginners want to achieve, rather than something that’s seen as unnecessary, bad or too much work. It’s possible to avoid misquotes and logical errors without it being a major burden; if someone finds it’s a large burden, that means they need to practice more until they improve their intuition and subconscious mind. Getting things right in these ways should be easy and something you can do while tired, distracted, autopiloting, etc.

Fixes like these won’t make EA far more effective by themselves. They will set the stage for more advanced or complex improvements. It’s very hard to make more important improvements while frequently making small errors. Getting the basics right enables working more effectively on more advanced issues.

One of the main more advanced issues is rational debate.

Another is not trusting yourself. Don’t bet anything on your integrity or lack of bias when you can avoid it. There should be a strong norm against doing anything that would fail if you have low integrity or bias. If you can find any alternative, which doesn’t rely on your rationality, do that instead. Bias is common. Learning to be better at not fooling yourself is great, but you’ll probably screw it up a lot. If you can approach things so that you don’t have the opportunity to fool yourself, that’s better. There should be strong norms for things like transparency and following flowcharted methods and rules that dramatically reduce scope for bias. This can be applied to debate as well as to other things. And getting debate right enables criticism when people don’t live up to norms; without getting debate right norms have to be enforced in significant part with social pressure which compromises the rationality of the community and prevents it from clearly seizing the rationality high ground in debates with other groups.


Elliot Temple | Permalink | Messages (0)

Criticizing The Scout Mindset (including a misquote)

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


These are quick notes, opinions and criticisms about the book The Scout Mindset by Julia Galef (which EA likes and promotes). I’m not going in depth, being very complete, or giving many quotes, because I don’t care much. I think it’s a bad book that isn’t worth spending more time on, and I don’t expect the author or her fans to listen to, engage with, value, appreciate or learn from criticism. If they were reasonable and wanted to interact, then I think this would be plenty to get the discussion/debate started, and I could give more quotes and details later if that would help make progress in our conversation.

The book is pretty shallow.

Galef repeatedly admits she’s not very rational, sometimes openly and sometimes by accident. The open admissions alone imply that the techniques in the book are inadequate.

She mentions that while writing the book she gathered a bunch of studies that agree with her but was too biased to check their quality. She figured out during writing that she should check them and she found that lots were bad. If you don’t already know that kinda stuff (that most studies like that are bad, that studies should be checked instead of just trusting the title/abstract, or that you should watch out for being biased), maybe you’re too new to be writing a book on rationality?

The book is written so that it’s easy to read it, think you’re already pretty good, and not change. Or someone could improve a little.

The book has nothing that I recognized as substantive original research or thinking. Does she have any ideas of her own?

She uses biased examples, e.g. Musk, Bezos and Susan Blackmore are all used as positive examples. In each case, there are many negative things one could say about them, but she only says positive things about them which fit her narrative. She never tries to consider alternative views about them or explain any examples that don’t easily fit her narrative. Counter-examples or apparent counter-examples are simply left out of the book. Another potential counter-example is Steve Jobs, who is a better and more productive person than any of the people used as examples in her book, yet he has a reputation rather contrary to the scout mindset. That’s the kind of challenging apparent/potential counter-example that she could have engaged with but didn’t.

She uses an example of a Twitter thread where someone thought email greetings revealed sexism, and she (the tweet author) got cheered for sharing this complaint. Then she checked her data and found that her claim was factually wrong. She retracted. Great? Hold on. Let’s analyze a little more. Are there any other explanations? Even if the original factual claims were true, would sexism necessarily follow? Why not try to think about other narratives? For example, maybe men are less status oriented or less compliant with social norms, so that is why they are less inclined to use fancier titles when addressing her. It doesn’t have to be sexism. If you want to blame sexism, you should look at how they treat men, not just at how they treat one woman. Another potential explanation is that men dislike you individually and don’t treat other women the same way, which could be for some reason other than sexism. E.g. maybe it’s because you’re biased against men but not biased against women, so men pick up on that and respect you less. Galef never thinks anything through in depth and doesn’t consider additional nuances like these.

For Blackmore, the narrative is that anyone can go wrong and rationality is about correcting your mistakes. (Another example is someone who fell for a multi-level marketing scheme before realizing the error.) Blackmore had some experience and then started believing in the paranormal and then did science experiments to test that stuff and none of it worked and she changed her mind. Good story? Hold on. Let’s think critically. Did Blackmore do any new experiments? Were the old experiments refuting the paranormal inadequate or incomplete in some way? Did she review them and critique them? The story mentioned none of this. So why did she do redundant experiments and waste resources to gather the same evidence that already existed? And why did it change her mind when it had no actual new information? Because she was biased to respect the results of her own experiments but not prior experiments done by other people (that she pointed out no flaws in)? This fits the pro-evidence, pro-science-experiments bias of LW/Galef. They’re too eager to test things without considering that, often, we already have plenty of evidence and we just need to examine and debate it better. Blackmore didn’t need any new evidence to change her mind, and getting funding to do experiments like that speaks to her privilege. Galef brings up multiple examples of privilege without showing any awareness of it; she just seems to want to suck up to high status people, and not think critically about their flaws, rather than to actually consider their privileges. Not only was Blackmore able to fund bad experiments; she was then able to change her mind and continue her career. Why did she get more opportunities after doing such a bad job earlier in her career? Yes she improved (how much really though?). But other people didn’t suck in the first place, then also improved, and never got such great opportunities.

Possibly all the examples in the book of changing one’s mind were things that Galef’s social circle could agree with instead of being challenged by. They all changed their minds to agree with Galef more, not less. E.g. one example was about someone becoming more convinced of global warming and, in passing, it smeared some other people on the climate change skeptic side as being really biased, dishonest, etc. (That’s probably true of some of them, but not a good thing to throw around as an in-passing smear based on hearsay. And it’s true of people on the opposite side of the debate too, so it’s biased to only say it about the side you disagree with to undermine and discredit them in passing while having the deniability of saying it was just an example of something else about rationality.) There was a pro-choicer who became less dogmatic but remained pro-choice, and I think Galef’s social circle also is pro-choice but trying not to be dogmatic about it. There was also a pro-vaccine person who was careful and strategic about bringing up the subject with his anti-vax wife but didn’t reconsider his own views at all, though he and the author did display some understanding of the other side’s point of view and why some superficial pro-vax arguments won’t work. So the narrative is that if you understand the point of view of the people who are wrong, then you can persuade them better. But (implied) if you have center-left views typical of EA and LW people, then you won’t have to change your mind much since you’re mostly right.

Galef’s Misquote

Here’s a slightly edited version of my post on my CF forum about a misquote in the book. I expect the book has other misquotes (and factual errors, bad cites, etc.) but I didn’t look for them.

The Scout Mindset by Julia Galef quotes a blog post:

“Well, that’s too bad, because I do think it was morally wrong.”[14]

But the words in the sentence are different in the original post:

Well that’s just too bad, because I do think it was morally wrong of me to publish that list.

She left out the “just” and also cut off the quote early which made it look like the end of a sentence when it wasn’t. Also a previous quote from the same post changes the italics even though the italics match in this one.

The book also summarizes events related to this blog post, and the story told doesn’t match reality (as I see it by looking at the actual posts). Also I guess he didn’t like the attention from the book because he took his whole blog down and the link in the book’s footnote is dead. The book says they’re engaged so maybe he mistakenly thought he would like the attention and had a say in whether to be included? Hopefully… Also the engagement may explain the biased summary of the story that she gave in her book about not being biased.

She also wrote about the same events:

He even published a list titled “Why It’s Plausible I’m Wrong,”

This is misleading because he didn’t put up a post with that title. It’s a section title within a post and she didn’t give a cite so it’s hard to find. Also her capitalization differs from the original which said “Why it’s plausible I’m wrong”. The capitalization change is relevant to making it look more like a title when it isn’t.

BTW I checked archives from other dates. The most recent working one doesn’t have any edits to this wording nor does the oldest version.

What is going on? This book is from a major publisher and there’s no apparent benefit to misquoting it in this way. She didn’t twist his words for some agenda; she just changed them enough that she’s clearly doing something wrong but with no apparent motive (besides maybe minor editing to make the quote sound more polished?). And it’s a blog post; wouldn’t she use copy/paste to get the quote? Did she have the blog post open in her browser and go back and forth between it and her manuscript in order to type in the quote by hand!? That would be a bizarre process. Or does she or someone else change quotes during editing passes in the same way they’d edit non-quotes? Do they just run Grammarly or similar and see snippets from the book and edit them without reading the whole paragraph and realizing they’re within quote marks?

My Email to Julia Galef

Misquote in Scout Mindset:

“Well, that’s too bad, because I do think it was morally wrong.”[14]

But the original sentence was actually:

Well that’s just too bad, because I do think it was morally wrong of me to publish that list.

The largest change is deleting the word "just".

I wanted to let you know about the error and also ask if you could tell me what sort of writing or editing process is capable of producing that error? I've seen similar errors in other books and would really appreciate if I could understand what the cause is. I know one cause is carelessness when typing in a quote from paper but this is from a blog post and was presumably copy/pasted.

Galef did not respond to this email.


Elliot Temple | Permalink | Messages (0)

EA and Responding to Famous Authors

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


I think EA has the resources to attempt to respond to every intellectual who has sold over 100,000 books in English that make arguments contradicting EA. EA could write rebuttals to all popular, well-known rival positions that are written in books. You could start with the authors who sold over a million books.

There are major downsides to using popularity as your only criterion for what to respond to. It’s important to also have ways that you respond to unpopular criticism. But responding to influential criticism makes sense because people know about it and just ignoring it makes it look like you don’t care to consider other ideas or have no answers.

Answering the arguments of popular authors could be one project, of 10+, in which EA attempts to engage with alternative ideas and argue its case.

EA claims to be committed to rationality but it seems more interested in getting a bunch of charity projects underway and/or better funded ASAP than in taking the time to first do extensive rational analysis to figure out the right ideas to guide charity.

I understand not wanting to get caught up in doing planning forever and having decision paralysis, but where is the reasonably complete planning and debating that seems adequate to get started based on?

For example, it seems unreasonable to me to start an altruist movement without addressing Ayn Rand’s criticisms of altruism. Where are the serious essays summarizing, analyzing and refuting her arguments about altruism? She sold many millions of books. Where are the debates with anyone from ARI or the invitations for any online Objectivists who are interested to come debate with EA? Objectivism has a lot of fans today who are interested in rationality and debate (or at least claim to be), so ignoring them instead of writing anything that could change their minds seems bad. And being encouraging of discussion with them, instead of discouraging, would make sense and be more rational. (I’m aware that they aren’t doing better. They aren’t asking EA’s to come debate them, hosting more rational debates, writing articles refuting EA, etc. IMO both groups are not doing very well and there’s big room for improvement. I’ve tried to talk to Objectivists to get them to improve before and it didn’t work. Overall, although I’m a big fan of Ayn Rand, I think Objectivist communities today are less open to critical discussion and dissent than EA is.)


Elliot Temple | Permalink | Messages (0)

Is EA Rational?

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


I haven’t studied EA much. There is plenty more about EA that I could read. But I don’t want to get involved much unless EA is rational.

By “rational”, I mean capable of (and good at) correcting errors. Rationality, in my mind, enables progress and improvement instead of stasis, being set in your ways, not listening to suggestions, etc. So a key aspect of rationality is being open to criticism, and having ways that changes will actually be made based on correct criticisms.

Is EA rational? In other words, if I study EA and find some errors, and then I write down those errors, and I’m right, will EA then make changes to fix those errors? I am doubtful.

That definitely could happen. EA does make changes and improvements sometimes. Is that not a demonstration of EA’s rationality? Partially, yes, sure. Which is why I’m saying this to EA instead of some other group. I think EA is better at that than most other groups.

But I think EA’s ability to listen to criticism and make changes is related to social status, bias, tribalism, and popularity. If I share a correct criticism and I’m perceived as high status, and I have friends in high places, and the criticism fits people’s biases, and the criticism makes me seem in-group not out-group, and the criticism gains popularity (gets shared and upvoted a bunch, gets talked about by many people), then I would have high confidence that EA would make changes. If all those factors are present, then EA is reliably willing to consider criticism and make changes.

If some of those factors are present, then it’s less reliable but EA might listen to criticism. If none of those factors are present, then I’m doubtful the criticism will be impactful. I don’t want to study EA to find flaws and also make friends with the right people, change my writing style to be a better culture fit with EA, form habits of acting in higher status ways, and focus on sharing criticisms that fit some pre-existing biases or tribal divisions.

What can be done as an alternative to listening to criticism based on popularity, status, culture-fit, biases, tribes, etc? One option is organized debate with written methodologies that make some guarantees. EA doesn’t do that. Does it do something else?

One thing I know EA does, which is much better than nothing (and is better than many other groups offer), is informal, disorganized debate following unwritten methodologies that vary some by the individuals you’re speaking with. I consider this option inadequately motivating to seriously research and critically scrutinize EA.

I could talk to EA people who have read essays about rationality and who are trying to be rational – individually, with no accountability, transparency, or particular responsibilities. I think that’s not good enough and makes it way too easy for social status hierarchies to matter. If EA offered more organized ways of sharing and debating criticism, with formal rules, then people would have to follow the rules and therefore not act based on status. Things like rules, flowcharted methods to follow, or step-by-step actions to take can all help fight against the people’s tendency to act based on status and other biases.

It’s good for informal options to exist but they rely on basically “everyone just personally tries to be rational” which I don’t think is good enough. So more formal options, with pro-rationality (and anti-status, anti-bias, etc.) design features should exist too.

The most common objection to such things is they’re too much work. On an individual level, it’s unclear to me that following a written methodology is more work than following an unwritten methodology. Whatever you do, you have some sort of methods or policies. Also, I don’t really think you can evaluate how much work a methodology is (and how much benefit it offers, since the cost/benefit ratio is what matters) without actually developing that methodology and writing it down first. I think rational debate methodologies which try to reach conclusions about incoming criticisms are broadly untested empirically, so people shouldn’t assume they’d take too long or be ineffective when they can’t point to any examples of them being tried with that result. And EA has plenty of resources to e.g. task one full-time worker with engaging with community criticism and keeping organized documents that attempt to specify what arguments against EA exist, what counter-arguments there are, and otherwise map out the entire relevant debate as it exists today. Putting in less effort than that looks to me like not trying because the results are unwanted (some people prefer status hierarchies and irrationality, even if they say they like rationality) rather than because the results are highly prized but too expensive. There have been no research programs afaik to try to get these kinds of rational debate results more cheaply.

Also, suppose I research EA, come up with some criticisms, and I’m wrong. I informally share my criticisms on the forum and get some unsatisfactory, incomplete answers. I still think I’m right and I have no way to get my error corrected. The lack of access to debate symmetrically prevents whoever is wrong from learning better, whether that’s EA or me. So the outcome is bad either way. Either I’ve come up with a correct criticism but EA won’t change; or I’ve come up with an incorrect criticism but EA won’t explain to me why it’s incorrect in a way that’s adequate for me to change. Blocking conclusive rational debate blocks error correction regardless of which side is right. Should EA really explain to all their incorrect critics why those critics are wrong? Yes! I think EA should create public explanations, in writing, of why all objections to EA (that anyone actually raises) are wrong. Would that take ~infinite work? No, because you can explain why some category of objection is wrong. You can respond to patterns in the objections instead of addressing every objection individually. This lets you re-use some answers. Doing this would persuade more people that EA is correct, make it much more rewarding to study EA and try to think critically about it, and turn up the minority of cases where EA lacks an adequate answer to a criticism, and also expose EA’s good answers to review (people might suggest even better answers or find that, although EA won the argument in some case, there is a weakness in EA’s argument and a better criticism of EA could be made).
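As a rough sketch of what such an organized document could look like (the entries below are placeholders I made up, not anyone’s actual positions), even something this simple would let answers be re-used and would surface which objection patterns still lack an adequate answer:

```python
# Minimal sketch of a "debate map": group objections by pattern so written
# answers can be re-used, and flag patterns with no adequate answer yet.
# All entries are placeholders for illustration, not real positions.

debate_map = {
    "charity creates dependency": {
        "examples": ["objection #12", "objection #47"],  # hypothetical IDs
        "counter_argument": "link to a standing written reply",
        "status": "answered",
    },
    "cross-cause comparisons are invalid": {
        "examples": ["objection #3"],
        "counter_argument": None,
        "status": "open",  # no adequate answer yet; needs research or a debate
    },
}

def unanswered(dm):
    """List objection patterns that still lack an adequate written answer."""
    return [name for name, entry in dm.items() if entry["status"] == "open"]

print(unanswered(debate_map))  # -> ['cross-cause comparisons are invalid']
```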

In general, I think EA is more willing to listen to criticism that is based on a bunch of shared premises. The more you disagree with and question foundational premises, the less EA will listen and discuss. If you agree on a bunch of foundations then criticize some more detailed matters based on those foundations, then EA will listen more. This results in many critics having a reasonably good experience even though the system (or really lack of system) is IMO fundamentally broken/irrational.

I imagine EA people will broadly dislike and disagree with what I’ve said, in part because I’m challenging foundational issues rather than using shared premises to challenge other issues. I think a bunch of people trying to study rationality and do their best at it is … a lot better than not doing that. But I think it’s inadequate compared to having policies, methodologies, flowcharts, checklists, rules, written guarantees, transparency, accountability, etc., to enable rationality. If you don’t walk people step by step through what to do, you’re going to get a lot of social status behaviors and biases from people who are trying to be rational. Also, if EA has something else to solve the same problems I’m concerned about in a different way than how I suggest approaching them, what is the alternative solution?

Why does writing down step by step what to do help if the people writing the steps have biases and irrationalities of their own? Won’t the steps be flawed? Sure they may be, but putting them in writing allows critical analysis of the steps from many people. Improving the steps can be a group effort. Whereas many people separately following their own separate unwritten steps is hard to improve.

I do agree with the basic idea of EA: using reason and evidence to optimize charity. I agree that charity should be approached with a scientific and rational mindset rather than with whims, arbitrariness, social signaling or whatever else. I agree that cost/benefit ratios and math matter more than feelings about charity. But unfortunately I don’t think that’s enough agreement to get a positive response when I then challenge EA on what rationality is and how to pursue it. I think critics get much better responses from EA if they have major pre-existing agreement with EA about what rationality is and how to do it, but optimizing rationality itself is crucial to EA’s mission.

In other words, I think EA is optimized for optimizing which charitable interventions are good. It’s pretty good at discussing and changing its mind about cost/benefit ratios of charity options (though questioning the premises themselves behind some charity approaches is less welcome). But EA is not similarly good at discussing and changing its mind about how to discuss, change its mind, and be rational. It’s better at applying rationality to charity topics than to epistemology.

Does this matter? Suppose I couldn’t talk to EA about rational debate itself, but could talk to EA about the costs and benefits of any particular charity project. Is that good enough? I don’t think so. Besides potentially disagreeing with the premises of some charity projects, I also have disagreements regarding how to do multi-factor decision making itself.


Elliot Temple | Permalink | Messages (0)

EA and Paths Forward

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


Suppose EA is making an important error. John knows a correction and would like to help. What can John do?

Whatever the answer is, this is something EA should put thought into. They should advertise/communicate the best process for John to use, make it easy to understand and use, and intentionally design it with some beneficial features. EA should also consider having several processes so there are backups in case one fails.

Failure is a realistic possibility here. John might try to share a correction but be ignored. People might think John is wrong even though he’s right. People might think John’s comment is unimportant even though it’s actually important. There are lots of ways for people to reject or ignore a good idea. Suppose that happens. Now EA has made two mistakes which John knows are mistakes and would like to correct. There’s the first mistake, whatever it was, and now also this second mistake of not being receptive to the correction of the first mistake.

How can John get the second mistake corrected? There should be some kind of escalation process for when the initial mistake correction process fails. There is a risk that this escalation process would be abused. What if John thinks he’s right but actually he’s wrong? If the escalation process is costly in time and effort for EA people, and is used frequently, that would be bad. So the process should exist but should be designed in some kind of conservative way that limits the effort it will cost EA to deal with incorrect corrections. Similarly, the initial process for correcting EA also needs to be designed to limit the burden it places on EA. Limiting the burden increases the failure rate, making a secondary (and perhaps tertiary) error correction option more important to have.

When John believes he has an important correction for EA, and he shares it, and EA initially disagrees, that is a symmetric situation. Each side thinks the other is wrong. (That EA is multiple people, and John also might actually be multiple people, makes things more complex, but without changing some of the key principles.) The rational thing to do with this kind of symmetric dispute is not to say “I think I’m right” and ignore the other side. If you can’t resolve the dispute – if your knowledge is inadequate to conclude that you’re right – then you should be neutral and act accordingly. Or you might think you have crushing arguments which are objectively adequate to resolve the dispute in your favor, and you might even post them publicly, and think John is responding in obviously unreasonable ways. In that case, you might manage to objectively establish some kind of asymmetry. How to objectively establish asymmetries in intellectual disagreements is a hard, important question in epistemology which I don’t think has received appropriate research attention (note: it’s also relevant when there’s a disagreement between two ideas within one person).

Anyway, what can John do? He can write down some criticism and post it on the EA forum. EA has a free, public forum. That is better than many other organizations which don’t facilitate publicly sharing criticism. Many organizations either have no forum or delete critical discussions while making no real attempt at rationality (e.g. Blizzard has forums related to its games, but they aren’t very rational, don’t really try to be, and delete tons of complaints). Does EA ever delete dissent or ban dissenters? As someone who hasn’t already spent many years paying close attention, I don’t know and I don’t know how to find out in a way that I would trust. Many forums claim not to delete dissent but actually do; it’s a common thing to lie about. Making a highly credible claim not to delete or punish dissent is important or else John might not bother trying to share his criticism.

So John can post a criticism on a forum, and then people may or may not read it and may or may not reply. Will anyone with some kind of leadership role at EA read it? Maybe not. This is bad. The naive alternative “guarantee plenty of attention from important people to all criticism” would be even worse. But there are many other possible policy options which are better.

To design a better system, we should consider what might go wrong. How could John’s great, valuable criticism receive a negative reaction on an open forum which is active enough that John gets at least a little attention? And how might things go well? If the initial attention John gets is positive, that will draw some additional attention. If that is positive too, then it will draw more attention. If 100% of the attention John gets results in positive responses, his post will be shared and spread until a large portion of the community sees it including people with power and influence, who will also view the criticism positively (by premise) and so they’ll listen and act. A 75% positive response rate would probably also be good enough to get a similar outcome.
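Here’s a toy model of that snowball dynamic (all the numbers are made up for illustration; it’s not a claim about the actual forum): each reader who responds positively exposes the criticism to a couple more readers, and each negative or indifferent response exposes it to no one.

```python
import random

# Toy model of the snowball dynamic with made-up parameters: a positive
# response shows the criticism to a few more readers; a negative response
# shows it to nobody.

def simulated_reach(positive_rate, initial_readers=5, share_factor=2,
                    cap=10_000, seed=0):
    """Rough count of how many people end up seeing the criticism."""
    random.seed(seed)
    pending, seen = initial_readers, 0
    while pending and seen < cap:
        seen += 1
        pending -= 1
        if random.random() < positive_rate:
            pending += share_factor  # a positive reader shows it to others
    return seen

for rate in (0.25, 0.5, 0.75, 1.0):
    print(rate, simulated_reach(rate))

# With these made-up numbers, low positive-response rates fizzle out after a
# handful of readers, while high rates tend to snowball until the cap, which
# matches the idea that a sufficiently positive reception is what lets a
# criticism spread to influential people.
```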

So how might John’s criticism, which we’re hypothetically supposing is true and important, get a more negative reception so that it can’t snowball to get more attention and influence important decision makers?

John might have low social status, and people might judge more based on status than idea quality.

John’s criticism might offend people.

John’s criticism might threaten people in some way, e.g. implying that some of them shouldn’t have the income and prestige (or merely self-esteem) that they currently enjoy.

John’s criticism might be hard to understand. People might get confused. People might lack some prerequisite knowledge and skills needed to engage with it well.

John’s criticism might be very long, and it might be hard to get value from just the beginning. People might skim but not see the value that they would see if they read the whole thing in a thoughtful, attentive way. Making it long might be an error by John, but it also might be really hard to shorten and still have a good cost/benefit ratio (it’s valuable enough to justify the length).

John’s criticism might rely on premises that people disagree with. In other words, EA might be wrong about more than one thing. An interconnected set of mistakes can be much harder to explain than a single mistake even if the critic understands the entire set of mistakes. People might reject criticism of X due to their own mistake Y, and criticism of Y due to their own mistake X. A similar thing can happen involving many more ideas in a much more complicated structure, so that it’s harder for John to point out what’s going on (even if he knows).

What can be done about all these difficulties? My suggestion, in short, is to develop a rational debate methodology and to hold debates aimed at reaching conclusions about disagreements. The methodology must include features for reducing the role of bias, social status, dishonesty, etc. In particular, it must prevent people from arbitrarily stopping any debates whenever they feel like it (which tends to include quitting shortly before losing, which prevents the debate from being conclusive). The debate methodology must also have features for reducing the cost of debate, and ending low value debates, especially since it won’t allow arbitrarily quitting at any moment. A debate methodology is not a perfect, complete solution to all the problems John may face but it has various merits.

People often assume that rational, conclusive debate is too much work so the cost/benefit ratio on it is poor. This is typically a general opinion they have rather than an evaluation of any specific debate methodology. I think they should reserve judgment until after they review some written debate methodologies. They should look at some actual methods and see how much work they are, and what benefits they offer, before reaching a conclusion about their cost/benefit ratio. If the cost/benefit ratios are poor, people could then try to make adjustments to reduce costs and increase benefits before giving up on rational debate.

Can people have rational debate without following any written methodology? Sure, that’s possible. But if that worked well for some people and resulted in good cost/benefit ratios, wouldn’t it make sense to take whatever those successful debate participants are doing and write it down as a method? Even if the method had vague parts, that’d be better than nothing.

Although under-explored, debate methodologies are not a new idea. E.g. Russell L. Ackoff published one in a book in 1978 (pp. 44-47). That’s unfortunately the only very substantive, promising one I’ve found besides developing one of my own. I bet there are more to be found somewhere in existing literature though. The main reasons I thought Ackoff’s was a valuable proposal were that 1) it was based on following specific steps (in other words, you could make a flowchart out of it); and 2) it aimed at completeness, including using recursion to enable it to always succeed instead of getting stuck. Partial methods are common and easy to find, e.g. “don’t straw man” is a partial debate method, but it’s just suitable for being one little part of an overall method (and it lacks specific methods of detecting straw men, handling them when someone thinks one was done, etc. – it’s more of an aspiration than specific actions to achieve that aspiration).

A downside of Ackoff’s method is that it lacks stopping conditions besides success, so it could take an unlimited amount of effort. I think unilateral stopping conditions are one of the key issues for a good debate method: they need to exist (to prevent abuse by unreasonable debate partners who don’t agree to end the debate) but be designed to prevent abuse (by e.g. people quitting debates when they’re losing and quitting in a way designed to obscure what happened). I developed impasse chains as a debate stopping condition which takes a fairly small, bounded amount of effort to end debates unilaterally but adds significant transparency about how and why the debate is ending. Impasse chains only work when the further debate is providing low value, but that’s the only problematic case – otherwise you can either continue or say you want to stop and give a reason (which the other person will consent to, or if they don’t and you think they’re being unreasonable, now you’ve got an impasse to raise). Impasse chains are in the ballpark of “to end a debate, you must either mutually agree or else go through some required post-mortem steps” plus they enable multiple chances at problem solving to fix whatever is broken about the debate. This strikes me as one of the most obvious genres of debate stopping conditions to try, yet I think my proposal is novel. I think that says something really important about the world and its hostility to rational debate methodology. (I don’t think it’s mere disinterest or ignorance; if it were, the moment I suggested rational debate methods and said why they were important a lot of people would become excited and want to pursue the matter; but that hasn’t happened.)
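To illustrate the genre (this is a minimal sketch of the general idea of bounded, transparent unilateral stopping, not a specification of impasse chains, which involve more than this), here’s roughly what such a stopping policy could look like if written as code; the cap of three impasses and the field names are just illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Debate:
    """Sketch of a stopping policy: end by mutual agreement, or end
    unilaterally only after a bounded chain of documented impasses.
    Illustrative only; the real method has more to it."""
    impasses: list = field(default_factory=list)
    max_impasses: int = 3  # bounded effort before unilateral ending is allowed
    ended: bool = False

    def end_by_mutual_agreement(self):
        self.ended = True

    def declare_impasse(self, description):
        """Record a transparent statement of what's blocking progress.
        Each impasse is another chance to fix whatever is broken about
        the debate; after max_impasses, unilateral ending is allowed."""
        self.impasses.append(description)

    def may_end_unilaterally(self):
        return len(self.impasses) >= self.max_impasses

# Hypothetical example usage:
d = Debate()
d.declare_impasse("You won't address my question about premise X.")
d.declare_impasse("You won't discuss how we could resolve the prior impasse.")
d.declare_impasse("You won't discuss the discussion of the impasse either.")
print(d.may_end_unilaterally())  # True: exit is allowed, and the record is public
```

The point of the sketch is the structure: quitting is always possible, but only through a path that leaves a transparent record and gives problem solving several chances first.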

Another important and related issue is how you can write, or design and organize a community or movement, so that it’s easier for people to learn your ideas and debate them. And also easier to avoid low value or repetitive discussion. An example design is an FAQ to help reduce repetition. A less typical design would be creating (and sharing and keeping updated) a debate tree document organizing and summarizing the key arguments in the entire field you care about (there’s a rough sketch of what that could look like below).
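Here’s a minimal sketch of what a debate tree could look like as structured data rather than prose (the field names and the example claims are hypothetical, just to show the shape of the thing):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One claim or argument in a debate tree. Fields are hypothetical."""
    claim: str
    status: str = "open"  # e.g. "open", "refuted", "unanswered"
    responses: list = field(default_factory=list)

    def add_response(self, claim, status="open"):
        child = Node(claim, status)
        self.responses.append(child)
        return child

    def outline(self, depth=0):
        """Render the tree as an indented text outline for sharing."""
        lines = [f'{"  " * depth}- [{self.status}] {self.claim}']
        for r in self.responses:
            lines.append(r.outline(depth + 1))
        return "\n".join(lines)

# Hypothetical example content:
root = Node("This field should adopt a written rational debate methodology")
objection = root.add_response("A written methodology would cost too much time")
objection.add_response("Stopping conditions can bound the cost", status="unanswered")
print(root.outline())
```

Keeping something like this updated, in public, makes it harder for key arguments to be silently ignored and easier for newcomers to see the current state of the debate.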


Elliot Temple | Permalink | Messages (0)

Meta Criticism

I quit the Effective Altruism forum due to a new rule requiring that all new posts and comments basically be put in the public domain, without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


Meta criticism is potentially more powerful than direct criticism of specific flaws. Meta criticism can talk about methodologies or broad patterns. It’s a way of taking a step back, away from all the details, to look critically at a bigger picture.

Meta criticism isn’t very common. Why? It’s less conventional, normal, mainstream or popular. That makes it harder to get a positive reception for it. It’s less well understood or respected. Also, meta criticism tends to be more abstract, more complicated, harder to get right, and harder to act on. In return for those downsides, it can be more powerful.

On average, or as some kind of general trend, is the cost-to-benefit ratio for meta criticism better or worse than for regular criticism? I don’t really know. I think neither one has a really clear advantage and we should try some of both. Plus, to some extent, they do different things, so again it makes sense to use both.

I think there’s an under-exploited area with high value, which is some of the most simple, basic meta criticisms. These are easier to understand and deal with, yet can still be powerful. I think these initial meta criticisms tend to be more important than concrete criticisms. Also, meta criticisms are more generic so they can be re-used between different discussions or different topics more, and that’s especially true for the more basic meta criticisms that you would start with (whereas more advanced meta criticism might depend on the details of a topic more).

So let’s look at examples of introductory meta criticisms which I think have a great cost-to-benefit ratio (given that people aren’t hostile to them, which is a problem sometimes). These examples will help give a better sense of what meta criticisms are in addition to being useful issues to consider.

Do you act based on methods?

“You” could be a group or individual. If the answer is “no” that’s a major problem. Let’s assume it’s “yes”.

Are the methods written down?

Again, “no” is a major problem. Assuming “yes”:

Do the methods contain explicit features designed to reduce bias?

Again, “no” is a major problem. Examples of anti-bias features include transparency, accountability, anti-bias training or ways of reducing the importance of social status in decision making (such as some decisions being made in random or blinded ways).

Many individuals and organizations in the world have already failed within the first three questions. Others could technically say “yes” but their anti-bias features aren’t very good (e.g. I’m sure every large non-crypto bank has some written methods that employees use for some tasks which contain some anti-bias features of some sort despite not really even aiming at rationality).

But, broadly, those with “no” answers or poor answers don’t want to, and don’t, discuss this and try to improve. Why? There are many reasons but here’s a particularly relevant one: They lack methods of talking about it with transparency, accountability and other anti-bias features. The lack of rational discussion methodology protects all their other errors like lack of methodology for whatever it is that they do.

One of the major complicating factors is how groups work. Some groups have clear leadership and organization structures, with a hierarchical power structure which assigns responsibilities. In that case, it’s relatively easy to blame leadership for big picture problems like lack of rational methods. But other groups are more of a loose association without a clear leadership structure that takes responsibility for considering or addressing criticism, setting policies, etc. Not all groups have anyone who could easily decide on some methods and get others to use them. EA and LW are examples of groups with significant voids in leadership, responsibility and accountability. They claim to have a bunch of ideas, but it’s hard to criticize them because of the lack of official position statements by them (or when there is something kinda official, like The Sequences, the people willing to talk on the forum often disagree with or are ignorant of a lot of that official position – there’s no way to talk with a person who advocates the official position as a whole and will take responsibility for addressing errors in it, or who has the power to fix it). There’s no reasonable, reliable way to ask EA a question like “Do you have a written methodology for rational debate?” and get an official answer that anyone will take responsibility for.

So one of the more basic, introductory areas for meta criticism/questioning is to ask about rational methodology. And a second is to ask about leadership, responsibility, and organization structure. If there is an error, who can be told who will fix it, and how does one get their attention? If some clarifying questions are needed before sharing the error, how does one get them answered? If the answer is something like “personally contact the right people and become familiar with the high status community members”, that’s really problematic. There should be publicly accessible and documented options which can be used by people who don’t have social status within the community. Social status is a biasing, irrational approach which blocks valid criticism from leading to change. Also, even if the situation is better than that, many people won’t know it’s better, and won’t try, unless you publicly tell them it’s better in a convincing way. To be convincing, you have to offer specific policies with guarantees and transparency/accountability, rather than saying a variant of “trust us”.

Guarantees can be expensive especially when they’re open to the general public. There are costs/downsides here. Even non-guaranteed options, such as a suggestion box for unsolicited advice that you never reply to, have a cost. If you promised to reply to every suggestion, that would be too expensive. Guarantees need to have conditions placed on them. E.g. “If you make a suggestion and read the following ten books and pay $100, then we guarantee a response (limit: one response per person per year).” That policy would result in a smaller burden than responding to all suggestions, but it still offers a guarantee. Would the burden still be too high? It depends how popular you are. Is a response a very good guarantee? Not really. You might read the ten books, pay the money, and get the response “No.” or “Interesting idea; we’ll consider it.” and nothing more. That could be unsatisfying. Some additional guarantees about the nature of the response could help. There is a ton of room to brainstorm how to do these things well. These kinds of ideas are very under-explored. An example stronger guarantee would be to respond with either a decisive refutation or else to put together an exploratory committee to investigate taking the suggestion. Such committees have a poor reputation and could be replaced with some other method of escalating the idea to get more consideration.
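To make that concrete, here’s a minimal sketch of the example policy above written as an explicit check (the numbers are the hypothetical ones from the example, not a real policy anyone has adopted):

```python
from dataclasses import dataclass

@dataclass
class Submission:
    books_read: int
    fee_paid_usd: int
    guaranteed_responses_used_this_year: int

def response_guaranteed(s: Submission) -> bool:
    """Hypothetical policy from the example above: read ten books, pay $100,
    limit one guaranteed response per person per year. Every condition is
    objective, so a submitter can predict the outcome in advance."""
    return (s.books_read >= 10
            and s.fee_paid_usd >= 100
            and s.guaranteed_responses_used_this_year < 1)

print(response_guaranteed(Submission(10, 100, 0)))  # True
print(response_guaranteed(Submission(10, 100, 1)))  # False: yearly limit reached
```

Writing the conditions down like this forces them to be the kind of thing a stranger (or a computer) could check, which connects to the next point about objective criteria.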

Guarantees should focus on objective criteria. For example, saying you’ll respond to all “good suggestions” would be a poor guarantee to offer. How can someone predictably know in advance whether their suggestion will meet that condition or not? Design policies to not let decision makers use arbitrary judgment which could easily be biased or wrong. For example, you might judge “good” suggestions using the “I’ll know it when I see it” method. That would be very arbitrary and a bad approach. If you say “good” means “novel, interesting, substantive and high value if correct” that is a little better, but still very bad, because a decision maker can arbitrarily judge whatever he wants as bad and there’s no effective way to hold him accountable, determine his judgment was an error, get that error corrected, etc. There’s also poor predictability for people considering making suggestions.

From what I can tell, my main disagreement with EA is I think EA should have written, rational debate methods, and EA doesn’t think so. I don’t know how to make effective progress on resolving that disagreement because no one from EA will follow any specific rational debate methods. Also EA offers no alternative solution, that I know of, to the same problem that rational debate methods are meant to solve. Without rational debate methods (or an effective alternative), no other disagreements really matter because there’s nothing good to be done about them.


Elliot Temple | Permalink | Messages (0)