EA and Responding to Famous Authors

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


I think EA has the resources to attempt to respond to every intellectual who has sold over 100,000 books in English that make arguments contradicting EA. EA could write rebuttals to all popular, well-known rival positions that are written in books. It could start with the authors who have sold over a million books.

There are major downsides to using popularity as your only criterion for what to respond to. It’s important to also have ways to respond to unpopular criticism. But responding to influential criticism makes sense because people know about it, and just ignoring it makes it look like you don’t care to consider other ideas or have no answers.

Answering the arguments of popular authors could be one project, of 10+, in which EA attempts to engage with alternative ideas and argue its case.

EA claims to be committed to rationality, but it seems more interested in getting a bunch of charity projects underway and/or better funded ASAP than in first taking the time to do extensive rational analysis to figure out the right ideas to guide charity.

I understand not wanting to get caught up in planning forever and having decision paralysis, but where is the reasonably complete planning and debating that would be adequate to justify getting started?

For example, it seems unreasonable to me to start an altruist movement without addressing Ayn Rand’s criticisms of altruism. Where are the serious essays summarizing, analyzing and refuting her arguments about altruism? She sold many millions of books. Where are the debates with anyone from ARI or the invitations for any online Objectivists who are interested to come debate with EA? Objectivism has a lot of fans today who are interested in rationality and debate (or at least claim to be), so ignoring them instead of writing anything that could change their minds seems bad. And encouraging discussion with them, instead of discouraging it, would make sense and be more rational. (I’m aware that they aren’t doing better. They aren’t asking EAs to come debate them, hosting more rational debates, writing articles refuting EA, etc. IMO both groups are not doing very well and there’s big room for improvement. I’ve tried to talk to Objectivists to get them to improve before and it didn’t work. Overall, although I’m a big fan of Ayn Rand, I think Objectivist communities today are less open to critical discussion and dissent than EA is.)


Elliot Temple | Permalink | Messages (0)

Is EA Rational?

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


I haven’t studied EA much. There is plenty more about EA that I could read. But I don’t want to get involved much unless EA is rational.

By “rational”, I mean capable of (and good at) correcting errors. Rationality, in my mind, enables progress and improvement instead of stasis, being set in your ways, not listening to suggestions, etc. So a key aspect of rationality is being open to criticism, and having ways that changes will actually be made based on correct criticisms.

Is EA rational? In other words, if I study EA and find some errors, and then I write down those errors, and I’m right, will EA then make changes to fix those errors? I am doubtful.

That definitely could happen. EA does make changes and improvements sometimes. Is that not a demonstration of EA’s rationality? Partially, yes, sure. Which is why I’m saying this to EA instead of some other group. I think EA is better at that than most other groups.

But I think EA’s ability to listen to criticism and make changes is related to social status, bias, tribalism, and popularity. If I share a correct criticism and I’m perceived as high status, and I have friends in high places, and the criticism fits people’s biases, and the criticism makes me seem in-group not out-group, and the criticism gains popularity (gets shared and upvoted a bunch, gets talked about by many people), then I would have high confidence that EA would make changes. If all those factors are present, then EA is reliably willing to consider criticism and make changes.

If some of those factors are present, then it’s less reliable but EA might listen to criticism. If none of those factors are present, then I’m doubtful the criticism will be impactful. I don’t want to study EA to find flaws and also make friends with the right people, change my writing style to be a better culture fit with EA, form habits of acting in higher status ways, and focus on sharing criticisms that fit some pre-existing biases or tribal divisions.

What can be done as an alternative to listening to criticism based on popularity, status, culture-fit, biases, tribes, etc? One option is organized debate with written methodologies that make some guarantees. EA doesn’t do that. Does it do something else?

One thing I know EA does, which is much better than nothing (and is better than many other groups offer), is informal, disorganized debate following unwritten methodologies that vary some by the individuals you’re speaking with. I consider this option inadequately motivating to seriously research and critically scrutinize EA.

I could talk to EA people who have read essays about rationality and who are trying to be rational – individually, with no accountability, transparency, or particular responsibilities. I think that’s not good enough and makes it way too easy for social status hierarchies to matter. If EA offered more organized ways of sharing and debating criticism, with formal rules, then people would have to follow the rules and therefore not act based on status. Things like rules, flowcharted methods to follow, or step-by-step actions to take can all help fight against people’s tendency to act based on status and other biases.

It’s good for informal options to exist but they rely on basically “everyone just personally tries to be rational” which I don’t think is good enough. So more formal options, with pro-rationality (and anti-status, anti-bias, etc.) design features should exist too.

The most common objection to such things is they’re too much work. On an individual level, it’s unclear to me that following a written methodology is more work than following an unwritten methodology. Whatever you do, you have some sort of methods or policies. Also, I don’t really think you can evaluate how much work a methodology is (and how much benefit it offers, since the cost/benefit ratio is what matters) without actually developing that methodology and writing it down first. I think rational debate methodologies that try to reach conclusions about incoming criticisms are broadly untested empirically, so people shouldn’t assume they’d take too long or be ineffective when they can’t point to any examples of them being tried with that result. And EA has plenty of resources to e.g. task one full-time worker with engaging with community criticism and keeping organized documents that attempt to specify what arguments against EA exist, what counter-arguments there are, and otherwise map out the entire relevant debate as it exists today. Putting in less effort than that looks to me like not trying because the results are unwanted (some people prefer status hierarchies and irrationality, even if they say they like rationality) rather than because the results are highly prized but too expensive. There have been no research programs, afaik, to try to get these kinds of rational debate results more cheaply.

Also, suppose I research EA, come up with some criticisms, and I’m wrong. I informally share my criticisms on the forum and get some unsatisfactory, incomplete answers. I still think I’m right and I have no way to get my error corrected. The lack of access to debate symmetrically prevents whoever is wrong from learning better, whether that’s EA or me. So the outcome is bad either way. Either I’ve come up with a correct criticism but EA won’t change; or I’ve come up with an incorrect criticism but EA won’t explain to me why it’s incorrect in a way that’s adequate for me to change. Blocking conclusive rational debate blocks error correction regardless of which side is right. Should EA really explain to all their incorrect critics why those critics are wrong? Yes! I think EA should create public explanations, in writing, of why all objections to EA (that anyone actually raises) are wrong. Would that take ~infinite work? No, because you can explain why some category of objection is wrong. You can respond to patterns in the objections instead of addressing every objection individually. This lets you re-use some answers. Doing this would persuade more people that EA is correct, make it much more rewarding to study EA and try to think critically about it, and turn up the minority of cases where EA lacks an adequate answer to a criticism, and also expose EA’s good answers to review (people might suggest even better answers or find that, although EA won the argument in some case, there is a weakness in EA’s argument and a better criticism of EA could be made).

In general, I think EA is more willing to listen to criticism that is based on a bunch of shared premises. The more you disagree with and question foundational premises, the less EA will listen and discuss. If you agree on a bunch of foundations then criticize some more detailed matters based on those foundations, then EA will listen more. This results in many critics having a reasonably good experience even though the system (or really lack of system) is IMO fundamentally broken/irrational.

I imagine EA people will broadly dislike and disagree with what I’ve said, in part because I’m challenging foundational issues rather than using shared premises to challenge other issues. I think a bunch of people trying to study rationality and do their best at it is … a lot better than not doing that. But I think it’s inadequate compared to having policies, methodologies, flowcharts, checklists, rules, written guarantees, transparency, accountability, etc., to enable rationality. If you don’t walk people step by step through what to do, you’re going to get a lot of social status behaviors and biases from people who are trying to be rational. Also, if EA has something else to solve the same problems I’m concerned about in a different way than how I suggest approaching them, what is the alternative solution?

Why does writing down step by step what to do help if the people writing the steps have biases and irrationalities of their own? Won’t the steps be flawed? Sure they may be, but putting them in writing allows critical analysis of the steps from many people. Improving the steps can be a group effort. Whereas many people separately following their own separate unwritten steps is hard to improve.

I do agree with the basic idea of EA: using reason and evidence to optimize charity. I agree that charity should be approached with a scientific and rational mindset rather than with whims, arbitrariness, social signaling or whatever else. I agree that cost/benefit ratios and math matter more than feelings about charity. But unfortunately I don’t think that’s enough agreement to get a positive response when I then challenge EA on what rationality is and how to pursue it. I think critics get much better responses from EA if they have major pre-existing agreement with EA about what rationality is and how to do it, but optimizing rationality itself is crucial to EA’s mission.

In other words, I think EA is optimized for optimizing which charitable interventions are good. It’s pretty good at discussing and changing its mind about cost/benefit ratios of charity options (though questioning the premises themselves behind some charity approaches is less welcome). But EA is not similarly good at discussing and changing its mind about how to discuss, change its mind, and be rational. It’s better at applying rationality to charity topics than to epistemology.

Does this matter? Suppose I couldn’t talk to EA about rational debate itself, but could talk to EA about the costs and benefits of any particular charity project. Is that good enough? I don’t think so. Besides potentially disagreeing with the premises of some charity projects, I also have disagreements regarding how to do multi-factor decision making itself.


Elliot Temple | Permalink | Messages (0)

EA and Paths Forward

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


Suppose EA is making an important error. John knows a correction and would like to help. What can John do?

Whatever the answer is, this is something EA should put thought into. They should advertise/communicate the best process for John to use, make it easy to understand and use, and intentionally design it with some beneficial features. EA should also consider having several processes so there are backups in case one fails.

Failure is a realistic possibility here. John might try to share a correction but be ignored. People might think John is wrong even though he’s right. People might think John’s comment is unimportant even though it’s actually important. There are lots of ways for people to reject or ignore a good idea. Suppose that happens. Now EA has made two mistakes which John knows are mistakes and would like to correct. There’s the first mistake, whatever it was, and now also this second mistake of not being receptive to the correction of the first mistake.

How can John get the second mistake corrected? There should be some kind of escalation process for when the initial mistake correction process fails. There is a risk that this escalation process would be abused. What if John thinks he’s right but actually he’s wrong? If the escalation process is costly in time and effort for EA people, and is used frequently, that would be bad. So the process should exist but should be designed in some kind of conservative way that limits the effort it will cost EA to deal with incorrect corrections. Similarly, the initial process for correcting EA also needs to be designed to limit the burden it places on EA. Limiting the burden increases the failure rate, making a secondary (and perhaps tertiary) error correction option more important to have.

When John believes he has an important correction for EA, and he shares it, and EA initially disagrees, that is a symmetric situation. Each side thinks the other is wrong. (That EA is multiple people, and John also might actually be multiple people, makes things more complex, but without changing some of the key principles.) The rational thing to do with this kind of symmetric dispute is not to say “I think I’m right” and ignore the other side. If you can’t resolve the dispute – if your knowledge is inadequate to conclude that you’re right – then you should be neutral and act accordingly. Or you might think you have crushing arguments which are objectively adequate to resolve the dispute in your favor, and you might even post them publicly, and think John is responding in obviously unreasonable ways. In that case, you might manage to objectively establish some kind of asymmetry. How to objectively establish asymmetries in intellectual disagreements is a hard, important question in epistemology which I don’t think has received appropriate research attention (note: it’s also relevant when there’s a disagreement between two ideas within one person).

Anyway, what can John do? He can write down some criticism and post it on the EA forum. EA has a free, public forum. That is better than many other organizations which don’t facilitate publicly sharing criticism. Many organizations either have no forum or delete critical discussions while making no real attempt at rationality (e.g. Blizzard has forums related to its games, but they aren’t very rational, don’t really try to be, and delete tons of complaints). Does EA ever delete dissent or ban dissenters? As someone who hasn’t already spent many years paying close attention, I don’t know and I don’t know how to find out in a way that I would trust. Many forums claim not to delete dissent but actually do; it’s a common thing to lie about. Making a highly credible claim not to delete or punish dissent is important or else John might not bother trying to share his criticism.

So John can post a criticism on a forum, and then people may or may not read it and may or may not reply. Will anyone with some kind of leadership role at EA read it? Maybe not. This is bad. The naive alternative “guarantee plenty of attention from important people to all criticism” would be even worse. But there are many other possible policy options which are better.

To design a better system, we should consider what might go wrong. How could John’s great, valuable criticism receive a negative reaction on an open forum which is active enough that John gets at least a little attention? And how might things go well? If the initial attention John gets is positive, that will draw some additional attention. If that is positive too, then it will draw more attention. If 100% of the attention John gets results in positive responses, his post will be shared and spread until a large portion of the community sees it including people with power and influence, who will also view the criticism positively (by premise) and so they’ll listen and act. A 75% positive response rate would probably also be good enough to get a similar outcome.
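As a toy illustration of that snowball dynamic, here’s a minimal sketch. The spread rule and all the numbers are arbitrary assumptions chosen only to show the effect, not data about any real forum:

```python
import random

def simulate_reach(positive_rate, initial_readers=5, spread_per_positive=2,
                   max_rounds=10, seed=0):
    """Toy model: each positive response attracts a few more readers next round.

    positive_rate: chance a given reader reacts positively (assumption).
    spread_per_positive: new readers drawn in per positive reaction (assumption).
    Returns the total number of readers reached.
    """
    random.seed(seed)
    total = current = initial_readers
    for _ in range(max_rounds):
        positives = sum(random.random() < positive_rate for _ in range(current))
        current = positives * spread_per_positive  # next round's audience
        total += current
        if current == 0:  # attention died out
            break
    return total

# Mostly-positive reactions keep spreading; mostly-negative ones fizzle fast.
print(simulate_reach(0.75))
print(simulate_reach(0.25))
```

The point is just that whether attention grows or dies depends heavily on the positive-response rate, which is exactly what the failure modes below can lower.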

So how might John’s criticism, which we’re hypothetically supposing is true and important, get a more negative reception so that it can’t snowball to get more attention and influence important decision makers?

John might have low social status, and people might judge more based on status than idea quality.

John’s criticism might offend people.

John’s criticism might threaten people in some way, e.g. implying that some of them shouldn’t have the income and prestige (or merely self-esteem) that they currently enjoy.

John’s criticism might be hard to understand. People might get confused. People might lack some prerequisite knowledge and skills needed to engage with it well.

John’s criticism might be very long and hard to get value from just the beginning. People might skim but not see the value that they would see if they read the whole thing in a thoughtful, attentive way. Making it long might be an error by John, but it also might be really hard to shorten and still have a good cost/benefit ratio (it’s valuable enough to justify the length).

John’s criticism might rely on premises that people disagree with. In other words, EA might be wrong about more than one thing. An interconnected set of mistakes can be much harder to explain than a single mistake even if the critic understands the entire set of mistakes. People might reject criticism of X due to their own mistake Y, and criticism of Y due to their own mistake X. A similar thing can happen involving many more ideas in a much more complicated structure, so that it’s harder for John to point out what’s going on (even if he knows).

What can be done about all these difficulties? My suggestion, in short, is to develop a rational debate methodology and to hold debates aimed at reaching conclusions about disagreements. The methodology must include features for reducing the role of bias, social status, dishonesty, etc. In particular, it must prevent people from arbitrarily stopping any debates whenever they feel like it (which tends to include shortly before losing, which prevents the debate from being conclusive). The debate methodology must also have features for reducing the cost of debate, and ending low value debates, especially since it won’t allow arbitrarily quitting at any moment. A debate methodology is not a perfect, complete solution to all the problems John may face but it has various merits.

People often assume that rational, conclusive debate is too much work so the cost/benefit ratio on it is poor. This is typically a general opinion they have rather than an evaluation of any specific debate methodology. I think they should reserve judgment until after they review some written debate methodologies. They should look at some actual methods and see how much work they are, and what benefits they offer, before reaching a conclusion about their cost/benefit ratio. If the cost/benefit ratios are poor, people would try to make adjustments to reduce costs and increase benefits before giving up on rational debate.

Can people have rational debate without following any written methodology? Sure that’s possible. But if that worked well for some people and resulted in good cost/benefit ratios, wouldn’t it make sense to take whatever those successful debate participants are doing and write it down as a method? Even if the method had vague parts that’d be better than nothing.

Although under-explored, debate methodologies are not a new idea. E.g. Russell L. Ackoff published one in a book in 1978 (pp. 44-47). That’s unfortunately the only very substantive, promising one I’ve found besides developing one of my own. I bet there are more to be found somewhere in existing literature though. The main reasons I thought Ackoff’s was a valuable proposal were that 1) it was based on following specific steps (in other words, you could make a flowchart out of it); and 2) it aimed at completeness, including using recursion to enable it to always succeed instead of getting stuck. Partial methods are common and easy to find, e.g. “don’t straw man” is a partial debate method, but it’s just suitable for being one little part of an overall method (and it lacks specific methods of detecting straw men, handling them when someone thinks one was done, etc. – it’s more of an aspiration than specific actions to achieve that aspiration).

A downside of Ackoff’s method is that it lacks stopping conditions besides success, so it could take an unlimited amount of effort. I think unilateral stopping conditions are one of the key issues for a good debate method: they need to exist (to prevent abuse by unreasonable debate partners who don’t agree to end the debate) but be designed to prevent abuse (by e.g. people quitting debates when they’re losing and quitting in a way designed to obscure what happened). I developed impasse chains as a debate stopping condition which takes a fairly small, bounded amount of effort to end debates unilaterally but adds significant transparency about how and why the debate is ending. Impasse chains only work when the further debate is providing low value, but that’s the only problematic case – otherwise you can either continue or say you want to stop and give a reason (which the other person will consent to, or if they don’t and you think they’re being unreasonable, now you’ve got an impasse to raise). Impasse chains are in the ballpark of “to end a debate, you must either mutually agree or else go through some required post-mortem steps” plus they enable multiple chances at problem solving to fix whatever is broken about the debate. This strikes me as one of the most obvious genres of debate stopping conditions to try, yet I think my proposal is novel. I think that says something really important about the world and its hostility to rational debate methodology. (I don’t think it’s mere disinterest or ignorance; if it were, the moment I suggested rational debate methods and said why they were important a lot of people would become excited and want to pursue the matter; but that hasn’t happened.)
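Here’s a toy sketch of that genre of stopping condition, just to make its shape concrete. The chain length of three and the field names are arbitrary illustrative choices, not a specification of the actual impasse chain procedure:

```python
class Debate:
    """Toy model of a debate that can't be quit arbitrarily.

    Ending requires either mutual agreement or a chain of declared impasses,
    each documenting a problem-solving attempt that failed. The chain length
    of 3 is an arbitrary illustrative choice.
    """

    REQUIRED_IMPASSES = 3

    def __init__(self):
        self.impasse_chain = []  # transparency: why/how the debate is ending
        self.ended = False

    def end_by_agreement(self):
        # both sides consent, so no extra steps are required
        self.ended = True

    def declare_impasse(self, problem, attempted_fix, why_fix_failed):
        """Unilateral exit path: record the problem, the fix that was tried,
        and why it failed. Effort is bounded by the required chain length."""
        self.impasse_chain.append({
            "problem": problem,
            "attempted_fix": attempted_fix,
            "why_fix_failed": why_fix_failed,
        })
        if len(self.impasse_chain) >= self.REQUIRED_IMPASSES:
            self.ended = True
        return self.ended
```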

Another important and related issue is how can you write, or design and organize a community or movement, so it’s easier for people to learn and debate with your ideas? And also easier to avoid low value or repetitive discussion. An example design is an FAQ to help reduce repetition. A less typical design would be creating (and sharing and keeping updated) a debate tree document organizing and summarizing the key arguments in the entire field you care about.
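As a rough sketch of what the underlying structure of such a debate tree document might look like (the field names and statuses are illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class DebateNode:
    """One claim in a debate tree: an argument plus the counter-arguments
    that respond to it. Field names are illustrative assumptions."""
    claim: str
    source: str = ""                      # where the argument was made
    status: str = "open"                  # e.g. "open", "answered", "conceded"
    responses: list["DebateNode"] = field(default_factory=list)

    def add_response(self, node):
        self.responses.append(node)
        return node

    def unanswered(self):
        """Leaf claims with no responses yet: the current frontier of the debate."""
        if not self.responses:
            return [self]
        return [leaf for child in self.responses for leaf in child.unanswered()]

# Usage sketch:
root = DebateNode("EA should adopt a written rational debate methodology")
objection = root.add_response(DebateNode("Written methodologies are too much work"))
objection.add_response(DebateNode("Cost can't be judged before a concrete method is written down"))
print([n.claim for n in root.unanswered()])
```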


Elliot Temple | Permalink | Messages (0)

Meta Criticism

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


Meta criticism is potentially more powerful than direct criticism of specific flaws. Meta criticism can talk about methodologies or broad patterns. It’s a way of taking a step back, away from all the details, to look critically at a bigger picture.

Meta criticism isn’t very common. Why? It’s less conventional, normal, mainstream or popular. That makes it harder to get a positive reception for it. It’s less well understood or respected. Also, meta criticism tends to be more abstract, more complicated, harder to get right, and harder to act on. In return for those downsides, it can be more powerful.

On average, or as some kind of general trend, is the cost-to-benefit ratio for meta criticism better or worse than for regular criticism? I don’t really know. I think neither one has a really clear advantage and we should try some of both. Plus, to some extent, they do different things so again it makes sense to use both.

I think there’s an under-exploited area with high value, which is some of the most simple, basic meta criticisms. These are easier to understand and deal with, yet can still be powerful. I think these initial meta criticisms tend to be more important than concrete criticisms. Also, meta criticisms are more generic so they can be re-used between different discussions or different topics more, and that’s especially true for the more basic meta criticisms that you would start with (whereas more advanced meta criticism might depend on the details of a topic more).

So let’s look at examples of introductory meta criticisms which I think have a great cost-to-benefit ratio (given that people aren’t hostile to them, which is a problem sometimes). These examples will help give a better sense of what meta criticisms are in addition to being useful issues to consider.

Do you act based on methods?

“You” could be a group or individual. If the answer is “no” that’s a major problem. Let’s assume it’s “yes”.

Are the methods written down?

Again, “no” is a major problem. Assuming “yes”:

Do the methods contain explicit features designed to reduce bias?

Again, “no” is a major problem. Examples of anti-bias features include transparency, accountability, anti-bias training or ways of reducing the importance of social status in decision making (such as some decisions being made in random or blinded ways).
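These three questions chain together, so as a sketch they can be written as a tiny checklist that stops at the first “no” (purely illustrative):

```python
def basic_meta_criticism_check(answers):
    """Run the three introductory questions in order; stop at the first "no".

    answers: dict mapping each question (worded as in the post) to True/False.
    """
    questions = [
        "Do you act based on methods?",
        "Are the methods written down?",
        "Do the methods contain explicit features designed to reduce bias?",
    ]
    for question in questions:
        if not answers.get(question, False):
            return f"Major problem: answered 'no' to: {question}"
    return "Passed the introductory checks; next, examine the anti-bias features."

print(basic_meta_criticism_check({"Do you act based on methods?": True}))
```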

Many individuals and organizations in the world have already failed within the first three questions. Others could technically say “yes” but their anti-bias features aren’t very good (e.g. I’m sure every large non-crypto bank has some written methods that employees use for some tasks which contain some anti-bias features of some sort despite not really even aiming at rationality).

But, broadly, those with “no” answers or poor answers don’t want to, and don’t, discuss this and try to improve. Why? There are many reasons but here’s a particularly relevant one: They lack methods of talking about it with transparency, accountability and other anti-bias features. The lack of rational discussion methodology protects all their other errors like lack of methodology for whatever it is that they do.

One of the major complicating factors is how groups work. Some groups have clear leadership and organization structures, with a hierarchical power structure which assigns responsibilities. In that case, it’s relatively easy to blame leadership for big picture problems like lack of rational methods. But other groups are more of a loose association without a clear leadership structure that takes responsibility for considering or addressing criticism, setting policies, etc. Not all groups have anyone who could easily decide on some methods and get others to use them. EA and LW are examples of groups with significant voids in leadership, responsibility and accountability. They claim to have a bunch of ideas, but it’s hard to criticize them because of the lack of official position statements by them (or when there is something kinda official, like The Sequences, the people willing to talk on the forum often disagree with or are ignorant of a lot of that official position – there’s no way to talk with a person who advocates the official position as a whole and will take responsibility for addressing errors in it, or who has the power to fix it). There’s no reasonable, reliable way to ask EA a question like “Do you have a written methodology for rational debate?” and get an official answer that anyone will take responsibility for.

So one of the more basic, introductory areas for meta criticism/questioning is to ask about rational methodology. And a second is to ask about leadership, responsibility, and organization structure. If there is an error, who can be told who will fix it, and how does one get their attention? If some clarifying questions are needed before sharing the error, how does one get them answered? If the answers are things like “personally contact the right people and become familiar with the high status community members” that is a really problematic answer. There should be publicly accessible and documented options which can be used by people who don’t have social status within the community. Social status is a biasing, irrational approach which blocks valid criticism from leading to change. Also, even if the situation is better than that, many people won’t know it’s better, and won’t try, unless you publicly tell them it’s better in a convincing way. To be convincing, you have to offer specific policies with guarantees and transparency/accountability, rather than saying a variant of “trust us”.

Guarantees can be expensive, especially when they’re open to the general public. There are costs/downsides here. Even non-guaranteed options, such as a suggestion box for unsolicited advice, even if you never reply to anything, have a cost. If you promised to reply to every suggestion, that would be too expensive. Guarantees need to have conditions placed on them. E.g. “If you make a suggestion and read the following ten books and pay $100, then we guarantee a response (limit: one response per person per year).” That policy would result in a smaller burden than responding to all suggestions, but it still offers a guarantee. Would the burden still be too high? It depends how popular you are. Is a response a very good guarantee? Not really. You might read the ten books, pay the money, and get the response “No.” or “Interesting idea; we’ll consider it.” and nothing more. That could be unsatisfying. Some additional guarantees about the nature of the response could help. There is a ton of room to brainstorm how to do these things well. These kinds of ideas are very under-explored. An example stronger guarantee would be to respond with either a decisive refutation or else to put together an exploratory committee to investigate taking the suggestion. Such committees have a poor reputation and could be replaced with some other method of escalating the idea to get more consideration.

Guarantees should focus on objective criteria. For example, saying you’ll respond to all “good suggestions” would be a poor guarantee to offer. How can someone predictably know in advance whether their suggestion will meet that condition or not? Design policies to not let decision makers use arbitrary judgment which could easily be biased or wrong. For example, you might judge “good” suggestions using the “I’ll know it when I see it” method. That would be very arbitrary and a bad approach. If you say “good” means “novel, interesting, substantive and high value if correct” that is a little better, but still very bad, because a decision maker can arbitrarily judge whatever he wants as bad and there’s no effective way to hold him accountable, determine his judgment was an error, get that error corrected, etc. There’s also poor predictability for people considering making suggestions.
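To illustrate what objective criteria mean in practice, here’s a minimal sketch of the example guarantee from above as a predictable eligibility check (the data format is a made-up illustration):

```python
from datetime import date

def response_guaranteed(suggestion, today=None):
    """Check the example guarantee policy using only objective criteria:
    books read, fee paid, and one guaranteed response per person per year.
    The structure of `suggestion` is an illustrative assumption."""
    today = today or date.today()
    books_ok = suggestion["books_read_count"] >= 10
    fee_ok = suggestion["fee_paid_usd"] >= 100
    last = suggestion.get("last_guaranteed_response")  # a date or None
    limit_ok = last is None or (today - last).days >= 365
    # No "is it a good suggestion?" test: that would be arbitrary judgment,
    # which the submitter couldn't predict in advance.
    return books_ok and fee_ok and limit_ok

print(response_guaranteed({
    "books_read_count": 10,
    "fee_paid_usd": 100,
    "last_guaranteed_response": None,
}))  # True: the guarantee applies
```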

From what I can tell, my main disagreement with EA is I think EA should have written, rational debate methods, and EA doesn’t think so. I don’t know how to make effective progress on resolving that disagreement because no one from EA will follow any specific rational debate methods. Also EA offers no alternative solution, that I know of, to the same problem that rational debate methods are meant to solve. Without rational debate methods (or an effective alternative), no other disagreements really matter because there’s nothing good to be done about them.


Elliot Temple | Permalink | Messages (0)

EA Misquoting Discussion Summary

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


Let me summarize events from my perspective.

I read a book EA likes and found a misquote in it (and other problems).

Someone misquoted me twice in EA forum discussion. They seemed to think that was OK, not a big deal, etc. And no one in the audience took my side or said anything negative about misquotes.

The person who misquoted me (as well as anyone reading) didn’t want to talk about it or debate the matter.

In an open questions thread, I asked about EA’s norms regarding misquotes.

In response, someone misquoted the EA norms to me, which is pretty ironic and silly.

Their claim about EA norms was basically that misquotes aren’t important.

When I pointed out that they had misquoted, they didn’t really seem to care or think that was bad. Again, there were no signs the audience thought misquoting was bad, either.

Lizka, who was the person being misquoted since she wrote the EA norms document, commented on the matter. Lizka’s comment communicated:

  • She agrees with me that the norms were misquoted.
  • But she didn’t really mind or care.
  • EA has no strong norm against misquoting.
  • The attitude to misquotes is basically like typos: mistakes and accidents happen and we should be tolerant and forgiving about that.

Again, no one wanted to talk with me about the matter or debate it.

I wrote an article explaining that misquoting is bad. I compared misquoting to deadnaming because the misquoted norm was actually about deadnaming, and I thought that, read as a whole, it’s actually a good norm, and that the same norm should be used for misquoting.

The EA norm on deadnaming is basically: first, don’t do it, and second, if it’s a genuine accident, that’s alright, but don’t do it again.

Whereas EA’s current misquoting norm is more like: misquotes are technically errors, so that’s bad, but no one particularly cares.

Misquotes are actually like deadnaming. Deadnaming involves exercising control over someone else’s name without their consent, when their name should be within their control. Misquotes involve exercising control over someone else’s words/speech without their consent, when their words/speech should be within their control. Misquotes and deadnaming both violate the personal boundaries of a person and violate consent.

Misquotes are also bad for reasons of scholarship, accuracy and truth seeking. I believe the general attitude of not caring about “small” errors is a big mistake.

Misquotes are accepted at EA due to the combination of not recognizing how they violate consent and victimize someone (like deadnaming), and having a culture tolerant of “small” errors and imprecision.

So, I disagree, and I have two main reasons. And people are not persuaded and don’t want to debate or give any counter-arguments. Which gets into one of the other main topics I’ve posted about at EA, which is debating norms and methodology.

All this so far is … fine. Whatever. The weird part comes next.

The main feedback I’ve gotten regarding misquoting and deadnaming is not disagreement. No one has clearly admitted to disagreeing with me and e.g. claimed that misquoting is not like deadnaming.

Instead, I’ve been told that I’m getting downvoted because people agree with me too much: they think it’s so obvious and uncontroversial that it’s a waste of time to write about.

That is not what’s happening and it’s a very bizarre excuse. People are so eager to avoid a debate that they deny disagreeing with me, even when they could tell from the title that they do disagree with me. None of them has actually claimed that they do think misquoting is like deadnaming, and should be reacted to similarly.

Partly, people are anti-misquoting in some weaker way than I am, just like they are anti-typos but not very strongly. The nuance of “I am more against misquoting than you are, so we disagree” seems too complex for some people. They want to identify as anti-misquoting, so they don’t want to take the pro-misquoting side of a debate. The actual issue is how bad misquoting is (or we could be more specific and specify 20 ways misquoting might be bad, 15 of which I believe, and only 5 of which they believe, and then debate the other 10).

I wrote a second article trying to clarify to people that they disagree with me. I gave some guided thinking so they could see it for themselves. E.g. if I pointed out a misquote in the sequences, would you care? Would it matter much to you? Would you become concerned and start investigating other quotes in the book? I think we all know that if I found a single misquote in that book, it would result in no substantive changes. I think it should; you don’t; we disagree.

After being downvoted without explanation on the second article about misquoting, I wrote an article about downvotes being evidence, in which I considered different interpretations of downvotes and different possible reactions to them. This prompted the most mixed voting I’d gotten yet and a response saying people were probably just downvoting me because they didn’t see the point of my anti-misquoting articles because they already agree with me. That guy refused to actually say he agrees with me himself, saying basically (only when pressed) that he’s unsure and neutral and not very interested in thinking or talking about it. If you think it’s a low priority, unimportant issue, then you disagree with me, since I think it’s very important. Does he also think deadnaming is low priority and unimportant? If not, then he clearly disagrees with me.

It’s so weird for people who disagree with me to insist they agree with me. And Lizka already clarified that she disagrees with me, and made a statement about what the EA norms are, and y’all are still telling me that the community in general agrees with me!?

Guys, I’ve struck a nerve. I got downvotes because people didn’t like being challenged in this way, and I’m getting very bizarre excuses to avoid debate because this is a sensitive issue that people don’t want to think or speak clearly about. So it’s important for an additional reason: because people are biased and irrational about it.

My opinions on this matter predate EA (though the specific comparison to deadnaming is a new way of expressing an old point).

I suspect one reason the deadnaming comparison didn’t go over well is that most EAers don’t care much about deadnaming either (and don’t have nuanced, thoughtful opinions about it), although they aren’t going to admit that.

Most deadnaming and most misquoting is not an innocent accident. I think people know that with deadnaming, but deny it with misquoting. But explain to me: how did the wording change in a quote that you could have copy/pasted? That’s generally not an innocent accident. How did you leave out the start of the paragraph and take a quote out of context? That was not a random accident. How did you type in a quote from paper and then forget to triple check it for typos? That is negligence at best, not an accident.

Negligently deadnaming people is not OK. Don’t do it. Negligently misquoting is bad too for more reasons: violates consent and harms scholarship.

This is all related to more complex and more important issues, but if I can’t persuade anyone of this smaller initial point that should be easy, I don’t think trying to say more complex stuff is going to work. If people won’t debate a relatively small, isolated issue, they aren’t going to successfully debate a complex topic involving dozens of issues of similar or higher difficulty as well as many books. One purpose of talking about misquoting is that it’s a test issue to see how people handle debate and criticism, plus it’s an example of one of the main broader themes I’d like to talk about, which is the value of raising intellectual standards. If you can’t win with the small test issue that shouldn’t be very hard, you’ve gotta figure out what is going on. And if the responses to the small test issue are really bizarre and involve things like persistently denying disagreeing while obviously disagreeing … you really gotta figure out what is going on instead of ignoring that evidence. So I’ve written about it again (this post).

If you want to find details of this stuff on the EA forum and see exactly what people said to me, besides what is linked in my two articles about misquoting that I linked above, you can also go to my EA user profile and look through my post and comment history there.


Elliot Temple | Permalink | Messages (0)

Downvotes Are Evidence

I also posted this on the Effective Altruism forum.


Downvotes are evidence. They provide information. They can be interpreted, especially when they aren’t accompanied by arguments or reasons.

Downvotes can mean I struck a nerve. They can provide evidence of what a community is especially irrational about.

They could also mean I’m wrong. But with no arguments and no links or cites to arguments, there’s no way for me to change my mind. If I was posting some idea I thought of recently, I could take the downvotes as a sign that I should think it over more. However, if it’s something I’ve done high-effort thinking about for years, and written tens of thousands of words about, then “reconsider” is not a useful action with no further information. I already considered it as best I know how to.

People can react in different ways to downvotes. If your initial reaction is to stop writing about whatever gets downvotes, that is evidence that you care a lot about social climbing and what other people think of you (possibly more than you value truth seeking). On the other hand, one can think “strong reactions can indicate something important” and write more about whatever got downvoted. Downvotes can be a sign that a topic is important to discuss further.

Downvotes can also be evidence that something is an outlier, which can be a good thing.

Downvoting Misquoting Criticism

One of the things that seems to have struck a nerve with some people, and has gotten me the most downvotes, is criticizing misquoting (examples one and two both got to around -10). I believe the broader issue is my belief that “small” or “pedantic” errors are (sometimes) important, and that raising intellectual standards would make a large overall difference to EA’s correctness and therefore effectiveness.

I’ll clarify this belief more in future posts despite the cold reception and my expectation of getting negative rewards for my efforts. I think it’s important. It’s also clarified a lot in prior writing on my websites.

There are practical issues regarding how to deal with “small” errors in a time-efficient way. I have some answers to those issues but I don’t think they’re the main problem. In other words, I don’t think the situation is that many people want to pay attention to small errors but are limited by time constraints and don’t know practical time-saving solutions. I don’t think it’s a goal they have that is blocked by practicality. I think people like something about being able to ignore “small” or “pedantic” errors, and practicality then serves as a convenient excuse to help hide the actual motivation.

Why do I think there’s any kind of hidden motivation? It’s not just the disinterest in practical solutions to enable raising intellectual standards (which I’ve seen year after year in other communities as well, btw). Nor is it just the downvotes that are broadly not accompanied by explanations or arguments. It’s primarily the chronic ambiguity about whether people already agree with me and think obviously misquotes are bad on the one hand or disagree with me and think I’m horribly wrong on the other hand. Getting a mix of responses including both ~“obviously you’re right and you got a negative reaction because everyone already knows it and doesn’t need to hear it again” and ~“you’re wrong and horrible” is weird and unusual.

People generally seem unwilling to actually clearly state what their misquoting policies/attitudes are, but nevertheless say plenty of things that indicate clear disagreements with me (when they speak about it at all, which they often don’t but sometimes do). And this allows a bunch of other people to think there already are strong anti-misquoting norms, including people who do not actually personally have such a norm. In my experience, this is widespread and EA seems basically the same as most other places about it.

I’m not including examples of misquotes, or ambiguous defenses of misquotes, because I don’t want to make examples of people. If someone wants to claim they’re right and make public statements they stand behind, fine, I can use them as an example. But if someone merely posts on the forum a bit, I don’t think I should interpret that as opting in to being some kind of public intellectual who takes responsibility for what he says, claims what he says is important, and is happy to be quoted and criticized. (People often don’t want to directly admit that they don’t think what they post is important, while also not wanting to claim it’s important. That’s another example of chronic ambiguity that I think is related to irrationality.) If someone says to me “This would convince me if only you had a few examples” I’ll consider how to deal with that, but I don’t expect that reaction (and if you care that much you can find two good examples by reviewing my EA posting history, and many many examples of representative non-EA misquotes on my websites and forum).

Upvoting Downvoted Posts

There’s a pattern on Reddit, which I’ve also observed on EA, where people upvote stuff that’s at negative points which they don’t think deserves to be negative. They wouldn’t upvote it if it had positive votes. You can tell because the upvoting stops when it gets back to neutral karma (actually slightly less on EA due to strong votes – people tend to stop at 1, not at the e.g. 4 karma an EA post might start with).

In a lot of ways I think this is a good norm. Some people are quite discouraged by downvotes and feel bad about being disliked. The lack of reasons to accompany downvotes makes that worse for some types of people (though others would only feel worse if they were told reasons). And some downvotes are unwarranted and unreasonable so counteracting those is a reasonable activity.

However, there’s a downside to upvoting stuff that’s undeservedly downvoted. It hides evidence. It makes it harder for people to know what kinds of things get how many downvotes. Downvotes can actually be important evidence about the community. Reddit is larger and many subreddits have issues with many new posts tending to get a few downvotes that do not reflect the community and might even come from bots. I’m not aware of EA having this problem. It’s stuff that is downvoted more than normal which provides useful evidence. On EA, a lot of posts get no votes, or just a few upvotes. I believe getting to -10 quickly isn’t normal and is useful evidence of something, rather than something that should just be ignored as meaningless. (Also, it only happens to a minority of my posts. The majority get upvotes, not downvotes.)


Elliot Temple | Permalink | Messages (0)

Misquoting and Scholarship Norms at EA

Link to the EA version of this post.


EA doesn’t have strong norms against misquoting or some other types of errors related to having high intellectual standards (which I claim are important to truth seeking). As I explained, misquoting is especially bad: “Misquoting puts words in someone else’s mouth without their consent. It takes away their choice of what words to say or not say, just like deadnaming takes away their choice of what name to use.”

Despite linking to lizka clarifying the lack of anti-misquoting norms, I got this feedback on my anti-misquoting article:

One of your post spent 22 minutes to say that people shouldn't misquote. It's a rather obvious conclusion that can be exposed in 3 minutes top. I think some people read that as a rant.

So let me try to explain that EA really doesn’t have strong anti-misquoting norms or strong norms for high intellectual standards and scholarship quality. What would such norms look like?

Suppose I posted about a single misquote in Rationality: From AI to Zombies. Suppose it was one word added or omitted and it didn’t change the meaning much. Would people care? I doubt it. How many people would want to check other quotes in the book for errors? Few, maybe zero. How many would want to post-mortem the cause of the error? Few, maybe zero. So there is no strong norm against misquotes. Am I wrong? Does anyone really think that finding a single misquote in a book this community likes would result in people making large updates to their views (even if the misquote is merely inaccurate, but doesn’t involve a large change in meaning)?

Similarly, I’m confident that there’s no strong norm against incorrect citations. E.g. suppose in RAZ I found one cite to a study with terrible methodology or glaring factual errors. Or suppose I found one cite to a study that says something different than what it’s cited for (e.g. it’s cited as saying 60% X but the study itself actually says 45% X). I don’t think anything significant would change based on pointing out that one cite error. RAZ’s reputation would not go down substantially. There’d be no major investigation into what process created this error and what other errors the same process would create. It probably wouldn’t even spark debates. It certainly wouldn’t result in a community letter to EY, signed by thousands of people with over a million total karma, asking for an explanation. The community simply tolerates such things. This is an example of intellectual standards I consider too low and believe are lowering EA’s effectiveness by a large amount.

Even most of RAZ’s biggest fans don’t really expect the book to be correct. They only expect it to be mostly correct. If I find an error, and they agree it’s an error, they’ll still think it’s a great book. Their fandom is immune to correction via pointing out one error.

(Just deciding “RAZ sucks” due to one error would be wrong too. The right reaction is more complicated and nuanced. For some information on the topic, see my Resolving Conflicting Ideas, which links to other articles including We Can Always Act on Non-Criticized Ideas.)

What about two errors? I don’t think that would work either. What about three errors? Four? Five? Nah. What exactly would work?

What about 500 errors? If they’re all basically indisputable, then I’ll be called picky and pedantic, and people will doubt that other books would stand up to a similar level of scrutiny either, and people will say that the major conclusions are still valid.

If the 500 errors include more substantive claims that challenge the book’s themes and concepts, then they’ll be more debatable than factual errors, misquotes, wrong cites, simple, localized logic errors, grammar errors, etc. So that won’t work either. People will disagree with my criticism. And then they won’t debate their disagreement persistently and productively until we reach a conclusion. Some people won’t say anything at all. Others will comment 1-5 times expressing their disagreement. Maybe a handful of people will discuss more, and maybe even change their minds, but the community in general won’t change their minds just because a few people did.

There are errors that people will agree are in fact errors, but will dismiss as unimportant. And there are errors which people will deny are errors. So what would actually change many people’s minds?

Becoming a high status, influential thought leader might work. But social climbing is a very different process than truth seeking.

If people liked me (or whoever the critic was) and liked some alternative I was offering, they’d be more willing to change their minds. Anyone who wanted to say “Yeah, Critical Fallibilism is great. RAZ is outdated and flawed.” would be receptive to the errors I pointed out. People with the right biases or agendas would like the criticisms because the criticisms help them with their goals. Other people would interpret the criticism as fighting against their goals, not helping – e.g. AI alignment researchers basing a lot of their work on premises from RAZ would tend to be hostile to the criticism instead of grateful for the opportunity to stop using incorrect premises and thereby wasting their careers.

I’m confident that I could look through RAZ and find an error. If I thought it’d actually be useful, I’d do that. I did recently find two errors in a different book favored by the LW and EA communities (and I wasn’t actually looking for errors, so I expect there are many others – actually there were some other errors I noticed but those were more debatable). The first error I found was a misquote. I consider it basically inexcusable. It’s from a blog post, so it would be copy/pasted not typed in, so why would there be any changes? That’s a clear-cut error which is really hard to deny is an error. I found a second related error which is worse but requires more skill and judgment to evaluate. The book has a bunch of statements summarizing some events and issues. The misquote is about that stuff. And, setting aside the misquote, the summary is wrong too. It gives an inaccurate portrayal of what happened. It’s biased. The misquote error is minor in some sense: it’s not particularly misleading. The misleading, biased summary of events is actually significantly wrong and misleading.

I can imagine writing two different posts about it. One tries to point out how the summary is misleading in a point-by-point way breaking it down into small, simple points that are hard to deny. This post would use quotes from the book, quotes from the source material, and point out specific discrepancies. I think people would find this dry and pedantic, and not care much.

In my other hypothetical post, I would emphasize how wrong and misleading what the book says is. I’d focus more on the error being important. I’d make less clear-cut claims so I’d be met with more denials.

So I don’t see what would actually work well.

That’s why I haven’t posted about the book’s problems previously and haven’t named the guilty book here. RAZ is not the book I found these errors in. I used a different example on purpose (and, on the whole, I like RAZ, so it’s easier for me to avoid a conflict with people who like it). I don’t want to name the book without a good plan for how to make my complaints/criticisms productive, because attacking something that people like, without an achievable, productive purpose, will just pointlessly alienate people.


Elliot Temple | Permalink | Messages (0)

Organized EA Cause Evaluation

I wrote this for the Effective Altruism forum. Link.


Suppose I have a cause I’m passionate about. For example, we’ll use fluoridated water. It’s poison. It lowers IQs. Changing this one thing is easy (just stop purposefully doing it) and has negative cost (it costs money to fluoridate water; stopping saves money) and huge benefits. That gives it a better cost to benefit ratio than any of EA’s current causes. I come to EA and suggest that fluoridated water should be the highest priority.

*Is there any **organized** process by which EA can evaluate these claims, compare them to other causes, and reach a rational conclusion about resource allocation to this cause?* I fear there isn’t.

Do I just try to write some posts rallying people to the cause? And then maybe I’m right but bad at rallying people. Or maybe I’m wrong but good at rallying people. Or maybe I’m right and pretty good at rallying people, but someone else with a somewhat worse cause is somewhat better at rallying. I’m concerned that my ability to rally people to my cause is largely independent of the truth of my cause. Marketing isn’t truth seeking. Energy to keep writing more about the issue, when I already made points (that are compelling if true, and which no one has given a refutation of), is different than truth seeking.

Is there any reasonable on-boarding process to guide me to know how to get my cause taken seriously with specific, actionable steps? I don’t think so.

Is there any list of all evaluated causes, their importance, and the reasons? With ways to update the list based on new arguments or information, and ways to add new causes to the list? I don’t think so. How can I even know how important my cause is compared to others? There’s no reasonable, guided process that EA offers to let me figure that out.

Comparing causes often depends on some controversial ideas, so a good list would take that into account and give alternative cause evaluations based on different premises, or at least clearly specify the controversial premises it uses. Ways those premises can be productively debated are also important.
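As a sketch of what one entry in such a list might look like, with the controversial premises made explicit so evaluations can be redone under different assumptions (all field names and numbers below are hypothetical, not real estimates):

```python
from dataclasses import dataclass, field

@dataclass
class CauseEvaluation:
    """One entry in a public list of evaluated causes. The fields and the
    example values below are hypothetical illustrations, not real estimates."""
    name: str
    estimated_cost: float            # negative = the intervention saves money
    estimated_benefit: float         # in whatever common unit the list adopts
    premises: list[str] = field(default_factory=list)       # controversial assumptions
    open_objections: list[str] = field(default_factory=list)

    def benefit_per_cost(self):
        # Causes with zero or negative cost dominate on this metric; flag them
        # instead of dividing by a non-positive number.
        if self.estimated_cost <= 0:
            return float("inf")
        return self.estimated_benefit / self.estimated_cost

# Hypothetical entry; the point is the structure, not the numbers.
entry = CauseEvaluation(
    name="Stop water fluoridation",
    estimated_cost=-1.0,
    estimated_benefit=10.0,
    premises=["Fluoridated water meaningfully lowers IQ"],
    open_objections=[],
)
print(entry.benefit_per_cost())
```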

Note: I’m primarily interested in processes which are available to anyone (you don’t have to be famous or popular first, or have certain credentials given to you by a high status authority) and which can be done in one’s free time without having to get an EA-related job. (Let’s suppose I have 20 hours a week available to volunteer for working on this stuff, but I don’t want to change careers. I think that should be good enough.) Being popular, having credentials, or working at a specific job are all separate issues from being correct.

Also, based on a forum search, stopping water fluoridation has never been proposed as an EA cause, so hopefully it’s a fairly neutral example. But this appears to indicate a failure to do a broad, organized survey of possible causes before spending millions of dollars on some current causes, which seems bad. (It could also be related to the lack of any good way to search EA-related information that isn’t on the forum.)

Do others think these meta issues about EA’s organization (or lack thereof) are important? If not, why? Isn’t it risky and inefficient to lack well-designed processes for doing commonly-needed, important tasks? If you just have a bunch of people doing things their own way, and then a bunch of other people reaching their own evaluations of the subset of information they looked at, that is going to result in a social hierarchy determining outcomes.


Elliot Temple | Permalink | Messages (0)