Misquoting Is Conceptually Similar to Deadnaming: A Suggestion to Improve EA Norms

Our society gives people (especially adults) freedom to control many aspects of their lives. People choose what name to go by, what words to say, what to do with their money, what gender to be called, what clothes to wear, and much more.

It violates people’s personal autonomy to try to control these things without their consent. It’s not your place to choose e.g. what to spend someone else’s money on, what clothes they should wear, or what their name is. It’d be extremely rude to call me “Joan” instead of “Elliot”.

Effective Altruism (EA) has written norms related to this:

Misgendering deliberately and/or deadnaming gratuitously is not ok, although mistakes are expected and fine (please accept corrections, though).

I think this norm is good. I think the same norm should be applied to misquoting for the same reasons. It currently isn’t (context).

Article summary: Misquoting is different than sloppiness or imprecision in general. Misquoting puts words in someone else’s mouth without their consent. It takes away their choice of what words to say or not say, just like deadnaming takes away their choice of what name to use.

I’d also suggest applying the deadnaming norm to other forms of misnaming besides deadnaming, though I don’t know if those ever actually come up at EA, whereas misquoting happens regularly. I won’t include examples of misquotes for two reasons. First, I don’t want to name and shame individuals (especially when it’s a widespread problem and it could easily have been some other individuals instead). Second, I don’t want people to respond by trying to debate the degree of importance or inaccuracy of particular misquotes. That would miss the point about people’s right to control their own speech. It’s not your place to speak for other people, without their consent, even a little bit, even in unimportant ways.

I’ll clarify how I think the norm for deadnaming works, which will simultaneously clarify what I think about misquoting. There are some nuances to it. Then I’ll discuss misquoting more and discuss costs and benefits.

Accidents

Accidental deadnaming is OK but non-accidental deadnaming isn’t. If you deadname someone once, and you’re corrected, you should fix it and you shouldn’t do it again. Accidentally deadnaming someone many times is implausible or unreasonable; reasonable people who want to stop having those accidents can stop.

While “mistakes are expected and fine”, EA’s norm is that deadnaming on purpose is not fine nor expected. Misquotes, like deadnaming, come in accidental and non-accidental categories, and the non-accidental ones shouldn’t be fine.

How can we (charitably) judge what is an accident?

A sign that deadnaming wasn’t accidental is when someone defends, legitimizes or excuses it. If they say, “Sorry, my mistake.” it was probably a genuine accident. If they instead say “Deadnaming is not that bad.” or “It’s not a big deal.” or “Why do you care so much?”, or “I’m just using the name on your birth certificate.” then their deadnaming was partly due to their attitude rather than being an accident. That violates EA norms.

When people resist a correction, or deny the importance of getting it right, then their mistake wasn’t just an accident.

For political reasons, some people resist using other people’s preferred name or pronouns. There’s a current political controversy about it. This makes deadnaming more common than it would otherwise be. Any deadnaming that occurs in part due to political attitudes is not fully accidental. Similarly, there is a current intellectual controversy about whether misquoting is a big deal or whether, instead, complaining about it is annoyingly pedantic and unproductive. This controversy increases the frequency of misquotes.

However, that controversy about misquotes and precision is separate from the issue of people’s right to control their own speech and choose what words to say or not say. Regardless of the outcome of the precision vs. sloppiness debate in general, misquotes are a special case because they non-consensually violate other people’s control over their own speech. It’s a non sequitur to go from thinking that lower effort, less careful writing is good to the conclusion that it’s OK to say that John said words that he did not say or choose.

People who deadname frequently claim it’s accidental when there are strong signs it isn’t accidental, such as resisting correction, making political comments that reveal their agenda, or being unapologetic. If they do that repeatedly, I don’t think EA would put up with it. Misquoting could be treated the same way.

Legitimacy

Sometimes people call me “Elliott” and I usually say nothing about the misspelling. I interpret it as an accident because it doesn’t fit any agenda. I don’t know why they’d do it on purpose. If I expected them to use my name many times in the future, or they were using it in a place that many people would read it, then I’d probably correct them. If I corrected them, they would say “oops sorry” or something like that; as long as they didn’t feel attacked or judged, and they don’t have a guilty conscience, then they wouldn’t resist the correction.

My internet handle is “curi”. Sometimes people call me “Curi”. When we’re having a conversation and they’re using my name repeatedly, I may ask them to use “curi”. A few people have resisted this. Why? Besides feeling hostility towards a debate opponent, I think some were unfamiliar with internet culture, so they don’t regard name capitalization as a valid, legitimate choice. They believe names should be formatted in a standard way. They think I’m in the wrong by wanting to have a name that starts with a lowercase letter. They think, by asking them to start a name with a lowercase letter, I’m the one trying to control them in a weird, inappropriate way.

People resist corrections when they think they’re in the right in some way. In that case, the mistake isn’t accidental. Their belief that it’s good in some way is a causal factor in it happening. If it was just an accident, they wouldn’t resist fixing the mistake. Instead, there is a disagreement; they like something about the alleged mistake. On the EA forum, you’re not allowed to disagree that deadnaming is bad and also act on that disagreement by being resistant to the forum norms. You’re required to go along with and respect the norms. You can get a warning or ban for persistent deadnaming.

People’s belief that they’re in the right usually comes from some kind of social-cultural legitimacy, rather than being their own personal opinion. Deadnaming and misgendering are legitimized by right wing politics and by some traditional views. Capitalizing the first letter of a name, and lowercasing the rest, is a standard English convention/tradition which some internet subcultures decided to violate, perhaps due to their focus on written over spoken communication. I think misquoting is legitimized primarily by anti-pedantry or anti-over-precision ideas (which is actually a nuanced debate where I think both standard sides are wrong). But viewpoints on precision aren’t actually relevant to whether it’s acceptable or violating to put unchosen words in someone else’s mouth. Also, each person has a right to decide how precise to be in their own speech. When you quote, it’s important to understand that that isn’t your speech; you’re using someone else’s speech in a limited way, and it isn’t yours to control.

When someone asks you not to deadname, you may feel that they’re asking you to go against your political beliefs, and therefore want to resist what feels like politicized control over your speech, which asks you to use your own speech contrary to your values. However, a small subset of speech is more about other people than yourself, so others need to have significant control over it. That subset includes names, pronouns and quotes. When asked not to misquote, instead of feeling like your views on precision are being challenged, you should instead recognize that you’re simply being asked to respect other people’s right to choose what words to say or not say. It’s primarily about them, not you. And it’s primarily about their control over their own life and speech, not about how much precision is good or how precisely you should speak.

Control over names and pronouns does have to be within reason. You can’t choose “my master who I worship” as a name or pronoun and demand that others say it. I’m not aware of anyone ever seriously wanting to do that. I don’t think it’s a real problem or what the controversy is actually about (even though it’s a current political talking point).

Our culture has conflicting norms, but it does have a very clear, well known norm in favor of exact quotes. That’s taught in schools and written down in policies at some universities and newspapers. We lack similarly clear or strong norms for many other issues related to precision. Why? Because the norm against misquoting isn’t primarily about precision. Misquoting is treated differently than other issues related to precision because it’s not your place to choose someone else’s words any more than it’s your place to choose their name or gender.

Misquotes Due to Bias

Misquotes usually aren’t random errors.

Sometimes people make a typo. That’s an accident. Typos can be viewed as basically random errors. I bet there are actually patterns regarding which letters or letter combinations get more typos. And people could work to make fewer typos. But there’s no biased agenda there, so in general it’s not a problem.

Most quotes can be done with copy/paste, so typos can be avoided. If someone has a general policy of typing in quotes and keeps making typos within quotes, they should switch to using copy/paste. At my forum, I preemptively ask everyone to use software tools like copy/paste when possible to avoid misquotes. I don’t wait and ask them to switch to less error-prone quoting methods after they make some errors. That’s because, as with deadnaming, those errors mistreat other people, so I’d rather they didn’t happen in the first place.

Except for typos and genuine accidents, misquotes are usually changed in some way that benefits or favors the misquoter, not in random ways.

People often misquote because they want to edit things in their favor, even in very subtle ways. Tiny changes can make a quote seem more or less formal or tweak the connotations. People often edit quotes to remove some ambiguity, so it reads as an author more clearly saying something than he did.

Sometimes people want their writing to look good with no errors, so they want to change anything in a quote that they regard as an error, like a comma or lack of comma. Instead of respecting the quote as someone else’s words – their errors are theirs to make (or to disagree are errors) – they want to control it because they’re using it within their own writing, so they want to make it conform to their own writing standards. People should understand that when they quote, they are giving someone else a space within their writing, so they are giving up some control.

People also misquote because they don’t respect the concept of accurate quotations. These misquotes can be careless with no other agenda or bias – they aren’t specifically edited to e.g. help one side of a debate. However, random changes to the wordings your debate partners use tend to be bad for them. Random changes tend to make their wordings less precise rather than more precise. As we know from evolution, random changes are more likely to make something less adapted to a purpose rather than more adapted.

If you deadname people because you don’t respect the concept of people controlling their name, that’s not OK. If you are creating accidents because you don’t care to try to get names right, you’re doing something wrong. Similarly, if you create accidental misquotes because you don’t respect the concept of people controlling their own speech and wordings, you’re doing something wrong.

Also, imprecision in general is an enabler of bias because it gives people extra flexibility. They get more options for what to say, think or do, so they can pick the one that best fits their bias. A standard example is rounding in their favor. If you’re 10 minutes late, you might round that down to 5 minutes in a context where plus or minus five minutes of precision is allowed. On the other hand, if someone else is 40 minutes late, you might round that up to an hour as long as that’s within acceptable boundaries of imprecision. People also do this with money. Many people round their budget up but round their expenses down, and the more imprecise their thinking, the larger the effect. If permissible imprecision gives people multiple different versions of a quote that they can use, they’ll often pick one that is biased in their favor, which is different than a fully accidental misquote.

Misquotes Due to Precise Control or Perfectionism

Some non-accidental misquotes are due not to bias but to people wanting to control all the words in their essay (or book or forum post). They care so much about controlling their speech, in precise detail, that they extend that control to the text within quotes just because it’s within their writing. They’re used to having full control over everything they write and they don’t draw a special boundary for quotations; they just keep being controlling. Then, ironically, when challenged, they may say “Oh who cares; it’s just small changes; you don’t need precise control over your speech.” But they changed the quote because of their extreme desire to exactly control anything even resembling their own speech. If you don’t want to give up control enough to let someone else speak in entirely their own words within your writing, there is a simple solution: don’t quote them. If you want total control of your stuff, and you can’t let a comma be out of place even within a quote, you should respect other people wanting control of their stuff, too. Some people don’t fully grasp that the stuff within quotes is not their stuff even though it’s within their writing. Misquotes of this nature come more from a place of perfectionism and precise control, and lack of empathy, rather than being sloppy accidents. These misquotes involve non-random changes to make the text fit the quoter’s preferences better.

Types of Misquotes

I divide misquotes into two categories. The first type changes a word, letter or punctuation mark. It’s a factual error (the quote is factually wrong about what the person said). It’s inaccurate in a clear, literal way. Computers can pretty easily check for this kind of quotation error without needing any artificial intelligence. Just a simple string comparison algorithm can do it. In this case, there’s generally no debate about whether the quote is accurate or inaccurate. There are also some special rules that allow changing quotes without them being considered inaccurate, e.g. using square brackets to indicate changes or notes, or using ellipses for omitted words.
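As a rough illustration, here's a minimal sketch of that kind of automated check (the function name and sample sentences are made up, not any particular forum's feature). A plain substring test is enough to flag the first type of misquote, though it knows nothing about the square-bracket and ellipsis conventions:

```python
# Minimal sketch: flag the first type of misquote (changed words, letters or
# punctuation) by checking whether the claimed quote appears verbatim in the
# source. Whitespace is normalized so line wrapping doesn't cause false alarms.

def is_verbatim_quote(quote: str, source: str) -> bool:
    normalize = lambda s: " ".join(s.split())
    return normalize(quote) in normalize(source)

source = "I do not think John is great."
print(is_verbatim_quote("John is really great.", source))  # False: words were changed
print(is_verbatim_quote("John is great.", source))         # True: verbatim, though misleading
```

The second print shows the limit of this kind of check: it only catches literal inaccuracy, not the misleading-in-context quotes discussed next.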

The second type of misquote is a misleading quote, such as taking words out of context. There is sometimes debate about whether a quote is misleading or not. Many cases are pretty clear, and some cases are harder to judge. In borderline cases, we should be forgiving of the person who did it, but also, in general, they should change it if the person being quoted objects. (Or, for example, if you’re debating someone about Socrates’ ideas, and they’re the one taking Socrates’ side, and they think your Socrates quote is misleading, then you should change it. You may say all sorts of negative things about the other side of the debate, but that’s not what quotation marks are for. Quotations are a form of neutral ground that should be kept objective, not a place to pursue your debating agenda.)

Here’s an example of a misleading quote that doesn’t violate the basic accuracy rules. You say, “I do not think John is great.” but I quote you as saying “John is great.” The context included an important “not” which has been left out. I think we can all agree that this counts as misquoting even though no words, letters or punctuation marks were changed. And, like deadnaming, it’s very rude to do this to someone.

Small Changes

Sometimes people believe it’s OK to misquote as long as the meaning isn’t changed. Isn’t it harmless to replace a word with a synonym? Isn’t it harmless to change a quote if the author agrees with the changed version? Do really small changes matter?

First of all, if the changes are small and don’t really matter, then just don’t do them. If you think there’s no significant difference, that implies there’s no significant upside, so then don’t misquote. It’s not like it takes substantial effort to refrain from editing a quote; it’s less work not to make changes. And copy/pasting is generally less work than typing.

If someone doesn’t mind a change to a quote, there are still concerns about truth and accuracy. Anyone in the audience may not want to read things he believes are exact quotes but which aren’t. He may find that misleading (and EA has a norm against misleading people). Also, if you ever non-accidentally use inaccurate quotes, then reasonable people will doubt that they can trust any of your quotes. They’ll have to check primary sources for any quotes you give, which will significantly raise the cost of reading your writing and reduce engagement with your ideas. But the main issue – putting words in someone’s mouth without their consent – is gone if they consent. Similarly, it isn’t deadnaming to use an old name of someone who consents to be called by either their old or new name.

However, it’s not your place to guess what words someone would consent to say. If they are a close friend, maybe you have a good understanding of what’s OK with them, and I guess you could try to get away with it. I wouldn’t recommend that and I wouldn’t want to be friends with someone who thought they could speak for me and present it as a quote rather than as an informed guess about my beliefs or about what I would say. But if you want to quote your friend (or anyone else) saying something they haven’t said, and you’re pretty sure they’d be happy to say it, there’s a solution: ask them to say it and then quote them if they do choose to say it. On the other hand, if you’re arguing with someone, you’re in a poor position to judge what words they would consent to saying or what kind of wording edits would be meaningful to them. It’s not reasonable to try to guess what wording edits a debate opponent would consent to and then go ahead with them unilaterally.

Inaccurately paraphrasing debate opponents is a problem too, but it’s much harder to avoid than misquoting is. Misquoting, like deadnaming, is something that you can almost entirely avoid if you want to.

The changes you find small and unimportant can matter to other people with different perspectives on the issues. You may think that “idea”, “concept”, “thought” and “theory” are interchangeable words, but someone else may purposefully, non-randomly use each of those words in different contexts. It’s important that people can control the nuances of their wordings when they want to (even if they can’t give explicit arguments for why they use words that way). Even if an author doesn’t (consciously) see any significant difference between his original wording and your misquote, the misquote is still less representative of his thinking (his subconscious or intuition chose to say it the other way, and that could be meaningful even if he doesn’t realize it).

Even if your misquote would be an accurate paraphrase, and won’t do a bunch of harm by spreading severe misinformation, there’s no need to put quote marks around it. If you’re using an edited version of someone else’s words, so leaving out the quote marks would be plagiarism, then use square brackets and ellipses. There’s already a standard solution for how to edit quotes, when appropriate, without misquoting. There’s no good reason to misquote.

Cost and Benefit

How costly is it to avoid misquotes or to avoid deadnaming? The cost is low but there are some reasons people misjudge it.

Being precise has a high cost, at least initially. But misquoting, like misnaming, is a specific case where, with a low effort, people can get things right with high reliability and few accidents. Reducing genuine accidents to zero is unnecessary and isn’t what the controversy is about.

When a mistake is just an accident, correcting it shouldn’t be a big deal. There is no shame in infrequent accidents. Attempts to correct misquotes sometimes turn into a much bigger deal, with each party writing multiple messages. It can even initiate drama. This happens because people oppose the policy of not misquoting, not because of any cost inherent in a policy of not misquoting. It’s the resistance to the policy, not the policy itself, which wastes time and energy and derails conversations.

Most of the observed conversational cost, that goes to talking about misquotes, is due to people’s pro-misquoting attitudes rather than due to any actual difficulty of avoiding misquotes. This misleads people about how large the cost is.

Similarly, if you go to some right wing political forums, getting people to stop deadnaming would be very costly. They’d fight you over it. But if they were happy to just do it, then the costs would be low. It’s not very hard to very infrequently make updates to your memory about the names of a few people. Cost due to opposition to doing something correctly should be clearly differentiated from the cost of doing it correctly.

To avoid misquotes, copy and paste. If you type in a quote from paper, double check it and/or disclaim it as potentially containing a typo. Most books are available electronically so typing quotes in from paper is usually unnecessary and more costly. Most cases of misquoting that I’ve seen, or had a conflict over, involved a quote that could have been copy/pasted. Copy/pasting is easy, not costly.

Avoiding misquotes also involves never adding quotation marks around things which are not quotes but which readers would think were quotes. For example, don’t write “John said” followed by a paraphrase with quote marks around it in order to make it seem more exact, precise, rigorous or official than it is. And don’t put quote marks around a paraphrase because you believe you should use a quote, but you’re too lazy to get the quote, and you want to hide that laziness by pretending you did quote.

Accurate quoting can be more about avoiding bias than about effort or precision. You have to want to do it and then resist the temptation to violate the rules in ways that favor you. For some people, that’s not even tempting. It’s like how some people resist the temptation to steal while others don’t find stealing tempting in the first place. You can get to the point that things aren’t tempting and really don’t take effort to not do. Norms can help with that. Due to better anti-stealing norms, many more people aren’t tempted to steal than aren’t tempted to misquote. Anyway, if someone gives in to temptation and steals, deadnames or misquotes, that is not an accident. It’s a different thing. It’s not permissible at EA to deadname because you gave in to temptation, and I suggest misquoting should work that way too.

What’s the upside of misquoting? Why are many people resistant to making a small effort to change? I think there are two main reasons. First, they confuse the misquoting issue with the general issue of being imprecise. They feel like someone asking them not to misquote is demanding that they be a more precise thinker and writer in general. Actually, people asking not to be misquoted, like people asking not to be deadnamed, don’t want their personal domain violated. Second, people like misquoting because it lets them make biased changes to quotes. People don’t like being controlled by rules that give them less choice of what to do and less opportunity to be flexible in their favor (a.k.a. biased). Many people have a general resistance to creating and following written policies. I’ve written about how that’s related to not understanding or resisting the rule of law.

Another cost of avoiding misquotes is that you should be careful when using software editing tools like spellcheck or Grammarly. They should have automatic quote detection features and warn you before making changes within quotes, but they don’t. These tools encourage people to quickly make many small changes without reading the context, so people may change something without even knowing it’s within a quote. People can also click buttons like “correct all” and end up editing quotes. Or they might decide to replace all instances of “colour” with “color” in their book, do a mass find/replace, and accidentally change a quote. I wonder how many small misquotes in recent books are caused this way, but I don’t think it’s the cause of many misquotes on forums. Again, the occasional accident is OK; perfection is not necessary but people could avoid most errors at a low cost and stop picking fights in defense of misquotes or deadnaming.
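To make the find/replace hazard concrete, here's a minimal sketch (the sample text is made up) of how a blanket replacement silently edits a quote along with the author's own words:

```python
# Minimal sketch of the mass find/replace hazard: the replacement is applied
# everywhere, including inside the quotation marks, producing a misquote.

text = (
    "My own sentences should use American spelling for every colour word.\n"
    'Smith wrote: "The colour of the walls was grey."'
)

edited = text.replace("colour", "color")  # blanket replace; ignores quote boundaries
print(edited)  # Smith is now "quoted" as writing "color", which he never wrote
```

A quote-aware tool would have to detect spans inside quotation marks and skip them or warn about them, which is exactly the feature these editing tools currently lack.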

If non-accidental misquoting is prohibited at EA, just like deadnaming, then it will provide a primary benefit by defending people’s control over their own speech. It will also provide a secondary benefit regarding truth, accuracy and precision. It’s debatable how large that accuracy benefit is and how much cost it would be worth. However, in this case, the marginal cost of that benefit would be zero. If you change misquoting norms for another reason which is worth the cost by itself, then the gain in accuracy is a free bonus.

There are some gray areas regarding misquoting, where it’s harder to judge whether it’s an error. Those issues are more costly to police. However, most of the benefit is available just by policing misquotes which are clearly and easily avoidable, which is the large majority of misquotes. Doing that will have a good cost to benefit ratio.

Another cost of misquoting is it can gaslight people, especially with small, subtle changes. It can cause them to doubt themselves or create false memories of their own speech to match the misquote. It takes work to double check what you actually said after reading someone quote you, which is a cost. Many people don’t do that work, which leaves them vulnerable. There’s a downside both to doing and to not doing that work. That’s a cost imposed by allowing misquotes to be common and legitimized.

Tables

Benefits and costs of anti-misquoting norms:

Benefits | Costs
--- | ---
Respect people’s control over their speech | Avoiding carelessness
Accuracy | Resisting temptation
Prevent conflicts about misquotes | Not getting to bias quotes in your favor
No hidden, biased tweaks in quotes you read | Learning to use copy/paste hotkeys
Less time editing quotes | Not getting full control over quoted text like you have over other text in your post
Quotes and paraphrases differentiated | Not getting to put quote marks around whatever you want to
Filter out persistent misquoters | Lose people who insist on misquoting
 | Effort to spread and enforce norm

For comparison, here’s a cost/benefit table for anti-deadnaming norms:

Benefits | Costs
--- | ---
Respect people's control over their name | Avoiding carelessness
Accuracy | Resisting temptation
Filter out persistent deadnamers | Lose people who insist on deadnaming
 | Not getting to call people whatever you want
 | Effort to spread and enforce norm

Potential Objections

If I can’t misquote, how can I tweak a quote wording to fit my sentence? Use square brackets.

If I can’t misquote, how can I supply context for a quote and keep it short? Use square brackets or explain the context before giving the quote.

What if I want to type in a quote but then I make a typo? If you’re a good enough typist that you don’t mind typing extra words, I’m sure you can also manage to use copy/paste hotkeys.

What if I’m quoting a paper book? Double check what you typed in and/or put a disclaimer that it’s typed in by hand.

What if an accident happens? As with deadnaming, rare, genuine accidents are OK. Accidents that happen because you don’t really care about deadnaming or misquoting are not fine.

Who cares? People who think about what words to say and not say, and put effort into those decisions. They don’t want someone else to overrule those decisions. Whether you’re one of those people or not, people who think about what to say are people you should want to have on your forum.

Who else cares? People who want to form accurate beliefs about the world and have high standards don’t want to read misquotes and potentially be fooled by them or have to look stuff up in primary sources frequently. It’s much less work for people to not misquote in the first place than for readers (often multiple readers independently) to check sources.

Is it really that big a deal? Quoting accurately isn’t very hard and isn’t that big a deal to do. If this issue doesn’t matter much, just do it in the way that doesn’t cause problems and doesn’t draw attention to quoting. If people would stop misquoting then we could all stop talking about this.

Can’t you just ignore being misquoted? Maybe. You can also ignore being deadnamed, but you shouldn’t have to. It’s also hard enough to have discussions when people subtly reframe the issues, and indirectly reframe what you said (often by replying as if you said something, without claiming you said it), which is very common. Those actions are harder to deal with and counter when they involve misquotes – misquotes escalate a preexisting problem and make it worse. On the other hand, norms in favor of using (accurate) quotes more often would make it harder to be subtly biased and misleading about what discussion partners said.

Epistemic Status

I’ve had strong opinions about misquoting for years and brought these issues up with many people. My experiences with using no-misquoting norms at my own forum have been positive. I still don’t know of any reasonable counter-arguments that favor misquotes.

Conclusion

Repeated deadnaming is due to choice not accident. Even if a repeat offender isn’t directly choosing to deadname on purpose, they’re choosing to be careless about the issue on purpose, or they have a (probably political) bias. They could stop deadnaming if they tried harder. EA norms correctly prohibit deadnaming, except by genuine accident. People are expected to make a reasonable (small) effort to not deadname.

Like deadnaming, misquoting violates someone else’s consent and control over their personal domain. People see misquoting as being about the open debate over how precise people should be, but that is a secondary issue. They should have more empathy for people who want to control their own speech. I propose that EA’s norms should be changed to treat misquoting like deadnaming. Misquoting is a frequent occurrence and the forum would be a better place if moderators put a stop to it, as they stop deadnaming.

Norms that allow non-accidental misquoting alienate some people who might otherwise participate, just like allowing non-accidental deadnaming would alienate some potential participants. Try to visualize in your head what a forum would be like where the moderators refused to do anything about non-accidental deadnaming. Even if you don’t personally have a deadname, it’d still create a bad, disrespectful atmosphere. It’s better to be respectful and inclusive, at a fairly small cost, instead of letting some forum users mistreat others. It’s great for forums to enable free speech and have a ton of tolerance, but that shouldn’t extend to people exercising control over something that someone else has the right to control, such as his name or speech. It’s not much work to get people’s names right nor to copy/paste exact quotes and then leave them alone (and to refrain from adding quotation marks around paraphrases). Please change EA’s norms to be more respectful of people’s control over their speech, as the norms already respect people’s control over their name.



Finding Errors in The Case Against Education by Bryan Caplan

I proposed a game on the Effective Altruism forum where people submit texts to me and I find three errors. I wrote this for that game. This is a duplicate of my EA post.

Introduction

I'm no fan of university nor academia, so I do partly agree with The Case Against Education by Bryan Caplan. I do think social climbing is a major aspect of university. (It's not just status signalling. There's also e.g. social networking.)

I'm assuming you can electronically search the book to read additional context for quotes if you want to.

Error One

For a single individual, education pays.

You only need to find one job. Spending even a year on a difficult job search, convincing one employer to give you a chance, can easily beat spending four years at university and paying tuition. If you do well at that job and get a few years of work experience, getting another job in the same industry is usually much easier.

So I disagree that education pays, under the signalling model, for a single individual. I think a difficult job search is typically more efficient than university.

This works in some industries, like software, better than others. Caplan made a universal claim so there's no need to debate how many industries this is viable in.

Another option is starting a company. That's a lot of work, but it can still easily be a better option than going to university just so you can get hired.

Suppose, as a simple model, that 99% of jobs hire based on signalling and 1% don't. If lots of people stop going to university, there's a big problem. But if you individually don't go, you can get one of the 1% of non-signalling jobs. Whereas if 3% of the population skipped university and competed for 1% of the jobs, a lot of those people would have a rough time. (McDonalds doesn't hire cashiers based on signalling – or at least not the same kind of signalling – so imagine we're only considering good jobs in certain industries so the 1% non-signalling jobs model becomes more realistic.)

When they calculate the selfish (or “private”) return to education, they focus on one benefit—the education premium—and two costs—tuition and foregone earnings.[4]

I've been reading chapter 5 trying to figure out if Caplan ever considers alternatives to university besides just entering the job market in the standard way. This is a hint that he doesn't.

Foregone earnings are not a cost of going to university. They are a benefit that should be added on to some, but not all, alternatives to university. Then university should be compared to alternatives for how much benefit it gives. When doing that comparison, you should not subtract income available in some alternatives from the benefit of university. Doing that subtraction only makes sense and works out OK if you're only considering two options: university or get a job earlier. When there are only two options, taking a benefit from one and instead subtracting it from the other as an opportunity cost doesn't change the mathematical result.
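Here's a minimal numeric sketch of that point (the benefit numbers are made up, not from the book): with exactly two options, the subtraction framing and the direct comparison give the same answer, but once a third alternative is on the table, only the direct comparison of each option's own benefits is reliable.

```python
# Made-up lifetime benefits, in arbitrary units, for three paths.
options = {
    "university": 100,
    "job_now": 60,            # the standard alternative
    "hard_job_search": 120,   # the kind of alternative discussed above
}

# Direct comparison: rank each option by its own benefit.
print(max(options, key=options.get))  # hard_job_search

# Two-option framing: treat foregone job_now earnings as a "cost" of university.
# 100 - 60 = 40 > 0, so this framing says university "pays" -- which matches the
# direct comparison only when job_now is the sole alternative being considered.
print(options["university"] - options["job_now"])  # 40
```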

See also Capitalism: A Treatise on Economics by George Reisman (one of the students of Ludwig von Mises) which criticizes opportunity costs:

Contemporary economics, in contrast, continually ignores the vital connection of income and cost with the receipt and outlay of money. It does so insofar as it propounds the doctrines of “imputed income” and “opportunity cost.”[26] The doctrine of imputed income openly and systematically avows that the absence of a cost constitutes income. The doctrine of opportunity cost, on the other hand, holds that the absence of an income constitutes a cost. Contemporary economics thus deals in nonexistent incomes and costs, which it treats as though they existed. Its formula is that money not spent is money earned, and that money not earned is money spent.

That's from the section "Critique of the Concept of Imputed Income" which is followed by the section "Critique of the Opportunity-Cost Doctrine". The book explains its point in more detail than this quote. I highly recommend Reisman's whole book to anyone who cares about economics.

Risk: I looked for discussion of alternatives besides university or entering the job market early, such as a higher effort job search or starting a business. I didn't find it, but I haven't read most of the book so I could have missed it. I primarily looked in chapter 5.

Error Two

The answer would tilt, naturally, if you had to sing Mary Poppins on a full-price Disney cruise. Unless you already planned to take this vacation, you presumably value the cruise less than the fare. Say you value the $2,000 cruise at only $800. Now, to capture the 0.1% premium, you have to fork over three hours of your time plus the $1,200 difference between the cost of the cruise and the value of the vacation.

(Bold added to quote.)

The full cost of the cruise is not just the fare. It's also the time cost of going on the cruise. It's very easy to value the cruise experience at more than the ticket price, but still not go, because you'd rather vacation somewhere else or stay home and write your book.

BTW, Caplan is certainly familiar with time costs in general (see e.g. the last sentence quoted).

Error Three

Laymen cringe when economists use a single metric—rate of return—to evaluate bonds, home insulation, and college. Hasn’t anyone ever told them money isn’t everything! The superficial response: Economists are by no means the only folks who picture education as an investment. Look at students. The Higher Education Research Institute has questioned college freshmen about their goals since the 1970s. The vast majority is openly careerist and materialist. In 2012, almost 90% called “being able to get a better job” a “very important” or “essential” reason to go to college. Being “very well-off financially” (over 80%) and “making more money” (about 75%) are almost as popular. Less than half say the same about “developing a meaningful philosophy of life.”[2] These results are especially striking because humans exaggerate their idealism and downplay their selfishness.[3] Students probably prize worldly success even more than they admit.

(Bold added.)

First, minor point, some economists have that kind of perspective about rate of return. Not all of them.

And I sympathize with the laymen. You should consider whether you want to go to university. Will you enjoy your time there? Future income isn't all that matters. Money is nice but it doesn't really buy happiness. People should think about what they want to do with their lives, in realistic ways that take money into account, but which don't focus exclusively on money. In the final quoted sentence he mentions that students (on average) probably "prize worldly success even more than they admit". I agree, but I think some of those students are making a mistake and will end up unhappy as a result. Lots of people focus their goals too much on money and never figure out how to be happy (also they end up unhappy if they don't get a bunch of money, which is a risk).

But here's the more concrete error: The survey does not actually show that students view education in terms of economic returns only. It doesn't show that students agree with Caplan.

The issue, highlighted in the first sentence, is "economists use a single metric—rate of return". Do students agree with that? In other words, do students use a single metric? A survey where e.g. 90% of them care about that metric does not mean they use it exclusively. They care about many metrics, not a single one. Caplan immediately admits that so I don't even have to look the study up. He says 'Less than half [of students surveyed] say the same [very important or essential reason to go to university] about “developing a meaningful philosophy of life.”' Let's assume less than half means a third. Caplan tries to present this like the study is backing him up and showing how students agree with him. But a third disagreeing with him on a single metric is a ton of disagreement. If they surveyed 50 things, and 40 aren't about money, and just 10% of students thought each of those 40 mattered, then maybe around zero students would agree with Caplan about only the single metric being important (the answers aren't independent so you can't just use math to estimate this scenario btw).

Bonus Error

Self-help gurus tend to take the selfish point of view for granted. Policy wonks tend to take the social point of view for granted. Which viewpoint—selfish or social—is “correct”? Tough question. Instead of taking sides, the next two chapters sift through the evidence from both perspectives—and let the reader pick the right balance between looking out for number one and making the world a better place.

This neglects to consider the classical liberal view (which I believe, and which an economist ought to be familiar with) of the harmony of (rational) interests of society and the individual. There is no necessary conflict or tradeoff here. (I searched the whole book for "conflict", "harmony", "interests" and "classical" but didn't find this covered elsewhere.)

I do think errors of omission are important but I still didn't want to count this as one of my three errors. I was trying to find somewhat more concrete errors than just not talking about something important and relevant.

Bonus Error Two

The deeper response to laymen’s critique, though, is that economists are well aware money isn’t everything—and have an official solution. Namely: count everything people care about. The trick: For every benefit, ponder, “How much would I pay to obtain it?”

This doesn't work because lots of things people care about are incommensurable. They're in different dimensions that you can't convert between. I wrote about the general issue of taking into account multiple dimensions at once at https://forum.effectivealtruism.org/posts/K8Jvw7xjRxQz8jKgE/multi-factor-decision-making-math

A different way to look at it is that the value of X in money is wildly variable by context, not a stable number. Also how much people would pay to obtain something is wildly variable by how much money they have, not a stable number.

Potential Error

If university education correlates with higher income, that doesn't mean it causes higher income. Maybe people who are likely to get high incomes are more likely to go to university. There are also some other correlation-isn't-causation counter-arguments that could be made. Is this addressed in the book? I didn't find it, but I didn't look nearly enough to know whether it's covered. Actually I barely read anything about his claims that university results in higher income, which I assume are at least partly based on correlation data, but I didn't really check. So I don't know if there's an error here but I wanted to mention it. If I were to read the book more, this is something I'd look into.

Screen Recording

Want to see me look through the book and write this post? I recorded my process with sporadic verbal commentary:

https://www.youtube.com/watch?v=BQ70qzRG61Y



Critiquing an Axiology Article about Repugnant Conclusions

I proposed a game on the Effective Altruism forum where people submit texts to me and I find three errors. I wrote this for that game. This is a duplicate of my EA post. I criticize Minimalist extended very repugnant conclusions are the least repugnant by Teo Ajantaival.

Error One

Archimedean views (“Quantity can always substitute for quality”)

Let us look at comparable XVRCs for Archimedean views. (Archimedean views roughly say that “quantity can always substitute for quality”, such that, for example, a sufficient number of minor pains can always be added up to be worse than a single instance of extreme pain.)

It's ambiguous/confusing whether by "quality" you mean different quantity sizes, as in your example (substitution between small pains and a big pain), or you actually mean qualitatively different things (e.g. substitution between pain and the thrill of skydiving).

Is the claim that 3 1lb steaks can always substitute for 1 3lb steak, or that 3 1lb pork chops can always substitute for 1 ~3lb steak? (Maybe more or less if pork is valued less or more than steak.)

The point appears to be about whether multiple things can be added together for a total value or not – can a ton of small wins ever make up for a big win? In that case, don't use the word "quality" to refer to a big win, because it invokes concepts like a qualitative difference rather than a quantitative difference.

I thought it was probably about whether a group of small things could substitute for a bigger thing but then later I read:

Lexical views deny that “quantity can always substitute for quality”; instead, they assign categorical priority to some qualities relative to others.

This seems to be about qualitative differences: some types/kinds/categories have priority over others. Pork is not the same thing as steak. Maybe steak has priority and having no steak can't be made up for with a million pork chops. This is a different issue. Whether qualitative differences exist and matter and are strict is one issue, and whether many small quantities can add together to equal a large quantity is a separate issue (though the issues are related in some ways). So I think there's some confusion or lack of clarity about this.

I didn't read linked material to try to clarify matters, except to notice that this linked paper abstract doesn't use the word "quality". I think, for this issue, the article should stand on its own OK rather than rely on supplemental literature to clarify this.

Actually, I looked again while editing, and I've now noticed that in the full paper (as linked to and hosted by PhilPapers, the same site as before), the abstract text is totally different and does use the word "quality". What is going on!? PhilPapers is broken? Also this paper, despite using the word "quality" in the abstract once (and twice in the references), does not use that word in the body, so I guess it doesn't clarify the ambiguity I was bringing up, at least not directly.

Error Two

This is a strong point in favor of minimalist views over offsetting views in population axiology, regardless of one’s theory of aggregation.

I suspect you're using an offsetting view in epistemology when making this statement concluding against offsetting views in axiology. My guess is you don't know you're doing this or see the connection between the issues.

I take a "strong point in favor" to refer to the following basic model:

We have a bunch of ideas to evaluate, compare, choose between, etc.

Each idea has points in favor and points against.

We weight and sum the points for each idea.

We look at which idea has the highest overall score and favor that.

This is an offsetting model where points in favor of an idea can offset points against that same idea. Also, in some sense, points in favor of an idea offset points in favor of rival ideas.

I think offsetting views are wrong, in both epistemology and axiology, and there's overlap in the reasons for why they're wrong, so it's problematic (though not necessarily wrong) to favor them in one field while rejecting them in another field.

Error Three

The article jumps into details without enough framing about why this matters. This is understandable for a part 4, but on the other hand you chose to link me to this rather than to part 1 and you wrote:

Every part of this series builds on the previous parts, but can also be read independently.

Since the article is supposed to be readable independently, then the article should have explained why this matters in order to work well independently.

A related issue is I think the article is mostly discussing details in a specific subfield that is confused and doesn't particularly matter – the field's premises should be challenged instead.

And another related issue is the lack of any consideration of win/win approaches, discussion of whether there are inherent conflicts of interest between rational people, etc. A lot of the article topics are related to political philosophy issues (like classical liberalism's social harmony vs. Marxism's class warfare) that have already been debated a bunch, and it'd make sense to connect claims and viewpoints to that existing knowledge. I think imagining societies with different agents with different amounts of utility or suffering, fully out of context of imagining any particular type of society, or design or organization or guiding principles of society, is not very productive or meaningful, so it's no wonder it's gotten bogged down in abstract concerns like the very repugnant conclusion stuff with no sign of any actually useful conclusions coming up.

This is not the sort of error I primarily wanted to point out. However, the article does a lot of literature summarizing instead of making its own claims. So I noticed some errors in the summarized ideas, but that's different than errors in the article itself. To point out errors in an article, when it's summarizing other ideas, I'd have to point out that it has inaccurately summarized the ideas. That requires reading the cites and comparing them to the summaries. Which I don't think would be especially useful/valuable to do. Sometimes people summarize stuff they agree with, so criticizing the content works OK. But here a lot of it was summarizing stuff the author and I both disagree with, in order to criticize it, which doesn't provide many potential targets for criticism. So that's why I went ahead and made some more indirect criticism (and included more than one point) for the third error.

But I'd suggest that @Teo Ajantaival watch my screen recording (below) which has a bunch of commentary and feedback on the article. I expect some of it will be useful and some of the criticisms I make will be relevant to him. He could maybe pick out some things I said and recognize them as criticisms of ideas he holds, whereas sometimes it was hard for me to tell what he believes because he was just summarizing other people's ideas. (When looking for criticism, consider if I'm right, does it mean you're wrong? If so, then it's a claim by me about an error, even if I'm actually mistaken.) My guess is I said some things that would work as better error claims than some of the three I actually used, but I don't know which things they are. Also, I think if we were to debate, discussing the underlying premises, and whether this sub-field even matters, would actually be more important than discussing within-field details, so it's a good thing to bring up. I think my disagreement with the niche that the article is working within is actually more important than some of the within-niche issues.

Offsetting and Repugnance

This section is about something @Teo Ajantaival also disagrees with, so it's not an error by him. It could possibly be an error of omission if he sees this as a good point that he didn't know but would have wanted to think of but didn't. To me it looks pretty important and relevant, and problematic to just ignore like there's no issue here.

If offsetting actually works – if you're a true believer in offsetting – then you should not find the very repugnant scenario to be repugnant at all.

I'll illustrate with a comparison. I am, like most people, to a reasonable approximation, a true believer in offsetting for money. That is, $100 in my bank account fully offsets $100 of credit card debt that I will pay off before there are any interest charges. There do exist people who say credit cards are evil and you shouldn't have one even if you pay it off in full every month, but I am not one of those people. I don't think debt is very repugnant when it's offset by assets like cash.

And similarly, spreading out the assets doesn't particularly matter. A billion bank accounts with a dollar each, ignoring some administrative hassle details, are just as good as one bank account with a billion dollars. That money can offset a million dollars of credit card debt just fine despite being spread out.

If you really think offsetting works, then you shouldn't find it repugnant to have some negatives that are offset. If you find it repugnant, you disagree with offsetting in that case.

I disagree with offsetting suffering – one person being happy does not simply cancel out someone else being victimized – and I figure most people also disagree with suffering offsetting. I also disagree with offsetting in epistemology. Money, as a fungible commodity, is something where offsetting works especially well. Similarly, offsetting would work well for barrels of oil of a standard size and quality, although oil is harder to transport than money so location matters more.

Bonus Error by Upvoters

At a glance (I haven't read it yet as I write this section), the article looks high effort. It has ~22 upvoters but no comments, no feedback, no hints about how to get feedback next time, no engagement with its ideas. I think that's really problematic and says something bad about the community and upvoting norms. I talk about this more at the beginning of my screen recording.

Update after reading the article: I can see some more potential reasons the article got no engagement (too specialized, too hard to read if you aren't familiar with the field, not enough introductory framing of why this matters) but someone could have at least said that. Upvoting is actually misleading feedback if you have problems like that with the article.

Bonus Literature on Maximizing or Minimizing Moral Values

https://www.curi.us/1169-morality

This article, by me, is about maximizing squirrels as a moral value, and more generally about there being a lot of actions and values which are largely independent of your goal. So if it was minimizing squirrels or maximizing bison, most of the conclusions are the same.

I commented on this some in my screen recording, after the upvoters criticism, maybe 20min in.

Bonus Comments on Offsetting

(This section was written before the three errors, one of which ended up being related to this.)

Offsetting views are problematic in epistemology too, not just morality/axiology. I've been complaining about them for years. There's a huge, widespread issue where people basically ignore criticism – don't engage with it and don't give counter-arguments or solutions to the problems it raises – because it's easier to go get a bunch more positive points elsewhere to offset the criticism. Or if they already think their idea already has a ton of positive points and a significant lead, then they can basically ignore criticism without even doing anything. I commented on this verbally around 25min into the screen recording.

Screen Recording

I recorded my screen and talked while creating this. The recording has a lot of commentary that isn't written down in this post.

https://www.youtube.com/watch?v=d2T2OPSCBi4



Criticizing "Against the singularity hypothesis"

I proposed a game on the Effective Altruism forum where people submit texts to me and I find three errors. I wrote this for that game. This is a duplicate of my EA post. I criticize Against the singularity hypothesis by David Thorstad.

Introduction

FYI, I disagree with the singularity hypothesis, but primarily due to epistemology, which isn't even discussed in this article.

Error One

As low-hanging fruit is plucked, good ideas become harder to find (Bloom et al. 2020; Kortum 1997; Gordon 2016). Research productivity, understood as the amount of research input needed to produce a fixed output, falls with each subsequent discovery.

By way of illustration, the number of FDA-approved drugs per billion dollars of inflation-adjusted research expenditure decreased from over forty drugs per billion in the 1950s to less than one drug per billion in the 2000s (Scannell et al. 2012). And in the twenty years from 1971 to 1991, inflation-adjusted agricultural research expenditures in developed nations rose by over sixty percent, yet growth in crop yields per acre dropped by fifteen percent (Alston et al. 2000). The problem was not that researchers became lazy, poorly educated or overpaid. It was rather that good ideas became harder to find.

There are many other reasons for drug research progress to slow down. The healthcare industry, as well as science in general (see e.g. the replication crisis), are really broken, and some of the problems are newer. Also maybe they're putting a bunch of work into updates to existing drugs instead of new drugs.

Similarly, decreasing crop yield growth (in other words, yields are still increasing, but by lower percentages) could have many other causes. Also, decreasing crop yield growth is a different thing than a decrease in the number of new agricultural ideas that researchers come up with – it's not even the right quantity to measure to make his point. It's a proxy for the actual thing his argument relies on, and he makes no attempt to consider how good or bad a proxy it is, and I can easily think of some reasons it wouldn't be a very good proxy.

The comment about researchers not becoming lazy, poorly educated or overpaid is an unargued assertion.

So these are bad arguments which shouldn't convince us of the author's conclusion.

Error Two

Could the problem of improving artificial agents be an exception to the rule of diminishing research productivity? That is unlikely.

Asserting something is unlikely isn't an argument. His followup is to bring up Moore's law potentially ending, not to give an actual argument.

As with the drug and agricultural research, his points are bad because singularity claims are not based on extrapolating patterns from current data, but rather on conceptual reasoning. He didn't even claim his opponents were doing that in the section formulating their position, and my pre-existing understanding of their views is that they use conceptual arguments, not extrapolation from existing data/patterns (there is no existing data about AGI to extrapolate from, so they use speculative arguments, which is OK).

Error Three

one cause of diminishing research productivity is the difficulty of maintaining large knowledge stocks (Jones 2009), a problem at which artificial agents excel.

You can't just assume that AGIs will be anything like current software, including "AI" software like AlphaGo. You have to consider what an AGI would be like before you can even know whether it'd be especially good at this or not. If the goal with AGI is in some sense to make a machine with human-like thinking, then maybe it will end up with some of the weaknesses of humans too. You can't just assume it won't. You have to envision what an AGI would be like, or what many different things it might be like that would work (narrow it down to various categories and rule some things out), before you consider the traits it'd have.

Put another way, in MIRI's conception, wouldn't mind design space include both AGIs that are good at this particular category of task and AGIs that are bad at it?

Error Four

It is an unalterable mathematical fact that an algorithm can run no more quickly than its slowest component. If nine-tenths of the component processes can be sped up, but the remaining processes cannot, then the algorithm can only be made ten times faster. This creates the opportunity for bottlenecks unless every single process can be sped up at once.

This is wrong due to "at once" at the end. It'd be fine without that. You could speed up 9 out of 10 parts, then speed up the 10th part a minute later. You don't have to speed everything up at once. I know it's just two extra words but it doesn't make sense when you stop and think about it, so I think it's important. How did it seem to make sense to the author? What was he thinking? What process created this error? This is the kind of error that's good to post mortem. (It doesn't look like any sort of typo; I think it's actually based on some sort of thought process about the topic.)
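To illustrate the arithmetic behind the quoted claim, and why "at once" adds nothing to it, here's a minimal sketch of my own; the ten equal parts and the assumption that sped-up parts take roughly zero time are illustrative assumptions, not from the paper:

# Minimal sketch: Amdahl's-law-style speedup arithmetic.
def speedup(part_times, sped_up_indices):
    # Overall speedup when the listed parts are made to take ~zero time.
    original = sum(part_times)
    remaining = sum(t for i, t in enumerate(part_times) if i not in sped_up_indices)
    return original / remaining

parts = [1.0] * 10  # ten equal components; nine can be sped up, one can't

# Speeding up the nine parts one at a time reaches the same 10x limit as
# speeding them all up simultaneously; nothing requires doing it "at once".
for k in range(1, 10):
    print(k, "parts sped up ->", round(speedup(parts, set(range(k))), 2))
# prints 1.11, 1.25, ..., 5.0, 10.0

Either way, with one unimprovable component that's a tenth of the runtime, the ceiling is a 10x speedup, whether the other improvements happen together or one after another.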

Error Five

Section 3.2 doesn't even try to consider any specific type of research an AGI would be doing, claim that good ideas would get harder to find for that research, and explain how that would slow down singularity-relevant progress.

Similarly, section 3.3 doesn't try to propose a specific bottleneck and explain how it'd get in the way of the singularity. He does bring up one specific type of algorithm – search – but doesn't say why search speed would be a constraint on reaching the singularity. Whether exponential search speed progress is needed depends on specific models of how the hardware and/or software are improving and what they're doing.

There's also a general lack of acknowledgement of, or engagement with, counter-arguments that I can easily imagine pro-singularity people making (e.g. responding to the good ideas getting harder to find point by saying some stuff about mind design space containing plenty of minds that are powerful enough for a singularity with a discontinuity, even if progress slows down later as it approaches some fundamental limits). Similarly, maybe there is something super powerful in mind design space that doesn't rely on super fast search. Whether there is, or not, seems hard to analyze, but this paper doesn't even try. (The way I'd approach it myself is indirectly via epistemology first.)

Error Six

Section 2 mixes "Formulating the singularity hypothesis" (the section title) with other activities. This is confusing and biasing, because we don't get to read about what the singularity hypothesis is without the author's objections and dislikes mixed in. The section is also vague on some key points (mentioned in my screen recording), such as what an order of magnitude of intelligence is.

Examples:

Sustained exponential growth is a very strong growth assumption

Here he's mixing explaining the other side's view with setting it up to attack it (as requiring a super high evidential burden due to such strong claims). He's not talking from the other side's perspective, trying to present it how they would present it (positively); he's instead focusing on highlighting traits he dislikes.

A number of commentators have raised doubts about the cogency of the concept of general intelligence (Nunn 2012; Prinz 2012), or the likelihood of artificial systems acquiring meaningful levels of general intelligence (Dreyfus 2012; Lucas 1964; Plotnitsky 2012). I have some sympathy for these worries.[4]

This isn't formulating the singularity hypothesis. It's about ways of opposing it.

These are strong claims, and they should require a correspondingly strong argument to ground them. In Section 3, I give five reasons to be skeptical of the singularity hypothesis’ growth claims.

Again this doesn't fit the section it's in.

Padding

Section 3 opens with some restatements of material from section 2, some of which was also in the introduction. And look at this repetitiveness (my bolds):

Near the bottom of page 7 begins section 3.2:

3.2 Good ideas become harder to find

Below that we read:

As low-hanging fruit is plucked, good ideas become harder to find

Page 8 near the top:

It was rather that good ideas became harder to find.

Later in that paragraph:

As good ideas became harder to find

Also, page 11:

as time goes on ideas for further improvement will become harder to find.

Page 17:

As time goes on ideas for further improvement will become harder to find.

Amount Read

I read to the end of section 3.3 then briefly skimmed the rest.

Screen Recording

I recorded my screen and made verbal comments while writing this:

https://www.youtube.com/watch?v=T1Wu-086frA


Update: Thorstad replied and I wrote a followup post in response: Credentialed Intellectuals Support Misquoting


Elliot Temple | Permalink | Messages (0)

Organized EA Cause Evaluation

I wrote this for the Effective Altruism forum. Link.


Suppose I have a cause I’m passionate about. For example, we’ll use fluoridated water. It’s poison. It lowers IQs. Changing this one thing is easy (just stop purposefully doing it) and has negative cost (it costs money to fluoridate water; stopping saves money) and huge benefits. That gives it a better cost to benefit ratio than any of EA’s current causes. I come to EA and suggest that fluoridated water should be the highest priority.

Is there any organized process by which EA can evaluate these claims, compare them to other causes, and reach a rational conclusion about resource allocation to this cause? I fear there isn’t.

Do I just try to write some posts rallying people to the cause? And then maybe I’m right but bad at rallying people. Or maybe I’m wrong but good at rallying people. Or maybe I’m right and pretty good at rallying people, but someone else with a somewhat worse cause is somewhat better at rallying. I’m concerned that my ability to rally people to my cause is largely independent of the truth of my cause. Marketing isn’t truth seeking. Energy to keep writing more about the issue, when I already made points (that are compelling if true, and which no one has given a refutation of), is different than truth seeking.

Is there any reasonable on-boarding process to guide me to know how to get my cause taken seriously with specific, actionable steps? I don’t think so.

Is there any list of all evaluated causes, their importance, and the reasons? With ways to update the list based on new arguments or information, and ways to add new causes to the list? I don’t think so. How can I even know how important my cause is compared to others? There’s no reasonable, guided process that EA offers to let me figure that out.

Comparing causes often depends on some controversial ideas, so a good list would take that into account and give alternative cause evaluations based on different premises, or at least clearly specify the controversial premises it uses. Ways those premises can be productively debated are also important.

Note: I’m primarily interested in processes which are available to anyone (you don’t have to be famous or popular first, or have certain credentials given to you by a high status authority) and which can be done in one’s free time without having to get an EA-related job. (Let’s suppose I have 20 hours a week available to volunteer for working on this stuff, but I don’t want to change careers. I think that should be good enough.) Being popular, having credentials, or working at a specific job are all separate issues from being correct.

Also, based on a forum search, stopping water fluoridation has never been proposed as an EA cause, so hopefully it’s a fairly neutral example. But this appears to indicate a failure to do a broad, organized survey of possible causes before spending millions of dollars on some current causes, which seems bad. (It could also be related to the lack of any good way to search EA-related information that isn’t on the forum.)

Do others think these meta issues about EA’s organization (or lack thereof) are important? If not, why? Isn’t it risky and inefficient to lack well-designed processes for doing commonly-needed, important tasks? If you just have a bunch of people doing things their own way, and then a bunch of other people reaching their own evaluations of the subset of information they looked at, that is going to result in a social hierarchy determining outcomes.


Elliot Temple | Permalink | Messages (0)

Misquoting and Scholarship Norms at EA

Link to the EA version of this post.


EA doesn’t have strong norms against misquoting or some other types of errors related to having high intellectual standards (which I claim are important to truth seeking). As I explained, misquoting is especially bad: “Misquoting puts words in someone else’s mouth without their consent. It takes away their choice of what words to say or not say, just like deadnaming takes away their choice of what name to use.”

Despite linking to lizka clarifying the lack of anti-misquoting norms, I got this feedback on my anti-misquoting article:

One of your post spent 22 minutes to say that people shouldn't misquote. It's a rather obvious conclusion that can be exposed in 3 minutes top. I think some people read that as a rant.

So let me try to explain that EA really doesn’t have strong anti-misquoting norms or strong norms for high intellectual standards and scholarship quality. What would such norms look like?

Suppose I posted a single misquote in Rationality: From AI to Zombies. Suppose it was one word added or omitted and it didn’t change the meaning much. Would people care? I doubt it. How many people would want to check other quotes in the book for errors? Few, maybe zero. How many would want to post mortem the cause of the error? Few, maybe zero. So there is no strong norm against misquotes. Am I wrong? Does anyone really think that finding a single misquote in a book this community likes would result in people making large updates to their views (even if the misquote is merely inaccurate, but doesn’t involve a large change in meaning)?

Similarly, I’m confident that there’s no strong norm against incorrect citations. E.g. suppose in RAZ I found one cite to a study with terrible methodology or glaring factual errors. Or suppose I found one cite to a study that says something different than what it’s cited for (e.g. it’s cited as saying 60% X but the study itself actually says 45% X). I don’t think anything significant would change based on pointing out that one cite error. RAZ’s reputation would not go down substantially. There’d be no major investigation into what process created this error and what other errors the same process would create. It probably wouldn’t even spark debates. It certainly wouldn’t result in a community letter to EY, signed by thousands of people with over a million total karma, asking for an explanation. The community simply tolerates such things. This is an example of intellectual standards I consider too low and believe are lowering EA’s effectiveness a large amount.

Even most of RAZ’s biggest fans don’t really expect the book to be correct. They only expect it to be mostly correct. If I find an error, and they agree it’s an error, they’ll still think it’s a great book. Their fandom is immune to correction via pointing out one error.

(Just deciding “RAZ sucks” due to one error would be wrong too. The right reaction is more complicated and nuanced. For some information on the topic, see my Resolving Conflicting Ideas, which links to other articles including We Can Always Act on Non-Criticized Ideas.)

What about two errors? I don’t think that would work either. What about three errors? Four? Five? Nah. What exactly would work?

What about 500 errors? If they’re all basically indisputable, then I’ll be called picky and pedantic, and people will doubt that other books would stand up to a similar level of scrutiny either, and people will say that the major conclusions are still valid.

If the 500 errors include more substantive claims that challenge the book’s themes and concepts, then they’ll be more debatable than factual errors, misquotes, wrong cites, simple, localized logic errors, grammar errors, etc. So that won’t work either. People will disagree with my criticism. And then they won’t debate their disagreement persistently and productively until we reach a conclusion. Some people won’t say anything at all. Others will comment 1-5 times expressing their disagreement. Maybe a handful of people will discuss more, and maybe even change their minds, but the community in general won’t change their minds just because a few people did.

There are errors that people will agree are in fact errors, but will dismiss as unimportant. And there are errors which people will deny are errors. So what would actually change many people’s minds?

Becoming a high status, influential thought leader might work. But social climbing is a very different process than truth seeking.

If people liked me (or whoever the critic was) and liked some alternative I was offering, they’d be more willing to change their minds. Anyone who wanted to say “Yeah, Critical Fallibilism is great. RAZ is outdated and flawed.” would be receptive to the errors I pointed out. People with the right biases or agendas would like the criticisms because the criticisms help them with their goals. Other people would interpret the criticism as fighting against their goals, not helping – e.g. AI alignment researchers basing a lot of their work on premises from RAZ would tend to be hostile to the criticism instead of grateful for the opportunity to stop using incorrect premises and thereby stop wasting their careers.

I’m confident that I could look through RAZ and find an error. If I thought it’d actually be useful, I’d do that. I did recently find two errors in a different book favored by the LW and EA communities (and I wasn’t actually looking for errors, so I expect there are many others – actually there were some other errors I noticed but those were more debatable). The first error I found was a misquote. I consider it basically inexcusable. It’s from a blog post, so it would be copy/pasted, not typed in, so why would there be any changes? That’s a clear-cut error which is really hard to deny is an error. I found a second, related error which is worse but requires more skill and judgment to evaluate. The book has a bunch of statements summarizing some events and issues. The misquote is about that stuff. And, setting aside the misquote, the summary is wrong too. It gives an inaccurate portrayal of what happened. It’s biased. The misquote error is minor in some sense: it’s not particularly misleading. The biased summary of events, on the other hand, is significantly wrong and misleading.

I can imagine writing two different posts about it. One tries to point out how the summary is misleading in a point-by-point way breaking it down into small, simple points that are hard to deny. This post would use quotes from the book, quotes from the source material, and point out specific discrepancies. I think people would find this dry and pedantic, and not care much.

In my other hypothetical post, I would emphasize how wrong and misleading the book’s account is. I’d focus more on the error being important. I’d make less clear-cut claims, so I’d be met with more denials.

So I don’t see what would actually work well.

That’s why I haven’t posted about the book’s problems previously and haven’t named the guilty book here. RAZ is not the book I found these errors in. I used a different example on purpose (and, on the whole, I like RAZ, so it’s easier for me to avoid a conflict with people who like it). I don’t want to name the book without a good plan for how to make my complaints/criticisms productive, because attacking something that people like, without an achievable, productive purpose, will just pointlessly alienate people.


Elliot Temple | Permalink | Messages (0)

Downvotes Are Evidence

I also posted this on the Effective Altruism forum.


Downvotes are evidence. They provide information. They can be interpreted, especially when they aren’t accompanied by arguments or reasons.

Downvotes can mean I struck a nerve. They can provide evidence of what a community is especially irrational about.

They could also mean I’m wrong. But with no arguments and no links or cites to arguments, there’s no way for me to change my mind. If I was posting some idea I thought of recently, I could take the downvotes as a sign that I should think it over more. However, if it’s something I’ve done high-effort thinking about for years, and written tens of thousands of words about, then “reconsider” is not a useful action with no further information. I already considered it as best I know how to.

People can react in different ways to downvotes. If your initial reaction is to stop writing about whatever gets downvotes, that is evidence that you care a lot about social climbing and what other people think of you (possibly more than you value truth seeking). On the other hand, one can think “strong reactions can indicate something important” and write more about whatever got downvoted. Downvotes can be a sign that a topic is important to discuss further.

Downvotes can also be evidence that something is an outlier, which can be a good thing.

Downvoting Misquoting Criticism

One of the things that seems to have struck a nerve with some people, and has gotten me the most downvotes, is criticizing misquoting (examples one and two both got to around -10). I believe the broader issue is my belief that “small” or “pedantic” errors are (sometimes) important, and that raising intellectual standards would make a large overall difference to EA’s correctness and therefore effectiveness.

I’ll clarify this belief more in future posts despite the cold reception and my expectation of getting negative rewards for my efforts. I think it’s important. It’s also clarified a lot in prior writing on my websites.

There are practical issues regarding how to deal with “small” errors in a time-efficient way. I have some answers to those issues but I don’t think they’re the main problem. In other words, I don’t think the situation is that many people want to pay attention to small errors but are limited by time constraints and don’t know practical time-saving solutions. I don’t think it’s a goal they have that is blocked by practicality. I think people like something about being able to ignore “small” or “pedantic” errors, and practicality then serves as a convenient excuse to help hide the actual motivation.

Why do I think there’s any kind of hidden motivation? It’s not just the disinterest in practical solutions to enable raising intellectual standards (which I’ve seen year after year in other communities as well, btw). Nor is it just the downvotes that are broadly not accompanied by explanations or arguments. It’s primarily the chronic ambiguity about whether people already agree with me and think misquotes are obviously bad, on the one hand, or disagree with me and think I’m horribly wrong, on the other hand. Getting a mix of responses including both ~“obviously you’re right and you got a negative reaction because everyone already knows it and doesn’t need to hear it again” and ~“you’re wrong and horrible” is weird and unusual.

People generally seem unwilling to actually clearly state what their misquoting policies/attitudes are, but nevertheless say plenty of things that indicate clear disagreements with me (when they speak about it at all, which they often don’t but sometimes do). And this allows a bunch of other people to think there already are strong anti-misquoting norms, including people who do not actually personally have such a norm. In my experience, this is widespread and EA seems basically the same as most other places about it.

I’m not including examples of misquotes, or ambiguous defenses of misquotes, because I don’t want to make examples of people. If someone wants to claim they’re right and make public statements they stand behind, fine, I can use them as an example. But if someone merely posts on the forum a bit, I don’t think I should interpret that as opting in to being some kind of public intellectual who takes responsibility for what he says, claims what he says is important, and is happy to be quoted and criticized. (People often don’t want to directly admit that they don’t think what they post is important, while also not wanting to claim it’s important. That’s another example of chronic ambiguity that I think is related to irrationality.) If someone says to me “This would convince me if only you had a few examples” I’ll consider how to deal with that, but I don’t expect that reaction (and if you care that much you can find two good examples by reviewing my EA posting history, and many many examples of representative non-EA misquotes on my websites and forum).

Upvoting Downvoted Posts

There’s a pattern on Reddit, which I’ve also observed on EA, where people upvote stuff that’s at negative points which they don’t think deserves to be negative. They wouldn’t upvote it if it had positive votes. You can tell because the upvoting stops when it gets back to neutral karma (actually slightly less on EA due to strong votes – people tend to stop at 1, not at the e.g. 4 karma an EA post might start with).

In a lot of ways I think this is a good norm. Some people are quite discouraged by downvotes and feel bad about being disliked. The lack of reasons to accompany downvotes makes that worse for some types of people (though others would only feel worse if they were told reasons). And some downvotes are unwarranted and unreasonable so counteracting those is a reasonable activity.

However, there’s a downside to upvoting stuff that’s undeservedly downvoted. It hides evidence. It makes it harder for people to know what kinds of things get how many downvotes. Downvotes can actually be important evidence about the community. Reddit is larger and many subreddits have issues with many new posts tending to get a few downvotes that do not reflect the community and might even come from bots. I’m not aware of EA having this problem. It’s stuff that is downvoted more than normal which provides useful evidence. On EA, a lot of posts get no votes, or just a few upvotes. I believe getting to -10 quickly isn’t normal and is useful evidence of something, rather than something that should just be ignored as meaningless. (Also, it only happens to a minority of my posts. The majority get upvotes, not downvotes.)


Elliot Temple | Permalink | Messages (0)

EA Misquoting Discussion Summary

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


Let me summarize events from my perspective.

I read a book EA likes and found a misquote in it (and other problems).

Someone misquoted me twice in EA forum discussion. They seemed to think that was OK, not a big deal, etc. And no one in the audience took my side or said anything negative about misquotes.

The person who misquoted me (as well as anyone reading) didn’t want to talk about it or debate the matter.

In an open questions thread, I asked about EA’s norms regarding misquotes.

In response, someone misquoted the EA norms to me, which is pretty ironic and silly.

Their claim about EA norms was basically that misquotes aren’t important.

When I pointed out that they had misquoted, they didn’t really seem to care or think that was bad. Again, there were no signs the audience thought misquoting was bad, either.

Lizka, who was the person being misquoted since she wrote the EA norms document, commented on the matter. Lizka’s comment communicated:

  • She agrees with me that the norms were misquoted.
  • But she didn’t really mind or care.
  • EA has no strong norm against misquoting.
  • The attitude to misquotes is basically like typos: mistakes and accidents happen and we should be tolerant and forgiving about that.

Again, no one wanted to talk with me about the matter or debate it.

I wrote an article explaining that misquoting is bad. I compared misquoting to deadnaming because the misquoted norm was actually about deadnaming, and I thought that read as a whole it’s actually a good norm, and the same norm should be used for misquoting.

The EA norm on deadnaming is basically: first, don’t do it, and second, if it’s a genuine accident, that’s alright, but don’t do it again.

Whereas EA’s current misquoting norm is more like: misquotes are technically errors, so that’s bad, but no one particularly cares.

Misquotes are actually like deadnaming. Deadnaming involves exercising control over someone else’s name without their consent, when their name should be within their control. Misquotes involve exercising control over someone else’s words/speech without their consent, when their words/speech should be within their control. Misquotes and deadnaming both violate the personal boundaries of a person and violate consent.

Misquotes are also bad for reasons of scholarship, accuracy and truth seeking. I believe the general attitude of not caring about “small” errors is a big mistake.

Misquotes are accepted at EA due to the combination of not recognizing how they violate consent and victimize someone (like deadnaming), and having a culture tolerant of “small” errors and imprecision.

So, I disagree, and I have two main reasons. And people are not persuaded and don’t want to debate or give any counter-arguments. Which gets into one of the other main topics I’ve posted about at EA, which is debating norms and methodology.

All this so far is … fine. Whatever. The weird part comes next.

The main feedback I’ve gotten regarding misquoting and deadnaming is not disagreement. No one has clearly admitted to disagreeing with me and e.g. claimed that misquoting is not like deadnaming.

Instead, I’ve been told that I’m getting downvoted because people agree with me too much: they think it’s so obvious and uncontroversial that it’s a waste of time to write about.

That is not what’s happening and it’s a very bizarre excuse. People are so eager to avoid a debate that they deny disagreeing with me, even when they could tell from the title that they do disagree with me. None of them has actually claimed that they do think misquoting is like deadnaming, and should be reacted to similarly.

Partly, people are anti-misquoting in some weaker way than I am, just like they are anti-typos but not very strongly. The nuance of “I am more against misquoting than you are, so we disagree” seems too complex for some people. They want to identify as anti-misquoting, so they don’t want to take the pro-misquoting side of a debate. The actual issue is how bad misquoting is (or we could be more specific and specify 20 ways misquoting might be bad, 15 of which I believe, and only 5 of which they believe, and then debate the other 10).

I wrote a second article trying to clarify to people that they disagree with me. I gave some guided thinking so they could see it for themselves. E.g. if I pointed out a misquote in the sequences, would you care? Would it matter much to you? Would you become concerned and start investigating other quotes in the book? I think we all know that if I found a single misquote in that book, it would result in no substantive changes. I think it should; you don’t; we disagree.

After being downvoted without explanation on the second article about misquoting, I wrote an article about downvotes being evidence, in which I considered different interpretations of downvotes and different possible reactions to them. This prompted the most mixed voting I’d gotten yet and a response saying people were probably just downvoting me because they didn’t see the point of my anti-misquoting articles, since they already agree with me. That guy refused to actually say he agrees with me himself, saying basically (only when pressed) that he’s unsure and neutral and not very interested in thinking or talking about it. If you think it’s a low priority unimportant issue, then you disagree with me, since I think it’s very important. Does he also think deadnaming is low priority and unimportant? If not, then he clearly disagrees with me.

It’s so weird for people who disagree with me to insist they agree with me. And Lizka already clarified that she disagrees with me, and made a statement about what the EA norms are, and y’all are still telling me that the community in general agrees with me!?

Guys, I’ve struck a nerve. I got downvotes because people didn’t like being challenged in this way, and I’m getting very bizarre excuses to avoid debate because this is a sensitive issue that people don’t want to think or speak clearly about. So it’s important for an additional reason: because people are biased and irrational about it.

My opinions on this matter predate EA (though the specific comparison to deadnaming is a new way of expressing an old point).

I suspect one reason the deadnaming comparison didn’t go over well is that most EAers don’t care much about deadnaming either (and don’t have nuanced, thoughtful opinions about it), although they aren’t going to admit that.

Most deadnaming and most misquoting is not an innocent accident. I think people know that with deadnaming, but deny it with misquoting. But explain to me: how did the wording change in a quote that you could have copy/pasted? That’s generally not an innocent accident. How did you leave out the start of the paragraph and take a quote out of context? That was not a random accident. How did you type in a quote from a paper copy and then forget to triple check it for typos? That is negligence at best, not an accident.

Negligently deadnaming people is not OK. Don’t do it. Negligently misquoting is bad too for more reasons: violates consent and harms scholarship.

This is all related to more complex and more important issues, but if I can’t persuade anyone of this smaller initial point that should be easy, I don’t think trying to say more complex stuff is going to work. If people won’t debate a relatively small, isolated issue, they aren’t going to successfully debate a complex topic involving dozens of issues of similar or higher difficulty as well as many books. One purpose of talking about misquoting is that it’s a test issue to see how people handle debate and criticism, plus it’s an example of one of the main broader themes I’d like to talk about, which is the value of raising intellectual standards. If you can’t win with the small test issue that shouldn’t be very hard, you’ve gotta figure out what is going on. And if the responses to the small test issue are really bizarre and involve things like persistently denying disagreeing while obviously disagreeing … you really gotta figure out what is going on instead of ignoring that evidence. So I’ve written about it again (this post).

If you want to find details of this stuff on the EA forum and see exactly what people said to me, besides what is linked in my two articles about misquoting that I linked above, you can also go to my EA user profile and look through my post and comment history there.


Elliot Temple | Permalink | Messages (0)

Meta Criticism

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


Meta criticism is potentially more powerful than direct criticism of specific flaws. Meta criticism can talk about methodologies or broad patterns. It’s a way of taking a step back, away from all the details, to look critically at a bigger picture.

Meta criticism isn’t very common. Why? It’s less conventional, normal, mainstream or popular. That makes it harder to get a positive reception for it. It’s less well understood or respected. Also, meta criticism tends to be more abstract, more complicated, harder to get right, and harder to act on. In return for those downsides, it can be more powerful.

On average, or as some kind of general trend, is the cost-to-benefit ratio for meta criticism better or worse than for regular criticism? I don’t really know. I think neither one has a really clear advantage and we should try some of both. Plus, to some extent, they do different things, so again it makes sense to use both.

I think there’s an under-exploited area with high value, which is some of the most simple, basic meta criticisms. These are easier to understand and deal with, yet can still be powerful. I think these initial meta criticisms tend to be more important than concrete criticisms. Also, meta criticisms are more generic so they can be re-used between different discussions or different topics more, and that’s especially true for the more basic meta criticisms that you would start with (whereas more advanced meta criticism might depend on the details of a topic more).

So let’s look at examples of introductory meta criticisms which I think have a great cost-to-benefit ratio (given that people aren’t hostile to them, which is a problem sometimes). These examples will help give a better sense of what meta criticisms are in addition to being useful issues to consider.

Do you act based on methods?

“You” could be a group or individual. If the answer is “no” that’s a major problem. Let’s assume it’s “yes”.

Are the methods written down?

Again, “no” is a major problem. Assuming “yes”:

Do the methods contain explicit features designed to reduce bias?

Again, “no” is a major problem. Examples of anti-bias features include transparency, accountability, anti-bias training or ways of reducing the importance of social status in decision making (such as some decisions being made in random or blinded ways).

Many individuals and organizations in the world have already failed within the first three questions. Others could technically say “yes” but their anti-bias features aren’t very good (e.g. I’m sure every large non-crypto bank has some written methods that employees use for some tasks which contain some anti-bias features of some sort despite not really even aiming at rationality).

But, broadly, those with “no” answers or poor answers don’t want to, and don’t, discuss this and try to improve. Why? There are many reasons but here’s a particularly relevant one: They lack methods of talking about it with transparency, accountability and other anti-bias features. The lack of rational discussion methodology protects all their other errors like lack of methodology for whatever it is that they do.

One of the major complicating factors is how groups work. Some groups have clear leadership and organization structures, with a hierarchical power structure which assigns responsibilities. In that case, it’s relatively easy to blame leadership for big picture problems like lack of rational methods. But other groups are more of a loose association without a clear leadership structure that takes responsibility for considering or addressing criticism, setting policies, etc. Not all groups have anyone who could easily decide on some methods and get others to use them. EA and LW are examples of groups with significant voids in leadership, responsibility and accountability. They claim to have a bunch of ideas, but it’s hard to criticize them because of the lack of official position statements by them (or when there is something kinda official, like The Sequences, the people willing to talk on the forum often disagree with or are ignorant of a lot of that official position – there’s no way to talk with a person who advocates the official position as a whole and will take responsibility for addressing errors in it, or who has the power to fix it). There’s no reasonable, reliable way to ask EA a question like “Do you have a written methodology for rational debate?” and get an official answer that anyone will take responsibility for.

So one of the more basic, introductory areas for meta criticism/questioning is to ask about rational methodology. And a second is to ask about leadership, responsibility, and organization structure. If there is an error, who can be told who will fix it, and how does one get their attention? If some clarifying questions are needed before sharing the error, how does one get them answered? If the answers are things like “personally contact the right people and become familiar with the high status community members” that is a really problematic answer. There should be publicly accessible and documented options which can be used by people who don’t have social status within the community. Social status is a biasing, irrational approach which blocks valid criticism from leading to change. Also, even if the situation is better than that, many people won’t know it’s better, and won’t try, unless you publicly tell them it’s better in a convincing way. To be convincing, you have to offer specific policies with guarantees and transparency/accountability, rather than saying a variant of “trust us”.

Guarantees can be expensive, especially when they’re open to the general public. There are costs/downsides here. Even non-guaranteed options, such as a suggestion box for unsolicited advice, even if you never reply to anything, have a cost. If you promised to reply to every suggestion, that would be too expensive. Guarantees need to have conditions placed on them. E.g. “If you make a suggestion and read the following ten books and pay $100, then we guarantee a response (limit: one response per person per year).” That policy would result in a smaller burden than responding to all suggestions, but it still offers a guarantee. Would the burden still be too high? It depends how popular you are. Is a response a very good guarantee? Not really. You might read the ten books, pay the money, and get the response “No.” or “Interesting idea; we’ll consider it.” and nothing more. That could be unsatisfying. Some additional guarantees about the nature of the response could help. There is a ton of room to brainstorm how to do these things well. These kinds of ideas are very under-explored. An example stronger guarantee would be to respond with either a decisive refutation or else to put together an exploratory committee to investigate taking the suggestion. Such committees have a poor reputation and could be replaced with some other method of escalating the idea to get more consideration.

Guarantees should focus on objective criteria. For example, saying you’ll respond to all “good suggestions” would be a poor guarantee to offer. How can someone predictably know in advance whether their suggestion will meet that condition or not? Design policies to not let decision makers use arbitrary judgment which could easily be biased or wrong. For example, you might judge “good” suggestions using the “I’ll know it when I see it” method. That would be very arbitrary and a bad approach. If you say “good” means “novel, interesting, substantive and high value if correct” that is a little better, but still very bad, because a decision maker can arbitrarily judge whatever he wants as bad and there’s no effective way to hold him accountable, determine his judgment was an error, get that error corrected, etc. There’s also poor predictability for people considering making suggestions.

From what I can tell, my main disagreement with EA is I think EA should have written, rational debate methods, and EA doesn’t think so. I don’t know how to make effective progress on resolving that disagreement because no one from EA will follow any specific rational debate methods. Also EA offers no alternative solution, that I know of, to the same problem that rational debate methods are meant to solve. Without rational debate methods (or an effective alternative), no other disagreements really matter because there’s nothing good to be done about them.


Elliot Temple | Permalink | Messages (0)

EA and Paths Forward

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


Suppose EA is making an important error. John knows a correction and would like to help. What can John do?

Whatever the answer is, this is something EA should put thought into. They should advertise/communicate the best process for John to use, make it easy to understand and use, and intentionally design it with some beneficial features. EA should also consider having several processes so there are backups in case one fails.

Failure is a realistic possibility here. John might try to share a correction but be ignored. People might think John is wrong even though he’s right. People might think John’s comment is unimportant even though it’s actually important. There are lots of ways for people to reject or ignore a good idea. Suppose that happens. Now EA has made two mistakes which John knows are mistakes and would like to correct. There’s the first mistake, whatever it was, and now also this second mistake of not being receptive to the correction of the first mistake.

How can John get the second mistake corrected? There should be some kind of escalation process for when the initial mistake correction process fails. There is a risk that this escalation process would be abused. What if John thinks he’s right but actually he’s wrong? If the escalation process is costly in time and effort for EA people, and is used frequently, that would be bad. So the process should exist but should be designed in some kind of conservative way that limits the effort it will cost EA to deal with incorrect corrections. Similarly, the initial process for correcting EA also needs to be designed to limit the burden it places on EA. Limiting the burden increases the failure rate, making a secondary (and perhaps tertiary) error correction option more important to have.

When John believes he has an important correction for EA, and he shares it, and EA initially disagrees, that is a symmetric situation. Each side thinks the other is wrong. (That EA is multiple people, and John also might actually be multiple people, makes things more complex, but without changing some of the key principles.) The rational thing to do with this kind of symmetric dispute is not to say “I think I’m right” and ignore the other side. If you can’t resolve the dispute – if your knowledge is inadequate to conclude that you’re right – then you should be neutral and act accordingly. Or you might think you have crushing arguments which are objectively adequate to resolve the dispute in your favor, and you might even post them publicly, and think John is responding in obviously unreasonable ways. In that case, you might manage to objectively establish some kind of asymmetry. How to objectively establish asymmetries in intellectual disagreements is a hard, important question in epistemology which I don’t think has received appropriate research attention (note: it’s also relevant when there’s a disagreement between two ideas within one person).

Anyway, what can John do? He can write down some criticism and post it on the EA forum. EA has a free, public forum. That is better than many other organizations which don’t facilitate publicly sharing criticism. Many organizations either have no forum or delete critical discussions while making no real attempt at rationality (e.g. Blizzard has forums related to its games, but they aren’t very rational, don’t really try to be, and delete tons of complaints). Does EA ever delete dissent or ban dissenters? As someone who hasn’t already spent many years paying close attention, I don’t know and I don’t know how to find out in a way that I would trust. Many forums claim not to delete dissent but actually do; it’s a common thing to lie about. Making a highly credible claim not to delete or punish dissent is important or else John might not bother trying to share his criticism.

So John can post a criticism on a forum, and then people may or may not read it and may or may not reply. Will anyone with some kind of leadership role at EA read it? Maybe not. This is bad. The naive alternative “guarantee plenty of attention from important people to all criticism” would be even worse. But there are many other possible policy options which are better.

To design a better system, we should consider what might go wrong. How could John’s great, valuable criticism receive a negative reaction on an open forum which is active enough that John gets at least a little attention? And how might things go well? If the initial attention John gets is positive, that will draw some additional attention. If that is positive too, then it will draw more attention. If 100% of the attention John gets results in positive responses, his post will be shared and spread until a large portion of the community sees it including people with power and influence, who will also view the criticism positively (by premise) and so they’ll listen and act. A 75% positive response rate would probably also be good enough to get a similar outcome.

So how might John’s criticism, which we’re hypothetically supposing is true and important, get a more negative reception so that it can’t snowball to get more attention and influence important decision makers?

John might have low social status, and people might judge more based on status than idea quality.

John’s criticism might offend people.

John’s criticism might threaten people in some way, e.g. implying that some of them shouldn’t have the income and prestige (or merely self-esteem) that they currently enjoy.

John’s criticism might be hard to understand. People might get confused. People might lack some prerequisite knowledge and skills needed to engage with it well.

John’s criticism might be very long and hard to get value from just the beginning. People might skim but not see the value that they would see if they read the whole thing in a thoughtful, attentive way. Making it long might be an error by John, but it also might be really hard to shorten and still have a good cost/benefit ratio (it’s valuable enough to justify the length).

John’s criticism might rely on premises that people disagree with. In other words, EA might be wrong about more than one thing. An interconnected set of mistakes can be much harder to explain than a single mistake even if the critic understands the entire set of mistakes. People might reject criticism of X due to their own mistake Y, and criticism of Y due to their own mistake X. A similar thing can happen involving many more ideas in a much more complicated structure so that it’s harder for John to point out what’s going on (even if he knows).

What can be done about all these difficulties? My suggestion, in short, is to develop a rational debate methodology and to hold debates aimed at reaching conclusions about disagreements. The methodology must include features for reducing the role of bias, social status, dishonesty, etc. In particular, it must prevent people from arbitrarily stopping any debates whenever they feel like it (which tends to include shortly before losing, which prevents the debate from being conclusive). The debate methodology must also have features for reducing the cost of debate, and ending low value debates, especially since it won’t allow arbitrarily quitting at any moment. A debate methodology is not a perfect, complete solution to all the problems John may face but it has various merits.

People often assume that rational, conclusive debate is too much work so the cost/benefit ratio on it is poor. This is typically a general opinion they have rather than an evaluation of any specific debate methodology. I think they should reserve judgment until after they review some written debate methodologies. They should look at some actual methods and see how much work they are, and what benefits they offer, before reaching a conclusion about their cost/benefit ratio. If the cost/benefit ratios are poor, people would try to make adjustments to reduce costs and increase benefits before giving up on rational debate.

Can people have rational debate without following any written methodology? Sure that’s possible. But if that worked well for some people and resulted in good cost/benefit ratios, wouldn’t it make sense to take whatever those successful debate participants are doing and write it down as a method? Even if the method had vague parts that’d be better than nothing.

Although under-explored, debate methodologies are not a new idea. E.g. Russell L. Ackoff published one in a book in 1978 (pp. 44-47). That’s unfortunately the only very substantive, promising one I’ve found besides developing one of my own. I bet there are more to be found somewhere in existing literature though. The main reasons I thought Ackoff’s was a valuable proposal were that 1) it was based on following specific steps (in other words, you could make a flowchart out of it); and 2) it aimed at completeness, including using recursion to enable it to always succeed instead of getting stuck. Partial methods are common and easy to find, e.g. “don’t straw man” is a partial debate method, but it’s just suitable for being one little part of an overall method (and it lacks specific methods of detecting straw men, handling them when someone thinks one was done, etc. – it’s more of an aspiration than specific actions to achieve that aspiration).

A downside of Ackoff’s method is that it lacks stopping conditions besides success, so it could take an unlimited amount of effort. I think unilateral stopping conditions are one of the key issues for a good debate method: they need to exist (to prevent abuse by unreasonable debate partners who don’t agree to end the debate) but be designed to prevent abuse (by e.g. people quitting debates when they’re losing and quitting in a way designed to obscure what happened). I developed impasse chains as a debate stopping condition which takes a fairly small, bounded amount of effort to end debates unilaterally but adds significant transparency about how and why the debate is ending. Impasse chains only work when the further debate is providing low value, but that’s the only problematic case – otherwise you can either continue or say you want to stop and give a reason (which the other person will consent to, or if they don’t and you think they’re being unreasonable, now you’ve got an impasse to raise). Impasse chains are in the ballpark of “to end a debate, you must either mutually agree or else go through some required post-mortem steps” plus they enable multiple chances at problem solving to fix whatever is broken about the debate. This strikes me as one of the most obvious genres of debate stopping conditions to try, yet I think my proposal is novel. I think that says something really important about the world and its hostility to rational debate methodology. (I don’t think it’s mere disinterest or ignorance; if it were, the moment I suggested rational debate methods and said why they were important a lot of people would become excited and want to pursue the matter; but that hasn’t happened.)

Another important and related issue is how you can write, or design and organize a community or movement, so that it’s easier for people to learn and debate with your ideas, and also easier to avoid low value or repetitive discussion. An example design is an FAQ to help reduce repetition. A less typical design would be creating (and sharing and keeping updated) a debate tree document organizing and summarizing the key arguments in the entire field you care about.
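As a rough illustration of the debate tree idea, here’s a minimal sketch of my own; the node fields and status labels are my assumptions about what such a document might track, not an established format:

from dataclasses import dataclass, field

@dataclass
class DebateNode:
    claim: str                 # a claim or argument, stated briefly
    status: str = "open"       # e.g. "open", "refuted", "unanswered"
    responses: list = field(default_factory=list)  # counter-arguments or replies

def print_tree(node, depth=0):
    # Indentation shows which argument answers which.
    print("  " * depth + "[" + node.status + "] " + node.claim)
    for child in node.responses:
        print_tree(child, depth + 1)

# Tiny example tree: a top-level claim, an objection, and a rebuttal.
root = DebateNode("Position X is correct")
objection = DebateNode("Argument A refutes position X")
objection.responses.append(DebateNode("Argument A relies on premise P, which is mistaken", "refuted"))
root.responses.append(objection)
print_tree(root)

The structure itself is simple; the value would come from a community actually maintaining such a document and linking to it during debates instead of repeating the same arguments.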


Elliot Temple | Permalink | Messages (0)