Friendliness or Precision

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing.


In a debate, if you’re unfriendly and you make a lot of little mistakes, you should expect the mistakes to (on average) be biased for your side and against their side. In general, making many small, biased mistakes ruins debates dealing with complex or subtle issues. It’s too hard to fix them all, especially considering you’re the guy who made them (if you had the skill to fix them all, you could have used that same skill to avoid making some of them).

In other words, if you dislike someone, being extremely careful, rigorous and accurate with your reasoning provides a defense against bias. Without that defense, you don’t have much of a chance.

If you have a positive attitude and are happy to hear about their perspective, that helps prevent being biased against them. If you have really high intellectual standards and avoid making small mistakes, that helps prevent bias. If you have neither of those things, conversation doesn’t work well.



Hard and Soft Rationality Policies

I have two main rationality policies that are written down:

  1. Debate Policy
  2. Paths Forward Policy

I have many other smaller policies that are written down somewhere in some form, like not misquoting and giving direct answers to direct questions (e.g. say "yes" or "no" first when answering a yes or no question, then write extra stuff if you want, but don't skip the direct answer).

A policy I recognized the other day as worth writing down is my debate policy sharing policy. I've had this policy for a long time; it's important, but it isn't written in my debate policy itself.

If someone seems to want to debate me, but they don't invoke my debate policy, then I should link them to the debate policy so they have the option to use it. I shouldn't get out of the debate based on them not finding my debate policy.

In practice, I link the policy to a lot of people who I doubt want to debate me. I like sharing it. That's part of the point. It’s useful to me. It helps me deal with some situations in an easy way. I get in situations where I want to say/explain something, but writing it out every time would be too much work. Since some of the same things come up over and over, I can write them once and then share links instead of rewriting the same points. My debate policy says some of the things I frequently want to tell people, and linking it lets me repeat those things with very low effort.

One can imagine someone who put up a debate policy and then didn't mention it to critics who didn't ask for a debate in the right words. One can imagine someone who likes having the policy so they can claim they're rational, but they'd prefer to minimize actually using it. That would be problematic. I wrote my debate policy conditions so that if someone actually meets them, I'd like to debate. I don't dread that or want to avoid it. If you have a debate policy but hope people don't use it, then you have a problem to solve.

If I'm going to ignore a question or criticism from someone I don't know, then I want to link my policy so they have a way to fix things if I was wrong to ignore them. If I don't link it, and they have no idea it exists, then the results are similar to not having the policy. It doesn't function as a failsafe in that case.

Some policies offer hard guarantees and some are softer. What enforces the softer ones so they mean something, instead of just being violated whenever one feels like it? The answer is generic, hard guarantees, like a debate policy, which can be invoked over doing poorly at any softer guarantee.

For example, I don't have any specific written guarantee for linking people to my debate policy. There's an implicit (and now explicit in this post) soft guarantee that I should make a reasonable effort to share it with people who might want to use it. If I do poorly at that, someone could invoke my debate policy over my behavior. But I don't care much about making a specific, hard guarantee about debate policy link sharing because I have the debate policy itself as a failsafe to keep me honest. I think I do a good job of sharing my debate policy link, and I don't know how to write specific guarantees to make things better. It seems like something where a good faith effort is needed which is hard to define. Which is fine for some issues as long as you also have some clearer, more objective, generic guarantees in case you screw up on the fuzzier stuff.

Besides hard and soft policies, we could also distinguish policies from tools. Like I have a specific method of having a debate where people choose what key points they want to put in the debate tree. I have another debate method where people say two things at a time (it splits the conversation into two halves, one led by each person). I consider those tools. I don't have a policy of always using those things, or using those things in specific conditions. Instead, they're optional ways of debating that I can use when useful. There's a sort of soft policy there: use them when it looks like a good idea. Making a grammar tree is another tool, and I have a related soft policy of using that tool when it seems worthwhile. Having a big toolkit with great intellectual tools, along with actually recognizing situations for using them, is really useful.



A Non-Status-Based Filter

Asking people if they want to have a serious conversation is a way of filtering, or gatekeeping, which isn’t based on social status. Regardless of one’s status, anyone can opt in. This does require making the offer to large groups, randomized people, or something else that avoids social status. If you just make the offer to people you like, then your choice of who to offer conversations to is probably status based.

This might sound like the most ineffective filter ever. People can just say “yes I want to pass your filter” and then they pass. But in practice, I find it effective – the majority of people decline (or don’t reply, or reply about something else) and are filtered out.

You might think it only filters out people who were not going to have a conversation with you anyway. However, people often converse because they’re baited into it, triggered, defensive, caught up in trying to correct someone they think is wrong, etc. Asking people to make a decision about whether they want to be in a conversation can help them realize that they don’t want to. That’s beneficial for both you and them. However, I’ve never had one of them thank me for it.

A reason people dislike this filter is they associate all filters with status and therefore interpret being filtered out as an attack on their status – a claim they are not good enough in some way. But that’s a pretty weird interpretation with this specific filter.

This filter is, in some sense, the nicest filter ever. No one is ever filtered out who doesn’t want to be filtered out. Only this filter and variants of it have that property. Filtering on anything else, besides whether the person wants to opt in or out, would filter out some people who prefer to opt in. However, no one has ever reacted to me like it’s a nice filter. Many reactions are neutral, and some negative, but no one has praised me for being nice.

Useful non-status-based filters are somewhat difficult to come by and really important/valuable. Most filters people use are some sort of proxy for social status. That’s one of the major sources of bias in the world. What people pay attention to – what gets to them through gatekeeping/filtering – is heavily biased towards status. So it’s hard for them to disagree with high status ideas or learn about low status ideas (such as outliers and innovation).



Controversial Activism Is Problematic

EA mostly advocates controversial causes where they know that a lot of people disagree with them. In other words, there exist lots of people who think EA’s cause is bad and wrong.

AI Alignment, animal welfare, global warming, fossil fuels, vaccinations and universal basic income are all examples of controversies. There are many people on each side of the debate. There are also experts on each side of the debate.

Some causes do involve less controversy, such as vitamin A supplements or deworming. I think that, in general, less controversial causes are better independent of whether they’re correct. It’s better when people broadly agree on what to do, and then do it, instead of trying to proceed with stuff while having a lot of opponents who put effort into working against you. I think EA has far too little respect for getting more widespread agreement and cooperation, and for not proceeding with action on issues where a lot of people are taking action on the other side and you have to fight against them. This comes up most with political issues but also applies to e.g. AI Alignment.

I’m not saying it’s never worth it to try to proceed despite large disagreements, and win the fight. But it’s something people should be really skeptical of and try to avoid. It has huge downsides. There’s a large risk that you’re in the wrong and are actually doing something bad. And even if you’re right, the efforts of your opponents will cancel out a lot of your effort. Also, proceeding with action when people disagree basically means you’ve given up on persuasion working any time soon. In general, focusing on persuasion and trying to make better, more reasonable arguments that can bring people together is much better than giving up on talking it out and just trying to win a fight. EA values persuasion and rational debate too little.


Suppose you want to make the world better in the short term without worrying about a bunch of philosophy. You try to understand the situation you’re in, what your goal is, what methods would work well, what is risky, etc. So how can you analyze the big picture in a fairly short way that doesn’t require advanced skill to make sense of?

We can look at the world and see there are lots of disagreements. If we try to do something that lots of people disagree with, we might be doing something bad. It’s risky. Currently in the world, a ton of people on both sides of many controversies are doing this. Both sides have tons of people who feel super confident that they’re right, and who donate or get involved in activism. This is especially common with political issues.

So if you want to make the world better, two major options are:

  • Avoid controversy
  • Help resolve controversy

There could be exceptions, but these are broadly better options than taking sides and fighting in a controversy. If there are exceptions, correctly knowing about them would probably require a bunch of intellectual skill and study, and wouldn’t be compatible with looking for quicker, more accessible wins. A lot of people think their side of their cause is a special exception when it isn’t.

The overall world situation is there are far too many confident people who are far too eager to fight instead of seeking harmony, cooperation, working together, etc. Persuasion is what enables people to be on the same team instead of working against each other.

Causes related to education and sharing information can help resolve controversy, especially when they’re done in a non-partisan, unbiased way. Some education or information sharing efforts are clearly biased to help one side win, rather than focused on being fair and helpful. Stuff about raising awareness often means raising awareness of your key talking points and why your side is right. Propaganda efforts are very different than being neutral and helping enable people to form better opinions.

Another approach to resolving controversy is to look at intellectual thought leaders, and how they debate and engage with each other (or don’t), and try to figure out what’s going wrong there and what can be done about it.

Another approach is to look at how regular people debate each other and talk about issues, and try to understand why people on both sides aren’t being persuaded and try to come up with some ideas to resolve the issue. That means coming to a conclusion that most people on both sides can be happy with.

Another approach is to study philosophy and rationality.

Avoiding controversy is a valid option too. Helping people avoid blindness by getting enough Vitamin A is a pretty safe thing to work on if you want to do something good with a low risk that you’re actually on the wrong side.

A common approach people try to use is to have some experts figure out which sides of which issues are right. Then they feel safe to know they’re right because they trust that some smart people already looked into the matter really well. This approach doesn’t make much sense in the common case that there are experts on both sides who disagree with each other. Why listen to these experts instead of some other experts who say other things? Often people already like a particular conclusion or cause, then find experts who agree with it. The experts offer justification for a pre-existing opinion rather than actually guiding what those people think. Listening to experts can also run into issues related to irrational, biased gatekeeping about who counts as an “expert”.

In general, people are just way too eager to pick a side and fight for it instead of trying to transcend, avoid or fix such fighting. They don’t see cooperation, persuasion or harmony as powerful or realistic enough tools. They are content to try to beat opponents. And they don’t seem very interested in looking at the symmetry: they think they’re right and their cause is worth fighting for, but so do many people on the other side.

If your cause is really better, you should be able to find some sort of asymmetric advantage for your side. If it can give you a quick, clean victory that’s a good sign. If it’s a messy, protracted battle, that’s a sign that your asymmetric advantage wasn’t good enough and you shouldn’t be so confident that you know what you’re talking about.



Rationality Policies Tips

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


Suppose you have some rationality policies, and you always want to and do follow them. You do exactly the same actions you would have without the policies, plus a little bit of reviewing the policies, comparing your actions with the policies to make sure you’re following them, etc.

In this case, are the policies useless and a small waste of time?

No. Policies are valuable for communication. They provide explanations and predictability for other people. Other people will be more convinced that you’re rational and will understand your actions more. You’ll less often be accused of irrationality or bias (or, worse, have people believe you’re being biased without telling you or allowing a rebuttal). People will respect you more and be more interested in interacting with you. It’ll be easier to get donations.

Also, written policies enable critical discussion of the policies. Having the policies lets people make suggestions or share critiques. So that’s another large advantage of the policies even when they make no difference to your actions. People can also learn from your policies and start using some of the same policies for themselves.

It’s also fairly unrealistic that the policies make no difference to your actions. Policies can help you remember and use good ideas more frequently and consistently.

Example Rationality Policies

“When a discussion is hard, start using an idea tree.” This is a somewhat soft, squishy policy. How do you know when a discussion is hard? That’s up to your judgment. There are no objective criteria given. This policy could be improved but it’s still, as written, much better than nothing. It will work sometimes due to your own judgment, and other people who know about your policy can also suggest that a discussion is hard and it’s time to use an idea tree.

A somewhat less vague policy is, “When any participant in a discussion thinks the discussion is hard, start using an idea tree.” In other words, if you think the discussion is tough and a tree would help, you use one. And also, if your discussion partner claims it’s tough, you use one. Now there is a level of external control over your actions. It’s not just up to your judgment.

External control can be triggered by measurements or other parts of reality that are separate from other people (e.g. “if the discussion length exceeds 5000 words, do X”). It can also be triggered by other people making claims or judgments. It’s important to have external control mechanisms so that things aren’t just left up to your judgment. But you need to design external control mechanisms well so that you aren’t controlled to do bad things.
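
To make the mechanism concrete, here's a minimal sketch. The function name is made up, and the 5000-word threshold and participant-claim condition are just the illustrative triggers mentioned above, not an actual written policy:

```python
# Hypothetical sketch of external-control triggers for a policy.
# The 5000-word threshold and the participant-claim condition are example
# conditions from the text above, not anyone's actual written policy.

def should_start_idea_tree(word_count, participants_saying_hard):
    """Return True if one of the policy's external triggers fires."""
    # Trigger 1: a measurement of reality, independent of anyone's judgment.
    if word_count > 5000:
        return True
    # Trigger 2: control by other people's claims, not just my own judgment.
    if participants_saying_hard:
        return True
    return False

print(should_start_idea_tree(6200, []))         # True: length trigger fires
print(should_start_idea_tree(1200, ["Alice"]))  # True: a participant says it's hard
print(should_start_idea_tree(1200, []))         # False: left to my own judgment
```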

It’s also problematic if you dislike or hate something but your policy makes you do it. It’s also problematic to have no policy and just do what your emotions want, which could easily be biased. An alternative would be to set the issue aside temporarily to actively do a lot of introspection and investigation, possibly followed by self-improvement.

A more flexible policy would be, “When any participant in a discussion thinks the discussion is hard, start using at least one option from my Hard Discussion Helpers list.” The list could contain using an idea tree and several other options such as doing grammar analysis or using Goldratt’s evaporating clouds.

More about Policies

If you find your rationality policies annoying to follow, or if they tell you to take inappropriate actions, then the solution is to improve your policy writing skill and your policies. The solution is not to give up on written policies.

If you change policies frequently, you should label them (all of them or specific ones) as being in “beta test mode” or something else to indicate they’re unstable. Otherwise you would mislead people. Note: It’s very bad to post written policies you aren’t going to follow; that’s basically lying to people in an unusually blatant, misleading way. But if you post a policy with a warning that it’s a work in progress, then it’s fine.

One way to dislike a policy is you find it takes extra work to use it. E.g. it could add extra paperwork so that some stuff takes longer to get done. That could be fine and worth it. If it’s a problem, try to figure out lighter weight policies that are more cost effective. You might also judge that some minor things don’t need written policies, and just use written policies for more important and broader issues.

Another way to dislike a policy is you don’t want to do what it says for some other reason than saving time and effort. You actually dislike that action. You think it’s telling you to do something biased, bad or irrational. In that case, there is a disagreement between your ideas about rationality that you used to write the policy and your current ideas. This disagreement is important to investigate. Maybe your abstract principles are confused and impractical. Maybe you’re rationalizing a bias right now and the policy is right. Either way – whether the policy or current idea is wrong – there’s a significant opportunity for improvement. Finding out about clashes between your general principles and the specific actions you want to do is important and those issues are worth fixing. You should have your explicit ideas and intuitions in alignment, as well as your abstract and concrete ideas, your big picture and little picture ideas, your practical and intellectual ideas, etc. All of those types of ideas should agree on what to do. When they don’t, something is going wrong and you should improve your thinking.

Some people don’t value opportunities to improve their thinking because they already have dozens of those opportunities. They’re stuck on a different issue other than finding opportunities, such as the step of actually coming up with solutions. If that’s you, it could explain a resistance to written policies. They would make pre-existing conflicts of ideas within yourself more explicit when you’re trying to ignore a too-long list of your problems. Policies could also make it harder to follow the inexplicit compromises you’re currently using. They’d make it harder to lie to yourself to maintain your self-esteem. If you have that problem, I suggest that it’s worth it to try to improve instead of just kind of giving up on rationality. (Also, if you do want to give up on rationality, or your ideas are such a mess that you don’t want to untangle them, then maybe EA and CF are both the wrong places for you. Most of the world isn’t strongly in favor of rationality and critical discussion, so you’ll have an easier time elsewhere. In other words, if you’ve given up on rationality, then why are you reading this or trying to talk to people like me? Don’t try to have it both ways and engage with this kind of article while also being unwilling to try to untangle your contradictory ideas.)



My Early Effective Altruism Experiences

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.

This post covers some of my earlier time at EA but doesn’t discuss some of the later articles I posted there and the response.


I have several ideas about how to increase EA’s effectiveness by over 20%. But I don’t think they will be accepted immediately. People will find them counter-intuitive, not understand them, disagree with them, etc.

In order to effectively share ideas with EA, I need attention from EA people who will actually read and think about things. I don’t know how to get that and I don’t think EA offers any list of steps that I could follow to get it, nor any policy guarantees like “If you do X, we’ll do Y.” that I could use to bring up ideas. One standard way to get it, which has various other advantages, is to engage in debate (or critical discussion) with someone. However, only one person from EA (who isn’t particularly influential) has been willing to try to have a debate or serious conversation with me. By a serious conversation, I mean one that’s relatively long and high effort, which aims at reaching conclusions.

My most important idea about how to increase EA’s effectiveness is to improve EA’s receptiveness to ideas. This would let anyone better share (potential) good ideas with EA.

EA views itself as open to criticism and it has a public forum. So far, no moderator has censored my criticism, which is better than many other forums! However, no one takes responsibility for answering criticism or considering suggestions. It’s hard to get any disagreements resolved by debate at EA. There’s also no good way to get official or canonical answers to questions in order to establish some kind of standard EA position to target criticism at.

When one posts criticism or suggestions, there are many people who might engage, but no one is responsible for doing it. A common result is that posts do not get engagement. This happens to lots of other people besides me, and it happens to posts which appear to be high effort. There are no clear goalposts to meet in order to get attention for a post.

Attention at the EA forum seems to be allocated in pretty standard social hierarchy ways. The overall result is that EA’s openness to criticism is poor (objectively, but not compared to other groups, many of which are worse).

John the Hypothetical Critic

Suppose John has a criticism or suggestion for EA that would be very important if correct. There are three main scenarios:

  1. John is right and EA is wrong.
  2. EA is right and John is wrong.
  3. EA and John are both wrong.

There should be a reasonable way so that, if John is right, EA can be corrected instead of just ignoring John. But EA doesn’t have effective policies to make that happen. No person or group is responsible for considering that John may be right, engaging with John’s arguments, or attempting to give a rebuttal.

It’s also really good to have a reasonable way so that, if John is wrong and EA is right, John can find out. EA’s knowledge should be accessible so other people can learn what EA knows, why EA is right, etc. This would make EA much more persuasive. EA has many articles which help with this, but if John has an incorrect criticism and is ignored, then he’s probably going to conclude that EA is wrong and won’t debate him, and lower his opinion of EA (plus people reading the exchange might do the same – they might see John give a criticism that isn’t answered and conclude that EA doesn’t really care about addressing criticism).

If John and EA are both wrong, it’d also be a worthwhile topic to devote some effort to, since EA is wrong about something. Discussing John’s incorrect criticism or suggestion could lead to finding out about EA’s error, which could then lead to brainstorming improvements.

I’ve written about these issues before with the term Paths Forward.

Me Visiting EA

The first thing I brought up at EA was asking if EA has any debate methodology or any way I can get a debate with someone. Apparently not. My second question was about whether EA has some alternative to debates, and again the answer seemed to be no. I reiterated the question, pointing out that the “debate methodology” plus “alternative to debate methodology” issues form a complete pair, and if EA has neither that’s bad. This time the title asked how EA was rational, and I think some people got defensive about it, which caused me to get more attention than when my post titles didn’t offend people (the incentives there are really bad). Multiple replies seemed focused on the title, which I grant was vague, rather than the body text which gave details of what I meant.

Anyway, I finally got some sort of answer: EA lacks formal debate or discussion methods but has various informal attempts at rationality. Someone shared a list. I wrote a brief statement of what I thought the answer was and asked for feedback if I got EA’s position wrong. I got it right. I then wrote an essay criticizing EA’s position, including critiques of the listed points.

What happened next? Nothing. No one attempted to engage with my criticism of EA. No one tried to refute any of my arguments. No one tried to defend EA. It’s back to the original problem: EA isn’t set up to address criticism or engage in debate. It just has a bunch of people who might or might not do that in each case. There’s nothing organized and no one takes responsibility for addressing criticism. Also, even if someone did engage with me, and I persuaded them that I was correct, it wouldn’t change EA. It might not even get a second person to take an interest in debating the matter and potentially being persuaded too.

I think I know how to organize rational, effective debates and reach conclusions. The EA community broadly doesn’t want to try doing that my way nor do they have a way they think is better.

If you want to gatekeep your attention, please write down the rules you’re gatekeeping by. What can I do to get past the gatekeeping? If you gatekeep your attention based on your intuition and have no transparency or accountability, that is a recipe for bias and irrationality. (Gatekeeping by hidden rules is related to the rule of man vs. the rule of law, as I wrote about. It’s also related to security through obscurity, a well known mistake in software. Basically, when designing secure systems, you should assume hackers can see your code and know how the system is designed, and it should be secure anyway. If your security relies on keeping some secrets, it’s poor security. If your gatekeeping relies on adversaries not knowing how it works, rather than having a good design, you’re making the security through obscurity error. That sometimes works OK if no one cares about you, but it doesn’t work as a robust approach.)

I understand that time, effort, attention, engagement, debate, etc., are limited resources. I advocate having written policies to help allocate those resources effectively. Individuals and groups can both do this. You can plan ahead about what kinds of things you think it’s good to spend attention on, write down decision making criteria, and share them publicly, instead of just leaving it to chance or bias. Using written rationality policies to control some of these valuable resources would let them be used more effectively instead of haphazardly. The high value of the resources is a reason in favor of, not against, governing their use with explicit policies that are put in writing and then critically analyzed. (I think intuition has value too, despite the higher risk of bias, so allocating e.g. 50% of your resources to conscious policies and 50% to intuition would be fine.)

“It’s not worth the effort” is the standard excuse for not engaging with arguments. But it’s just an excuse. I’m the one who has researched how to do such things efficiently, how to save effort, etc., without giving up on rationality. They aren’t researching how to save effort and designing good, effort-saving methods, nor do they want the methods I developed. People just say stuff isn’t worth the effort when they’re biased against thinking about it, not as a real obstacle that they actually want a solution to. They won’t talk about solutions to it when I offer, nor will they suggest any way of making progress that would work if they’re in the wrong.

LW Short Story

Here’s a short story as an aside (from memory, so may have minor inaccuracies). Years ago I was talking with Less Wrong (LW) about similar issues. LW and EA are similar places. I brought up some Paths Forward stuff. Someone said basically he didn’t have time to read it, or maybe didn’t want to risk wasting his time. I said the essay explains how to engage with my ideas in time-efficient, worthwhile ways. So you just read this initial stuff and it’ll give you the intellectual methods to enable you to engage with my other ideas in beneficial ways. He said that’d be awesome if true, but he figures I’m probably wrong, so he doesn’t want to risk his time. We appeared to be at an impasse. I have a potential solution with high value that addresses his problems, but he doubts it’s correct and doesn’t want to use his resources to check if I’m right.

My broad opinion is someone in a reasonably large community like LW should be curious and look into things, and if no one does then each individual should recognize that as a major problem and want to fix it.

But I came up with a much simpler, more direct solution.

It turns out he worked at a coffee shop. I offered to pay him the same wage as his job to read my article (or I think it was a specific list of a few articles). He accepted. He estimated how long the stuff would take to read based on word count and we agreed on a fixed number of dollars that I’d pay him (so I wouldn’t have to worry about him reading slowly to raise his payment). The estimate was his idea, and he came up with the numbers and I just said yes.

But before he read it, an event happened that he thought gave him a good excuse to back out. He backed out. He then commented on the matter somewhere that he didn’t expect me to read, but I did read it. He said he was glad to get out of it because he didn’t want to read it. In other words, he’d rather spend an hour working at a coffee shop than an hour reading some ideas about rationality and resource-efficient engagement with rival ideas, given equal pay.

So he was just making excuses the whole time, and actually just didn’t want to consider my ideas. I think he only agreed to be paid to read because he thought he’d look bad and irrational if he refused. I think the problem is that he is bad and irrational, and he wants to hide it.

More EA

My first essay criticizing EA was about rationality policies, how and why they’re good, and it compared them to the rule of law. After no one gave any rebuttal, or changed their mind, I wrote about my experience with my debate policy. A debate policy is an example of a rationality policy. Although you might expect that conditionally guaranteeing debates would cost time, it has actually saved me time. I explained how it helps me be a good fallibilist using less time. No one responded to give a rebuttal or to make their own debate policy. (One person made a debate policy later. Actually two people claimed to, but one of them was so bad/unserious that I don’t count it. It wasn’t designed to actually deal with the basic ideas of a debate policy, and I think it was made in bad faith because the person wanted to pretend to have a debate policy. As one example of what was wrong with it, they just mentioned it in a comment instead of putting it somewhere that anyone would find it or that they could reasonably link to in order to show it to people in the future.)

I don’t like even trying to talk about specific issues with EA in this broader context where there’s no one to debate, no one who wants to engage in discussion. No one feels responsible for defending EA against criticism (or finding out that EA is mistaken and changing it). I think that one meta issue has priority.

I have nothing against decentralization of authority when many individuals each take responsibility. However, there is a danger when there is no central authority and also no individuals take responsibility for things and also there’s a lack of coordination (leading to e.g. lack of recognition that, out of thousands of people, zero of them dealt with something important).

I think it’s realistic to solve these problems and isn’t super hard, if people want to solve them. I think improving this would improve EA’s effectiveness by over 20%. But if no one will discuss the matter, and the only way to share ideas is by climbing EA’s social hierarchy and becoming more popular with EA by first spending a ton of time and effort saying other things that people like to hear, then that’s not going to work for me. If there is a way forward that could rationally resolve this disagreement, please respond. Or if any individual wants to have a serious discussion about these matters, please respond.

I’ve made rationality research my primary career despite mostly doing it unpaid. That is a sort of charity or “altruism” – it’s basically doing volunteer work to try to make a better world. I think it’s really important, and it’s very sad to me that even groups that express interest in rationality are, in my experience, so irrational and so hard to engage with.



Talking With Effective Altruism

The main reasons I tried to talk with EA are:

  • they have a discussion forum
  • they are explicitly interested in rationality
  • it's public
  • it's not tiny
  • they have a bunch of ideas written down

That's not much, but even that much is rare. Some groups just have a contact form or email address, not any public discussion place. Of the groups with some sort of public discussion, most now use social media (e.g. a Facebook group) or a chatroom rather than having a forum, so there’s no reasonable way to talk with them. My policy, based on both reasoning and past experience, is that social media and chatrooms are so bad that I shouldn’t try to use them for serious discussions. They have the wrong design, incentives, expectations and culture for truth seeking. In other words, social media has been designed and optimized to appeal to irrational people. Irrational people are far more numerous, so the winners in a huge popularity contest would have to appeal to them. Forums were around many years before social media, but now are far less popular because they’re more rational.

I decided I was wrong about EA having a discussion forum. It's actually a hybrid between a subreddit and a forum. It's worse than a forum but better than a subreddit.

How good of a forum it is doesn’t really matter, because it’s now unusable due to a new rule saying that basically you must give up property rights for anything you post there. That is a very atypical forum rule; they're the ones being weird here, not me. One of the root causes of this error is their lack of understanding of and respect for property rights. Another cause is their lack of Paths Forward, debate policies, etc., which prevents error correction.

The difficulty of correcting their errors in general was the main hard part about talking with them. They aren't open to debate or criticism. They say that they are, and they are open to some types of criticism which don't question their premises too much. They'll sometimes debate criticisms about local optima they care about, but they don't like being told that they're focusing on local optima and should change their approach. Like most people, each of them tends to only want to talk about stuff he knows about, and they don't know much about their philosophical premises and have no reasonable way to deal with that (there are ways to delegate and specialize so you don't personally have to know everything, but they aren't doing that and don't seem to want to).

When I claim someone is focusing on local optima, it moves the discussion away from the topics they like thinking and talking about, and have experience and knowledge about. It moves the topic away from their current stuff (that I said is a local optimum) to other stuff (the bigger picture, global optima, alternatives to what they’re doing, comparisons between their thing and other things).

Multiple EA people openly, directly and clearly admitted to being bad at abstract or conceptual thinking. They seemed to think that was OK. They brought it up in order to ask me to change and stop trying to explain concepts. They didn’t mean to admit weakness in themselves. Most (all?) rationality-oriented communities I have past experience with were more into abstract, clever or conceptual reasoning than EAers are. I could deal with issues like this if people wanted to have extended, friendly conversations and make an effort to learn. I don’t mind. But by and large they don’t want to discuss at length. The primary response I got was not debate or criticism, but being ignored or downvoted. They didn’t engage much. It’s very hard to make any progress with people who don’t want to engage because they aren’t very active minded or open minded, or because they’re too tribalist and biased against some types of critics/heretics, or because they have infallibilist, arrogant, over-confident attitudes.

They often claim to be busy with their causes, but it doesn’t make sense to ignore arguments that you might be pursuing the wrong causes in order to keep pursuing those possibly-wrong causes; that’s very risky! But, in my experience, people (in general, not just at EA) are very resistant to caring about that sort of risk. People are bad at fallibilism.

I think a lot of EAers got a vibe from me that I’m not one of them – that I’m culturally different and don’t fit in. So they saw me as an enemy not someone on their side/team/tribe, so they treated me like I wasn’t actually trying to help. Their goal was to stop me from achieving my goals rather than to facilitate my work. Many people weren’t charitable and didn’t see my criticisms as good faith attempts to make things better. They thought I was in conflict with them instead of someone they could cooperate with, which is related to their general ignorance of social and economic harmony, win/wins, mutual benefit, capitalism, classical liberalism, and the criticisms of conflicts of interest and group conflicts. (Their basic idea with altruism is to ask people to make sacrifices to benefit others, not to help others using mutually beneficial win/wins.)



Effective Altruism Hurts People Who Donate Too Much

I was having an extended discussion with CB from EA when the licensing rules were changed and I quit the EA forum. So I asked if he wanted to continue at my forum. He said yes and registered an account but got stuck before posting.

I clarified that he couldn’t post because he hadn’t paid the one-time $20 price of an account. I offered him a free account if the $20 would be a financial burden, but said if he could afford it then I request he pay because if he values conversing with me less than $20 then I don’t think it’s a good use of my time.

Despite (I think) wanting to talk with me more, and having already spent hours on it, he changed his mind over the one-time $20 price. He said:

I don't think I will pay $20 because all the money I earn beyond my basic needs is going to charities.

That makes EA sound somewhat like a cult which has brainwashed him. And I’ve heard of EA doing this to other people. Some highly involved and respected EA people have admitted to feeling guilty about buying any luxuries, such as a coffee, and have struggled to live normal lives. This has been a known problem with EA for many years, and they have no good plan to fix it; they keep hurting people and taking around the maximum amount of money you can get from someone, just like some cults do. Further, EA encourages people to change careers to do EA-related work; it tries to take over people’s entire lives just like cults often do. EAs dating other EAs is common too, sometimes polyamorously (dating an EA makes EA a larger influence in your life, and weird sexual practices are common with cults).

I don’t recall ever accusing anything of being a cult before, and overall I don’t think EA is a cult. But I think EA crosses a line here and deserves to be compared to a cult. EA clearly has differences from a cult, but having these similarities with cults is harmful.

EA does not demand you donate the maximum. They make sure to say it’s OK to donate at whatever level you’re comfortable with, or something more along those lines. But they also do bring up ideas about maximizing giving and comparing the utility of every different action you could do and maximizing utility (or impact or effectiveness or good). They don’t have good ideas about where or how to draw a line to limiting your giving, so I think they leave that up to individuals, many of whom won’t come up with good solutions themselves.

CB’s not poor, and he wants something, and the stakes are much higher than $20, but he can’t buy it because he feels that he has to give EA all his money. I think he already put hundreds of dollars of his time into the conversation, and I certainly did, and I think he planned to put hundreds more dollars of his time into it, but somehow $20 is a dealbreaker. He works in computing so his time could easily be worth over $100/hr.

I wonder if he considered that, instead of talking with me, he could have spent those hours volunteering at a soup kitchen. Or he could have spent those hours working and making more money to donate. He might need a second job or side gig or something to adjust how many hours he works, but he could do that. If he’s a programmer, he could make a phone or web app on the side and set his own schedule for that additional work. (What about burnout? Having intellectual conversations also takes up mental energy. So he had some to spare.)

Anyway, it’s very sad to see someone all twisted up like this. From what I can tell, he’s fairly young and naive, and doesn’t know much about money or economics.

Note/update: After I finished writing this article, before I posted it, CB claimed that he exaggerated about how much he donates. That partial retraction has not changed my mind about the general issues, although it makes his individual situation somewhat less bad and does address some specific points like whether he could buy a (cheap) book.

Investing In Yourself

Buying a conversation where he’d learn something could make CB wiser and more effective, which could lead to him earning more money, making better decisions about which charities to donate to, and other benefits.

I wonder if CB also doesn’t buy books because they aren’t part of his “basic needs”.

People should be encouraged to invest in themselves, not discouraged from it. EA is harming intellectual progress by handicapping a bunch of relatively smart, energetic young people so they don’t use financial resources to support their own personal progress and development.

This one thing – taking a bunch of young people who are interested in ideas and making it harder for them to develop into great thinkers – may do a huge amount of harm. Imagine if Karl Popper or Richard Feynman had donated so much money he couldn’t buy any books. Or pick whoever you think is important. What if the rich people several hundred years ago had all donated their money instead of hiring tutors to teach their kids – could that have stopped the enlightenment? (Note how that would have been doubly bad. It’d prevent some of their kids from turning into productive scientists and intellectuals, and it’d also take away gainful employment from the tutors, who were often scientists or other intellectuals without much money to fund their life or work.)

On a related note, basically none of EA’s favored charities are about finding the smartest or most rational people and helping them. But helping some of the best people could easily do more good than helping some of the poorest people. If you help a poor person have a happier, healthier life, that does some good. If you help a smart, middle-class American kid become a great thinker who writes a few really effective self-help books, his books could improve the lives of millions of people.

Admittedly, it’s hard to figure out how to help that kid. But people could at least try to work on that problem, brainstorm ideas and critique their initial plans. There could be ongoing research to try to develop a good approach. But there isn’t much interest in that stuff.

The least they could do is leave that kid alone, rather than convince him to donate all his money above basic needs when he’s a young adult so he can’t afford books, online courses, and other resources that would be useful and enjoyable to him.

Also, at EA, I’ve been talking about criticism, debate and error correction. I’ve been trying to get them to consider their fallibility and the risk of being wrong about things, and do more about that. So, for example, I think EA is mistaken about many of its causes. CB could estimate a 1% chance that I have a point he could learn, and assume that would only affect his own future donations, and talking to me would still be a good deal, because he’ll donate more than $2000 in the future, so even multiplied by 1% it’s better than $20. So talking to me more would be cost-effective (in dollars, which I think is CB’s concern even though time matters too). Not considering things like this, and seeking to invest in risk reduction, is partly related to not investing in yourself and partly related to poor, irrational attitudes related to fallibility.
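
To spell out that arithmetic, here is a minimal sketch using the illustrative figures from the paragraph above; the exact future donation amount is a made-up example of "more than $2000":

```python
# Rough expected-value sketch using the article's illustrative numbers.
# Assumptions: a 1% chance the criticism is right, and that being right only
# improves the targeting of CB's own future donations (somewhere above $2000).

chance_criticism_is_right = 0.01   # hypothetical low estimate
future_donations = 2500            # example figure for "more than $2000"
cost_of_talking = 20               # one-time forum account price

expected_benefit = chance_criticism_is_right * future_donations  # $25
print(expected_benefit > cost_of_talking)  # True: even at 1%, the expected
# improvement to future donation targeting exceeds the $20 cost, and that
# ignores any benefits beyond donations.
```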

Also, I do tons of work (philosophy research, writing, discussion and video creation) trying to make the world better, mostly for free. Isn’t investing in me a way to make the world better? If you pay me $20, why is that any worse than donating it to a charity? Some people literally donate money to me like a charity because they respect and value what I do. Similarly, some EA charities give grants to intellectuals to do work on topics such as rationality, so I could receive such a grant. Donating to a grant-making organization that gave me a grant would count as charity, but giving me money directly counts less so, especially if you’re buying something from me (forum access). The marginal cost of forum access for me is $0, so this isn’t like buying a hand-made table from me, where I had to put in some time and materials to make it, so my profit margin is only 25%. My marginal profit margin on forum memberships is 100% because I’m going to keep running the forum whether or not CB joins. EA focuses people’s attention on charities, has an incorrectly negative view of trade, and biases people against noticing that buying from small creators actually generally helps make the world better even though it’s not “charity”.

What CB Donates To

Are CB’s donations doing good?

For around $20, he could pay for six visits to a vaccination clinic for a baby in rural northern Nigeria. It can be half a day of travel to reach a clinic, so paying people a few dollars makes a meaningful difference to whether they make the trip.

I wonder which vaccinations are actually important for people living in small, isolated communities like that. Some vaccinations seem much more relevant in a city or if you come in contact with more people. How many of them will ever visit a big city in their life? I don’t know. Also even if vaccinations provide significant value to them, they’re really poor, so maybe something else would improve their lives more.

I looked through charities that EA recommends and that vaccination charity looked to me like one of the best options. Plus I read a bit about it, unlike some of the other more promising ones like a charity that gives people Vitamin A. Some charities get distracted by political activism, so I checked if they were taking a political side about the covid vaccine, and they didn’t appear to be, so that’s nice to see. I think finding charities that stay out of politics is one of the better selection methods that people could and should use. EA cares a lot about evaluating and recommending charities, but I’m not aware of them using being non-political as a criterion. EA itself is pretty political.

I’m doubtful that CB donates to that kind of cause, which provides fairly concrete health benefits for poor people in distant countries. Based on our discussions and his profile, I think his top cause is animal welfare. He may also donate to left-wing energy causes (like opposing fossil fuels) and possibly AI Alignment. I think those are terrible causes where his donations would likely do more harm than good. I’m not going to talk about AI Alignment here; it isn’t very political, and its problems are more about bad epistemology and moral philosophy (plus lack of willingness to debate with critics).

Animal welfare and anti fossil fuel stuff are left wing political activism. Rather than staying out of politics, those causes get involved in politics on purpose. (Not every single charity in those spaces is political, just most of them.)

Let me explain it using a different issue as an example where there’s tons of visible political propaganda coming from both sides. The US pro-life right puts out lots of propaganda, and they recently had a major victory getting Roe vs. Wade overturned. Now they’re changing some state laws to hurt people, particularly women. Meanwhile, the pro-choice left also puts out propaganda. To some extent, the propaganda from the two sides cancels each other out.

Imagine a pro-choice charity that said “Next year, the pro-lifers are expected to spend $10,000,000,000 on propaganda. We must counter them with truth. Please donate to us or our allies because we need $10 billion dollars a year just to break even and cancel out what they’re doing. If we can get $15B/yr, we can start winning.”

Imagine that works. They get $15B and outspend the pro-lifers who only spend $10B. The extra $5B helps shift public perception to be more pro-choice. Suppose pro-choice is the correct view and getting people to believe it is actually good. We’ll just ignore the risk of being on the wrong side. (Disclosure: I’m pro-abortion.)

Then there’s $25B being spent in total, and $20B is basically incinerated, and then $5B makes the world better. That is really bad. 80% of the money isn’t doing any good. This is super inefficient. In general, the best case scenario when donating to tribalist political activism looks kind of like this.
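
Putting the scenario's numbers in one place (all figures are the hypothetical ones from the example above):

```python
# The hypothetical propaganda-war numbers from the scenario above.
pro_life_spending = 10e9    # $10B spent by one side
pro_choice_spending = 15e9  # $15B spent by the other side

total_spent = pro_life_spending + pro_choice_spending  # $25B
net_effect = pro_choice_spending - pro_life_spending   # $5B actually shifts opinion
cancelled_out = total_spent - net_effect               # $20B cancels out

print(f"wasted fraction: {cancelled_out / total_spent:.0%}")  # 80%
```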

If you want to be more effective, you have to be more non-partisan, more focused on rationality, and stay out of propaganda wars.

In simplified terms, pro-choice activism is more right than wrong, whereas I fear CB is donating to activism which is more wrong than right.

Saving Money and Capital Accumulation

I fear that spending only on basic needs, and donating the rest, means CB isn’t saving (enough) money.

If you don’t save money, you may end up being a burden on society later. You may need to receive help from the government or from charities. By donating money that should be saved, one risks later taking money away and being a drain on resources because one doesn’t have enough to take care of himself.

CB’s kids may have to take out student loans, and end up in a bunch of debt, because CB donated a bunch of money instead of putting it in a college fund for them.

CB may end up disabled. He may get fired and struggle to find a new job, perhaps through no fault of his own. Jobs could get harder to come by due to recession, natural disaster, or many other problems. He shouldn’t treat his expected future income as reliable. Plus, he says he wants to stop working in computing and switch to an EA-related job. That probably means taking a significant pay cut. He should plan ahead, and save money now while he has higher income, to help enable him to take a lower paying job later if he wants to. As people get older, their expenses generally go up, and their income generally goes up too. If he wants to take a pay cut when he’s older, instead of having a higher income to deal with higher expenses, that could be a major problem, especially if he didn’t save money now to deal with it.

Does saving money waste it? No. Saving means refraining from consumption. If you want to waste your money, buy frivolous stuff. If you work and save, you’re contributing to society. You provide work that helps others, and by saving you don’t ask for anything in return (now – but you can ask for it later when you spend your money).

Saving isn’t like locking up a bunch of machine tools in a vault so they don’t benefit anyone. People save money, not tools or food. Money is a medium of exchange. As long as there is enough money in circulation, then money accomplishes its purpose. There’s basically no harm in keeping some cash in a drawer. Today, keeping money in a bank is just a number in a computer, so it doesn’t even take physical cash out of circulation.

Money basically represents a debt where you did something to benefit others, and now you’re owed something equally valuable in return from others. When you save money, you’re not asking for what you’re owed from others. You helped them for nothing in return. It’s a lot like charity.

Instead of saving cash, you can invest it. This is less like charity. You can get interest payments or the value of your investment can grow. In return for not spending your money now, you get (on average) more of it.

If you invest money instead of consuming it, then you contribute to capital accumulation. You invest in businesses not luxuries. In other words, (as an approximation) you help pay for machines, tools and buildings, not for ice cream or massages. You invest in factories and production that can help make the world a better place (by repeatedly creating useful products), not short term benefits.

The more capital we accumulate, the higher the productivity of labor is. The higher the productivity of labor, the higher wages for workers are and also the more useful products get created. There are details like negotiations about how much of the additional wealth goes to who, but the overall thing is more capital accumulation means more wealth is produced and there’s more for everyone. Making the pie bigger is the more important issue than fighting with people over who gets which slice, though the distribution of wealth does matter too.

When you donate to a charity which spends the money on activism or even vaccines, that is consumption. It’s using up wealth to accomplish something now. (Not entirely, because a healthy worker is more productive, so childhood vaccines are to some extent an investment in human capital. But the charities aren’t even trying to evaluate the most effective way to invest in human capital to raise productivity. That isn’t their goal, so they’re probably not being especially cost-effective at it.)

When you save money and invest it, you’re helping with capital accumulation – you’re helping build up total human wealth. When you consume money on charitable causes or luxuries, you’re reducing total human wealth.

Has EA ever evaluated how much good investing in an index fund does and compared it to any of their charities? I doubt it. (An index fund is a way to have your investment distributed among many different companies so you don’t have to guess which specific companies are good. It also doesn’t directly give money to the company because you buy stock from previous investors, but as a major simplification we can treat it like investing in the company.)

I’ve never seen anything from EA talking about how much good you’ll do if you don’t donate, and subtracting that from the good done by donating, to see how much additional good donating does (which could be below zero in some cases, or even on average – who knows without actually investigating). If you buy some fancy dinner and wine, you get some enjoyment, so that does some good. If you buy a self-help book or online course and invest in yourself, that does more good. If you buy a chair or frying pan, that’s also investing in yourself and your life, and it does good. If you invest the money in a business, that does some good (on average). Or maybe you think so many big businesses are so bad that investing in them makes the world worse. I find that plausible, and it reminds me of my view that many non-profits are really bad. I have a negative view of most large companies, but overall I suspect that, on average, non-profits are worse than for-profit businesses.
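Here’s a minimal sketch of the subtraction I’m describing, using entirely made-up “good” scores just to show the structure of the comparison; nothing here is a real estimate:

```python
# Entirely made-up placeholder scores; the point is the structure of the subtraction,
# not the numbers. Nobody has actually measured "units of good" like this.
good_from_donating = 100     # estimated good done by donating some amount of money
good_from_alternative = 80   # estimated good from the best non-donation use (investing, a course, etc.)

additional_good = good_from_donating - good_from_alternative
print(f"Additional good from donating: {additional_good}")
# If the alternative scored higher, this number would be negative: donating would do
# less good than what you'd otherwise have done with the money.
```

The point isn’t the particular numbers; it’s that the relevant quantity is the difference, and EA generally reports only the first term.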

EA has a bunch of anti-capitalists who don’t know much about economics. CB in particular is so ignorant of capitalism that he didn’t know it prohibits fraud. He doesn’t know, in a basic sense, what the definition of capitalism even is. And he also doesn’t know that he doesn’t know. He thought he knew, and he challenged me on that point, but he was wrong and ignorant.

These people need to read Ludwig von Mises, both for the economics and for the classical liberalism. They don’t understand harmony vs. conflicts of interest, and a lot of what they do, like political activism, is based on assuming there are conflicts of interest and that the goal should be to make your side win. They often don’t aim at win/win solutions, mutual benefit and social harmony. They don’t really understand peace, freedom, or how a free market is a proposal for creating social harmony and benefiting everyone. Some of its mechanisms for doing that are superior to what charities try to do, so getting capitalism working better could easily do more good than what they’re doing now, but they wouldn’t even consider such a plan. (I’m aware that I haven’t explained capitalism enough here for people to learn about it from this article. It may make sense to people who already know some stuff. If you want to know more, read Mises, read Capitalism: A Treatise on Economics, and feel free to ask questions or seek debate at my forum. If you find this material difficult, you may first need to put effort into learning how to learn, getting better at reading, research and critical thinking, managing your schedule, managing your motivations and emotions, managing projects over time, etc.)

Conclusion

CB was more intellectually tolerant and friendly than most EAers. Most of them can’t stand to talk to someone like me who has a different perspective and some different philosophical premises. He could, so in that way he’s better than them. He has a ton of room for improvement at critical thinking, rigor and precision, but he could easily be categorized as smart.

So it’s sad to see EA hurt him in such a major way that really disrupts his life. Doing so much harm is pretty unusual – cults can do it but most things in society don’t. It’s ironic and sad that EA, which is about doing good, is harming him.

And if I were going to try to improve the world and help people, people like CB would be high on my list for who to help. I think helping some smart and intellectually tolerant people would do more good than childhood vaccines in Nigeria, let alone leftist (or rightist) political activism. The other person I know of who thought this way – about prioritizing helping some of the better people, especially smart young people – was Ayn Rand.

I am trying to help these people – that’s a major purpose of sharing my writing – but it’s not my top priority in life. I’m not an altruist. Although, like Rand and some classical liberals, I don’t believe there’s a conflict between the self and the other. Promoting altruism is fundamentally harmful because it spreads the idea that you must choose between yourself and others, and that there’s a conflict requiring winners and losers. I think Rand should have promoted harmony more and egoism or selfishness less, but at least her intellectual position was that everyone can win and benefit. EA doesn’t say that. It intentionally asks people like CB to sacrifice their own good to help others, thereby implying that there is a conflict between what’s good for CB and what’s good for others, and thereby implying, basically, that social harmony is impossible because there’s no common good that’s good for everyone.

I’ll end by saying that EA pushes young people to rush to donate way too much money when they’re often quite ignorant and don’t even know much about which causes are actually good or bad. EA has some leaders who are more experienced and knowledgeable, but many of them have political and tribalist agendas, aren’t rational, and won’t debate or address criticism of their views. It’s totally understandable for a young person to have no idea what capitalism is and to be gullible in some ways, but it’s not OK for EA to take advantage of that gullibility, keep its membership ignorant of what capitalism is, and discourage its members from reading Mises or speaking with people like me who know about capitalism and classical liberalism. EA has leaders who know more about capitalism, and hate it, and won’t write reasonable arguments or debate the matter in an effective, truth-seeking way. They won’t point out how/why/where Mises was wrong, and instead they guide young people not to read Mises and to donate all their money beyond basic needs to the causes that EA leaders like.

EDIT 2022-12-05: For context, see the section "Bad" EAs, caught in a misery trap in https://michaelnotebook.com/eanotes/ which had already previously alerted me that EA has issues with over-donating, guilt, difficulty justifying spending on yourself, etc., which affect a fair amount of people.


Elliot Temple | Permalink | Messages (0)