Altruism Contradicts Liberalism

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.


Altruism means (New Oxford Dictionary):

the belief in or practice of disinterested and selfless concern for the well-being of others

Discussions of altruism are often vague about a specific issue: is this selfless concern self-sacrificial? Is it bad for the self or merely neutral? This definition doesn’t specify.

The second definition does specify but isn’t for general use:

Zoology behavior of an animal that benefits another at its own expense

Multiple dictionaries fit the pattern of not specifying self-sacrifice (or not) in the main definition, then bringing it up in an animal-focused definition.

New Oxford’s thesaurus is clear. Synonyms for altruism include:

unselfishness, selflessness, self-sacrifice, self-denial

Webster’s Third suggests altruism involves lack of calculation, and doesn’t specify whether it’s self-sacrificial:

uncalculated consideration of, regard for, or devotion to others' interests sometimes in accordance with an ethical principle

EA certainly isn’t uncalculated. EA does things like mathematical calculations and cost/benefit analysis. The dictionary may have meant something more like shrewd, self-interested, Machiavellian calculation; if so, it really shouldn’t try to pack so much meaning into one fairly neutral word without explaining what it means.

Macmillan gives:

a way of thinking or behaving that shows you care about other people and their interests more than you care about yourself

Caring about other people’s interests more than your own suggests self-sacrifice, a conflict of interest (where decisions must favor either you or them), and a lack of win-win solutions or mutual benefit.

Does EA have any standard, widely read and accepted literature which:

  • Clarifies whether it means self-sacrificial altruism or whether it believes its “altruism” is good for the self?
  • Refutes (or accepts!?) the classical liberal theory of the harmony of men’s interests?

Harmony of Interests

Is there any EA literature regarding altruism vs. the (classical) liberal harmony of interests doctrine?

EA believes in conflicts of interest between men (or between individual and total utility). For example, William MacAskill writes in The Definition of Effective Altruism:

Unlike utilitarianism, effective altruism does not claim that one must always sacrifice one’s own interests if one can benefit others to a greater extent.[35] Indeed, on the above definition effective altruism makes no claims about what obligations of benevolence one has.

I understand EA’s viewpoint to include:

  • There are conflicts between individual utility and overall utility (the impartial good).
  • It’s possible to altruistically sacrifice some individual utility in a way that makes overall utility go up. In simple terms, you give up $100 but it provides $200 worth of benefit to others.
  • When people voluntarily sacrifice some individual utility to altruistically improve overall utility, they should do it in cost-effective ways. They should look at things like lives saved per dollar. Charities vary dramatically in how much overall utility they create per dollar donated (see the sketch after this list).
  • It’d be good if some people did some effective altruism sometimes. EA wants to encourage more of this, although it doesn’t want to be too pressuring, so it does not claim that large amounts of altruism are a moral obligation for everyone. If you want to donate 10% of your income to cost effective charities, EA will say that’s great instead of saying you’re a sinner because you’re still deviating from maximizing overall utility. (EA also has elements which encourage some members to donate a lot more than 10%, but that’s another topic.)
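To make the cost-effectiveness point concrete, here’s a minimal sketch in Python. The charity names and figures are hypothetical, chosen only to illustrate “lives saved per dollar” comparisons, not to represent real estimates:

```python
# Hypothetical figures for illustration only; not real charity data.
donations = {
    # charity name: (dollars donated, estimated lives saved)
    "Charity A": (100_000, 25),
    "Charity B": (100_000, 2),
}

for name, (dollars, lives) in donations.items():
    cost_per_life = dollars / lives
    print(f"{name}: ${cost_per_life:,.0f} per life saved")

# Charity A comes out at $4,000 per life saved vs. $50,000 for Charity B,
# i.e. 12.5x as much benefit per dollar. That's the kind of dramatic
# variation between charities that EA points to.
```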

MacAskill also writes:

Finally, unlike utilitarianism, effective altruism does not claim that the good equals the sum total of wellbeing. As noted above, it is compatible with egalitarianism, prioritarianism, and, because it does not claim that wellbeing is the only thing of value, with views on which non-welfarist goods are of value.[38]

EA is compatible with many views on how to calculate overall utility, not just the view that you should add up every individual utility. In other words, EA is not based on a specific overall/impersonal utility function. Nor is EA based on advocating that individuals have any particular individual utility function, or on any claim that the world population currently has a certain distribution of individual utility functions.

All of this contradicts the classical liberal theory of the harmony of men’s (long-term, rational) interests, yet doesn’t engage with it. EA’s authors just seem unaware of the literature they’re disagreeing with (or they’re aware and refusing to debate it on purpose?), even though some of it is well known and easy to find.

Total Utility Reasoning and Liberalism

I understand EA to care about total utility for everyone, and to advocate people altruistically do things which have lower utility for themselves but which create higher total utility. One potential argument is that if everyone did this then everyone would have higher individual utility.

A different potential approach to maximizing total utility is the classical liberal theory of the harmony of men’s interests. It says, in short, that there is no conflict between following self-interest and maximizing total utility (for rational men in a rational society). When there appears to be a conflict, so that one or the other must be sacrificed, there is some kind of misconception, distortion or irrationality involved. That problem should be addressed rather than accepted as an inherent part of reality that requires sacrificing either individual or total utility.

According to the liberal harmony view, altruism claims there are conflicts between the individual and society which actually don’t exist. Altruism therefore stirs up conflict and makes people worse off, much like the Marxist class warfare ideology (which is one of the standard opponents of the harmony view). Put another way, spreading the idea of conflicts of interest is an error that lowers total utility. The emphasis should be on harmony, mutual benefit and win/win solutions, not on altruism and self-sacrifice.

It’s really bad to ask people to make tough, altruistic choices if such choices are unnecessary mistakes. It’s bad to tell people that getting a good outcome for others requires personal sacrifices if it actually doesn’t.

Is there any well-known, pre-existing EA literature which addresses this, including a presentation of the harmony view that its advocates would find reasonably acceptable? I take it that EA rejects the liberal harmony view for some reason, which ought to be written down somewhere. (Or they’re quite ignorant, which would be very unreasonable for the thought leaders who developed and lead EA.) I searched the EA forum and it looks like the liberal harmony view has never been discussed, which seems concerning. I also did a web search and found nothing regarding EA and the liberal harmony of interests theory. I don’t know where or how else to do an effective EA literature search.


Elliot Temple | Permalink | Messages (0)

AGI Alignment and Karl Popper

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.


On certain premises, which are primarily related to the epistemology of Karl Popper, artificial general intelligences (AGIs) aren’t a major threat. I tell you this as an expert on Popperian epistemology, which is called Critical Rationalism.

Further, approximately all AGI research is based on epistemological premises which contradict Popperian epistemology.

In other words, AGI research and AGI alignment research are both broadly premised on Popper being wrong. Most of the work being done is an implicit bet that Popper is wrong. If Popper is right, many people are wasting their careers, misdirecting a lot of donations, incorrectly scaring people about existential dangers, etc.

You might expect that alignment researchers would have done a literature review, found semi-famous relevant thinkers like Popper, and written refutations of them before being so sure of themselves and betting so much on the particular epistemological premises they favor. I haven’t seen anything of that nature, and I’ve looked a lot. If it exists, please link me to it.

To engage with and refute Popper requires expertise about Popper. He wrote a lot, and it takes a lot of study to understand and digest it. So you have three basic choices:

  • Do the work.
  • Rely on someone else’s expertise who agrees with you.
  • Rely on someone else’s expertise who disagrees with you.

How can you use the expertise of someone who disagrees with you? You can debate with them. You can also ask them clarifying questions, discuss issues with them, etc. Many people are happy to help explain ideas they consider important, even to intellectual opponents.

To rely on the expertise of someone on your side of the debate, you endorse literature they wrote. They study Popper, they write down Popper’s errors, and then you agree with them. Then when a Popperian comes along, you give them a couple citations instead of arguing the points yourself.

There is literature criticizing Popper. I’ve read a lot of it. My judgment is that the quality is terrible. And it’s mostly written by people who are pretty different than the AI alignment crowd.

There’s too much literature on your side to read all of it. What you need (to avoid doing a bunch of work yourself) is someone similar enough to you – someone likely to reach the same conclusions you would reach – to look into each thing. One person is potentially enough. So if someone who thinks similarly to you reads a Popper criticism and thinks it’s good, it’s somewhat reasonable to rely on that instead of investigating the matter yourself.

Keep in mind that the stakes are very high: potentially lots of wasted careers and dollars.

My general take is you shouldn’t trust the judgment of people similar to yourself all that much. Being personally well read regarding diverse viewpoints is worthwhile, especially if you’re trying to do intellectual work like AGI-related research.

And there aren’t a million well known and relevant viewpoints to look into, so I think it’s reasonable to just review them all yourself, at least a bit via secondary literature with summaries.

There are much more obscure viewpoints that are worth at least one person looking into, but most people can’t and shouldn’t try to look into most of those.

Gatekeepers like academic journals or university hiring committees are really problematic, but the least you should do is vet stuff that gets through gatekeeping. Popper was also respected by various smart people, like Richard Feynman.

Mind Design Space

The AI Alignment view claims something like:

Mind design space is large and varied.

Many minds in mind design space can design other, better minds in mind design space. Which can then design better minds. And so on.

So, a huge number of minds in mind design space work as starting points to quickly get to extremely powerful minds.

Many of the powerful minds are also weird, hard to understand, very different from us (including regarding moral ideas), possibly very goal directed, and possibly significantly controlled by their original programming (which likely has bugs and literally says different things, including about goals, than the designers intended).

So AGI is dangerous.

There is an epistemology which contradicts this, based primarily on Karl Popper and David Deutsch. It says that actually mind design space is like computer design space: sort of small. This shouldn’t be shocking since brains are literally computers, and all minds are software running on literal computers.

In computer design, there is a concept of universality or Turing completeness. In summary, when you start designing a computer and adding features, after very few features you get a universal computer. So there are only two types of computers: extremely limited computers and universal computers. This makes computer design space less interesting or relevant. We just keep building universal computers.

Every computer has a repertoire of computations it can perform. A universal computer has the maximal repertoire: it can perform any computation that any other computer can perform. You might expect universality to be difficult to get and require careful designing, but it’s actually difficult to avoid if you try to make a computer powerful or interesting.

Universal computers do vary in other design elements, besides what computations they can perform, such as how large they are. This is fundamentally less important than what computations they can do, but does matter in some ways.
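To illustrate how little it takes to reach universality, here’s a minimal illustrative sketch (not taken from the alignment or Popperian literature) of a SUBLEQ machine: a computer with a single instruction, “subtract and branch if the result is non-positive”, which is already Turing-complete given unbounded memory.

```python
def run_subleq(mem, pc=0, max_steps=10_000):
    """Interpret a SUBLEQ program. Each instruction is three cells a, b, c:
    mem[b] -= mem[a]; if the result is <= 0, jump to c, otherwise fall
    through to the next instruction. A negative jump address halts."""
    steps = 0
    while pc >= 0 and steps < max_steps:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
        steps += 1
    return mem

# Demo: add the value in cell 9 (7) into cell 10 (5), using cell 11 as a
# zeroed scratch cell, then halt by jumping to -1. Prints 12.
prog = [
    9, 11, 3,    # scratch -= cell9   (scratch becomes -7)
    11, 10, 6,   # cell10 -= scratch  (i.e. cell10 += 7)
    11, 11, -1,  # clear scratch, halt
    7, 5, 0,     # data: cell9 = 7, cell10 = 5, scratch = 0
]
print(run_subleq(prog)[10])
```

Despite having only one instruction type, this machine can (given enough memory and time) run any computation any other computer can run. That is the sense in which computer design space collapses into “extremely limited” vs. “universal”.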

There is a similar theory about minds: there are universal minds. (I think this was first proposed by David Deutsch, a Popperian intellectual.) The repertoire of things a universal mind can think (or learn, understand, or explain) includes anything that any other mind can think. There’s no reasoning that some other mind can do which it can’t do. There’s no knowledge that some other mind can create which it can’t create.

Further, human minds are universal. An AGI will, at best, also be universal. It won’t be super powerful. It won’t dramatically outthink us.

There are further details but that’s the gist.

Has anyone on the AI alignment side of the debate studied, understood and refuted this viewpoint? If so, where can I read that (and why did I fail to find it earlier)? If not, isn’t that really bad?


Elliot Temple | Permalink | Messages (0)

Betting Your Career

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing.


People bet their careers on various premises outside their own expertise. For example, AGI (alignment) researchers commonly bet on some epistemology without being experts on epistemology who have actually read Popper and concluded, in their own judgment, that he’s wrong.

So you might expect them to be interested in criticism of those premises. Shouldn’t they want to investigate the risk?

But that depends on what you value about your career.

If you want money and status, and not to have to make changes, then maybe it’s safer to ignore critics who don’t seem likely to get much attention.

If you want to do productive work that’s actually useful, then your career is at risk.

People won’t admit it, but many of them don’t actually care that much about whether their career is productive. As long as they get status and money, they’re satisfied.

Also, a lot of people lack confidence that they can do very productive work whether or not their premises are wrong.

Actually, having wrong but normal/understandable/blameless premises has big advantages: you won’t come up with important research results, but it’s not your fault. If it comes out that your premises were wrong, you did the noble work of investigating a lead that many people believed was promising. Science and other types of research always involve investigating many leads that don’t turn out to be important. But if you work on a lead people want investigated, do nothing useful, and the lead turns out to be important, then other investigators outcompeted you, and people could wonder why you didn’t figure out anything about the lead you worked on. If the lead you work on turns out to be a dead end, the awkward questions go away. So there’s an advantage to working on dead ends, as long as other people think they’re a good thing to work on.


Elliot Temple | Permalink | Messages (0)

Attention Filtering and Debate

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing.


People skim and filter. Gatekeepers and many other types of filters end up being indirect proxies for social status much more than they are about truth seeking.

Filtering isn’t the only problem though. If you have some credentials – awards, a PhD, a popular book, thousands of fans – people often still won’t debate you. Also, I certainly get through initial filtering sometimes. People talk with me some, and a lot more people read some of what I say.

After you get through filters, you run into problems like people still not wanting to debate or not wanting to put in enough effort to understand your point. We could call this secondary filtering. Maybe if you get through five layers of filters, then they’ll debate. Or maybe not. I think some of the filters are generated ad hoc because they don’t want to debate or consider (some types of) ideas that disagree with their current ideas. People can keep making up new excuses as necessary.

Why don’t people want to debate? Often because they’re bad at it.

And they know – even if they don’t consciously admit it – that debating is risky to their social status, and that the expected result of debating, for them, is a loss of status.

And they know that, if they lose the debate, they will then face a problem. They’ll be conflicted: they will partly want to change their mind, but part of them won’t want to. They don’t know how to deal with that kind of conflict, so they’d rather avoid getting into the situation in the first place.

Also they already have a ton of urgent changes to make in their lives. They already know lots of ways they’re wrong. They already know about many mistakes. So they don’t exactly need new criticism. Adding more issues to the queue isn’t valuable.

All of that is fine but on the other hand anyone who admits that is no thought leader. So people don’t want to admit it. And if an intellectual position has no thought leaders capable of defending it, that’s a major problem. So people make excuses, pretend someone else will debate if debate is merited, shift responsibility to others (usually not to specific people), etc.

Debating is a status risk, a self-esteem risk, and a hard activity. And people maybe don’t want to learn about (even more) errors, which would lead to thinking they should change, which is hard and something they may fail at (which would further harm status and self-esteem, and be distracting and unpleasant).


Elliot Temple | Permalink | Messages (0)

Friendliness or Precision

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing.


In a debate, if you’re unfriendly and you make a lot of little mistakes, you should expect the mistakes to (on average) be biased for your side and against their side. In general, making many small, biased mistakes ruins debates dealing with complex or subtle issues. It’s too hard to fix them all, especially considering you’re the guy who made them (if you had the skill to fix them all, you could have used that same skill to avoid making some of them).

In other words, if you dislike someone, being extremely careful, rigorous and accurate with your reasoning provides a defense against bias. Without that defense, you don’t have much of a chance.

If you have a positive attitude and are happy to hear about their perspective, that helps prevent being biased against them. If you have really high intellectual standards and avoid making small mistakes, that helps prevent bias. If you have neither of those things, conversation doesn’t work well.


Elliot Temple | Permalink | Messages (0)

Hard and Soft Rationality Policies

I have two main rationality policies that are written down:

  1. Debate Policy
  2. Paths Forward Policy

I have many other, smaller policies that are written down somewhere in some form, like not misquoting, or giving direct answers to direct questions (e.g. say "yes" or "no" first when answering a yes-or-no question, then write extra stuff if you want, but don't skip the direct answer).

A policy I recognized the other day as worth writing down is my debate policy sharing policy. I've had this policy for a long time. It's important, but it isn't written in my debate policy itself.

If someone seems to want to debate me, but they don't invoke my debate policy, then I should link them to the debate policy so they have the option to use it. I shouldn't get out of the debate based on them not finding my debate policy.

In practice, I link the policy to a lot of people who I doubt want to debate me. I like sharing it. That's part of the point. It’s useful to me. It helps me deal with some situations in an easy way. I get into situations where I want to say/explain something, but writing it out every time would be too much work; since some of the same things come up over and over, I can write them once and then share links instead of rewriting the same points. My debate policy says some of the things I frequently want to tell people, and linking it lets me repeat those things with very low effort.

One can imagine someone who put up a debate policy and then didn't mention it to critics who didn't ask for a debate in the right words. One can imagine someone who likes having the policy so they can claim they're rational, but they'd prefer to minimize actually using it. That would be problematic. I wrote my debate policy conditions so that if someone actually meets them, I'd like to debate. I don't dread that or want to avoid it. If you have a debate policy but hope people don't use it, then you have a problem to solve.

If I'm going to ignore a question or criticism from someone I don't know, then I want to link my policy so they have a way to fix things if I was wrong to ignore them. If I don't link it, and they have no idea it exists, then the results are similar to not having the policy. It doesn't function as a failsafe in that case.

Some policies offer hard guarantees and some are softer. What enforces the softer ones, so they mean something instead of just being violated whenever one feels like it? Generic, hard guarantees, like a debate policy, which can be invoked to address doing poorly at any softer guarantee.

For example, I don't have any specific written guarantee for linking people to my debate policy. There's an implicit (and now explicit in this post) soft guarantee that I should make a reasonable effort to share it with people who might want to use it. If I do poorly at that, someone could invoke my debate policy over my behavior. But I don't care much about making a specific, hard guarantee about debate policy link sharing because I have the debate policy itself as a failsafe to keep me honest. I think I do a good job of sharing my debate policy link, and I don't know how to write specific guarantees to make things better. It seems like something where a good faith effort is needed which is hard to define. Which is fine for some issues as long as you also have some clearer, more objective, generic guarantees in case you screw up on the fuzzier stuff.

Besides hard and soft policies, we could also distinguish policies from tools. Like I have a specific method of having a debate where people choose what key points they want to put in the debate tree. I have another debate method where people say two things at a time (it splits the conversation into two halves, one led by each person). I consider those tools. I don't have a policy of always using those things, or using those things in specific conditions. Instead, they're optional ways of debating that I can use when useful. There's a sort of soft policy there: use them when it looks like a good idea. Making a grammar tree is another tool, and I have a related soft policy of using that tool when it seems worthwhile. Having a big toolkit with great intellectual tools, along with actually recognizing situations for using them, is really useful.
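As an illustrative sketch of the general idea of such a debate tree (not the exact format or tooling described above), each key point a participant wants tracked can be a node, with replies as child nodes:

```python
# A minimal, hypothetical sketch of a debate tree where each participant adds
# the key points they want tracked; replies nest under the points they answer.
from dataclasses import dataclass, field

@dataclass
class Node:
    author: str                  # who contributed this key point
    point: str                   # the key point itself, stated briefly
    children: list["Node"] = field(default_factory=list)

    def reply(self, author: str, point: str) -> "Node":
        child = Node(author, point)
        self.children.append(child)
        return child

    def show(self, depth: int = 0) -> None:
        # Print the tree with indentation showing the reply structure.
        print("  " * depth + f"[{self.author}] {self.point}")
        for child in self.children:
            child.show(depth + 1)

# Usage: build a tiny two-person tree and print it.
root = Node("A", "EA should engage with the liberal harmony of interests view.")
b = root.reply("B", "Existing EA literature already covers conflicts of interest.")
b.reply("A", "Which text? I searched the forum and found nothing on point.")
root.show()
```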


Elliot Temple | Permalink | Messages (0)

A Non-Status-Based Filter

Asking people if they want to have a serious conversation is a way of filtering, or gatekeeping, which isn’t based on social status. Regardless of one’s status, anyone can opt in. This does require making the offer to large groups, randomized people, or something else that avoids social status. If you just make the offer to people you like, then your choice of who to offer conversations to is probably status based.

This might sound like the most ineffective filter ever. People can just say “yes I want to pass your filter” and then they pass. But in practice, I find it effective – the majority of people decline (or don’t reply, or reply about something else) and are filtered out.

You might think it only filters out people who were not going to have a conversation with you anyway. However, people often converse because they’re baited into it, triggered, defensive, caught up in trying to correct someone they think is wrong, etc. Asking people to make a decision about whether they want to be in a conversation can help them realize that they don’t want to. That’s beneficial for both you and them. However, I’ve never had one of them thank me for it.

A reason people dislike this filter is they associate all filters with status and therefore interpret being filtered out as an attack on their status – a claim they are not good enough in some way. But that’s a pretty weird interpretation with this specific filter.

This filter is, in some sense, the nicest filter ever. No one is ever filtered out who doesn’t want to be filtered out. Only this filter and variants of it have that property. Filtering on anything else, besides whether the person wants to opt in or out, would filter out some people who prefer to opt in. However, no one has ever reacted to me like it’s a nice filter. Many reactions are neutral, and some negative, but no one has praised me for being nice.

Useful non-status-based filters are somewhat difficult to come by and really important/valuable. Most filters people use are some sort of proxy for social status. That’s one of the major sources of bias in the world. What people pay attention to – what gets to them through gatekeeping/filtering – is heavily biased towards status. So it’s hard for them to disagree with high status ideas or learn about low status ideas (such as outliers and innovation).


Elliot Temple | Permalink | Messages (0)

Controversial Activism Is Problematic

EA mostly advocates controversial causes where it knows that a lot of people disagree with it. In other words, there exist lots of people who think EA’s causes are bad and wrong.

AI Alignment, animal welfare, global warming, fossil fuels, vaccinations and universal basic income are all examples of controversies. There are many people on each side of the debate. There are also experts on each side of the debate.

Some causes do involve less controversy, such as vitamin A supplements or deworming. I think that, in general, less controversial causes are better, independent of whether they’re correct. It’s better when people broadly agree on what to do, and then do it, instead of trying to proceed while a lot of opponents put effort into working against you. I think EA has far too little respect for getting widespread agreement and cooperation, and for not proceeding with action on issues where a lot of people are taking action on the other side and you have to fight against them. This comes up most with political issues but also applies to e.g. AI Alignment.

I’m not saying it’s never worth it to try to proceed despite large disagreements, and win the fight. But it’s something people should be really skeptical of and try to avoid. It has huge downsides. There’s a large risk that you’re in the wrong and are actually doing something bad. And even if you’re right, the efforts of your opponents will cancel out a lot of your effort. Also, proceeding with action when people disagree basically means you’ve given up on persuasion working any time soon. In general, focusing on persuasion and trying to make better more reasonable arguments that can bring people together is much better than giving up on talking it out and just trying to win a fight. EA values persuasion and rational debate too little.


Suppose you want to make the world better in the short term without worrying about a bunch of philosophy. You try to understand the situation you’re in, what your goal is, what methods would work well, what is risky, etc. So how can you analyze the big picture in a fairly short way that doesn’t require advanced skill to make sense of?

We can look at the world and see there are lots of disagreements. If we try to do something that lots of people disagree with, we might be doing something bad. It’s risky. Currently in the world, a ton of people on both sides of many controversies are doing this. Both sides have tons of people who feel super confident that they’re right, and who donate or get involved in activism. This is especially common with political issues.

So if you want to make the world better, two major options are:

  • Avoid controversy
  • Help resolve controversy

There could be exceptions, but these are broadly better options than taking sides and fighting in a controversy. If there are exceptions, correctly knowing about them would probably require a bunch of intellectual skill and study, and wouldn’t be compatible with looking for quicker, more accessible wins. A lot of people think their side of their cause is a special exception when it isn’t.

The overall world situation is there are far too many confident people who are far too eager to fight instead of seeking harmony, cooperation, working together, etc. Persuasion is what enables people to be on the same team instead of working against each other.

Causes related to education and sharing information can help resolve controversy, especially when they’re done in a non-partisan, unbiased way. Some education or information sharing efforts are clearly biased to help one side win, rather than focused on being fair and helpful. Stuff about raising awareness often means raising awareness of your key talking points and why your side is right. Propaganda efforts are very different than being neutral and helping enable people to form better opinions.

Another approach to resolving controversy is to look at intellectual thought leaders, and how they debate and engage with each other (or don’t), and try to figure out what’s going wrong there and what can be done about it.

Another approach is to look at how regular people debate each other and talk about issues, and try to understand why people on both sides aren’t being persuaded and try to come up with some ideas to resolve the issue. That means coming to a conclusion that most people on both sides can be happy with.

Another approach is to study philosophy and rationality.

Avoiding controversy is a valid option too. Helping people avoid blindness by getting enough Vitamin A is a pretty safe thing to work on if you want to do something good with a low risk that you’re actually on the wrong side.

A common approach people try to use is to have some experts figure out which sides of which issues are right. Then they feel safe knowing they’re right, because they trust that some smart people already looked into the matter really well. This approach doesn’t make much sense in the common case that there are experts on both sides who disagree with each other. Why listen to these experts instead of some other experts who say other things? Often people already like a particular conclusion or cause, and then find experts who agree with it. The experts provide justification for a pre-existing opinion rather than actually guiding what these people think. Listening to experts can also run into issues related to irrational, biased gatekeeping about who counts as an “expert”.

In general, people are just way too eager to pick a side and fight for it instead of trying to transcend, avoid or fix such fighting. They don’t see cooperation, persuasion or harmony as powerful or realistic enough tools. They are content to try to beat opponents. And they don’t seem very interested in looking at the symmetry of how they think they’re right and their cause is worth fighting for, but so do many people on the other side.

If your cause is really better, you should be able to find some sort of asymmetric advantage for your side. If it can give you a quick, clean victory, that’s a good sign. If it turns into a messy, protracted battle, that’s a sign that your asymmetric advantage wasn’t good enough and you shouldn’t be so confident that you know what you’re talking about.


Elliot Temple | Permalink | Messages (0)