
Critiquing an Axiology Article about Repugnant Conclusions

I proposed a game on the Effective Altruism forum where people submit texts to me and I find three errors. I wrote this for that game. This is a duplicate of my EA post. I criticize "Minimalist extended very repugnant conclusions are the least repugnant" by Teo Ajantaival.

Error One

Archimedean views (“Quantity can always substitute for quality”)

Let us look at comparable XVRCs for Archimedean views. (Archimedean views roughly say that “quantity can always substitute for quality”, such that, for example, a sufficient number of minor pains can always be added up to be worse than a single instance of extreme pain.)

It's ambiguous/confusing whether by "quality" you mean different sizes of the same quantity, as in your example (substitution between small pains and a big pain), or you actually mean qualitatively different things (e.g. substitution between pain and the thrill of skydiving).

Is the claim that 3 1lb steaks can always substitute for 1 3lb steak, or that 3 1lb pork chops can always substitute for 1 ~3lb steak? (Maybe more or less if pork is valued less or more than steak.)

The point appears to be about whether multiple things can be added together for a total value or not – can a ton of small wins ever make up for a big win? In that case, don't use the word "quality" to refer to a big win, because it invokes concepts like a qualitative difference rather than a quantitative difference.
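To make the quantitative reading concrete, here's a minimal sketch with made-up numbers (my illustration, not from the article). On an Archimedean view, however small the disvalue of a minor pain, some finite number of them adds up to more total disvalue than one extreme pain:

```python
# Made-up units, purely illustrative of the Archimedean "quantity can substitute" claim.
minor = 1            # disvalue of one minor pain
extreme = 1_000_000  # disvalue of one extreme pain

# Some finite number of minor pains sums to more disvalue than the extreme pain.
n = extreme // minor + 1
print(n, n * minor > extreme)  # 1000001 True

# A lexical view denies this: it treats the extreme pain as categorically prior,
# so no number of minor pains would count as worse.
```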

I thought it was probably about whether a group of small things could substitute for a bigger thing, but then later I read:

Lexical views deny that “quantity can always substitute for quality”; instead, they assign categorical priority to some qualities relative to others.

This seems to be about qualitative differences: some types/kinds/categories have priority over others. Pork is not the same thing as steak. Maybe steak has priority and having no steak can't be made up for with a million pork chops. This is a different issue. Whether qualitative differences exist and matter and are strict is one issue, and whether many small quantities can add together to equal a large quantity is a separate issue (though the issues are related in some ways). So I think there's some confusion or lack of clarity about this.

I didn't read linked material to try to clarify matters, except to notice that this linked paper abstract doesn't use the word "quality". I think, for this issue, the article should stand on its own OK rather than rely on supplemental literature to clarify this.

Actually, I looked again while editing, and I've now noticed that in the full paper (as linked to and hosted by PhilPapers, the same site as before), the abstract text is totally different and does use the word "quality". What is going on!? PhilPapers is broken? Also this paper, despite using the word "quality" in the abstract once (and twice in the references), does not use that word in the body, so I guess it doesn't clarify the ambiguity I was bringing up, at least not directly.

Error Two

This is a strong point in favor of minimalist views over offsetting views in population axiology, regardless of one’s theory of aggregation.

I suspect you're using an offsetting view in epistemology when making this statement concluding against offsetting views in axiology. My guess is you don't know you're doing this or see the connection between the issues.

I take a "strong point in favor" to refer to the following basic model:

1. We have a bunch of ideas to evaluate, compare, choose between, etc.
2. Each idea has points in favor and points against.
3. We weight and sum the points for each idea.
4. We look at which idea has the highest overall score and favor that.

This is an offsetting model where points in favor of an idea can offset points against that same idea. Also, in some sense, points in favor of an idea offset points in favor of rival ideas.
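Here's a minimal sketch (my own illustration, with arbitrary weights and scores) of that offsetting model of idea evaluation, to make the structure explicit:

```python
# Each idea gets weighted points for and against; the points are summed and the
# idea with the highest net score wins. Points against are offset by points in
# favor rather than having to be individually answered.

def net_score(points_for, points_against, weight=lambda p: p):
    return sum(weight(p) for p in points_for) - sum(weight(p) for p in points_against)

ideas = {
    "idea_A": net_score(points_for=[3, 2], points_against=[4]),   # net 1
    "idea_B": net_score(points_for=[5], points_against=[1, 1]),   # net 3
}

favored = max(ideas, key=ideas.get)
print(favored, ideas)  # idea_B {'idea_A': 1, 'idea_B': 3}
```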

I think offsetting views are wrong, in both epistemology and axiology, and there's overlap in the reasons for why they're wrong, so it's problematic (though not necessarily wrong) to favor them in one field while rejecting them in another field.

Error Three

The article jumps into details without enough framing about why this matters. This is understandable for a part 4, but on the other hand you chose to link me to this rather than to part 1, and you wrote:

Every part of this series builds on the previous parts, but can also be read independently.

Since the article is supposed to be readable independently, it should have explained why this matters in order to work well on its own.

A related issue is I think the article is mostly discussing details in a specific subfield that is confused and doesn't particularly matter – the field's premises should be challenged instead.

And another related issue is the lack of any consideration of win/win approaches, discussion of whether there are inherent conflicts of interest between rational people, etc. A lot of the article's topics are related to political philosophy issues (like classical liberalism's social harmony vs. Marxism's class warfare) that have already been debated a bunch, and it'd make sense to connect claims and viewpoints to that existing knowledge. I think imagining societies with different agents with different amounts of utility or suffering, fully out of context of imagining any particular type of society, or any design or organization or guiding principles of society, is not very productive or meaningful, so it's no wonder the field has gotten bogged down in abstract concerns like the very repugnant conclusion stuff with no sign of any actually useful conclusions coming up.

This is not the sort of error I primarily wanted to point out. However, the article does a lot of literature summarizing instead of making its own claims. So I noticed some errors in the summarized ideas, but that's different than errors in the article itself. To point out errors in an article itself, when it's summarizing other ideas, I'd have to point out that it has inaccurately summarized the ideas. That requires reading the cites and comparing them to the summaries, which I don't think would be especially useful/valuable to do. Sometimes people summarize stuff they agree with, so criticizing the content works OK. But here a lot of it was summarizing stuff the author and I both disagree with, in order to criticize it, which doesn't provide many potential targets for criticism. So that's why I went ahead and made some more indirect criticism (and included more than one point) for the third error.

But I'd suggest that @Teo Ajantaival watch my screen recording (below), which has a bunch of commentary and feedback on the article. I expect some of it will be useful and some of the criticisms I make will be relevant to him. He could maybe pick out some things I said and recognize them as criticisms of ideas he holds, whereas sometimes it was hard for me to tell what he believes because he was just summarizing other people's ideas. (When looking for criticism, consider: if I'm right, does that mean you're wrong? If so, then it's a claim by me about an error, even if I'm actually mistaken.) My guess is I said some things that would work as better error claims than some of the three I actually used, but I don't know which things they are. Also, I think if we were to debate, discussing the underlying premises, and whether this sub-field even matters, would actually be more important than discussing within-field details, so it's a good thing to bring up. I think my disagreement with the niche the article is working within is actually more important than some of the within-niche issues.

Offsetting and Repugnance

This section is about something @Teo Ajantaival also disagrees with, so it's not an error by him. It could possibly be an error of omission if he sees this as a good point that he would have wanted to think of but didn't. To me it looks pretty important and relevant, and problematic to just ignore like there's no issue here.

If offsetting actually works – if you're a true believer in offsetting – then you should not find the very repugnant scenario to be repugnant at all.

I'll illustrate with a comparison. I am, like most people, to a reasonable approximation, a true believer in offsetting for money. That is, $100 in my bank account fully offsets $100 of credit card debt that I will pay off before there are any interest charges. There do exist people who say credit cards are evil and you shouldn't have one even if you pay it off in full every month, but I am not one of those people. I don't think debt is very repugnant when it's offset by assets like cash.

And similarly, spreading out the assets doesn't particularly matter. A billion bank accounts with a dollar each, ignoring some administrative hassle details, are just as good as one bank account with a billion dollars. That money can offset a million dollars of credit card debt just fine despite being spread out.
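Here's the same point with made-up numbers (a sketch, not a real banking calculation): what matters for offsetting is the total, not how it's distributed across accounts:

```python
debt = 1_000_000  # credit card debt, paid off before any interest charges

num_accounts = 1_000_000_000  # a billion accounts
per_account = 1               # a dollar each
total_assets = num_accounts * per_account

# The spread-out assets offset the debt just as well as one big account would.
print(total_assets - debt >= 0)  # True
```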

If you really think offsetting works, then you shouldn't find it repugnant to have some negatives that are offset. If you find it repugnant, you disagree with offsetting in that case.

I disagree with offsetting suffering – one person being happy does not simply cancel out someone else being victimized – and I figure most people also disagree with suffering offsetting. I also disagree with offsetting in epistemology. Money, as a fungible commodity, is something where offsetting works especially well. Similarly, offsetting would work well for barrels of oil of a standard size and quality, although oil is harder to transport than money so location matters more.

Bonus Error by Upvoters

At a glance (I haven't read it yet as I write this section), the article looks high effort. It has ~22 upvoters but no comments, no feedback, no hints about how to get feedback next time, no engagement with its ideas. I think that's really problematic and says something bad about the community and upvoting norms. I talk about this more at the beginning of my screen recording.

Update after reading the article: I can see some more potential reasons the article got no engagement (too specialized, too hard to read if you aren't familiar with the field, not enough introductory framing of why this matters) but someone could have at least said that. Upvoting is actually misleading feedback if you have problems like that with the article.

Bonus Literature on Maximizing or Minimizing Moral Values

https://www.curi.us/1169-morality

This article, by me, is about maximizing squirrels as a moral value, and more generally about there being a lot of actions and values which are largely independent of your goal. So if it were minimizing squirrels or maximizing bison instead, most of the conclusions would be the same.

I commented on this some in my screen recording after the upvoters criticism, maybe 20min in.

Bonus Comments on Offsetting

(This section was written before the three errors, one of which ended up being related to this.)

Offsetting views are problematic in epistemology too, not just morality/axiology. I've been complaining about them for years. There's a huge, widespread issue where people basically ignore criticism – don't engage with it and don't give counter-arguments or solutions to the problems it raises – because it's easier to go get a bunch more positive points elsewhere to offset the criticism. Or if they already think their idea has a ton of positive points and a significant lead, then they can basically ignore criticism without even doing anything. I commented on this verbally around 25min into the screen recording.

Screen Recording

I recorded my screen and talked while creating this. The recording has a lot of commentary that isn't written down in this post.

https://www.youtube.com/watch?v=d2T2OPSCBi4


Elliot Temple on November 14, 2022
