Fights, Games, and Debates by Anatol Rapoport (1960), p. 289 (btw I read 285-309 based on a lead from a different, better book):
The reciprocal task has been proposed as the foundation of ethical debate, namely, the task of stating the opponent's case as clearly and eloquently as possible (99).
There are reasons I find this difficult or problematic.
One is that my standards of clarity and eloquence (and elegance, simplicity, generality, etc.) are often higher than my opponent’s. This leads me to ask clarifying questions because his own presentation is inadequate and unclear. Or it leads to guessing what he means and making up parts of his position myself. More broadly, it leads to me changing things.
And regardless, if I only repeated back his exact words, he wouldn’t think I understood him. So changing things is a necessary part of this task.
But when you change things, people often don’t like it. Even if it’s objectively better, they may think it’s worse. They will feel understood only if they like the changes (and don’t recognize substantive additions, even ones they like). What does that require? Cultural affinity. Rapport. Knowing and pandering to (or genuinely fulfilling) their values. It requires some knowledge of their psychology and values.
Making their position clearer often makes some flaws more apparent. They themselves never state their position in a way that reveals those flaws so clearly, and they dislike it when I do. They want me to make statements about their position which are less clear in some ways.
I often find it hard to state people’s positions when those positions are illogical. And they won’t be satisfied if my statement of their position makes the illogic very clear. They want me to give arguments from their perspective about why it is actually logical, rather than give my own opinion. OK, but how can I do that when none of their arguments make sense?
There is a trick to talking like they do and glossing over certain issues, which is developed by exposure to their subculture. But learning that trick is quite a different thing than merely listening to their reasoning and debate statements and understanding what they said. Being able to satisfactorily pose as one of them, by having or imitating the same blind spots, is a huge and unreasonable burden to put on debate partners. And no one asks for that openly. They merely ask you to state their position in a way which is acceptable to them. And then they don’t analyze what that means, in general, or how it’s problematic. It sounds so reasonable to people, but I’ve never seen them analyze it further.
Similarly the “principle of charity” and “steelmanning” (opposite of strawmanning) is popular among rationality-oriented people but I never see them do any analysis of the major difficulties with it. If someone says something that you think is wrong in all reasonable interpretations, then what is the charitable interpretation?
Similarly, the Rapoport book seems to think listening to and understanding others is great and a big improvement over what people usually do. And that may well be so. But to me it’s basic, and I have experience running into more advanced problems, like understanding people too well and having too much insight into their flaws and contradictions. Part of the problem is they don’t like it. And part is that they don’t understand it. If you know more about them than they do, then you know stuff that they don’t, so they have to learn new things in order to understand what you’re talking about. This stuff works better when you’re on a similar intellectual level with the other guy, and when you start out ignorant, so you can just listen, find out some ideas you didn’t know before, and then state them to show that you now know what they are instead of still not knowing. For simplistic people that’s pretty good and way better than many alternatives. But at a deeper level, with more powerful thinking abilities, part of that is just basic stuff that should be taken for granted (of course one understands first and then engages with statements in the discussion, rather than not listening), and part actually doesn’t work well.
People have demanded of me that I express their position, but those same people are unsympathetic when I say things like: “I have been unable to find any literature on your side which expresses your position in a reasonable way, and I don’t know how to fix it to be reasonable; I think it’s wrong. What literature do you endorse?” In their minds, that doesn’t seem to qualify as properly listening, even though often part of what’s going on is that I’ve read more of their side’s literature than they have, and I know more about their side than they do.
Whenever I talk to inductivists, for example, I always want them to state a clear position, and I find them very ignorant of what induction is and unable to present any clear epistemological position. And when I try to state their position for them, I can successfully give an inductivist position, but I have no way to give theirs specifically, since there is so much variation in inductivist positions, as well as dishonest goal-post moving (they might deny something is their position, and adjust their position, if my clear explanation starts revealing flaws in it or merely makes it seem vulnerable by being too well defined without enough wiggle room).
How do you charitably explain someone’s position when part of their position is to dishonestly use ambiguity to sabotage debate? How do you charitably state someone’s position when a major part of their position is irrational strategies for entrenching bad ideas against criticism? They will think it’s very uncharitable to believe that about them, but what is the alternative? To think they are doing it on purpose, maliciously? That’d be more uncharitable. They want me to conclude that they are good and rational, but that position is sometimes not honestly available to me given my best effort at objective analysis. And they hate that and don’t want to talk with me because I’m too insightful, and my reasons for my conclusions, which they find threatening, are too compelling, reasonable, objective, logical, detailed, etc.
As I’ve gotten better at debate, at understanding people, etc., I’ve found a lot of dishonesty, evasion, vagueness on key issues, and other problems. And they want me to just ignore that – ignore the reasons I see the discussion failing – and help them keep up pretenses.
They want me to state their position, and make it coherent, and make it not wrong. They don’t want me to clearly nail down their position and then refute it. Even if they agree that I stated their position correctly, they’ll usually try to walk that back after I refute what I stated. They will suddenly realize, when my version is clearly false, that it doesn’t match their position at all… They will start making ad hoc modifications to their position, moving the goal posts, etc.
That doesn’t happen every time. But one of the major factors is their social comfort: having rapport with them, them not being in a situation where they feel threatened by potential loss of social status, them not feeling attacked, etc. Getting good reactions depends a lot on that problematic stuff, not just on listening or being logical. But even saying that is kind of an accusation that the other guy is not logical, pursues social status, etc., which they might take as offensive, threatening and uncharitable, even if it’s true.
Similarly, Rapoport talks about stating conditions under which the opponent’s position is correct and I’d agree with it (stating its region of validity). These can be real situations or just hypothetical ones. On what premises would I agree with them? And that makes sense when everyone has flawed compromise positions and you can (or should be able to) see some merits to the other side too. But what about when I think I have decisively correct ideas which are already optimized for all cases? Then any deviation is just bad.
I already aimed to take into account all the standard views and make something strictly better. So compared to one standard view, I can give another standard view some credit for having some merits. But compared to my more sophisticated view, which already considered and learned what it could from all the standard views, I can’t give their standard view any credit – it’s just strictly worse in all ways as far as I know (and if that weren’t my best understanding, I’d change my position until it was – I choose my positions so that it’s the case).
The only premises on which I’d accept their view are (in my opinion) false, and often quite severe/drastic/weird/unintuitive. For example, epistemology would have to be different, which could mean changing the laws of physics so that information processing works differently – and then, if you do that in just the right way, so that you can throw out our current logic and concepts like true and false, one of their ideas would work. It’d have to be a just-so story, though. The vast majority of large changes to physics would make their view incoherent. But if the change to physics is designed just right, it could make their idea happen to work. You could look through millions of different laws of physics until you find one where their position gets lucky and accidentally works well. They won’t appreciate that kind of charity, though (and it isn’t really charitable to think they are “right” in special cases of that nature – it’s just disagreeing with them and considering them wrong).
Rapoport says debates fail a ton because people won’t listen to each other. And that’s true. But I listen and people often don’t like it. They don’t want me to ask clarifying questions that they can’t answer which reveal they don’t have a clear view of their own position. They don’t want me to understand their position, their motive, their relevant psychological flaws, their errors, their defense mechanisms, etc. And they don’t want to talk about all that either. They get angry and defensive if I point some of that stuff out. Often they have little understanding of themselves, and little ability to reason, so they can’t even tell if I’m right. But they want to keep up a pretense of having great self-knowledge.
There is a widespread belief that people’s statements about their own psychology have strong authority and should be accepted basically without question. It’s also widely believed that knowing things about the psychology of others is very, very hard and that it’s arrogant to try. So then they think I’m being unreasonable to form an opinion about what’s going on in their head when we debate. Then they make a statement flatly contradicting me, and they think that settles it and proves me wrong. But they don’t give an argument, and they don’t understand how to objectively analyze their mental states, or how to provide evidence about their mental states which is more raw than their conclusions or opinions and is therefore more appropriate to analyze.
What do they want me to do? Not form a mental model of them when we talk? Not have any empathy or consider their feelings, motives or unstated reasoning at all? Not engage in any meta discussion? I’ve tried those things too, and I’ve found that people hate it. They want me to make some meta statements and to have some mental model of them, including their ideas and interests. A lot of what they are asking for with listening, charity, etc., is that I do pay attention to them, understand their ideas, mentally model them instead of not understanding their perspective, etc. But they want me to do all that and then agree with them.
They think that if I mentally model them and listen then I’ll see they’re right.
If I do that stuff – listen, mentally model them well, be charitable, etc. – and reach a different conclusion, like that they’re wrong, that’s extra threatening. The better I listen, and the more fair and objective I am, the more insulting it is when I conclude that they’re wrong. Especially if I conclude they’re fully, decisively wrong, instead of having some good points that can work with a few modifications. The more rational and high quality my analysis, the more threatening my continued disagreement with them is. The better a listener I am, the worse it is for them if I still think they’re wrong (and it’s even worse if I think it’s a big deal that’s harming their life, rather than an irrelevant, minor issue).
Another thing people do is claim it’s boring for you to state their case correctly. Why bother stating obvious truths that they already know? For example, on HBL (an Objectivist group), I wrote some arguments against Popper before advocating CR ideas. No one later said “Since you do see flaws in Popper, we’ll take it seriously when you agree with some of his ideas.” No one praised or appreciated my rationality. Instead they got confrontational. Why bring up Popper? We already hate him. Why talk about him? He’s out-group. They didn’t use those words, but they didn’t like my rational criticism of Popper. And I guess rational criticism of Popper is not their position. They never do it… They never point out real flaws in accurate quotes of Popper’s writing. Their real position is that Popper should be flamed and stigmatized, and that one shouldn’t ever listen to what Popper said and engage with it, not even to critique Popper’s errors. They don’t hide their prejudices very well. But they did pretend to agree with, and be bored by, some of my criticism – to not see the point – rather than recognizing it as analysis that they’d never do themselves, which they dislike because it’s not hostile enough towards Popper: it merely refuted some of his errors objectively.
Rapoport talks about an English speaker who helped a Russian with a speech (keep in mind that this book came out in 1960). The Russian said that Russia wants only peace, but when he said it he sounded very hostile and aggressive. She coached him to say it in a different tone. Rapoport thinks this is wonderful. I think it’s dangerous to help war-mongering liars fool more people. It’s problematic to tell them how to lie more effectively. She didn’t merely listen to and understand his case. Rapoport thinks she helped him state his case most effectively. But he isn’t analyzing deeply. There are different types of effectiveness. She didn’t help him use logic more effectively. She helped him be more effective at lying.
Helping people use logic and reason more effectively, to help them make their cases better, works in a short-term, local way. But if you do it too well, as I strive to, then global optima matter. Ultimately, if they are wrong and take the issues seriously and pursue them with high standards, then they will conclude they are wrong. They may think logic is on their side, but the more extensive the analysis done, the harder it may be for them to keep thinking that. As your logical reasoning becomes more powerful and developed, and takes into account more issues, you can go from an ally who is helping them be more logical to someone who is challenging their conclusions.
If they’re wrong, helping them reason better ultimately won’t lead to them making the case more effectively but to them changing their mind (or irrationally refusing to). This counters the goal of being on their side (on the side of their current conclusion, which they’re attached to).
I run into this. If I try to share all the logical analysis I know, starting from their premises, I end up concluding they are wrong. I can go through the process, from their perspective, of rationally changing their mind. But they just interpret that as an especially effective, scary attack, not as something they want to listen to that shows I listened to them. The problem is I used charitable premises and logic, but their real views are irrational, so the mind-changing process I wrote down won’t work for them. They will not change their mind like that. But they should. So it reveals that they’re bad, and that they’re wrong (or lying) about what their premises are, their commitment to rationality, etc. It ends up being a debate about whether they are flawed. But I don’t know how to dumb down my analysis so that they’ll reliably like it, and keep them consistently liking it, even as I progress the discussion to point out some of their errors and ways they should change.
People don’t like it when you’re super reasonable, a super good listener, and you show how their premises and points lead to your conclusions, not theirs. Unless they decide to actually change their mind. But they often don’t. And whether they are willing to change their mind generally depends on things other than what the truth is or how rational or good my analysis is.
I can try really hard to accommodate reasonable requests or concerns and reach a conclusion that doesn’t trample on or compromise anything good. Generally other people have nothing to add to that, since their ideas repeat other ideas I’ve encountered before (or sometimes thought of myself). But what I can’t accommodate is them being unreasonable and illogical. And they can’t and won’t just accept rational explanations. So we get stuck in a way that’s damning to them, and these debating techniques don’t work well and don’t fix things.
Broadly, the issue is that the more I do rational things that should work and should get positive responses, the more it pressures people (who care about rationality, or say they do) and makes me look better than them. The more I do all the right, rational things, the harder it is for them to come up with any excuses not to listen. So they fall apart. High quality rationality doesn’t leave space for them to keep having dumb views.
People want to be listened to, and interpreted charitably, because they think they will win. They think that the reason their arguments aren’t working is that other people don’t listen, are ignorant, are biased, are uncharitable, etc. If you do all the right things that they ask for, and still disagree with them, they often really, really hate it. It reveals to them what a fraud they are. It forces their rationalizing into overdrive. It gives them a big challenge to cope with when all they wanted was for you to change.
The book gets pretty silly and unrealistic. It says you can ~always find some region of validity for stuff people say. OK so far. And it gives various often-pedantic examples.
For example, if someone says paper is thick, you can agree that it’s thick compared to the sides of a soap bubble.
But the guy saying paper is thick probably never thought about soap bubbles… You’re partly agreeing with him in some technical sense, but what you’re really doing is denying that paper is as thick as cardboard, and insisting on the usual view of paper, which you’re convinced is true. You aren’t listening to where he disagrees with the usual view, or why. Rapoport is actually arrogant and a bad listener. He doesn’t take seriously that people get stuff wrong. He wants you to listen by finding some exception where their statement is true so you can grant them something. But that isn’t what they wanted or meant. They weren’t saying it because they know of that one exception while also agreeing that their statement is wrong in general. They made their statement because they like it in some kinda general way.
He gave an extended example with an arithmetic based on rotating dice in order to rescue some mathematical error as having a region of validity. But none of that shows any understanding of the perspective of the guy who is making a basic math error. It’s saying a bunch of stuff he doesn’t understand, while also assuming the whole time that the actual thing he meant is false as he meant it – and without even acknowledging that you’re disagreeing with him and think he’s wrong. Instead you bizarrely try to agree with him about something he didn’t say, didn’t mean, and doesn’t understand. So you talk circles around him, and never inquire about what he really meant, and never explain why it’s wrong.
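(For a flavor of the general move – a simpler analogy of my own, not Rapoport’s actual dice construction – take a plainly false equation and find a nonstandard system where it comes out true: “2 + 2 = 1” is false in ordinary arithmetic but true in arithmetic mod 3, since 4 ≡ 1 (mod 3). Now the error has a “region of validity”, even though, as the guy meant it, it’s simply wrong.)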
This stuff in the book from 1960 all reminds me of Less Wrong (LW) people. They seem like intellectual descendants of it (they aren’t the only ones, and maybe there were other similar books that were more influential – I haven’t done any kind of comprehensive review of which books had a leadership role and which just repeated a trend). But LW lacks the intellectual leadership to go read old books to try to understand where their ideas come from. They are a disorganized community that does a poor job of citing its sources and influences, with some specific exceptions that they do credit, like Bayes. And there’s no one in particular at LW to talk to about disagreements.
Part of the premise of this book, when it talks about debate, is that there are some people willing to talk. It’s all about what to say in a debate instead of the prior problem that people don’t want to have the conversation in the first place. Often they’d rather go debate someone else who agrees with them more or who they think they can beat. They don’t want to go through an elaborate procedure that requires them to try to understand my detailed and challenging views. Nor do they want to try to clarify their views to my satisfaction.
They don’t want me to ask too many questions about what they think. They don’t want me to show them up by taking their views more seriously than they do. (BTW Gigahurt said he liked that but in fact stopped talking with me.) They don’t want to face a bunch of hard, detailed questions. They want me to adopt their attitude of believing stuff with limited detail. They want me to adopt their low standards, not try to bring my standards to their position. Their low standards are idiosyncratic, inconsistent, illogical, unpredictable, etc., so even if I wanted to I could not adopt them without either a ton of questions they’ll hate or else a bunch of cultural assimilation over time. (Related to cultural assimilation – I remember being mocked at LW for mentioning that I had read HPMOR – a very long work, written by an LW leader, that is highly relevant for understanding and assimilating to LW culture.)
Another Rapoport book example is a man who says “black is white”, and you’re supposed to agree regarding photographic negatives. But he wasn’t thinking of negatives, so with that response you aren’t listening and still have no idea what he meant or why he said it. Maybe he did want to shock or be contrarian (whereas Rapoport wanted to assume, in the interests of charity, that he didn’t). Maybe he’ll say “all colors are just light so it’s all the same” or “both have high contrast so they’re broadly equivalent – you can use either one and it doesn’t matter” or “both lack any real colors like green or blue; they’re perfectly neutral, so they can’t actually be different; seeing them differently is an illusion of our eyes.” People often have dumb or weird ideas that are different than your attempt to rescue their position in some special case while continuing to deny it in general. You can’t know what he thinks without asking more and listening in a different way than Rapoport suggests.
Rapoport says it’s good to steer the discussion to organized analysis of contexts of validity. But that is threatening to people who didn’t take such things into account when forming their views, and who are going to consistently lose arguments when they do that kind of analysis with a clever intellectual. This is all a way to steer the debate to them losing, while giving them some fake, pseudo credit/acknowledgment/sympathy but not really listening to what they actually meant. When you try to reinterpret their errors as special-case truths instead, you are not listening to them, and you are erasing their actual ideas from the conversation. This material is deeply insensitive, arrogant and condescending while presenting itself as being very nice, non-threatening, taking other people seriously, really listening, etc. This stuff is nasty. It helps experts and intellectuals put on a show of rationality, be mean to people, and appear to win, while seeming to be good listeners and kind people, etc., when they really aren’t.