Social Dynamics Summary Notes

These are some summary notes on social dynamics.

  • conformity
    • try to fit in
    • pandering
    • pleasing people
    • avoiding conflict
    • do whatever the group thinks is high status
      • follow trends
    • you need to already have friends. people are impressed by people who other people already like (pre-selection).
    • have standard interests like the right TV shows, music, movies and sports. talk about those. don’t say weird stuff.
      • you’re allowed to have interests people think they should have. lots of people think they should be more into art galleries and operas. you can talk about that stuff to people who don’t actually like it but pretend they want to; they’ll be impressed that you actually do stuff which seems a bit inaccessible but valuable to them.
  • law of least effort
    • being chased, not chasing
    • people come to you
    • opposite of tryhard
    • less reactive
      • don’t get defensive or threatened (important for confidence too)
      • hold frame without showing visible effort
      • but also don’t let people get away with attacking you
    • when you attack people, it should seem like the conflict isn’t your fault, was just a minor aside, no big deal to you, preferably you weren’t even trying to attack them
    • people do what you say
    • you don’t have to do what other people say
    • you generally aren’t supposed to care that much about stuff. instead, be kinda chill about life
      • if you get ahead while appearing this way, it looks like success comes naturally to you. that impresses people. (it should not look like you got lucky)
  • confidence
    • hide weakness
    • pretend to be strong
    • know what you’re doing, have a strong frame, have goals
    • be able to lead
    • best to already be leader of your social group, or at least high up like second in command
  • value
    • DHVs (demonstrations of higher value, e.g. mentioning high value things in passing while telling a story)
    • money, popularity, fame, social media followers, loyal friends, skills, knowledge, SMV (sexual market value, e.g. looks)
    • abundance mentality
    • well spoken, know other languages, can play an instrument or sing, cultured, can cook, etc.
    • virtues like being moral and enlightened are important. these are group specific. some groups value environmentalism, being woke, anti-racist signaling, inclusive attitudes towards various out groups and low status people (poor people, immigrants, disabled, homeless, drug addicts), etc. other groups value e.g. patriotism, toughness, guns, Christianity and limited promiscuity.
  • trend setting
    • this is hard and uncommon but possible somehow
    • mostly only available for very high status people (~top status in a small group can work; it doesn’t have to be overall societal status)
  • non-verbal communications
    • clothes send social signals
    • voice tones
    • eye contact
    • body language
    • posture
    • leaning in or having people lean toward you
  • congruence
    • do not ever get caught faking social stuff; that looks really bad
  • compliance
    • getting compliance from other people, while expending low effort to get it, is socially great.
      • it can especially impress the person you get compliance from, even more than the audience
  • plausible deniability
    • there are often things (communications, actions) that a group understands but won’t admit that they understand the meaning of
    • there are ways to insult someone but, if called on it, deny you were attacking them, and most people will accept your denial
    • there are subtle, tricky rules about what is considered a covert attack that you’re allowed to deny (or e.g. a covert way to ask someone on a date, which you’re allowed to deny was actually asking them out if they say no) and what is an overt attack so denials would just make you look ridiculous.
    • social combat heavily uses deniable attacks. deniability is also great for risky requests
    • you’re broadly allowed to lie, even if most people know you’re lying, as long as it isn’t too obvious or blatant, so it’s considered deniable
    • basically social dynamics have their own rules of evidence about what is publicly, socially known or established. and these rules do not match logic or common analytical skill. so what people know and what is socially proven are different. sometimes it goes the other way too (something is considered socially proven even though people don’t know whether or not it’s true).
      • many social climbing techniques use the mismatch between what is socially known to the group and what is actually known to individuals. it lets you communicate stuff so that people understand you but, as far as the social group is concerned, you never said it.

Overall, high status comes from appearing to fit in effortlessly (because you want to, not because you’re pushed into it) and from not having social problems, weaknesses or conflicts. You can also gain status from having something valuable, e.g. money, looks, fame, followers or access to someone famous. Except in extreme cases, you still need to do pretty well at social skill even when you have value. Value is an advantage, but acting low status can matter more than the value. If you have a billion dollars or you’re a movie star, you can get away with a ton and people will still chase you, but if you just have a million dollars or you’re really hot, then you can’t get away with so much.

Desired attitude: You have your own life going on, which you’re happy with. You’re doing your thing. Other people can join you, or not. It isn’t that big a deal for you either way. You don’t need them. You have value to offer, not to suck up to them, but because your life has abundance and has room for more people. You already have some people and aren’t a loner. You’d only consider doing stuff with this new person because they showed value X – you are picky and saw something good about them; you wouldn’t be interested in just anyone. (Elicit some value from people and mention it so it seems like you’re looking for people with value to offer. You can do this for show, or you can do it for real if you have abundance. Lots of high status stuff is acting like what people think a person with a great life would do, whether you have that or not. Fake it until you make it!)

People socially attack each other. In this sparring, people gain and lose social status. Insults and direct attacks are less common because they’re too tryhard/reactive/chasing. It’s better to barely notice people you don’t like and be a bit dismissive and condescending (without being rude until they’re overtly rude first; even then, if you can handle it politely while making them look bad, that’s often better).

If you sit by a wall and lean back, you look more locked in and stable, so it appears that people are coming to you. Then you speak just slightly too softly to get people to lean in to hear you better, and now it looks like they care what you say and they’re chasing you.


These notes are incomplete. The responses I’d most value are some brainstorming about other social dynamics or pointing out data points (observed social behaviors) which aren’t explained by the above. Alternatively if anyone knows of a better starting point which already covers most of the above, please share it.


View on Less Wrong.


Elliot Temple | Permalink | Messages (11)

The Law of Least Effort Contributes to the Conjunction Fallacy

Continuing the theme that the “Conjunction Fallacy” experimental results can be explained by social dynamics, let’s look at another social dynamic: the Law of Least Effort (LoLE).

(Previously: Can Social Dynamics Explain Conjunction Fallacy Experimental Results? and Asch Conformity Could Explain the Conjunction Fallacy.)

The Law of Least Effort says:

the person who appears to put the least amount of effort out, while getting the largest amount of effort returned to him by others, comes across as the most socially powerful.

In other words, it’s higher status to be chased than to chase others. In terms of status, you want others to come to you, rather than going to them. Be less reactive than others.

Visible effort is a dominant issue even when it’s easy to infer effort behind the scenes. Women don’t lose status for having publicly visible hair and makeup which we can infer took two hours to do. You’re not socially permitted to call them out on that pseudo-hidden effort. Similarly, people often want to do learning and practice privately, and then appear good at stuff in front of their friends. Even if you can infer that someone practiced a bunch in private, it’s often socially difficult to point that out. Hidden effort is even more effective when people can’t guess that it happened or when it happened in the past (particularly childhood).

To consider whether LoLE contributes to the Conjunction Fallacy experimental results, we’ll consider three issues:

  1. Is LoLE actually part of the social dynamics of our culture?
  2. If so, would LoLE be active in most people while in the setting of Conjunction Fallacy research?
  3. If so, how would LoLE affect people’s behavior and answers?

Is LoLE Correct Today?

LoLE comes from a community where many thousands of people have put a large effort into testing out and debating ideas. It was developed to explain and understand real world observations (mostly made by men in dating settings across many cultures), and it’s stood up to criticism so far in a competitive environment where many other ideas were proposed and the majority of proposals were rejected.

AFAIK LoLE hasn’t been tested in a controlled, blinded scientific setting. I think academia has ignored it without explanation so far, perhaps because it’s associated with groups/subcultures that are currently being deplatformed and cancelled.

Like many other social dynamics, LoLE is complicated. There are exceptions, e.g. a scientist or CEO may be seen positively for working hard. You’re sometimes socially allowed to put effort into things you’re “passionate” about or otherwise believed to want to work hard on. But the broad presumption in our society is that people dislike most effort and avoid it when they can. Putting in effort generally shows weakness – failure to avoid it.

And like other social dynamics, LoLE is highly prevalent, but not everyone prioritizes social status all the time. Also, people often make mistakes and act in low social status ways.

Although social dynamics are subtle and nuanced, they aren’t arbitrary or random. It’s possible to observe them, understand them, organize that understanding into general patterns, and critically debate it.

Is there a rival theory to LoLE? What else would explain the same observations in a different way and reject LoLE? I don’t know of something like that. I guess the main alternative is a blue pill perspective which heavily downplays the existence or importance of social hierarchies (or makes evidence-ignoring claims about them in order to virtue signal) – but that doesn’t make much sense in a society that’s well aware of the existence and prevalence of social climbing, popularity contests, cliques, ingroups and outgroups, etc.

Would LoLE Be Active For Conjunction Fallacy Research?

People form habits related to high status behaviors. For many, lots of social behavior and thinking is intuitive and automatic before high school.

People don’t turn off social status considerations without a significant reason or trigger. The Conjunction Fallacy experiments don’t provide participants with adequate motivation to change or pause their very ingrained social-status-related habits.

Even with a major reason and trigger, like Coronavirus, we can observe that most people still mostly stick to their socially normal habits. If people won’t context switch for a pandemic, we shouldn’t expect it for basically answering some survey questions.

It takes a huge effort and established culture to get scientists to be less social while doing science. And even with that, my considered opinion is that over 50% of working scientists don’t really get and use the scientific, rationalist mindset. That’s one of the major contributors to the replication crisis.

How Would LoLE Affect Answers?

Math and statistics are seen as high effort. They’re the kinds of things people habitually avoid due to LoLE as well as other social reasons (e.g. they’re nerdy). So people often intuitively avoid that sort of thinking even if they could do it.

Even many mathematicians or statisticians learn to turn that mindset off when they aren’t working because it causes them social problems.

LoLE encourages people to try to look casual, chill, low effort, even a little careless – the opposite of tryhard. The experimental results of Conjunction Fallacy research fit these themes. Rather than revealing a bias regarding how people are bad at logic, the results may simply reveal that social behavior isn’t very logical. Behaving socially is a different thing than being biased. It’s not just an error. It’s a prioritization of social hierarchy issues over objective reality issues. People do this on purpose and I don’t think we’ll be able to understand or address the issues without recognizing the incentives and purposefulness involved.


View this post on Less Wrong.


Elliot Temple | Permalink | Messages (3)

Asch Conformity Could Explain the Conjunction Fallacy

I also posted this on Less Wrong.


This post follows my question Can Social Dynamics Explain Conjunction Fallacy Experimental Results? The results of the question were that no one provided any research contradicting the social dynamics hypothesis.

There is research on social dynamics. Asch’s conformity experiments indicate that wanting to fit in with a group is a very powerful factor that affects how people answer simple, factual questions like “Which of these lines is longer?” People will knowingly give wrong answers for social reasons. (Unknowingly giving wrong answers, e.g. carelessly, is easier.)

Conformity and other social dynamics can explain the conjunction fallacy experimental data. This post will focus on conformity, the dynamic studied in the Asch experiments.

This post assumes you’re already familiar with the basics of both the Asch and Conjunction Fallacy research. You can use the links if you need reminders.

First I’ll talk about whether conformity applies in the Conjunction Fallacy research setting, then I’ll talk about how conformity could cause the observed results.

Conformity in Groups

The Asch Conformity Experiments have people publicly share answers in a group setting. This was designed to elicit conformist behavior. Should we also expect conformist behavior in a different setting like the Conjunction Fallacy experiments setting? I suspect the different setting is a major reason people don’t connect the Asch and Conjunction Fallacy results.

I haven’t seen specific details of the Conjunction Fallacy research settings (in the text I read, details weren’t given) but I think basically people were given questionnaires to fill out, or something close enough to that. The setting is a bit like taking a test at school or submitting homework to a teacher. Roughly: Someone (who is not a trusted friend) will look over and judge your answers in some manner. In some cases, people were interviewed afterwards about their answers and asked to explain themselves.

Is there an incentive to conform in this kind of situation? Yes. Even if there was no peer-to-peer interaction (not a safe assumption IMO), it’s possible to annoy the authorities. (Even if there were no real danger, how would people know that? They’d still have a reasonable concern.)

What could you do to elicit a negative reaction from the researchers? You could take the meta position that your answers won’t impact your life and choose the first option on every question. Effort expended on figuring out good answers to stuff should relate to its impact on your life, right? This approach would save time but the researchers might throw out your data, refuse to pay you, ask to speak with you, tell your professors about your (alleged) misbehavior (even if you didn’t violate any written rule or explicit request), or similar. You are supposed to abide by unwritten, unstated social rules when answering conjunction fallacy related questions. I think this is plenty to trigger conformity behaviors. It’s (like most of life) a situation where most people will try to get along with others and act in a way that is acceptable to others.

Most people don’t even need conformity behavior triggers. Their conformity is so automatic and habitual that it’s just how they deal with life. They are the blue pill normies, who aren’t very literal minded, and try to interpret everything in terms of its consequences for social status hierarchies. They don’t think like scientists.

What about the red pill autists who can read things literally, isolate scenarios from cultural context, think like a scientist or rationalist, and so on? Most of them try to imitate normies most of the time to avoid trouble. They try to fit in because they’ve been punished repeatedly for nonconformity.

(Note: Most people are some sort of hybrid. There’s a spectrum, not two distinct groups.)

When attending school people learn not to take questions (like those posed by the conjunction fallacy research) hyper literally. That’s punished. Test and homework questions are routinely ambiguous or flawed. What happens if you notice and complain? Generally you confuse and annoy your teacher. You can get away with noticing a few times, but if you complain about many questions on everything you’re just going to be hated and punished. (If people doubt this, we could analyze some public test questions and I can point out ambiguities and flaws.)

If you’re the kind of person who would start doing math when you aren’t in math class, you’ve gotten negative reactions in the past for your nonconformity. Normal people broadly dislike and avoid math. Saying “Hmm, I think we could use math to get a better answer to this” is a discouraged attitude in our culture.

The Conjunction Fallacy research doesn’t say “We’re trying to test your math skills. Please do your best to use math correctly.” Even if it did, people routinely give misleading information about how literal/logical/mathematical they want things. You can get in trouble for using too much math, too advanced math, too complicated math, etc., even after being asked to use math. You can very easily get in trouble for being too literal after being asked to be literal, precise and rigorous.

So people see the questions and know that they generally aren’t supposed to sweat the details when answering questions, and they know that trying to apply math to stuff is weird, and most of them would need a large incentive to attempt math anyway, and the more rationalist types often don’t want to ruin the psychology study by overthinking it and acting weird.

I conclude that standard social behavior would apply in the Conjunction Fallacy research setting, including conformity behaviors like giving careless, non-mathematical answers, especially when stakes are low.

How Does Conformity Cause Bad Math?

Given that people are doing conformity behavior when answering Conjunction Fallacy research questions, what results should we expect?

People will avoid math, avoid being hyper literal, avoid being pedantic, not look very closely at the question wording, make normal contextual assumptions, and broadly give the same sorta answers they would if their buddy asked them a similar question in a group setting. Most people avoid developing those skills (literalism, math, ambiguity detection, consciously controlling the context that statements are evaluated in, etc.) in the first place, and people with those skills commonly suppress them, at least in social situations if not throughout life.

People will, as usual, broadly avoid the kinds of behaviors that annoy parents, teachers or childhood peers. They won’t try to be exact or worry about details like making probability math add up correctly. They’ll try to guess what people want from them and what other people will like, so they can fit in. They’ll try to take things in a “reasonable” (socially normal) way which uses a bunch of standard background assumptions and cultural defaults. That can mean e.g. viewing “Linda is a bank teller” as information a person chose to tell you, rather than as an out-of-context factoid chosen randomly by a computer, as I proposed previously.

Conformity routinely requires making a bunch of socially normal assumptions about how to read things, how to interpret instructions, how to take questions, etc. This includes test questions and similar, and most people (past early childhood) have past experiences with this. So many people won’t take conjunction fallacy questions literally.

People like the college students used in the research have taken dozens of ambiguous tests and had to figure out how to deal with it. Either they make socially normal assumptions (contrary to literalism and logic) without realizing they’re doing anything, or they noticed a bunch of errors and ambiguities but figured out a way to cope with tests anyway (or a mix like only noticing a few of the problems).

Conclusions

Conformity isn’t a straight error or bias. It’s strategic. It has upsides. There are incentives to do it and continue doing it (as well as major costs to transitioning to a different strategy).

If this analysis is correct, then the takeaway from the Conjunction Fallacy shouldn’t be along the lines of “People are bad at thinking.” It should instead be more like “People operate in an environment with complex and counter-intuitive incentives, including social dynamics.”

Social status hierarchies and the related social behaviors and social rules are one of the most important features of the world we live in. We should be looking to understand them better and apply our social knowledge more widely. They’re causally connected to many things, especially interactions between people, like the interpretations of communications and requests from others that are present in Conjunction Fallacy research.


Elliot Temple | Permalink | Messages (3)

Can Social Dynamics Explain Conjunction Fallacy Experimental Results?

I posted this on Less Wrong too.


Is there any conjunction fallacy research which addresses the alternative hypothesis that the observed results are mainly due to social dynamics?

Most people spend most of their time thinking in terms of gaining or losing social status, not in terms of reason. They care more about their place in social status hierarchies than about logic. They have strategies for dealing with communication that have more to do with getting along with people than with getting questions technically right. They look for the social meaning in communications. E.g. people normally try to give – and expect to receive – useful, relevant, reasonable info that is safe to make socially normal assumptions about.

Suppose you knew Linda in college. A decade later, you run into another college friend, John, who still knows Linda. You ask what she’s up to. John says Linda is a bank teller, doesn’t give additional info, and changes the subject. You take this to mean that there isn’t more positive info. You and John both see activism positively and know that activism was one of the main ways Linda stood out. This conversation suggests to you that she stopped doing activism. Omitting info isn’t neutral in real world conversations. People mentally model the people they speak with and consider why the person said and omitted things.

In Bayesian terms, you got two pieces of info from John’s statement. Roughly: 1) Linda is a bank teller. 2) John thinks that Linda being a bank teller is key info to provide and chose not to provide other info. That second piece of info can affect people’s answers in psychology research.
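Here’s a minimal sketch of that update in Python. All the numbers (the prior and both likelihoods) are made-up assumptions for illustration, not measurements:

```python
# Hypothesis A: Linda still does activism.
# Evidence E: John says "Linda is a bank teller" and offers nothing else.
# All numbers below are illustrative assumptions, not data.

p_A = 0.5              # prior that Linda kept up her activism
p_E_given_A = 0.1      # if she had, John would probably have mentioned it
p_E_given_not_A = 0.8  # if she hadn't, mentioning only "bank teller" is natural

# Bayes' theorem: P(A|E) = P(E|A)P(A) / [P(E|A)P(A) + P(E|~A)P(~A)]
posterior = (p_E_given_A * p_A) / (
    p_E_given_A * p_A + p_E_given_not_A * (1 - p_A)
)
print(f"P(still an activist | John's statement) = {posterior:.2f}")  # ~0.11
```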

So, is there any research which rules out social dynamics explanations for conjunction fallacy experimental results?


Elliot Temple | Permalink | Messages (6)


Example of Rejecting TOC Improvements

(I also posted this on Less Wrong.)

Below I quote from Process of On Going Improvement forum, letter 6. Eli Goldratt shares a letter he received. I added a few notes to help people follow acronyms.

My question is: Does anyone know of any applications of Less Wrong philosophy to a situation like this? How can LW ideas about rationality explain or fix this sort of problem? The scenario is that someone tried to use rational thinking to make business improvements, was highly successful (which was measured and isn't in dispute), but nevertheless has met so much ongoing resistance that he's at the point of giving up.

I am no expert in TOC but I believe my recent experiences are relevant to what you are writing about.

TOC = Theory Of Constraints. Summary.

About 2 years ago I started on my TOC adventure. Read everything I could get a hold of, etc. Tried to get the company interested, etc. In fact, I finally got them interested enough that we had multiple locations participate in the satellite sessions and had enough for three facilitators (myself included). However, I could never get the company to spend for training at AGI. So, in the old air cav fashion, I felt it was up to me to make it happen.

Last year we had real problems with cost, service, high inventory, etc. My plant, I am the plant manager, was being analyzed for a possible shutdown or sell off. We were asking for 17 machines at about $300,000 each due to "lack of capacity" and we were being supplemented by outside producers.

Again, I am not a TOC expert. Basically my exposure has been reading and researching and building computer simulations to understand. But I put on TOC classes for all of my associates (200). I spent 8 hours with each of these associates in multiple classes. We talked about the goal, TIOE, we played the dice game (push, KanBan, DBR) with poker chips, paper clips, and different variations of multiple sided dice and talked about its impact, etc.

The Goal (summary) is a book by Eli Goldratt that has sold over 6 million copies.

TIOE = Throughput, Inventory and Operating Expense. These are the measurements Goldratt recommends.

The dice game is explained in The Goal. It's also now taught by e.g. MIT (section 3-2).

DBR = drum buffer rope. It's about coordinating activities around the bottleneck/constraint.

Last summer we started development on DBR and a new distribution strategy based on what I have read and researched on TOC. I used Bill Dettmer’s book to develop trees and the clouds. I checked our plan against some presentations last November in Memphis when we attended the TOC symposium there.

We had many in the company who doubted but we stuck our necks out and started at the beginning of this year. And we knew we would not be perfect.

YTD results:

YTD = Year to date

  • Achieved the Company President’s Award for Safety (first plant to do so); the planning was based on things I had read about TOC and techniques for establishing teamwork.
  • Service is up from the high-80s to low-90s percent range to averaging above 98.5%.
  • Costs are under budget for the first time in some years.
  • Total inventory has decreased over 30% and is still dropping.
  • No longer being supplemented by outside companies for our production.
  • No longer need additional machines to supply demand.
  • We do need additional business to fill our machines.
  • The plant is no longer being considered for closure; in fact, production from other facilities is being transferred in.

The chief concern, when we told the big wigs we were going to this, was that the cost of freight would go up because our transfer batch sizes would get small. I told them correct, but we would stop shipping product back and forth between distribution centers and repacking of product would be almost non-existent. YTD: our total freight spending is 10% less than the previous year, but they look at $/lb of freight, which has gone up. I know this is wrong, they state they know it is wrong, but it still gets measured and used for evaluations.

Anyway, as we shipped more often but in smaller quantities, our distribution centers complained that we were costing them too much. I have tried for 9 months to get them to quantify this for me: "If I increase the transfer batch size, how many people will it reduce, or how much overtime will it reduce, or what other real incremental cost will it get rid of?" The general response is: it is hard to quantify, but we know it is there. Maybe their intuition is correct, but maybe it is not.

So finally, I am at my end. The DCs continue to insist that we are driving their costs up with small transfer batch sizes. They have complained greatly to my boss and my boss's boss. I am growing weary of the continual fight, which has cost me and my family so much time and effort. I have chosen to give up. I have grown tired of the comments: "Well it was said in a meeting that the concept did not deliver what we expected." Then I show them the numbers and ask, "What else was expected?" The reply: "That is what I heard at the meeting."

DCs = distribution centers.

Maybe I made a mistake trying to bring TOC to my plant myself. I would have loved to hire a consultant who really knew what they were doing, but any mention of that brought long talks about cost, etc. I hate to give up but my frustration level has impacted my family, which is something I cannot let happen.

In the end, I have decided this week to give them their large transfer batch sizes while I begin to look for somewhere else to go.

I did not mean for this to be a bitch session. But I cannot believe the sheer level of frustration in trying to achieve buy-in, even when:
1) Prior to going to our concept we had meetings with our leadership where I presented the UDES from the previous year, and all agreed,

UDES = UnDesirable EffectS. He's saying that before starting he discussed what problems the company was facing with leadership and got unanimous agreement.

2) Showed our potential solution, not all agreed but they were willing to try it
3) Now showing the best numbers the plant has ever turned out.
I just cannot understand the skepticism.

What insight can LW bring to this problem of negative response to rational improvement?


Elliot Temple | Permalink | Messages (0)

Principles Behind Bottlenecks

(I also posted this at Less Wrong.)

This post follows my Chains, Bottlenecks and Optimization (which has the followups Bottleneck Examples and Comment Replies for Chains, Bottlenecks and Optimization). This post expands on how to think about bottlenecks.


There are deeper concepts behind bottlenecks (aka constraints, limiting factors, or key factors).

First, different factors contribute different amounts to goal success. Second, there’s major variation in the amounts contributed.

E.g. I’m adding new features to my software. My goal is profit. Some new features will contribute way more to my profit than others. There are lots of features my (potential) customers don’t care about. There are a few features that tons of customers would pay a bunch for.

A bottleneck is basically just a new feature that matters several orders of magnitude more than most others. So most features are approximately irrelevant if the bottleneck isn’t improved.

Put another way: improving the bottleneck translates fairly directly to more goal success, while improving non-bottlenecks translates poorly, e.g. only at 1/1000th effectiveness, or sometimes 0. (It’s possible, but I think uncommon, to have many factors that contribute similarly effectively to goal success. Designing stuff that way doesn’t work well. It’s the same issue as balanced production lines being bad, which Eli Goldratt explains in The Goal: A Process of Ongoing Improvement. It’s also similar to the Pareto Principle which says 80% of effects come from 20% of the causes – meaning most factors aren’t very important.)

What about the software not crashing, not corrupting saved data, and not phoning home with location tracking data? People want those things but I could have them and easily still make zero profit.

A good, typical model for viewing goal pursuit is:

  1. There are many factors that would help, and just one or a few of them are the most important to focus on. This is because most factors have a significantly smaller impact. This is focusing on the key positives.
  2. There are also many dealbreaker factors that cause failure if screwed up. This is avoiding major negatives.

People care about (1) conditional on (2) not being broken. Avoid anything awful, then optimize in the right places.

When buying a cat, I might try to optimize cuteness and cheapness, while also making sure the cat has 4 paws, a tail, no rabies, is tame, and isn’t too old. I want to do well on a couple key factors, and a bunch of easy factors also need to be non-broken. It’s generally not that hard to brainstorm dozens of dealbreakers, many of which are so easy to avoid in your current situation that they sound silly to mention at all.

(Dealbreakers are also contextual. If there were no cats available meeting all my criteria, I might lower my standards.)
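Here’s a minimal sketch of this two-category model in Python. The cat data, dealbreaker checks and key-factor weights are all hypothetical:

```python
# Type (2) dealbreakers are binary pass/fail filters; type (1) key factors
# are matters of degree, optimized among whatever passes the filters.
# All data and weights here are hypothetical.

cats = [
    {"name": "Whiskers", "paws": 4, "rabies": False, "tame": True,
     "cuteness": 8, "price": 50},
    {"name": "Patches",  "paws": 4, "rabies": True,  "tame": True,
     "cuteness": 10, "price": 20},
    {"name": "Mittens",  "paws": 4, "rabies": False, "tame": True,
     "cuteness": 6, "price": 10},
]

def passes_dealbreakers(cat):
    # "don't screw it up": no extra credit for passing by a wide margin
    return cat["paws"] == 4 and not cat["rabies"] and cat["tame"]

def key_factor_score(cat):
    # "more is better": this is where the optimizing attention goes
    return cat["cuteness"] * 10 - cat["price"]

candidates = [c for c in cats if passes_dealbreakers(c)]
best = max(candidates, key=key_factor_score)
print(best["name"])  # Mittens; Patches loses despite max cuteness (rabies)
```

Note that the dealbreakers never get numeric weights at all; they just gate entry, and the optimizing attention goes to the key factors.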

The type (2) factors don’t require much attention. If a factor did need attention, it’d switch categories. (2) is just for failure conditions which are pretty easy to handle. This means most of our attention is available to focus on a few key issues.

I think this model is more effective than e.g. something like “consider all the factors; find out it’s way too complicated; try to approximate what you’d do if you had enough attention for all the factors”.

The model I’m proposing can be thought of as a method of organized, effective approximation from a more complex “take everything fully into account” approach. It tells us how to approximate. Thought of another way, I’m saying don’t distribute your significant figures equally.

You might think “Why not just weight all the factors relevant to my goal, then distribute my attention and significant figures according to the weightings?” The difficulty with that is how to weight things. Having a cat that doesn’t attack me and give me rabies is really important. If I’m just weighting factors normally, I’ll give that high weight because I want to reject any cat purchase which fails at that issue.

So if you just start assigning weights straightforwardly, you’ll give the type (2) factors high weights, e.g. 50,000 each, and if they all pass then the type (1) factors will function as tiebreakers worth e.g. 1-50 points each (minor detail: you can scale the weights so they add up to 1, but it’s easier to do that after you have all the factors with weights assigned – I don’t know what fraction of 1 a big factor should be until I know how many big factors there are). But the high value type (1) factors are actually the best place to put a bunch of significant figures. We don’t need a bunch of precision to address our cat having a tail, 4 legs, and no rabies. So attention allocation shouldn’t correspond to weighting.

In general, when we pursue a goal, there are many important but easy factors, and a few important but hard factors. For goals which are achievable but not easy, it has to be this way. If there were dozens of hard factors, that basically means we’re not ready to do it (though with a huge budget and a big team, sometimes it can be done – that lets you have specialists each working on just one hard factor each, plus some additional people figuring out how to coordinate and combine the results). But the standard progression is: if a project has 10 hard factors, that’s too many for me to focus my attention on at once, so I need to work on some easier sub-projects first – e.g. learning about some of those issues in isolation or doing smaller projects that help build up to the bigger one.

Another way to view the difference is that an increase in the key factors increases our success at the goal. E.g. adding the right new feature will increase profit. Or getting a cuter cat will increase enjoyment. Loosely, the more the better (there’s sometimes an upper limit, at which point it stops helping or is even actively harmful, or it keeps helping but now some other factors matter more). But for type (2) factors, the attitude isn’t anything like “more is better”; it’s just “don’t screw it up”.

In this analysis, I’ve basically assumed that type (1) factors and goal success come in matters of degree (can have more or less of them), but type (2) factors have a binary, pass/fail evaluation. The analysis needs extending for how to deal with binary goals, binary type (1) factors, and matters of degree for type (2). Those issues will come up and we need some way to think about them. I’ll leave that extension for a future post.

That extension is part of a broader issue of how binary and degree issues come up in life, how to think about them, how to convert from one to the other (and when that’s possible or not), when one type is preferable to the other, and so on. They’re both important tools to know how to think about and work with.

Factory Example

Now let’s go through an extended example to clarify how some of these issues work.

In my factory, I’m combining foos and bars to make foobars, which I sell. I have more bars than foos. So foos are the bottleneck. Getting even more bars won’t result in producing additional foobars. I already have an excess capacity of bars.

I also have excess capacity for assembly and QA. My current work area and team could produce many more foobars without hiring new people, getting more space, or getting new tools. And they could already check more foobars for defects.

And I have excess capacity in the market: I could sell more foobars if only I could produce them.

I also have excess capacity on foobar quality. I could redesign them to be nicer, but they’re good enough. Customers are satisfied. They do the job.

And I have excess capacity on price. Cheaper would be nicer, sure, but there isn’t much competition and my customers are people with a good reason to get a foobar. They get benefits from the foobar which are well above the price I’m charging.

Excess capacity means non-bottleneck.

Supply of foos is the bottleneck and the other issues are non-bottlenecks.
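In code form, this is just a min() over inputs and capacities. A minimal sketch with made-up quantities:

```python
# Made-up quantities. Each foobar needs one foo and one bar, and output is
# capped by assembly capacity and market demand, so production is a min().

def foobars_sold(foos, bars, assembly_capacity, demand):
    return min(foos, bars, assembly_capacity, demand)

print(foobars_sold(100, 500, 400, 300))   # 100: foos are the bottleneck
print(foobars_sold(100, 1500, 400, 300))  # 100: extra bars change nothing
print(foobars_sold(150, 500, 400, 300))   # 150: more foos means more output
```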

Using bars is limited by the availability of foos. That’s a traditional, standard bottleneck.

I call niceness a non-bottleneck because, as with foos, there is excess capacity. It won’t make much difference to achieving more of my goal (profit via foobar sales).

Key factor and secondary factor may be better terminology. It has some advantages, mostly because 1) foo supply isn’t blocking niceness from mattering in the way it’s blocking more supply of bars from mattering, and 2) niceness would help a little (a few orders of magnitude less than getting more foos, but not zero), which contrasts with bars – getting more bars wouldn’t help at all (in current circumstances).

Bottlenecks can be changed. E.g. I find a new supplier who can deliver far more foos than I need. Foos are no longer a bottleneck. Now what’s the bottleneck? What limits my profit? Maybe I’ll start running out of bars now. Maybe I won’t have enough customers and I’ll need better marketing. Maybe I’ll need to hire more workers. Maybe price will become the crucial issue: if I could lower the price, it’d get me a million new customers. Maybe price is key to breaking into the hobbyist market whereas price isn’t so important for the business market I currently serve.

To break into the hobbyist market, I might need to expand production capacity and lower the price and do a new marketing campaign. There could be several key factors. Doing three things at once is realistic (though not ideal), but we can’t split our focus too much. It’d be nice to find a way to improve things more incrementally. Maybe I’ll figure out how to produce foobars more cheaply first while leaving my price the same, and I’ll get some immediate benefit from higher profit margins. Then once I have the price low enough I’ll try to start selling some to hobbyists as a test (sell in some small stores instead of the big chains, or I could try online sales), and only if that works will I try to ramp up production and hobbyist marketing together.

I can also view the new project (selling to hobbyists, via expanding production, producing more cheaply, and a new marketing campaign) as a whole and then look at what the bottleneck(s) and excess capacity are. They might be quite unequal between the different parts of the new project.

(This is just a toy example. I didn’t worry about new distribution for hobbyists nor about designing a different version of the product for them which better meets the needs of a different market, nor did I worry about market segmentation and how to maintain my higher prices for business customers (a separate production version is one way to do that, using different regions is another, e.g. I could do my hobbyist sales in a different country than my existing business sales.))

Category (1) above (key positives) is the bottlenecks, the things that are valuable to pay attention to and optimize. Category (2) above (avoiding major negatives) is the non-bottlenecks, the things with excess capacity, which I can view as either “good enough” or “failure”. Relevant non-bottlenecks are important. I can’t just ignore them. They need to work. They’re in a position to potentially cause failure. But I’m not very worried about getting them to work and I don’t need to optimize them.


Elliot Temple | Permalink | Message (1)

Less Wrong Comment Replies for Chains, Bottlenecks and Optimization

Read this post, with replies, on Less Wrong.


Replies to comments on my Chains, Bottlenecks and Optimization:

abramdemski and Hypothesis Generation

Following the venerated method of multiple working hypotheses, then, we are well-advised to come up with as many hypotheses as we can to explain the data.

I think “come up with as many hypotheses as we can” is intended within the context of some background knowledge (some of which you and I don’t share). There are infinitely many hypotheses that we could come up with. We’d die of old age while brainstorming about just one issue that way. We must consider which hypotheses to consider. I think you have background knowledge filtering out most hypotheses.

Rather than consider as many ideas as we can, we have to focus our limited attention. I propose that this is a major epistemological problem meriting attention and discussion, and that thinking about bottlenecks and excess capacity can help with focusing.

If you’ve already thought through this issue, would you please link to or state your preferred focusing criteria or methodology?

I did check your link (in the quote above) to see if it answered my question. Instead I read:

Now we've got it: we see the need to enumerate every hypothesis we can in order to test even one hypothesis properly. […]

It's like... optimizing is always about evaluating more and more alternatives so that you can find better and better things.

Maybe we have a major disagreement here?

abramdemski and Disjunction

The way you are reasoning about systems of interconnected ideas is conjunctive: every individual thing needs to be true. But some things are disjunctive: some one thing needs to be true. […]

A conjunction of a number of statements is -- at most -- as strong as its weakest element, as you suggest. However, a disjunction of a number of statements is -- at worst -- as strong as its strongest element.

Yes, introducing optional parts to a system (they can fail, but it still succeeds overall) adds complexity to the analysis. I think we can, should and generally do limit their use.

(BTW, disjunction is conjunction with some inversions thrown in, not something fundamentally different.)

Consider a case where we need to combine 3 components to reach our goal and they all have to work. That’s:

A & B & C -> G

And we can calculate whether it works with multiplication: ABC (multiplying pass/fail values of 0 or 1, or multiplying probabilities).

What if there are two other ways to accomplish the same sub-goal that C accomplishes? Then we have:

A & B & (C | D | E) -> G

Using a binary pass/fail model, what’s the result for G? It passes if A, B and at least one of {C, D, E} pass.

What about using a probability model? Assuming independent probabilities (problematic, but it simplifies the math), G is:

AB(1 - (1-C)(1-D)(1-E))

Or more conveniently:

AB!(!C!D!E)

Or a different way to conceptualize it:

AB(C + D(1 - C) + E(1 - C - D(1 - C)))

Or simplified in a different way:

ABC + ABD + ABE - ABCD - ABCE - ABDE + ABCDE

None of this analysis stops e.g. B from being the bottleneck. It does give some indication of greater complexity that comes from using disjunctions.
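As a sanity check, here’s a short Python snippet, with arbitrary example probabilities, confirming that the formulations above agree (assuming independence) and that B can still be the weakest factor:

```python
# Arbitrary example probabilities, assuming independence as above.
A, B, C, D, E = 0.9, 0.5, 0.3, 0.4, 0.2

form1 = A * B * (1 - (1 - C) * (1 - D) * (1 - E))
form2 = A * B * (C + D * (1 - C) + E * (1 - C - D * (1 - C)))
form3 = (A*B*C + A*B*D + A*B*E
         - A*B*C*D - A*B*C*E - A*B*D*E
         + A*B*C*D*E)

print(form1, form2, form3)  # all 0.2988 (up to float rounding)

# The disjunction (C | D | E) succeeds with probability ~0.664, stronger
# than any single disjunct, yet B (0.5) is still the weakest factor.
print(1 - (1 - C) * (1 - D) * (1 - E))  # ~0.664
```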

There are infinitely many hypotheses available to generate about how to accomplish the same sub-goal that C accomplishes. Should we OR together all of them and infinitely increase complexity, or should we focus our attention on a few key areas? This gets into the same issue as the previous section about which hypotheses merit attention.

Donald Hobson and Disjunction

Disjunctive arguments are stronger than the strongest link.

On the other hand, [conjunctive] arguments are weaker than the weakest link.

I don’t think this is problematic for my claims regarding looking at bottlenecks and excess capacity to help us focus our attention where it’ll do the most good.

You can imagine a chain with backup links that can only replace a particular link. So e.g. link1 has 3 backups: if it fails, it’ll be instantly replaced with one of its backups, until they run out. Link2 doesn’t have any backups. Link3 has 8 backups. Backups are disjunctions.

Then we can consider the weakest link_and_backups group and focus our attention there. And we’ll often find it isn’t close: we’re very unevenly concerned about the different groups failing. This unevenness is important for designing systems in the first place (don’t try to design a balanced chain; those are bad) and for focusing our attention.
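Here’s a minimal sketch of that comparison, with made-up reliability numbers:

```python
from math import prod  # Python 3.8+

# Each group is one link plus its backups; a group fails only if every
# member fails. Reliability numbers are made up.
groups = {
    "link1": [0.9] * 4,  # link1 plus 3 backups
    "link2": [0.9],      # no backups
    "link3": [0.9] * 9,  # link3 plus 8 backups
}

def group_reliability(member_reliabilities):
    # 1 - P(all members in the group fail)
    return 1 - prod(1 - r for r in member_reliabilities)

for name, rs in groups.items():
    print(name, round(group_reliability(rs), 6))
# link1 0.9999, link2 0.9, link3 ~1.0; link2 is the clear weak point.
```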

Structures can also be considerably more complicated than this expanded chain model, but I don’t see that that should change my conclusions.

Dagon and Feasibility

I think I've given away over 20 copies of _The Goal_ by Goldratt, and recommended it to coworkers hundreds of times.

The limit is on feasibility of mapping to most real-world situations, and complexity of calculation to determine how big a bottleneck in what conditions something is.

Optimizing software by finding bottlenecks is a counterexample to this feasibility claim. We do that successfully, routinely.

Since you’re a Goldratt fan too, I’ll quote a little of what he said about whether the world is too complex to deal with using his methods. From The Choice:

"Inherent Simplicity. In a nutshell, it is at the foundation of all modern science as put by Newton: 'Natura valde simplex est et sibi consona.' And, in understandable language, it means, 'nature is exceedingly simple and harmonious with itself.'"

"What Newton tells us is that […] the system converges; common causes appear as we dive down. If we dive deep enough we'll find that there are very few elements at the base—the root causes—which through cause-and-effect connections are governing the whole system. The result of systematically applying the question "why" is not enormous complexity, but rather wonderful simplicity. Newton had the intuition and the conviction to make the leap of faith that convergence happens, not just for the section of nature he examined in depth, but for any section of nature. Reality is built in wonderful simplicity."


Elliot Temple | Permalink | Messages (3)