
Andy Discussion

This is a discussion topic for Andy Dufresne. Other people are welcome to make comments. Andy has agreed not to post anonymously in this topic.


Elliot Temple on April 24, 2019

Messages (30)

On Apr 23, 2019, at 4:28 PM, Elliot Temple <[email protected]> wrote:

> On Mar 16, 2019, at 8:32 AM, Andy Dufresne <[email protected]> wrote:

>

>> On Feb 26, 2019, at 3:55 PM, Elliot Temple [email protected] [fallible-ideas] <[email protected]> wrote:

>>> So then what’s left but to conclude that they would benefit greatly from being an expert at philosophy ASAP?

>>

>> I haven’t ever been able to translate a conclusion that I would benefit greatly from something into concerted, sustained effort towards that goal.

>

> That means you don’t understand the idea (conclusion) well enough. That’s not the only problem, nor the only path to a solution, but it is a problem which is present, and it is a path to a solution.

I think I agree that not understanding the conclusion well enough is a problem. But how would I come to understand it well enough without a concerted, sustained effort to do so?


Andy at 6:56 AM on April 24, 2019 | #12196

> But how would I come to understand it well enough without a concerted, sustained effort to do so?

You can't reliably do that. It can work sometimes, but there's no method that you should actually expect to work.

You already understand some things and can build from there.

You also don't seem to be differentiating between

1) understanding conclusion X

2) understanding the conclusion that understanding conclusion X would be valuable


Anonymous at 11:39 AM on April 24, 2019 | #12198

Andy, I understand you're unclear on goals currently. Do you have a goal of figuring out goals? A plan to achieve that? Any skills you think would help with that, which you either plan to acquire or believe you already have (which would involve naming the skill, deciding how much of it is needed, and judging that you exceed the minimum)?


curi at 12:17 PM on April 24, 2019 | #12201

#12201 Yes I have a goal of figuring out goals. I don't have a plan to achieve that, or know what skills it will actually take.

Since I became an adult I have always had explicit goals. I adopted the tradition of posting my goals on my refrigerator as soon as I had my own place. I revise and repost them at least once a year, track progress throughout the year, and have mostly achieved the goals I set for myself. How did I do that without a plan for picking goals in the first place?

In big-picture hindsight, I think most of my past goals were specific achievements to get to something like the 90th percentile for relatively culturally normal stuff. I just took high level goals from culture that sounded good to me and translated them into more specific goals that would put me ahead of the majority. This approach gave me plenty of goals in terms of stuff like family, finances, career, health, and travel.

I have problems with culturally normal goals now, though. One is that they generally don't sound as good to me as they used to. Another is that, at the point in life I'm coming to, most of the culturally normal goals are about preparing for retirement, retiring, and finding relatively pointless stuff to occupy your mind and body till you die. That stuff never sounded good to me in the first place. Another is that I have criticisms (mostly from FI) of my ability to effectively do even the culturally normal stuff that still sounds OK to me. Or I suspect I'd get such criticism if I posted about those things.

So the approach I used previously for generating goals is pretty broken. I still have a few remaining goals coming from the old approach. I'm getting some new technical certifications to help in my career, and I'm finishing up with helping my kids get established. But neither of those is very long term / major.


Andy at 3:10 PM on April 24, 2019 | #12203

#12198 You are right that I didn't differentiate (1) from (2). I had (2) in mind.

In this case X is "being an expert at philosophy ASAP".

I don't think I understand either (1) or (2) for that X well enough.


Andy at 3:27 PM on April 24, 2019 | #12204

#12203 So what have you done about this? E.g. have you tried to read/analyze/use/criticize books and blog posts that give advice about how to set goals, what goals to have, etc? Did you try to learn the mainstream answers to your problems?


Anonymous at 3:35 PM on April 24, 2019 | #12205

#12205 I haven't done very much.

No reading or analysis lately. I started reading *The Goal* by Goldratt a while back but didn't get very far because I got tired or distracted and didn't come back to it. I did Peterson's Future Authoring program a few years ago. I didn't find it hugely helpful, and my situation has changed since then anyway.

I know a few mainstream answers like: start or buy a business, philanthropy, second career, stay closely involved in kids' lives even when they don't want or need it, take care of elderly relatives. I've looked into some of those more than others, but I think they're all bad or uninteresting or too risky at the moment. If there's a severe enough economic downturn, I think I might get more interested in buying a business or some other type of investing again.

I also know a few mainstream meta-answers like: pray, meditate, visualize the future, mastermind or other goal-oriented social gatherings, and wait. In those terms I guess I'm waiting, since I think my mindset is too different from the other mainstream options.


Andy at 6:05 PM on April 24, 2019 | #12208

Why don't you do much? Do you care about your life? Is the goal of figuring out a goal not actually important to you? Do you actually have a bunch of goals, which you spend your time on, but don't want to admit to?


Anonymous at 6:20 PM on April 24, 2019 | #12209

#12209 The goal of figuring out major goal(s) is not urgent to me. Maybe it's not important either, but my guess is that it is important.

> Do you actually have a bunch of goals, which you spend your time on, but don't want to admit to?

I do, though I consider them minor compared to the two I mentioned (tech certs and helping my kids). I think the existence of these minor goals is one of the reasons that figuring out major goal(s) isn't urgent. If I were actually sitting around bored, I think figuring out goals would be a lot more urgent.


Andy at 6:36 PM on April 24, 2019 | #12210

Aren't you wasting your life if you aren't making progress on any important goal? Why isn't it urgent to have an important purpose in your life?


Anonymous at 7:05 PM on April 24, 2019 | #12211

#12211 Part of the reason is that I have some explicit and inexplicit ideas along the lines of: the purpose of life is to be happy or to pursue happiness. I am happy most of the time already, so it seems less urgent to find other purposes.


Andy at 8:11 PM on April 24, 2019 | #12212

I think only a small portion of what I do at work contributes to the advancement of technology. And when I do make contributions, I think they are small. And I am not sure I can reliably judge what's a contribution and what isn't, and what's an advancement and what isn't.

Despite all that, I think I like the idea that the work I do contributes to the advancement of technology. I'm afraid of losing the plausibility that I'm at least contributing something.


Andy at 2:29 PM on April 29, 2019 | #12233

#12233 You are contributing something. More than the average American. It's a small amount, but it adds up when a lot of people do it.

There are a few people who contribute a lot. And there are a lot of people who contribute a little bit. Overall, it's hard to tell which group contributes more.

I don't think there's a lot of middle ground. Leader or follower. Pioneer or helper. Innovator or assistant. Inventor or grunt worker. Genius or normal guy.

There's some middle. There are 25th percentile programmers and there are "rockstar" programmers who are 10x more productive than them. But one Steve Jobs is worth a million rockstar programmers. Some startup founders are in the middle a bit – not that great, but maybe contributing 100x or 1000x a regular person. (There are also startup founders who are great and are in the big contributor category. It's the more mediocre ones, who are still pretty good, who are kinda in the middle.)

There are various reasons that the distribution is not like a bell curve, and there is somewhat of a lack of people in the middle. One is, in short, loosely, that either you're making unbounded progress or you aren't. (Or making some approximation of unbounded progress in a big enough scope that you can be great in a particular field.)


curi at 2:55 PM on April 29, 2019 | #12236

Indirection

Here is an idea about why you have a hard time with indirection for your interests: why you (and many, many other people) have a hard time following chains of steps and getting from "X connects to Y connects to Z, so I will be interested in Z".

https://forum.effectivealtruism.org/posts/J3gZxFqsCFmzNosNa/ea-risks-falling-into-a-meta-trap-but-we-can-avoid-it

> When you chain different probabilities together, every additional step in the chain will, in almost every case, weaken it. This is also true with chaining together steps of meta-charity together -- while you’re getting higher returns in expected value, you’re also reducing the chance the impact will actually occur.

Look at the article for more context. But, basically, *under this model* longer chains are weaker and indirection is risky.

But the model is wrong. At least in theory. Some people live, act and think in a way where this model is OK, but they shouldn't and don't have to. This model is roughly accurate when all of your steps are fuzzy, vague, mediocre-quality ideas. There is a better way.

The better way is to figure out how to reach decisive, yes-or-no conclusions. Ten of those is approximately the same as one – you're fully convinced, rather than accumulating more and more doubts as each additional one is added – so the chain doesn't weaken with length because each new link is 100% not 90%. (Also you can reach a decisive yes-or-no conclusion about the meta theory that the whole chain is correct.)

Also another criticism of the model, via Eli Goldratt: the strength of a chain is not related to the strength of the average link. Increasing the strength of a particular link by 10x can add zero to the chain's strength. Why? Because a chain is only as strong as its weakest link.
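
A minimal sketch of the arithmetic behind both criticisms, with illustrative numbers (Python; my sketch, not from the original discussion):

```python
from math import prod

# Chained-probabilities model: ten fuzzy ~90% links multiply together.
fuzzy_chain = [0.9] * 10
print(prod(fuzzy_chain))        # ~0.35: every added link weakens the chain

# Decisive yes-or-no model: each link is a ~100% conclusion.
decisive_chain = [1.0] * 10
print(prod(decisive_chain))     # 1.0: length doesn't weaken the chain

# Goldratt's criticism: a physical chain's strength is its weakest link,
# not the average link. Strengthening an already-strong link adds nothing.
links = [0.99, 0.99, 0.5, 0.99]
print(min(links))               # 0.5: the weak link governs the chain
print(sum(links) / len(links))  # ~0.87 average, which is misleading here
```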


curi at 3:02 PM on April 29, 2019 | #12237

> Some people live, act and think in a way where this model is OK, but they shouldn't and don't have to. This model is roughly accurate when all of your steps are fuzzy, vague, mediocre-quality ideas. There is a better way.

Assuming the fuzzy etc. steps currently describe my ideas, then is the model roughly accurate for me *right now*?

Suppose the chained probability model is roughly accurate for me *right now*. If that model governs my interest level in (among other things) learning how to reach decisive, yes-or-no conclusions, does that put me at an impasse?

IOW, to get interested in learning yes-or-no, do I first have to learn yes-or-no (despite being uninterested)?


Andy at 6:13 PM on April 29, 2019 | #12239

#12239 I discussed the same issue in the Overreaching essay and various times after: short-termism b/c ppl's longer term projects predictably fail. It's a hard hole to get out of. Building up skills, knowledge and resources, step by step, remains possible if you value each step and choose small enough steps and choose steps which are actually within reach of your current abilities.

Essay quote:

> People want to aim big. They try to rush. People lack confidence in their ability to have a nice organized life that keeps gradually improving over a long time frame.


curi at 6:57 PM on April 29, 2019 | #12240

#12240 I don't think this explanation applies well to my situation, or at least how I explicitly think about my situation.

One problem with the explanation is that I think my long term projects generally succeed rather than predictably fail. I can think of lots of long term projects throughout my life I consider to have been successful, and relatively few failures.

Another problem with the explanation is that I think I do have confidence in my ability to have a nice organized life that keeps gradually improving over a long time frame. That is approximately how I would describe my life up to this point.

These are both guesses from introspection, and of course they could be wrong. One criticism that readily comes to mind, and that I have considered, is about standards. Descriptions like "success", "nice", "organized", and "improving" all compare to some standard.

I am aware that my standards aren't as high as FI's. But I think the explanation refers to my standards rather than FI's with regard to things like "predictably fail" and "lack confidence".

Even if my long term projects are failures by FI standards, I don't think the explanation works when they are successes by my own standards. So it seems like something else is going on, or there's something about the explanation I don't understand.


Andy at 4:44 AM on April 30, 2019 | #12244

> One problem with the explanation is that I think my long term projects generally succeed rather than predictably fail.

That sounds like the *multiplying a chain of probabilities model* did *not* apply to those projects. You had a bunch of solid links, or else it wouldn't have worked.

Put the philosophy project chain along the X axis with the goal at 100. Maybe just links 97 and 91 are the problems for you, but all you've identified is that everything left of 80 feels unreliable (which is true even if only 97 and 91 are the causes).

Or maybe you have unaddressed doubts about the value of the goal, or about its relative value compared to other goals. Or maybe you have doubts about being able to apply philosophy to other goals, after learning it better, because that involves a second project with a second chain which you have not yet outlined or examined. So the second chain is unknown to you, b/c you don't yet have the philosophy skill to design it and understand it in detail, whereas the projects you've done in the past had more of an "end in themselves" character (no second chain) or else had an easier to understand second chain (learning programming leads to job doing programming is more straightforward than how philosophy knowledge is later applied to other projects).

BTW, the *multiplying a chain of probabilities model* falls down in lots of ways even when it sort of applies. Consider a primitive hunter or someone in a remote area of Alaska. His hunting project has a LOT of steps. He has to get up at the right time of day for the hunt (within a limited range). He has to correctly match left boot to left foot. He has to correctly identify which objects are and aren't boots. He has to find pants. His home is pants-sparse – most locations in his home do not contain pants. He has to do the step of opening the dresser drawer that has pants in it. He has to do successful motor control over his hand. There are *so many steps*, and he hasn't even gotten outside yet. I think you can imagine how I could list *thousands* of steps involved in going out to hunt a moose.

The chain probabilities model is a stupid way to analyze all the steps. At least you should just try to use it with the *hard* or *risky* steps, something more like that. Everyone can do lots of steps in a decisive, ~100% way, like finding and putting on their boots. Overreaching problems are when people try to do a bunch of steps which are unlike that, all at once, and introduce an overwhelming amount of mistakes into their life, instead of learning the hard steps better (a few at a time, using methods like intentional practice and study).
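
A sketch of that point with made-up numbers: when thousands of routine steps are done at ~100% reliability, the overall odds are governed entirely by the few hard steps, so those are the only ones worth analyzing with the probabilities model.

```python
from math import prod

# Thousands of routine steps an adult does at ~100% reliability
# (finding boots, opening the right drawer, basic motor control...).
routine_steps = [1.0] * 5000

# A couple of genuinely hard or risky steps (invented numbers).
risky_steps = [0.9, 0.95]

# The routine steps contribute nothing to the risk; the result is
# just the product of the hard steps.
print(prod(routine_steps + risky_steps))  # 0.855 == 0.9 * 0.95
```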


curi at 11:25 AM on April 30, 2019 | #12245

> Everyone can do lots of steps in a decisive, ~100% way, like finding and putting on their boots.

I mean all adults. Babies and young children can't. They have to learn that stuff. They do learn it, successfully, in a high quality, low-error, low-attention, automated way.

The project of making ongoing, unbounded intellectual improvement is like extending what babies and children do. Keep doing that with progressively more stuff instead of stopping at some point. Don't be done learning.

Babies and young children develop many highly-reliable, very-low-error rate skills. First they learn some motor control. Then they build on it to walk. Then they build on that to carry something while walking. Then they build on that to hold a conversation while carrying something while walking (which also builds on their skill of holding a conversation while sitting, which builds on their skills with words and grammar).

That process of getting really solid knowledge you can build on can and should continue past childhood and is how you get good at anything else, too. It needs to be built up from a complex tree, with many parts, and the vast majority of the little parts need to be automated, intuitive, easy, very unlikely to cause trouble, etc. That is achieved with, in outline, getting it correct initially and then practicing so it gets easier and more automatic.


curi at 12:27 PM on April 30, 2019 | #12246

#12245

>> One problem with the explanation is that I think my long term projects generally succeed rather than predictably fail.

> That sounds like the *multiplying a chain of probabilities model* did *not* apply to those projects. You had a bunch of solid links, or else it wouldn't have worked.

One way I think about my success at long term projects is the opportunity to cheaply error correct. Like with the boots and hunting example, I think it's totally reasonable that:

10% of the time I don't remember where I put my boots.

20% of the time I stick my left foot in the right boot.

30% of the time I try to put a boot on my foot, I find I left the laces too tight when the boots were warm, so I can't fully insert my foot when they're cold.

5% of the time I try to tie a knot in the laces after a boot is on my foot, I screw up the knot and it won't hold.

etc.

These kinds of probabilities in isolation would make the hunting project risky! I can't do any of that stuff at really high reliability.

But what is reliable is my ability to error correct that stuff at low cost. If I don't remember where I put my boots I have a limited search area. If I stick my foot in the wrong boot I switch boots. If the laces are too tight to insert my foot I know how to loosen them. If I screw up the knot I can try again until I get it right.

Stuff I don't think I can error correct at low cost feels risky to me. Maybe I can't error correct at low cost cuz of lack of knowledge I could learn, or maybe cuz of lack of buffer / scale to render the error small, or maybe cuz there's no currently known way to cheaply recover from an error I'm likely to make (like if I tried to free climb El Capitan).

So ya I agree @ "solid links" for my successful projects, but to be solid I think they rely on cheap error correction.
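
A minimal sketch of how cheap error correction changes the arithmetic (using the illustrative 30% laces number from above, and assuming retries are independent and cheap):

```python
def reliability_with_retries(p_fail_once: float, attempts: int) -> float:
    """Chance of succeeding at least once in `attempts` independent tries."""
    return 1 - p_fail_once ** attempts

# A 30% per-attempt failure rate looks risky in isolation...
print(reliability_with_retries(0.30, 1))  # 0.70
# ...but with a few cheap retries the step becomes a solid link.
print(reliability_with_retries(0.30, 4))  # ~0.992
```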

What about philosophy?

> Put the philosophy project chain along the X axis with the goal at 100. Maybe just links 97 and 91 are the problems for you, but all you've identified is that everything left of 80 feels unreliable (which is true even if only 97 and 91 are the causes).

Maybe 97 and 91 are things I don't think I can cheaply error correct. Sounds plausible at least. Like if I get good enough at philosophy to start alienating my social contacts, but not good enough to stop caring. That kind of thing could be hard and costly to correct.


Andy at 7:02 PM on May 1, 2019 | #12262

Yes, error correction mechanisms are a necessary part of solidity. They can be conceived of as part of the link itself, or as an external add-on. Or sometimes they are a different link, e.g. an actual QA step in a manufacturing process. If step 5 is QA, then it's *not an error* for step 2 to produce 3 out of 100 widgets that are out of spec, that's actually an expected and acceptable part of the *successful* chain.
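
A sketch of the QA-as-a-link idea (the 3-in-100 defect rate is from the example above; the function names and the rest of the numbers are invented for illustration):

```python
import random

random.seed(0)  # reproducible illustration

def make_widget() -> bool:
    """Step 2: produce a widget; ~3% come out of spec (False)."""
    return random.random() > 0.03

def qa(widgets):
    """Step 5: QA passes only in-spec widgets downstream."""
    return [w for w in widgets if w]

batch = [make_widget() for _ in range(100)]
shipped = qa(batch)
print(len(batch) - len(shipped), "out-of-spec widgets caught;",
      len(shipped), "shipped, all in spec")
```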


Anonymous at 1:18 PM on May 7, 2019 | #12294

There are some projects with steps where I don't think I can error correct enough to be successful. So I don't want to do those projects. Like:

- Becoming a billionaire

- Developing a cure for aging

- Making an AGI

I think I would want to do those things if I had confidence in my ability to succeed at them.

I don't think philosophy fits on that list. There's something else going on. I think I could error correct enough to be successful at philosophy, but I still don't want to do it.


Andy at 7:15 AM on May 9, 2019 | #12300

#12300 You sound outcome-oriented. Like you have to reach that whole big goal or else you failed. For big projects you have to value the journey, not be so focused on the destination.


Anonymous at 2:05 PM on May 12, 2019 | #12352

I think you're mostly right about me being outcome-oriented.

One exception is hobbies. I do them mostly or entirely for their own sake rather than for the outcome. None of my hobbies are good candidates for a big project though.

For some projects in the past I think I enjoyed the journey. I think one prerequisite of that, though, was the confidence that I was on the path to a good outcome.


Andy at 5:30 PM on May 12, 2019 | #12354

"Outcome-oriented" isn't precise. It's more like *big* outcome-oriented.

Learning one little thing is a small outcome.

Big outcomes generally come from many small outcomes combined. How can one fail to value the small outcomes individually, but value what they combine into? How can one value the sum but not the parts? There are ways, but I think there's confusion there and it's worth analyzing one's reasons.


curi at 5:42 PM on May 12, 2019 | #12355

I don't think I am oriented towards big outcome vs. small outcome. I value plenty of small outcomes, including learning some little things.

I think I value things that I want for their own sake vs. things that are only means to something else that I want.

Another way to say it: there are things I *wouldn't* regret doing even if nothing else happens later because of them. And there are things I *would* regret doing if something else didn't happen later because of them.

The things I would regret doing if something else doesn't happen later are the journey: the non-outcomes that I don't value.

> How can one value the sum but not the parts?

I think the sum is an abstraction of its parts. The parts aren't (necessarily) valuable on their own; only as a complete whole do they become valuable.


Andy at 6:53 PM on May 16, 2019 | #12410

>> How can one value the sum but not the parts?

> I think the sum is an abstraction of its parts. The parts aren't (necessarily) valuable on their own; only as a complete whole do they become valuable.

You took that out of context and didn't give a serious reply to:

>> There are ways, but I think there's confusion there and it's worth analyzing one's reasons.

You didn't analyze. You seem to disagree that there's confusion there. You think it's a trivial issue (for you) to be covered in 2 sentences. So you are maybe claiming that *curi is mistaken*, but you didn't actually say that, or anything even close to that. If that's what you meant, you're being dishonest to refuse to say it. If you don't mean/think/claim that, your answer is bad.


Anonymous at 10:43 AM on June 11, 2019 | #12733

I didn't intend to claim curi was mistaken about anything in:

> Big outcomes generally come from many small outcomes combined. How can one fail to value the small outcomes individually, but value what they combine into? How can one value the sum but not the parts? There are ways, but I think there's confusion there and it's worth analyzing one's reasons.

I don't think it's a trivial issue. Yet I still think I did the right amount of analysis in my reply, though I now believe I failed to communicate that well.

I think my response was short because I assumed the single word "abstraction" would stand in for quite a lot of shared background knowledge without further explanation. That assumption now seems to have been mistaken so I'll explain it further.

Specifically, I had in mind the BoI chapter, "The Reality of Abstractions" when I wrote my reply.

So a more explicit explanation of my first sentence would be: I think of the sum for any complex goal as approximately a BoI real abstraction of its parts. I don't think of the sum as a mere arithmetic accumulation, as the word sum normally implies.

I also think my claim that the sum is a BoI real abstraction was an unusual / risky claim with significant implications. So I think it made sense to not go much further until hearing what curi or others might have to say about that claim.

If I'm correct to think of the sum as approximately a BoI real abstraction of its parts, I think that explains something important about how I can value the sum but not the parts, and what direction for further analysis might be helpful. Such as: when and what parts of an abstraction should one value and why?

On the other hand if I'm mistaken to think about the sum as a BoI real abstraction of its parts, I think there's little point in doing further analysis until that error is corrected and I'm thinking about it in some different way.


Andy at 7:09 PM on June 12, 2019 | #12756

> Specifically, I had in mind the BoI chapter, "The Reality of Abstractions" when I wrote my reply.

Have you ever tried to write down what you think that chapter says and found out if anyone else from FI agrees with your understanding?


Anonymous at 8:58 PM on June 12, 2019 | #12757

No, I haven't written down what I think the chapter says. That's one reason why my claim that the sum is a BoI real abstraction was an unusual / risky claim.

I do recall discussing the chapter some, and not having difficulty using the idea of a real abstraction. I don't recall the specifics of that discussion, though.


Andy at 11:39 AM on June 13, 2019 | #12764
