How Much Philosophy?

I'm a philosopher. It's my #1 favorite thing. I'm happy to learn all kinds of stuff about philosophy. (BTW I didn't just naturally grow up that way, or anything like that. I changed. I chose philosophy over various other interests I already had, and many other options I could have had if I wanted.)

Some people don't want to be philosophers.

But everyone needs philosophy. If you have NO philosophy, you're fucked. You'll make tons of mistakes, suck at solving problems, suck at noticing problems, and generally be a fuck up.

Some people want philosophy for a practical purpose – learn some philosophy to be a better parent.

Learn some philosophy to stop fighting with your spouse.

Learn some philosophy to understand political debates better, like liberalism vs socialism.

Those are a bit narrow. One also needs some philosophy just to have a better life in general – it helps with everything.

Why does philosophy help with everything? Because that's the name of the field which includes topics like:

- how to think well, in general, about everything

- how to learn

- generic methods of solving problems

- generic methods of identifying and understanding problems

- generic methods of truth seeking, question answering, and idea understanding


So of course you need a bunch of that, no matter what sort of life you want.

It is acceptable not to have philosophy as your #1 interest. But it needs to be an interest.

I do philosophy that is not strictly required, because I like it. Other people like it less. Partly their preferences should be improved, but partly it's OK to have different interests.

So there's a question: how much philosophy do you need? How much is enough? When can you stop?

(Another distinction worth considering: do you want to make progress in philosophy, or just learn what others already know?)

The current situation looks something like this:

- Philosophy is a pretty small field with a limited amount of productive work ever done in it, despite dating back over 2000 years. It's possible, and helpful, to be loosely familiar with most philosophy.

- Some topics, like Objectivism, liberalism, Critical Rationalism, and Fallible Ideas, require detailed study. This is not at all optional. If you don't do that, you're missing out, hugely.

- If you're not one of the top 100 philosophers in the world, you're not even close to good enough. Virtually everyone is super super bad at philosophy, way below the basic amount you'd want to not fuck up your life.

- Philosophy courses (and professors) at universities are very bad.

This doesn't tell you the exact answer. But it gives enough of an indication to start with: you're not there yet.

There's no need to try to understand at what point you could stop learning philosophy until you're already most of the way there. Then you'd have a lot more skill to use for figuring it out. Trying to understand it right away would basically be the general, common mistake of trying to do stuff before having skill at philosophy (aka skill at thinking).

Elliot Temple | Permalink | Messages (3)

Ayn Rand Quotes Discussion

The Return of the Primitive, The “Inexplicable Personal Alchemy”:
Who can take any values seriously if he is offered, for moral inspiration, a choice between two images of youth: an unshaved, barefooted Harvard graduate, throwing bottles and bombs at policemen—or a prim, sun-helmeted, frustrated little autocrat of the Peace Corps, spoon-feeding babies in a jungle clinic?

No, these are not representative of America’s youth—they are, in fact, a very small minority with a very loud group of unpaid p.r. [agents] on university faculties and among the press—but where are its representatives? Where are America’s young fighters for ideas, the rebels against conformity to the gutter—the young men of “inexplicable personal alchemy,” the independent minds dedicated to the supremacy of truth?

With very rare exceptions, they are perishing in silence, unknown and unnoticed. Consciously or subconsciously, philosophically and psychologically, it is against them that the cult of irrationality—i.e., our entire academic and cultural Establishment—is directed.

They perish gradually, giving up, extinguishing their minds before they have a chance to grasp the nature of the evil they are facing. In lonely agony, they go from confident eagerness to bewilderment to indignation to resignation—to obscurity. And while their elders putter about, conserving redwood forests and building sanctuaries for mallard ducks, nobody notices those youths as they drop out of sight one by one, like sparks vanishing in limitless black space; nobody builds sanctuaries for the best of the human species.

So will the young Russian rebels perish spiritually—if they survive their jail terms physically. How long can a man preserve his sacred fire if he knows that jail is the reward for loyalty to reason? No longer than he can preserve it if he is taught that that loyalty is irrelevant—as he is taught both in the East and in the West. There are exceptions who will hold out, no matter what the circumstances. But these are exceptions that mankind has no right to expect.
This is about Western culture (it's 45 years old, but still applies). Few people care about truth and reason. There are some loud people who claim to be free thinkers, but actually conform to gutter standards.

The people who care about ideas are discouraged because, wherever they look, it's hard to find anyone else who does. So they are isolated, and surrounded by a culture of irrationality. It wears them down and beats them up, and eventually they lose some of their confident eagerness, and start to see the evil in the world, and find it confusing and awful, and eventually they give up, alone. That's the standard story that happens to most of the best of the human species.

And (almost) no one cares. These bright young minds are not an object of sympathy and charity. Far more help goes to trees and ducks than to men with intellectual integrity. Isn't that awful?

Ayn Rand tried to help these people. I try, too. I pursue ideas publicly and offer the Fallible Ideas Discussion Group. There, people can experience rational discussion in an atmosphere that puts truth before conformity. They can see that some people take ideas seriously, and are eager for criticism and bold thinking. That can be part of their life. And they can learn about and ask questions about philosophy, liberalism, and any other topics.

A few men can hold purely to reason without help, alone, in a world that punishes them for it. But we must not rely on heroes like that for the future of humanity. We should lead the way and offer some better voices into the public discussion. There are people out there who will hear reason, and appreciate it, and they could really use the help.



The Virtue of Selfishness, Doesn’t Life Require Compromise?:
The excuse, given in all such cases, is that the “compromise” is only temporary and that one will reclaim one’s integrity at some indeterminate future date. But one cannot correct a husband’s or wife’s irrationality by giving in to it and encouraging it to grow. One cannot achieve the victory of one’s ideas by helping to propagate their opposite. One cannot offer a literary masterpiece, “when one has become rich and famous,” to a following one has acquired by writing trash. If one found it difficult to maintain one’s loyalty to one’s own convictions at the start, a succession of betrayals—which helped to augment the power of the evil one lacked the courage to fight—will not make it easier at a later date, but will make it virtually impossible.
If you aren't taking reason seriously NOW, when will you? How will waiting help? When will things be easier? Never. If you can't stick to principles now, spending a year compromising them won't help. If purity is tough now, how much harder will it be after you spend more time learning to live in a less pure way?

Lowering your standards temporarily is not how you get high standards. Your standards are never going to go back up. You'll get used to living with lower standards. You'll do more things which violate the higher standards. So, later, the higher standards will be more inaccessible than they were before.

Taking life seriously, and really insisting on the best right now, is the only way to live. Pursuing the truth with no boundaries is completely urgent. Do it now, or you never will.



Philosophy: Who Needs It, An Untitled Letter:
Like any overt school of mysticism, a movement seeking to achieve a vicious goal has to invoke the higher mysteries of an incomprehensible authority. An unread and unreadable book serves this purpose. It does not count on men’s intelligence, but on their weaknesses, pretensions and fears. It is not a tool of enlightenment, but of intellectual intimidation. It is not aimed at the reader’s understanding, but at his inferiority complex.

An intelligent man will reject such a book [like Rawls’s A Theory of Justice or Kant’s Critique of Pure Reason] with contemptuous indignation, refusing to waste his time on untangling what he perceives to be gibberish—which is part of the book’s technique: the man able to refute its arguments will not (unless he has the endurance of an elephant and the patience of a martyr). A young man of average intelligence—particularly a student of philosophy or of political science—under a barrage of authoritative pronouncements acclaiming the book as “scholarly,” “significant,” “profound,” will take the blame for his failure to understand. More often than not, he will assume that the book’s theory has been scientifically proved and that he alone is unable to grasp it; anxious, above all, to hide his inability, he will profess agreement, and the less his understanding, the louder his agreement—while the rest of the class are going through the same mental process. Most of them will accept the book’s doctrine, reluctantly and uneasily, and lose their intellectual integrity, condemning themselves to a chronic fog of approximation, uncertainty, self doubt. Some will give up the intellect (particularly philosophy) and turn belligerently into “pragmatic,” anti-intellectual Babbitts. A few will see through the game and scramble eagerly for the driver’s seat on the bandwagon, grasping the possibilities of a road to the mentally unearned.
It's so hard to stand up to authority after an entire childhood being bullied by your parents and teachers, and taught to obey authority, and punished for disobedience.

Every "Because I said so" from a parent teaches the child to do things because the government said so, too. Or to believe things because Kant or Rawls said so.

Parents are so shortsighted. They are in a position of temporary power over their kid. To make the most of it, they demand universal obedience to authority from their kid. He ends up obeying many other authorities too, some of which the parents don't even like. And once the kid can read books and get access to ideas his parents don't control, he may well find some greater authority than his parents, so they begin losing control.

One of the saddest things is that I have refuted a lot of awful ideas, carefully, in writing which is publicly available. And what are the results? Hardly anyone wants it. I don't have Kant's authority. People go by authority, not understanding. So it doesn't matter that my arguments are better than Kant's; they aren't thinking through the ideas. If it were effective, I'd be happy to untangle more gibberish. I still do it sometimes, but a man has to have some merit to seek out and benefit from the untangling. And it's hard to find many people with merit. Their parents and teachers attack their minds, and their culture tells them that's life and offers role models who no man of intellectual integrity could seek to emulate.

Most of academia is like Rand describes, but on a smaller scale. Not many people read it, but even fewer will stand up to it. Most of it isn't as confusing as Kant's writing, but it's still awful and littered with gross errors. And when you try to tell people not to believe some "scientific" conclusion which they read secondhand in a magazine, because the actual paper is crap, they don't want to think through the issues themselves and they don't want to take your word for it; they just want to accept the authority of academia and magazine writers.

See also my searches for other people discussing this stuff online. In summary, no one else cares.

Elliot Temple | Permalink | Messages (4)

Induction is Authoritarian

Induction is about authority.

You come up with an idea. And someone asks, "How do you know that's right?"

And what do you say? How do you answer that?

Induction is one of many attempts to answer that question. It's a positive way to know you're right, to build up your idea. You say, "My idea is good because I induced it."

Another tempting answer is, "Because Einstein said so." An appeal to authority is a natural answer to how you know an idea is right. Ultimately that is what the question seeks – some kind of authority, above your judgment, which you can appeal to. Be it Einstein or induction, no authority is necessary.

What they want, the motivation behind the question, is a guarantee that'll hold into the future. A defense against the uncertainty of new ideas and new thinking.

The question, "How do you know that's right?" is a bad question. It's inherently bad. It begs for an authoritarian answer. And, worse, it drops the proper context.

(A little like how "Who should rule?" begs for an authoritarian answer, like Karl Popper explains. Questions can be bad and designed to prompt bad answers. Sometimes you have to dispute the question itself.)

A good reply is, "You got a better idea?"

The only context in which it's proper to dispute an idea is if you have an alternative idea, or you see something wrong with the idea (a criticism).

Offer a rival idea, or criticism, or stop complaining. If you can't point out any problem with an idea, and no one knows any alternative, you should be accepting the idea, not raising meaningless, nonsense doubts (which is what "How do you know that's right?" does).

The question, "How do you know that's right?" offers neither a rival nor a criticism. It doesn't provide the appropriate context to defend an idea. An idea can be defended against a criticism. And it can be argued against a rival. But an idea cannot be defended against NOTHING, against arbitrary contextless demands that your idea be better, somehow, and justify itself in a vacuum.

How do I know it's right? Well, how do you know it's wrong?

I'm not omniscient. I don't know it's right in that sense. What I know is it doesn't contradict any of my observations, it doesn't come into conflict with my other knowledge, it's not refuted by any criticism I know of. And what I know is, it's useful, it solves some problem, that's why I made the idea and what it's for.

If an idea solves a problem, and no one knows anything wrong with it (the idea or the problem) or any alternatives, then that's the highest standard of knowledge possible to man (who is fallible and non-omniscient, which is fine, that's not a bad thing). By asking for more, the questioner tries to hold knowledge to an impossible standard. That is a generic tactic he could use to attack any and all knowledge, and is therefore a recipe for complete skepticism. It should be rejected out of hand.

I know it's right – in the fallible, contextual way – because I thought about it. I judged it. I exposed it to criticism, I sought out rivals, I used the methods of reasoning proper to man. I did what I could. What'd you do, Mr. Generic Doubter? These actions I took do not ensure it's right, but they are actually useful things to do, so that's good, not bad.

If you come up with a criticism or an alternative, none of that stuff I did is any protection for my idea. I can't refer to it to win the debate. My idea is on its own, left to its merits, to be judged by its content and nothing else.

What people want to do is set up positive authorities so they can stop worrying about their ideas. They know it's right, so they don't have to fear criticism or alternatives, since they already have the answer. They are trying to close the book on the issue, permanently. They want an out-of-context way to positively support an idea so that it will apply to all future contexts, so they'll never have to think again.

That is what the tradition of positive justification of ideas – the "justification" found in the ubiquitous "knowledge is justified true belief" – is all about. It's about out-of-context authority to preemptively defend against unknown future criticisms and new alternative ideas. It's about setting up an authority for all to bow down to, and ending progress there. So that when rebellious thinkers dare to criticize the status quo, instead of addressing the criticism, they can simply give their generic (contextless) answer to how they know they are right, the same one they've always given, and always will give.

No matter how much support, authority, justification, or positive validation an idea has, that is no defense against criticism. If there is a reason your idea is false, then it's false, too bad about all the authority you made up for it. It's not relevant, it's useless, it shouldn't be part of the discussion, it's just a bunch of nonsense with no functional purpose in a debate. You can never answer a reason your idea is false by saying how much evidence supports it. So what? An idea with a bunch of evidential support can still be false, can't it? No matter how much authority of any kind is behind your idea, it can still be false, can't it? So what good is that authority? What's it for? (Disclaimer: I do not accept that evidential support is a meaningful concept. But I think those that do accept it, also accept that it doesn't guarantee against falseness.)

Do you intend to deal with alternative, rival ideas by adding up the positive authority for each and seeing which gets a higher score? That method is terrible. One problem is there's no way to do the scoring objectively. What you should do is point out something wrong with the rival idea – a criticism. If you can't do that, why are you opposing it anyway?

Elliot Temple | Permalink | Messages (0)

Reason is Urgent; Now or Never

Imagine a person finds Fallible Ideas (FI) philosophy and they agree with 20% initially and contradict 80%. And they are excited and think FI's amazing. Sounds like a really good start, right? I think it is. That's a lot more than you could really expect at the start. Most promising newcomers will have less pre-existing knowledge and compatibility.

(FI is the best, purest advocacy of reason. But if you disagree with that, no problem, just substitute in Objectivism, Critical Rationalism, or something else. The points I'm making here do not depend on which philosophy of reason you think is best.)

(The percentages are a loose approximation to let me write this point in a simpler way. If you don't like them, consider what's going on when someone partly agrees and post a comment explaining how you think that works, and how you think I should have written this without percents. I'm trying to discuss the case of a new person who agrees with some stuff, disagrees or doesn't know a lot more, and learns a bit more over time.)

Now, imagine over the next 5 years they increase their agreement to 30%. Is that good progress? A nice achievement? A proper application of gradualism?

No, I think that's a disaster.

In that scenario, they just lived for 5 years while contradicting at least 70% of FI. How can they do that? Why don't they completely hate themselves? Here they are finding out about reason, and then living a 70% anti-reason lifestyle. How do they live with that?

The answer is: they deny that 70% of FI is good. They oppose it. To not hate themselves, they have to hate most of FI instead. They have to come up with a bunch of evasions and rationalizations, and they've had 5 years to entrench those.

The moment you find out about reason, there is a ticking clock, because it's so very hard to live with contradictions. It's not viable to just live for 5 years half liking reason and half hating it. You'd tear yourself apart. You have to do something about this tension. FI offers ways to deal with it, but to use those you'd have to learn more about FI and embrace it more thoroughly. And irrationality offers ways to deal with it – rationalizations, evasions, self-lies, etc...

The middle, caught in between reason and unreason, is not a viable long term place to be. It doesn't work. It's not just a mess of contradictions like many people's lives, it's more like the strongest contradiction there is. And who could live with that? The only person who perhaps could, like John Galt, would be a better person and wouldn't even be in that situation, since he'd embrace reason more.

So at the same time this person learned 10% more about reason in 5 years, they also figured out how to rationalize not learning the rest, and be OK with that. They made up stories about how they will learn it one day, later, but not now. They backed off from feeling like reason is truly sacred in order to reduce the contradictions in their life. They lost their sense of urgency and excitement about new possibilities, most of which they've now put off for 5 years. Most of which they still don't plan to start learning for years.

When there's a contradiction, something has to give. When you have such a strong major contradiction that's so hard to ignore – like life vs. death, reason vs. unreason, thinking vs. unthinking, open society vs. closed society, problem solving vs. destruction, initiative vs. passivity, independence vs. obedience, infinity vs. finite limits – then something has to and will change pretty quickly. And if they don't embrace reason in a big way, then it's clear enough what happened: while making their bits and pieces of supposed progress, they actually managed to find a way to either deny all these major contradictions exist or take the wrong side of them and be OK with that. There's no other way.

Once someone finds out about an idea and finds it notable and important, they have to take a position.
E.g. that it's good in theory but not very practical to use in life all the time. That's an example of a well known evasion. Or they think it's pretty good, but it's for geniuses. Or they think it'd be nice to learn it and they will work on it, later, but they are busy right now. There's many other evasions possible, many ways to rationalize why they aren't acting on the idea. Or they could believe it's really urgent and serious and try their best to learn and use it, which would be a good attitude, but is very rare. People always take some kind of position on ideas once they find out about them and acknowledge those ideas matter.

So the scenario I talked about, which I think lots of people see as an ideal to strive for, is actually really bad, and helps explain why the people pursuing that plan seem to be stuck indefinitely and never become amazing.

Life is now. Reason is urgent. These things get much worse over time unless you're making rapid progress and pursuing reason with the utmost seriousness and vigor. There can be no compromises where you work on rational philosophy a little bit here and there in your spare time. It can't wait. Nothing's more important than your mind. Prioritize your mind now or, by betraying it, you will destroy it and never again want to prioritize it.

As always with these things, there are rare heroic exceptions which no one knows how to duplicate on purpose, or predict, or how it works, etc. The human spirit, or something, is very hard to crush with literally-exactly 100% reliability, and there's billions of people. Here's a few quotes about that from The Return of the Primitive, by Ayn Rand:
“Give me a child for the first seven years,” says a famous maxim attributed to the Jesuits, “and you may do what you like with him afterwards.” This is true of most children, with rare, heroically independent exceptions.
With very rare exceptions, [young men with independent minds dedicated to the supremacy of truth] are perishing in silence, unknown and unnoticed.
There are exceptions who will hold out, no matter what the circumstances. But these are exceptions that mankind has no right to expect.
Finally I'll leave you with one of my favorite Ayn Rand quotes about urgency, about now, not later:

The Virtue of Selfishness, Doesn’t Life Require Compromise?:
The excuse, given in all such cases, is that the “compromise” is only temporary and that one will reclaim one’s integrity at some indeterminate future date. But one cannot correct a husband’s or wife’s irrationality by giving in to it and encouraging it to grow. One cannot achieve the victory of one’s ideas by helping to propagate their opposite. One cannot offer a literary masterpiece, “when one has become rich and famous,” to a following one has acquired by writing trash. If one found it difficult to maintain one’s loyalty to one’s own convictions at the start, a succession of betrayals—which helped to augment the power of the evil one lacked the courage to fight—will not make it easier at a later date, but will make it virtually impossible.

Elliot Temple | Permalink | Messages (20)

Skepticism vs. Infallibilism vs. Critical Rationalism

skeptics have the idea you can't be sure of anything. maybe you're right, maybe you're wrong. men can't have knowledge, it's kinda hopeless to figure things out.

this is weird because how did they figure it out?

then their opponents, the infallibilists, say they are sure of things.

but sometimes the stuff they are sure about turns out wrong later

both sides have the same hidden idea: that ideas should be proved or established or supported to make them sure or more sure.

and one side is saying we can do that, and the other side says it doesn't work so we're screwed.

the majority think we can be sure. because people do have knowledge. we build computers that work. we figured out how to make airplanes and bicycles.

but the doubters have some good points. there are logical reasons that the sureness stuff doesn't work. no one has ever been able to answer those logical arguments.

another approach is that we don't need to be sure. we can make an iPhone without being sure of anything, and it can still work. sureness was the wrong thing to look for. we should be looking for other stuff instead. so the whole debate was missing the point.

everyone was stuck on this issue for over 2000 years. Karl Popper got it unstuck like 50 years ago.

being sure is like trying to say "this idea is good because..." and then it scores points for every argument you give. people then compare how much sureness or points different ideas have.

the alternative is to look for problems with your ideas. try to figure out what's bad about them. if you can't find any problems, it's a good idea to use for now.

we don't have to be sure, but we can improve our ideas. if we see a problem and make a change to fix it, now we have a better idea than before. we don't know if it's true. we don't know if it has a bunch more problems. but we learned something. we made progress.

if an idea has a problem that isn't fixed, then we shouldn't use it no matter how sure anyone is. sureness isn't relevant.

and if there's no problems anyone knows of, then why wouldn't you use it? there's no objections. so sureness doesn't matter here either.

Example

so there's a cow farmer, and he says he's sure he has 3 cows. but a skeptic says "how do you know you have 3 cows? you can't be sure of anything. maybe you've been hallucinating and have goats"

the cow farmer is saying how sure he is when actually he shouldn't be sure. maybe he DID hallucinate. or lots of other things. there's ways he could be wrong. it's POSSIBLE.

it turns out some wolves ate one of the cows last night, and he didn't check yet. so actually he has 2 cows. he was wrong. he shouldn't have been so SURE.

the skeptic is dumb too b/c he just doubts everything. except not really. it's kinda random. he didn't point out that maybe the cow farmer didn't exist and he (the skeptic) was hallucinating. he didn't worry that maybe he hallucinated his dinner.

the skeptic didn't know the wolves attacked. he didn't have any information that there weren't 3 cows.

he wasn't saying something useful. there wasn't any way the cow farmer should act differently once he finds out the skeptic's idea.

so the guy who was sure was risking being wrong. he can't be SURE there were no hallucinations or wolves. but the skeptic is bringing up hallucinations without seeing any LSD lying around, without seeing any goats outside, without any reason to suspect a hallucination in this case.

this whole thing is silly and is pretty much how everyone thinks.

the cow farmer should say:
i'm not sure i have 3 cows. but i think i do. i saw 3 cows yesterday, and the day before. my family and i harvest their milk and it fills up the right number of bottles for 3 cows. it takes my son 3 times longer to clean up their poop than when we had 1 cow. they eat pizza like normal cows, not sushi like goats always want.

do you have any argument i'm hallucinating? do you know something i don't, which should change my view? do you have a criticism of the idea that i have 3 cows? not a reason it isn't guaranteed, but a reason it's actually wrong?
this way he's explaining why he thinks he has 3 cows, and asking for new information or criticism that would let him change his mind to a better idea.

if the skeptic doesn't have any info or criticism like that, then 3 cows is the best guess (idea). even if the wolves attacked and they don't know that, it was still the best guess given the information available.
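
to make the method concrete, here's a minimal sketch in Python. the ideas and criticisms are toy placeholders I made up for illustration; the point is just that you filter candidate ideas by known criticisms instead of scoring how sure you are.

```python
# toy sketch: keep whichever candidate ideas have no known criticism.
# surviving isn't proof; it just means "not refuted so far".

ideas = ["i have 3 cows", "i have 3 goats", "i have 100 cows"]

criticisms = {
    "i have 3 goats": "they eat like cows, not like goats",
    "i have 100 cows": "the milk only fills bottles for about 3 cows",
}

# an idea with no outstanding criticism is usable for now
unrefuted = [idea for idea in ideas if idea not in criticisms]

for idea in unrefuted:
    print("best guess for now:", idea)
# prints: best guess for now: i have 3 cows
```

if new information shows up later (like finding out about the wolves), that's a new criticism, the filtering changes, and the best guess changes with it. that's the tentativeness.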

Elliot Temple | Permalink | Messages (2)

Fallibilism

everyone has some mistaken ideas. and some good ideas.

they don't know which are which. some ideas they think are good are actually mistaken. some ideas they think are mistaken are actually good.

so then we can look at lots of a person's ideas and evaluations and ask: what if this one is mistaken? how might they find out? how might they fix it? if they're mistaken and they never find out, that means they won't fix it. is that a big deal?

often it is a big deal, and there's no serious, realistic efforts going into finding out what one is mistaken about.

Elliot Temple | Permalink | Messages (17)

Pragmatism

A lot of pragmatism comes from people losing arguments but still disagreeing. They don't know how to deny the truth of an idea, but they still don't want to act on it.

There is a gap between the knowledge they live by and the knowledge they use in debates. The knowledge applied to debates is what they call ivory tower abstractions, and the knowledge applied to life they call pragmatic.

This gap is a very very bad thing.

This separation results in lots of bad intellectual ideas that contradict reality. And lots of bad life choices that contradict principles and logic, e.g. by being superstitious.

Being able to speak intelligently about your life knowledge allows for getting advice and learning from criticism. Being able to apply abstract knowledge to life allows for using the scientific method, free trade, or successfully finding a book in a Dewey Decimal organized library.

Elliot Temple | Permalink | Messages (7)

Automizing

Objectivism discusses automizing the use of your ideas. For example, you automized walking. You can walk without consciously thinking about it. Walking works automatically. Walking is actually pretty complex and involves moving multiple muscles and balancing, but you can do all that automatically. Pretty cool!

Some people think automizing sounds mindless and are wary of it. What if I automate how I handle a situation and then I keep doing the same actions over and over without thinking? How do you automatize anything without losing control over your life?

Let's step back. There's a simple concept here. You do some stuff and the first time it takes time, effort, attention, work. But if you do it often, you learn how to do it easier. This frees up effort for other stuff. Learning better ways to do things, that consume less resources, isn't bad. That isn't losing control over your life.

You need to make good choices about what to use when. If you have a method of doing something without thinking about it consciously, that's a good tool. You can still choose when to use this method, or not. If you know how to clean your house without thinking about it (letting you focus on listening to audiobooks), that doesn't make you clean your house. You still get to control your life and choose if and when to clean.

People's methods of doing something – automatic or not – can be used as building blocks. You use the walking method while doing cleaning. The cleaning method involves doing multiple simpler methods together. (If you're a programmer, think of these as functions. You can build a cleaning function out of a walking function, a looking around function, an identifying dirt from visual data function, and so on. You would not want to write a cleaning function only in terms of basic actions like moving individual muscles.)
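
Here's a minimal sketch of that building-block idea in Python. The function names and behaviors are made up for illustration; the point is that the higher-level method is written in terms of simpler methods rather than basic actions.

```python
# Toy sketch: a higher-level method composed from simpler, already-
# automatized methods instead of raw low-level actions.

def walk_to(spot):
    # Stands in for an automatized skill: callers don't think about
    # the individual muscle movements involved.
    print(f"walking to {spot}")

def look_around(spot):
    # Another building block; returns what's observed at the spot.
    return ["dust", "book"]

def identify_dirt(observations):
    # Picks out which observations are dirt that needs cleaning.
    return [item for item in observations if item == "dust"]

def clean_room(spots):
    # The higher-level method is written in terms of the building
    # blocks, not in terms of basic actions like moving muscles.
    for spot in spots:
        walk_to(spot)
        for item in identify_dirt(look_around(spot)):
            print(f"cleaning up {item} at {spot}")

clean_room(["desk", "shelf"])
```

And you still choose when to call clean_room at all; having the method available doesn't force you to use it.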

People build up many layers of complexity. They automate things like a life schedule, and routine cleaning, and routine cooking and eating for mealtimes, and so on. Those automizations threaten their control over their life. They get so set in their ways, they have trouble choosing whether to keep doing that. The problem here isn't automization itself. It's having a bland repetitive life and basically habitually not thinking. That's a totally different sort of thing than creating building block methods – like walking, or cleaning – to use in your life or in other methods. And figuring out how to do them better, faster, easier.


Elliot Temple | Permalink | Messages (2)

Having Reasons

People on FI were discussing having reasons for things and saying it was justificationist and you should only worry about whether there is a negative problem with something, not a positive reason for something.

If someone asks why you're doing something, that isn't bad. It's good to have some concept of what you're doing, and why. What problem are you trying to solve and how will this solve it?

If you can't answer – if you can't say any reasons for what you're doing – prima facie there is a criticism there. Why don't you know in words what's going on? Why are you choosing to do it?

This is not unanswerable. But you should have an answer. If you can't say any reasons for what you're doing and you also don't have an answer to why you're doing it anyway (to address the kinda default well known criticism that knowing what problem you're trying to solve and how this will solve it is generally a pretty good idea), then that's bad. You should either have a reason you can say, or a reason to do it without a reason you can say.

If you can't say a reason to do it without a reason which you can say, what about a reason for doing it without that? Whatever you don't have, you could have a reason for doing it despite not having that.

The point is, you ought to be able to say something of some sort. If you can't, there is a criticism – that you have no idea what you're doing. (If you can argue against that – if you do have some idea what you're doing – then you could have said that info in the first place when questioned.)

I'm not convinced the quotes are substantively justificationist. And I'm really not convinced by like, "Don't ask reasons for doing stuff, only point out criticisms." Doing stuff for no reason is a criticism. In general people ought to do stuff to solve problems, and have some concept of how doing this will solve a problem they want to solve. If they aren't doing that, that isn't necessarily bad but they should have some idea of why it makes sense to do something else in this case.

You can't even criticize stuff in the usual way if you don't know what their goal is. You normally criticize stuff by whether it solves the problem it's aiming to. But if you don't know what they are aiming for, then you can't criticize in the normal way of pointing out a difference between the results you think they'll get and the results they are aiming for.

And if they can tell you a goal, or a problem they want to solve, then they do have a reason for doing it. They are doing it to accomplish that goal / solve that problem.

Elliot Temple | Permalink | Messages (4)

Interests in Problems or Topics

people wanting to get back to the "main" topic they're interested in is a really common mistake i've noticed.

people are interested in X. X leads to Y which leads to Z. people are much less interested in Z than X, even though pursuing Z is the way to pursue X.

this is really broken. it gets in the way of making progress. it gets in the way of truth-seeking wherever it leads. it gets in the way of interdisciplinary learning. it means people want to learn only as long as the learning stays within certain boundaries.

here's one of my explanations of what's going on:

people want to work in particular fields rather than solve particular problems.

if your focus is purely on solving a problem (X), you'd be interested in whatever helps accomplish that goal.

but suppose instead your focus is on "i like woodworking. i want to work with wood". then you won't be interested in philosophy related to learning which could help with woodworking. cuz you want to do woodworking, not philosophy.

if your focus was on solving a really hard woodworking problem, then it'd lead you to philosophy and you'd be interested in philosophy because it helps with your problem.

i think a lot of people care more about what kind of activity they are doing – e.g. woodworking not philosophy – than they care about problem solving.

people have interests in topics (e.g. woodworking, dance, psychology, literature, architecture, programming, chemistry, politics) rather than having problem-directed interests.

another reason people lose interest is:

the more steps there are, and the more complicated the project gets, and the more tangents it follows ... then the more it's a big, longterm project. and they don't expect to successfully complete big, longterm projects. so what's the point?

Elliot Temple | Permalink | Messages (220)

curi Writes Statements

People are really complex.

Sometimes people are really stupid, cruel, mean, nasty, petty.

Sometimes people are heroic, productive, logical, innovative.

Most people are mixed.

People will be stupid about one issue and smart about another issue.

Most working people are more productive at work than outside of work.

People are individuals. No one is typical about everything. Everyone has some "quirks".

Intolerance of unconventional ideas and behaviors can affect everyone. Everyone does/thinks some stuff that many people would punish as deviance.

Presenting as mostly normal usually, but not always, gets people to forgive a few quirks.

In very short online interactions, most ways of presenting as mostly normal don't work. People don't hear that you have a normal accent, don't see you make normal facial expressions, don't see your normal clothes, don't know your location, haven't come to the same physical location as you, and more.

There's social pressures that push people to be polite. Most of them work better in person than online.


Approximately no one is looking to learn philosophy.

People getting philosophy degrees are looking to get philosophy degrees, they aren't looking to actually learn philosophy. They often also want some other things like to join a subculture.

If philosophy degree students cared much about learning philosophy they would read and discuss more philosophers on their own. They'd want to be familiar with more philosophers than their classes focus on. This would be visible at discussion forums for Rand, Popper and others.

If philosophy degree students cared much about learning philosophy, some of them would have substantial success at learning philosophy. This would be visible. There would be more skilled philosophers writing great stuff.

The vast majority of people are very passive.

People don't usually look for much of anything. They usually follow, obey, conform. Most people put a lot of effort into doing what they are "supposed to".

Many people think statements like these (about passivity) don't apply to them. They think to themselves that they are the exception. But most of them aren't exceptions, they're passive too.

Learning philosophy requires initiative, persistence and wanting to.

Learning philosophy requires being willing to think unpopular thoughts and do unconventional actions.

It works way better to see unusual ideas in a neutral or positive way, not as a downside or tradeoff.

Making progress in philosophy has the same requirements mentioned for learning it.

This isn't a complete list of requirements.


Elliot Temple | Permalink | Messages (0)

Writing For Audiences

Writing is impossible with no context.

Writing requires some kind of purpose or goal. That's part of the context.

Writing requires some concept of an audience. Who will read it? That's part of the context.

Does your audience speak English? That's important. If you are writing for people who speak English, that's an audience.

Is your audience people who are alive today, have internet access, and know how to read English? That affects writing decisions.

Are you focused on people reading your essay in the next 3 months? The next 3 years? The next 30 years? These are different audiences.

Are you writing stuff that you think is good? You're part of the audience.


Writers usually try to write for multiple different people at the same time. Not one-size-fits-all. That'd be too hard. But they aim for one-size-fits-many.

You can pick a single person, like yourself, and write primarily for that audience. But then whenever you write something you think would confuse most people, you change it. Whenever you think something would be a problem for lots of readers, you change it. This removes most of the quirks from your writing and makes it one-size-fits-many.

Some ways to try to please multiple people in the audience at the same time are messier. It can be a mess because audience members have contradictory ideas. How do you appeal to both sides of a disagreement?

It's generally best to take sides in disagreements that are important to what you're saying.

People often try to be neutral about controversies so they don't alienate either side.


Writing is communication.

Writing is always done in the context of some problem.

Having an idea of the problem(s) you're trying to solve helps you write better.

Because writing communicates, there's always an audience (person(s) receiving the communication) involved.

Even if the audience is only the writer.

Even if the writer never rereads what they write, and then deletes it, they communicate with themselves while they write it.

So the audience is always involved in the problem(s) writing addresses.

Generically, action always happens in context and tries to address some problems. And specifically writing involves an audience.


There are many ways to write the same idea.

Which way to write something depends on what you're trying to accomplish. Different ways have different advantages and disadvantages.

Which way to write it depends on the audience. Which way will be clearest to them? Which way will mislead them about something?

Without thinking about your audience, it's hard to make good choices about what to include in your writing. There's always more that could be said about a topic. You can't include it all.

Writing is always selective. The writer selects which stuff to include out of the infinity of possible ideas to write about.

When arguing a point, a writer decides which arguments to include and which not to mention. You can't mention every possible argument on a topic.

Some writers don't give their audience much thought.

They write for conventional people by default. But they don't realize they're doing that.

How do they decide which way to write something? By what seems normal to them.

People often write carelessly and haphazardly. That's another option.


Elliot Temple | Permalink | Messages (0)

Passivity

Reasons people are passive:

  • People are destructive, especially self-destructive. Not doing much limits the destruction.
  • People don't know what to do.
  • People don't want to make the wrong choices, so they try not to choose.
  • People don't want to be responsible for choosing stuff.

People broadly don't want to live. They don't want to do things, make choices, decide what happens – and maybe make mistakes and be responsible for some non-ideal outcomes. That's what life is. Acting and choosing. People don't like that. Passivity is their attempt to approximate death. It's their attempt to limit their lives.

Passivity is a choice and they're responsible for the consequences. There's no way to stop living besides actually dying. But being passive helps them minimize what actions they've clearly taken and what decisions they are clearly responsible for.

Like if I suggest we go to McDonalds and you say "OK" and then you have a bad time, you'll have an easier time lying to yourself that it isn't your fault. Your choice to say OK will seem different to you than leading the way. It's still your fucking life. You're still deciding what to do with your time. But you'd rather have someone else to (falsely) blame for your suffering than suffer less. So you avoid the sorts of situations where you'd be clearly responsible for problems. You avoid leading and being first and wait around for someone to tell you what to do, or even just make suggestions you can obey.


Elliot Temple | Permalink | Messages (0)

Philosophy

Anonymous asked a few questions:

What exactly is Philosophy?

there are lots of ideas in the world. it's confusing. people divide them up. math. chemistry. biology. economics. sports. poker. philosophy. we'll call these different fields.

philosophy is a really big group of ideas. it's not very specific.

the most important area of philosophy is about ideas. how do you get ideas? which ideas are good or bad? why? how do you find the truth? how do you find and deal with mistakes? how do you know an idea doesn't have any mistakes? how do you learn? what is learning? which ideas should you have?

this stuff is sometimes called other names like "critical thinking", "reason", "logic", "epistemology".

when i say "philosophy" this is the main stuff i usually have in mind. stuff about thinking well, dealing with ideas well. that's really important to every single field.

want to play poker well? you better have the right ideas about which hands to fold or not. want to be a good chemist? you better have the right ideas about how chemicals react, lab procedures, etc. want to be good at sports? you better have good ideas about how to train effectively and some good strategies to use in the game.

in each case there are a lot of ideas out there. some are good. some suck. there's lots of bad ideas about how to do stuff. it's pretty easy to go wrong.

ideas are the most important thing in the world. they determine how well you do at everything. so philosophy – which has ideas about dealing with ideas well – is the most important field.

there are other parts of philosophy. they include:

moral philosophy – another super important part of philosophy. what's a good life? what should people do in their lives? what are good goals and values? what's right or wrong? should you be honest? why? what are bad ways to treat people? like don't murder them, but also more subtle stuff like don't be an asshole. but it depends on the situation and can be complicated.

moral philosophy comes down to choices. every action you take in your life, you had a choice about which action to take. you could have done something else. moral philosophy guides you about what to choose to do.

ontology – ideas about existence. like: is reality an illusion? and where did the universe come from? you may have noticed sometimes fields get mixed up together a bit. like where the universe came from is also a physics question. labeling fields is just to try to keep things organized, but it's not that big a deal and doesn't have to be perfect, just useful.

philosophy of science – how does science work to get good ideas? how do scientists learn? it's a lot of the same stuff about dealing with ideas. but science is really important so it's worth some extra attention.

political philosophy – when people argue the current issues they call it politics. but when they try to talk about principles about how a country should be set up, how to organize society, etc, it's political philosophy. political philosophy looks at the big picture of politics. it's pretty necessary to understand this before you can deal with regular politics well, but most people who try to debate politics don't have much of a clue about it. this has some overlap with economics.

And should I learn philosophy?

yes.

you need to deal with ideas and choices in life.

if you deal with ideas badly, you will have a bad life.

everyone has a philosophy. everyone deals with ideas one way or another. the question is: do you put effort into getting philosophy right and judging for yourself which philosophy you want to follow? otherwise you'll just have a contradictory mix of things you heard here and there and didn't think about very carefully. (the argument in this paragraph is from Ayn Rand.)

How do I learn it?

there's lots of stuff about philosophy.

and lots of it disagrees with other stuff. there's tons of ongoing debates where people disagree.

you should look around at a wide variety of philosophy stuff and see what you think makes sense. you can find books, blog posts, youtube videos, discussion forums, etc

most people who look around choose lots of the wrong philosophy. it's easy to make mistakes.

what can you do about that? write your ideas down in public and listen to criticism from anyone. so if you're mistaken, and someone knows why you're mistaken, and can explain it in a way that you'll understand, and is willing to help, then you can find out. that helps a lot. most people won't do that.

you should include FI (Fallible Ideas) people in the "anyone" who can offer comments on your ideas. if you want other perspectives you can look around or ask us about them.

FI emails are public and have links on yahoo's website. anyone can read it. the link to an email can be shared just like any other website. people have to sign up and use email software to reply though. another way to share your ideas is make a public blog and turn on comments.

if you look into some non-FI philosophy you can talk about it here and get our perspective.

for learning FI philosophy you should do a mix of:

if you have a problem reading a book – any problem – stop reading and ask about it. bored? confused? something seems false? want more details on some part? discuss it. don't just give up or try to push forward and finish the whole book.

people can suggest answers or ways to get answers.

the more stuff you do alone, the more mistakes you can make that no one can tell you about. and lots of it could be wasted time. you could make a mistake then build on it.

even if everything is going well, discuss frequently. read something and think you understand it? cool, but write down what you think it's saying anyway. you might have it wrong. you might have half of it right but missed half.

lots of times people think they understand stuff but claim they have nothing to say. they don't understand it. if you're learning much you will have stuff to say. you can write ideas you learned you think are good. you can write questions you don't know the answer to yet. you can write additional ideas you have. you can make an example to illustrate an idea. you can say a counter-argument and why the counter-argument is wrong. and more.


Elliot Temple | Permalink | Messages (0)

Implementing Ideas

with startups people say the idea is worthless. there's only value in executing on an idea. making an actual business is the hard part. ideas are a dime a dozen.

in philosophy i think ideas have large value.

one difference is i mean fleshed out ideas. the worthless startup ideas are super vague and lacking detail. one of the reasons they lack value is when you try to build the company you have to figure out the 99% of the idea you left out initially.

what is the implementation of philosophy ideas, anyway? what do you do with them to add value?

you can work out the conclusions a principle leads to. but people won't be persuaded without understanding it themselves. and a list of conclusions is too inflexible and too hard to use if you don't understand the reasoning for them.

you can't do someone else's learning for them. they have to learn it. you can make some material to help an idea be easier to learn. you can organize it, add examples, answer common questions and criticisms, etc. i already do some of that.

if someone learns an idea well enough it's easy to use it in their life. the people who "know" or "agree with" an idea, but struggle to implement it, only know and agree with it by some low, inadequate standards. with a startup, implementing the business is a huge part of it. but with an idea, knowing it properly is 99% of the work.

if someone half-knows an idea, you could help them implement it early, or help them learn the rest. i think learning the rest is the way to go. it's the same principle as powering up until stuff is easy, then acting. implementing ideas when they are hard to implement is early action when you'd be better off powering up more. only doing powering up and easy things is way more efficient. doing hard things is hard and consumes tons of resources (time, attention, energy, effort, sometimes help from helpful people, sometimes money, etc). this connects to the powering up from squirrel morality.

backing up, let's list some meanings of implementing philosophy ideas:

  • learn them yourself
  • learn them for someone else
  • use them in your life
  • get them to be used in someone else's life
  • teach someone to use them in their life
  • work out the details of the ideas
  • be a politician or something and apply them to decisions for a country
  • figure out how to persuade yourself of the ideas, not just know what the ideas are, and do it
  • figure out how to persuade others of the ideas and do it
  • figure out how to persuade others to learn the ideas and do it
  • change your culture
  • change all cultures

i think a good idea, including the details of how it works, why all known rival ideas are mistaken, answers to known criticisms, etc, is a great value. that includes information about why it matters and what problems it solves, so people can see the importance and value.

that's enough.

if someone learned it, they'd be able to use it and benefit a ton. and it already says why they should learn it, why alternatives are worse, etc.

lots of people still won't learn ideas in that scenario. why? because they are irrational. they hate learning and change. they don't respond well to logical reasoning about what's best. they get emotional and defensive. all kinds of crap.

does an idea have to also deal with someone's irrationalities in order to have value? i don't think so, though it'd sure be valuable if it did.

another issue is people have to apply ideas to their lives. this is easy if you know enough and aren't irrationally sabotaging things, but it's not zero. so it's a sense in which the idea is incomplete. a good idea will basically have instructions for how to adjust your actions to different details, but you still have to think some to do it. it's like "some assembly required" furniture. which certainly does have value even though you have to screw in a few screws yourself.


Elliot Temple | Permalink | Messages (0)

Measurement Omission Disagreement

I consider measurement omission a narrow aspect of a broader issue. Objectivism, on the other hand, presents measurement omission as a huge, broad principle. There's a disagreement there.

When looking at stuff, we always must choose which attributes to pay attention to, because there are infinitely many attributes which are possible to look at. (This idea partly comes from Karl Popper.) We have to find ways to omit or condense some stuff or we'll have too much information to handle. Like Peikoff's principle of the crow, we can only deal with so much at once. So we use techniques like integrating, condensing, omitting, and providing references (like footnotes and links).

Regarding infinite attributes, let's look at a table. A table has infinitely many attributes you can define and could pay attention to. Most of them are dumb and irrelevant. Examples: the number of specks of dust on the table, the number of specks of dust with weight in a certain range, the number of specks of dust with color in a certain range. And just by varying the start and end of those ranges, you can get infinitely many attributes you could measure.
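
To make the infinity point concrete, here is a minimal sketch in Python (my own illustration; the weights and range endpoints are made up): each choice of endpoints defines a distinct attribute you could measure.

    # Hypothetical illustration: each choice of (lo, hi) defines a distinct
    # attribute of the table -- "number of dust specks with weight in [lo, hi)".
    def make_attribute(lo, hi):
        def attribute(speck_weights):
            return sum(1 for w in speck_weights if lo <= w < hi)
        return attribute

    weights = [0.1, 0.25, 0.3, 0.7]  # made-up speck weights, arbitrary units
    a1 = make_attribute(0.0, 0.5)
    a2 = make_attribute(0.2, 0.6)
    print(a1(weights), a2(weights))  # 3 2
    # Vary lo and hi continuously and you get infinitely many attributes,
    # almost all of them dumb and irrelevant.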

The way we choose to pay attention to some attributes in life, and not others, is not especially about measurement. Some attributes aren't measurements. I think some attributes aren't quantifiable in principle. Some attributes may be quantifiable in the future, but we don't know how to quantify them today. For example, do you feel inspired when looking at a painting? We don't know how to measure inspiration or what units to quantify it in.

Deciding which attributes are relevant to what you're doing requires judgement. While many cases are pretty easy to judge, some cases are more borderline and tricky. How do you judge well? I'm not going to try to explain that right now; I just want to say I don't think omitting measurements answers it overall (the measurement omission stuff definitely does help with some cases).


Elliot Temple | Permalink | Messages (0)

Paths Forward Short Summary

When there's a disagreement, ask yourself: "Suppose hypothetically that I'm wrong and the other guy is right. In what way would I ever find out and learn better?" If there's no good, realistic answer then you're bad at paths forward.

There exist methods for finding out you're mistaken about disagreements that aren't overly time consuming, and paths forward discusses them. (This has some overlap with Popper, but also adds ideas like having a public, written account of your position, by you or someone else, that you believe is correct and will take responsibility for. Popper didn't cover how to address all criticism without it taking too long.)

If you want to understand how paths forward work, go through these links:

http://fallibleideas.com/paths-forward

https://www.youtube.com/watch?v=zFpKP21u5Dc

http://curi.us/1761-paths-forward-summary

http://curi.us/1806-alans-paths-forward-summary

http://curi.us/1629-paths-forward-additional-thoughts


Elliot Temple | Permalink | Message (1)

Rejecting Gradations of Certainty

Mike S. asks:

How should we think about gradations of certainty in Critical Rationalist terms?

don't.

there are the following 3 situations regarding one single unambiguous problem. this is complete.

1) you have zero candidate solutions that aren't refuted by criticism.

gradations of certainty won't help. you need to brainstorm!

2) you have exactly one candidate solution which is not refuted by criticism.

tentatively accept it. gradations of certainty won't help anything.

(if you don't want to tentatively accept it – e.g. b/c you think it'd be better to brainstorm and criticize more – then that is a criticism of accepting it at this time.)

3) you have more than one candidate solution which is not refuted by criticism.

this is where gradations of certainty are mainly meant to help. but they don't for several reasons. here are 6 points, 3A-3F:

3A) you can convert this situation (3) into situation (1) via a criticism like one of these 2:

3A1) none of the ideas under consideration are good enough to address their rivals.

3A2) none of these ideas under consideration tell me what to do right now given the unsettled dispute between them.

(if no criticisms along those lines apply, then that would mean some of the ideas you have solve your problem. they tell you what to do or think given the various ideas and criticism. in which case, do/think that. it's situation (2).)

3B) when it comes to taking action in life, you can and should come up with a single idea about what to do, which you have no criticism of, given the various unresolved issues.

3C) if you aren't going to take any actions related to the issue, then there's no harm in leaving it unresolved for now and not knowing the answer. you don't have to rate gradations of certainty, you can just say there's several candidates and you haven't sorted it out yet. you would only need to rank them, or otherwise decide which to pursue, if you were going to take some action in relation to the truth of this matter (in which case see 3B).

3D) anything you could use to rank one idea ahead of another (in terms of more gradations of certainty, more justification, more whatever kind of score) either does or doesn't involve a criticism.

if it doesn't involve a criticism of any kind, then why/how does it provide a reason to rank one uncriticized idea above another (or add to the score of one over another)?

if it does involve a criticism, then the criticism should be addressed. criticisms are explanations of problems. addressing it requires conceptual thinking such as counter-arguments, explanations of why it's not a problem after all in this context, explanations of how to improve the idea to also address this criticism, etc. either you can address the criticism or you can't. if you can't that's a big deal! criticisms you see no way to address are show stoppers.

one doesn't ever have to act on or believe an idea one knows an unanswered criticism of. and one shouldn't.

also, to make criticism more precise, you want to look at it like this: first you have:

  • problem
  • context (background knowledge, etc)
  • idea proposed to solve that problem

then you criticize whether the idea solves the problem in the context. (i consider context implied as part of a problem, so i won't always mention it.)

if you have a reason the idea does not solve the problem, that's a show stopper. the idea doesn't work for what it's supposed to do. it doesn't solve the problem. if you don't have a criticism of the idea successfully solving the problem, then you don't have a criticism at all.

this differs from some loose ways to think about criticism which are often good enough. like you can point out a flaw, a thing you'd like to be better, without any particular problem in mind. then when you consider using the idea as a solution to some problem, in some context, you will find either the flaw does or doesn't prevent the idea from solving that problem.

in general, any flaw you point out ruins an idea as a solution to some problems and does not ruin it as a solution to some other problems.

3E) ranking or scoring anything using more than one variable is very problematic. it often means arbitrarily weighting the factors. this is a good article: http://www.newyorker.com/magazine/2011/02/14/the-order-of-things

3F) suppose you have a big pile of ideas. and then you get a list of criticisms. (it could be pointing out some ideas contradict some evidence. or whatever else). then you go through and check which ideas are refuted by at least one criticism, and which aren't. this does nothing to rank ideas or give gradations. it only divides ideas into two categories – refuted and not refuted. all the ideas in the non-refuted category were refuted by NONE of the criticism, so they all have equal status.
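
here's a minimal sketch of 3F in Python (my own illustration -- the idea and criticism contents are placeholders, and the refutes function is a stand-in for the real conceptual judgement of whether a criticism applies):

    # criticisms only partition ideas into refuted / non-refuted.
    # no ranking or gradations fall out of this; survivors all have equal status.
    ideas = ["idea A", "idea B", "idea C"]        # placeholders
    criticisms = ["criticism 1", "criticism 2"]   # placeholders

    def refutes(criticism, idea):
        # stand-in for the real judgement of whether this criticism says the
        # idea fails to solve the problem in context
        return (criticism, idea) in {("criticism 1", "idea A"),
                                     ("criticism 2", "idea C")}

    refuted = [i for i in ideas if any(refutes(c, i) for c in criticisms)]
    non_refuted = [i for i in ideas if not any(refutes(c, i) for c in criticisms)]
    print(refuted)      # ['idea A', 'idea C']
    print(non_refuted)  # ['idea B'] -- refuted by NONE of the criticisms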

i think what some people do is basically believe all their ideas are wrong, bad, refuted. and then they try to approach gradations of certainty by which ones are less wrong. e.g. one idea is refuted by 20 criticisms, and another idea is only refuted by 5 criticisms. so the one that's only refuted 5 times has a higher degree of certainty. this is a big mistake. we can do better. and also the way they decide what counts as one criticism (with or without weighting how much each criticism matters) is arbitrary and fruitless.

something they should consider instead is forming a meta idea: "Idea A is refuted in like a TON of ways and seems really bad and show-stopping to me b/c... Idea B has some known flaws but i think there's a good shot they won't ruin everything, in regards to this specific use case, b/c... And all the other ideas I know of are even worse than A b/c... So i will use idea B for this specific task."

then consider this meta idea: do you have a criticism of it, yes or no? if no, great, you've got a non-refuted idea to proceed with. if you do have a criticism of this meta idea, you better look at what it is and think about what to do about it.


for a lot more info, see this post: http://curi.us/1595-rationally-resolving-conflicts-of-ideas


Elliot Temple | Permalink | Messages (27)

Presupposing Intelligence in Epistemology

I've been discussing with Objectivists. I learned something new:

Lots of their thinking about epistemology presupposes an intelligent consciousness and proceeds from there.

They don't say this clearly. They claim to have answers to epistemological problems about how learning works (with perception, concept formation and induction). They claim to start at the beginning and work everything out.

Traditional approaches to induction try to say how intelligence works. The Objectivists claim they solved the problem of induction, but they aren't actually focusing on that traditional problem. They aren't very clear to themselves about what problem each idea is meant to answer, and don't consistently stick to addressing the same problem.

Their approach to concept formation presupposes intelligence. How do you know which concepts to form? How do you know which similarities and differences are important? How do you decide which of the many patterns in the world to pay attention to? Use common sense. Use intelligent judgement. Think about it. Use your mind. Consider what you value and which patterns are relevant to pursuing your values. Consider your interests and which patterns are relevant to your interests. And why do you want a mindless, mechanical answer someone could use without thinking, anyway?

So induction requires concept formation which requires being intelligent. Their take on induction presupposes, rather than explains, intelligence. It's kinda like saying, "You learn by using your intelligence to learn. It handles the learning, somehow. Now here are some tips on how to use your intelligence more effectively..."

They don't realize what's going on but this is a dirty trick. Induction doesn't work. How do you fix it? Well, induction plus intelligent thought is adequate to get intelligent answers. The intelligent thought does all the work! Any gaps in your theory of learning can be filled in if you presuppose an intelligence that is able to learn somehow.

One of the big points of epistemology is to figure out how intelligence learns without presupposing it works somehow. Yes it does work somehow, but let's figure out the details of the somehow!

I say new knowledge is created by evolution. They don't address the problem of how new knowledge can be created. Intelligence can do that, somehow. They don't know how. They seem to think they know how. They say intelligence creates new knowledge using perception, concept formation and induction. But then when you ask about the details of concept formation and induction, they presuppose intelligence...

Note: I do not blame Ayn Rand for this. I don't know how much of this is her fault. As far as I know from studying her writing, she didn't do this herself in her published works.


Elliot Temple | Permalink | Messages (4)

Screencast of my Objectivism Discussion Thinking and Writing Process

I recorded a screencast while writing replies on HBL about epistemology.

Link: Video: HBL Thinking and Writing Process

Watch to see me think out loud about HBL posts. See how I approach the topics, how I organize my thoughts, and how I write.

Talking allows me to provide different information about where I’m coming from than text does.

I’d appreciate comments, including criticism, on my method. You can see my process instead of just the final product.

HBL people tell me I’m mistaken about epistemology. Presumably there’s something wrong with my approach behind those mistakes. Please tell me if anyone can point out something I’m doing wrong.

The video will help people understand what I mean better and how I’m approaching HBL discussion. I hope the extra perspective on my views will clear up some misunderstandings.

I like seeing other people’s processes when I can. I can learn from how they do things, and it’s uncommon to get to see behind the scenes. Perhaps you could pick up a few tips and tricks from me, too.

You can get text copies of my replies on HBL or in my blog comments. (The linked comment plus the next 5.)

I talk a lot in this video. Strongly recommended! It was 3.5 hours raw. I reduced that to 2.5 hours in editing. I sped up the whole thing to 125%, then sped up some parts where I'm not talking to 300%.

If you like it, check out my other videos:

Philosophy Writing Playlist

Evidence and Criticism Playlist

My Gumroad store sells some newer videos I put extra effort into.


Elliot Temple | Permalink | Messages (3)

Indirection

I've identified a common, huge problem people have. They struggle with indirection.

They want Z. They find out that doing W will help them figure out X which will help them solve one problem with Y which is a component of Z. But they don't care about W much. They wanted to deal with stuff more directly related to Z. At every step in the chain of indirection, their motivation/interest/etc drops off significantly.

This ruins their lives.

Indirection is pretty much ever-present. Doing things well consistently requires doing some other stuff that's connected to it via several steps.

Say you want to be a great artist, but you're bad at English. This gets in the way of improving at art, e.g. by discouraging you from reading art books (reading is a difficult, slow struggle for you) and causing frequent misunderstandings of the content of art books and lecture videos. Do you then spend significant time and effort improving at English in order to improve your art? Many people wouldn't. They wanted to spend time working on art. They like art but not English. They're relatively rational about art, but not about English. And they suck at indirection. They do things like forget how working on English connects to their goal of making progress at art.

A lot more indirection than this is typical. When working on English, they will run into some other problems. While working on those, they'll run into sub-problems. While working on those, they'll run into sub-sub-problems. They'll need to solve some sub-sub-problems to make progress on the sub-problems to make progress on the problems in the way of English progress to enable making more progress with art books.

Sub-sub-sub-problems often get into philosophy and some other generic issues. They are bad at learning. They dislike criticism. They have problems with emotions. They aren't very precise or logical. They're biased rather than objective. They don't understand effective methods of problem-solving. They aren't persistent and just want things to be quick and easy or they give up and look for something they find more intuitive and straightforward. They are too "busy" or "tired". They are directing a lot of their effort towards their social life, and getting along with people, rather than to problem solving. etc, etc, etc

People are fine with indirection sometimes. They want a cookie, and they spend time reaching for a cookie jar and opening it, rather than only directly eating the cookie. That bit of indirection doesn't bother them.

One reason people have a problem with indirection is they have little confidence in their ability to complete long range projects. They don't expect to get to a positive conclusion they can't reach very quickly. They have a long history of giving up on projects after a short time if it isn't done yet. So any project with a lot of steps is suspect to them. Especially when some of the steps fall outside their primary interests. A physicist will work on a 20-step physics project, and if he doesn't finish it's ok because he was working on physics the whole time. But he won't work on learning philosophy of science in order to do physics better because if he doesn't complete that project (not only learn useful things about philosophy of science, but then also use them to make physics progress) he'll be unhappy because he enjoys physics but does not enjoy philosophy of science.

A major reason people suck at longterm projects is that their lives are overwhelmed with errors. Their ability to correct errors and solve problems is in a constant state of being overloaded and failing, and they end up having chronic problems in their lives. There are other reasons including that people have little clue what they want and that they have little freedom for the first 20 years of their lives so they can't reliably pursue longterm projects because the projects are disrupted by the people who control their lives (especially parents and teachers). After a whole childhood of only succeeding much with shortterm projects, people carry what's worked – and what they've actually learned how to do – into their adult life.

People also, frequently correctly, lack confidence in their own judgement. They think there is a chain of connections where they work on W to work on X to work on Y to get Z. But they don't trust their judgement. Often correctly. Often they're wrong over and over and their judgement sucks. It requires better judgement to deal with indirection. People with bad judgement (almost everyone) can have somewhat more success when focusing on limited, easy, short projects with fewer layers to them. But that's no real solution. The structure of life involves many connections between different areas (like English skill being relevant to being an artist, and philosophy skill being relevant to being a scientist) rather than being a bunch of narrow, separate, autonomous fields.

Pursuing problems in an open-ended way often takes you far afield.

One of the other issues present here is people have limited interests, rather than open-ended interests. That's really bad. People ought to have broader curiosity and interest in anything useful and important. One of the reasons for such limited interests is most people are really irrational with a few exceptions, so their interests are limited to the exceptions where they are less irrational. This gets in the way of open-ended problem solving where one seeks the truth wherever it may be found instead of sticking to a predetermined field.


a typical example of people sucking with indirection is they don't click on links much. they treat native content (directly in front of them) considerably differently than content one step removed (click a link, then see it).

this comes up in blog posts, newsletters, emails, forum discussions, on twitter, on facebook, in reddit comments, etc.

it's much worse when you reference a book. but even a link is such a big hurdle that most people won't click through and even check the length or see what sort of content it has.

this is pathetic and speaks very badly of the large majority of people who are so hostile to links. but there it is.

people do click more when you use crude manipulation, "link bait", cat pictures, etc. hell, a lot of people even click on ads. nevertheless the indirection of a link is often enough to kill a philosophy discussion. partly because their interest in philosophy is really fragile and limited in the first place, and partly because "do X (click link) to get Y (read more details on this point)" is actually a problematic amount of indirection for people.

another problematic kind of indirection for most people is discussing the terms or purpose or goals of a discussion, rather than just proceeding directly with the discussion itself.


Elliot Temple | Permalink | Messages (21)

Follow Your Interests

To a first approximation, follow your interests. If you see a problem with that, take an interest in fixing your (other) interests.


Elliot Temple | Permalink | Messages (2)

Ideas Matter

My new newsletter is out! It's a philosophy essay which you can read below. (Sign up here to receive newsletters!)

Explaining Philosophy Is Hard

There are several important ideas about philosophy to explain first. But you can't talk about them all at once. That's difficult to deal with. The issues are:

1) Explain that philosophy is the most important thing in the world, and in your individual life.

2) Explain specific philosophy ideas, e.g. how to discuss rationally, how to judge ideas, and how to treat children decently.

3) Explain how to learn philosophy instead of just reading a little bit and thinking it sounds nice.

4) Explain what philosophy is (ideas about how to think well and effectively, which is necessary for solving problems). And explain that everyone uses philosophy (the type of philosophy mentioned in previous sentence, not all types), and it's better to know what you're doing.

If I talk about (1) first, people often won't listen and claim it's false without understanding what it means. And even if they'll listen, they don't yet know how to judge ideas rationally. They don't know what to make of it or how to have a rational discussion to a conclusion. So their judgement and discussion of (1) are bad. And even if they decide philosophy is important, they still don't really know what to learn or how to learn it.

If I talk about (2) first, people generally like that better and agree more. But they treat it as a fun diversion or hobby, not something of the utmost importance. They don't study it seriously and learn it in depth, they only pursue a superficial understanding (which they overestimate because they don't realize how high a quality of ideas is achievable).

If I talk about (3) or (4) first, people don't care because they don't see philosophy as really important. They'd rather learn something about philosophy (2) than something about how to learn philosophy (3). A learning method (3) only gets anywhere if you also care (1) and have some things you want to learn with it (2).

The men who are not interested in philosophy need it most urgently: they are most helplessly in its power.

The men who are not interested in philosophy absorb its principles from the cultural atmosphere around them—from schools, colleges, books, magazines, newspapers, movies, television, etc. Who sets the tone of a culture? A small handful of men: the philosophers. Others follow their lead, either by conviction or by default.

-- Ayn Rand

Reaching Actual Conclusions

In each case, people usually don't learn philosophy and don't discuss the disagreement to a resolution. They silently ignore rational philosophy, or silently judge it's false. Or occasionally they argue back and forth a couple times, then quit without the discussion reaching a conclusion.

People don't know how to pursue issues to conclusions. And generally don't want to. They think it's too time consuming, and don't make the effort to learn how to do it faster (which they often think is hopeless because the methods taught at schools, and which are well known, don't work). The reason it takes too long (or usually never reaches a conclusion at all) is because they're doing it wrong and are ignorant of the correct methods.

People think that's just how life is. You disagree, everyone has their own opinions, and so what? Answer: whenever two people have contradictory ideas, at least one – and often both – are mistaken and could learn better ideas.

Chronic disagreements often cause misery in families and elsewhere. Disagreements become chronic because people don't discuss them to a conclusion, so issues don't get resolved. This misery is due to having no idea how to resolve disagreements rationally, rather than being a necessary fact of life.

There is a way to reach conclusions about ideas, and discuss disagreements, in a timely manner. I call it Paths Forward. It addresses all criticism in a time-efficient way so that if you're mistaken, and a better idea is known, you won't ignore it and stay mistaken unnecessarily.

Intellectuals

Lots of people say they care about ideas and care about the truth. Maybe not the majority. But many people think they are rational people with good ideas who think about things. That's pretty common. They have some respect for thinking, truth, and reason. These people ought to learn about how to think (philosophy) and how to discuss (philosophy) and specifically how to reach conclusions in discussions. But they usually either want to do something else or think they already know how to think and discuss.

To some extent, people are lying about their interest in the truth (lying to themselves even more than to others). They bicker and treat intellectual debate as a game. They don't systematically pursue ideas in a way that gets anywhere – produces actual conclusions. And they don't address all criticism of their positions. They ignore many known reasons they're mistaken which someone is willing to tell them, which isn't how you find the truth.

Self-Discussion

Note: Reaching conclusions in one's own mind is fundamentally the same issue as reaching conclusions in discussion. It uses the same methods. Self-discussion – thinking over issues in your head alone – works the same as discussion with others. In both cases, there's multiple contradicting ideas and you need to sort out what's true or false.

The main difference with self-discussion is people are biased and pick a side without actually knowing the answer. They don't have someone else arguing against them to pick up the slack for pointing out flaws in the ideas they are biased for, and good aspects of ideas they are biased against. (And when they do have someone arguing with them, they usually find it frustrating and want the guy to concede without actually figuring the issues out.)

I Address All Criticism

I have a different approach. I've addressed every criticism of my positions. There are exactly zero outstanding criticisms of my views. And I've energetically searched for criticism, I'm well read, and I've written tens of thousands of discussion contributions – so this isn't from lack of exposure to rival ideas. I seek out critics and will talk to anyone in public. But, sadly, I find other people don't want to understand or address my criticisms of their ideas.

Many people think this sounds impossible. How could I address every criticism? But when you're able to actually reach conclusions, there's no reason you can't do that on every common issue related to your thinking. Reaching conclusions one by one adds up. If you reach two conclusions per week, you'll have over 1000 in 10 years. And once you know what you're doing, in a good week, if you focus on thinking, you could figure out 20+ things, not just 2.

And some ideas and arguments are able to address dozens of criticisms (or more) at once because they involve a general principle. Good arguments usually address many criticisms, not just one, which conveniently saves a ton of time.

Sometimes you need to revise conclusions you reached in the past. Most people have such shoddy thinking that more or less all of it needs revision. But if you do more error-correction in the first place then less is needed later on.

Parents and Teachers Destroy Children's Minds

I possess ideas that would change the world if people cared to think. But they don't want to learn ideas or address criticism of their status quo beliefs.

One example: Current parenting and educational practices destroy children's minds. They turn children into mental cripples, usually for life. They create helpless followers who look to others to know what to do.

This is an opportunity to stop destroying your children, and also explains much of why it's so hard to find anyone who will discuss rationally or learn philosophy. Almost everyone is broken by being psychologically tortured for the first 20 years of their life. Their spirit is broken, their rationality is broken, their curiosity is broken, their initiative and drive are broken, and their happiness is broken. And they learn to lie about what happened (e.g. they make a Facebook page with only happy photos and brag that their life is wonderful). Occasionally a little piece of a person survives and that's what's currently considered a great man.

When I use words like "torture" regarding things done to children or to the "mentally ill", people often assume I'm exaggerating or speaking about the past when kids were physically beaten much more. But I mean psychological "torture" literally and they won't discuss the matter to a conclusion. It's one of many issues where the opposition refuses to think.

Typical parenting and educational practices are psychologically worse than torture in some ways, better in other ways, and comparable overall.

Parenting more reliably hurts people in a longterm way than torture, but has less overt malice and cruelty. Parenting is more dangerous because it taps into anti-rational memes better, but it also has upsides whereas torture has no upside for the victim.

Parents follow static memes to get obedience and pass on various ideas whether the child likes it or not. When children react with things like heavy crying and "tantrums", parents don't even realize that means they're hurting their child badly (much like torturers ignore the screams of their victims). And when the child stops crying and "throwing fits" so much because he learns he'll only be punished more for it, parents take that as evidence their child loves them. Stop and think about that for a minute. Everyone knows parents make their children cry hundreds of times and throw dozens of "tantrums". Everyone knows children often go through a "rebellious phase" (fighting with their hated parents) when they're age two, and when they're a teenager, and often during any or all of the years in between as well. Everyone knows there routinely are massive conflicts between parents and their children.

If you're blind to children being psychologically tortured, it's because you went through it too and rationalized it. Your parents hurt you and hurt you and crushed you until you became obedient and started thinking what you were told to think. Including believing, as demanded, that they were kind and gentle and loved you.

Punishments hurt children. That is their only purpose. Parents punish children to beat obedience into them. Period. And why do schools have tests and grades? So they can find and punish the children who didn't do as they were told (learn to repeat some ideas they aren't interested in and aren't allowed to disagree with).

It's so sad to watch after you see what's going on. But people don't want to learn to change. People would rather deny the world's problems than seriously consider – and discuss to a conclusion – ideas like these (which strike them as extreme and out of bounds).

You Could Be A Great Thinker

If you wanted to, you could ask a thousand questions, read everything you could get your hands on, and energetically pursue a better life with rational ideas. And you could pretty quickly be one of the world's best philosophers, since there isn't much competition. The world needs more thinkers very badly. You could help. (All people without major brain damage are plenty capable because innate degrees of intelligence and innate talents are a nasty myth. That's one of the things you could learn about.)

Or you could think I'm wrong, and not say anything in order to prevent me from pointing out the holes in your reasoning. Or you could think I have good points and then do little or nothing, but console yourself by pretending you intended to and telling yourself you appreciate most of what I write. Or you could think you're doing something else that's even more important, and never discuss which is actually more important. That's up to you.

I'll Continue Regardless

What's up to me is to continue improving the cutting edge ideas in philosophy, even if I must do it alone. And to seek out anyone who cares to think and learn, even though I live in an irrational, anti-intellectual culture. Whatever you do, I'll continue. I, for one, know that good ideas are the most important thing on Earth.

If you're interested, act like it. Read, learn, think, discuss.

A philosophic system is an integrated view of existence. As a human being, you have no choice about the fact that you need a philosophy. Your only choice is whether you define your philosophy by a conscious, rational, disciplined process of thought and scrupulously logical deliberation—or let your subconscious accumulate a junk heap of unwarranted conclusions, false generalizations, undefined contradictions, undigested slogans, unidentified wishes, doubts and fears, thrown together by chance, but integrated by your subconscious into a kind of mongrel philosophy and fused into a single, solid weight: self-doubt, like a ball and chain in the place where your mind’s wings should have grown.

You might say, as many people do, that it is not easy always to act on abstract principles. No, it is not easy. But how much harder is it, to have to act on them without knowing what they are?

-- Ayn Rand


Links

The Pursuit of Happiness.

No One Else Discusses Ayn Rand.

Ayn Rand Quotes Discussion.

Critical Review of Ayn Rand Contra Human Nature.

Paths Forward links. These talk about how to rationally discuss to a conclusion instead of dropping out of discussions while not addressing some criticism.

Rationally Resolving Conflicts of Ideas. If you genuinely want to learn, it involves reading multiple links and books, and discussing them to clear up misunderstandings, find out details, get questions answered, etc...


Want more? Sign up to receive free newsletters.


Elliot Temple | Permalink | Messages (7)

Discussion Basics

you have a problem. e.g. you want an answer to a question like whether the many-worlds interpretation of quantum physics is true. or you want to know how to build a submarine. or you want to know how to win Overwatch games. or you want to know how to treat your children.

this leads to other problems:

  • how do you ask a question?
  • how do you read the answer to a question from someone else and understand it?
  • how do you judge if an answer is good or bad?

and working on this leads to other problems, e.g.:

  • how do you take one or more answers with some value, but some flaws, and improve them into one good answer?
  • how do you know if you understood an answer well enough or should ask clarifying questions?

and working on those leads to other problems, e.g.:

  • how do you communicate effectively instead of ineffectively?
  • what info should you include or not include in communications?
  • what are examples useful for?
  • how and when should you use examples, and how do you make them effective?
  • what topics should i be interested in and talk about and ask questions about?
  • how do you use abstract ideas in your life? what do you do with them besides remember them and occasionally mention them in conversations?

(and you need to be able to come up with questions like these on your own, and come up with more detailed ones and come up with your own thoughts about it, not just ask a really broad generic question with none of your own thinking in it. don't use my list. make your own list. this is a demo, not something you should copy. pursue your own questions, not my questions.)

lots of these problems involve basic stuff that comes up over and over when dealing with many different problems.

things like asking questions and communicating are skills that you'll use over and over. that's why they are basic. they are so important to so many things that people figure you need to learn them early on so you can be reasonably effective in life. everyone is expected to know them.

but most people are awful at lots of basic stuff like this.

and then they keep trying to have discussions while fucking up the basics, and so the discussions fail.

and they never find their way from the discussions to the basics. they don't, on wanting to ask a question, wonder about how to ask questions. they don't, on wanting to communicate something, wonder about how to communicate. they don't take an interest in the skills they are trying to use.

this is horribly broken and is a huge part of how people suck so much and stay so shitty.

you need to learn basic skills. you need an understanding of how to discuss, how to communicate, how to ask and answer questions, how to judge ideas, etc.

if you aren't interested in this, you should become interested in it by seeing how it's needed for dealing with more or less all of your actual interests. your interests lead to these basics (this needs to be an active process of you finding and following leads, not a passive process of being led). unless you're blocking and sabotaging, or passive and helpless.


Elliot Temple | Permalink | Messages (0)

Can Win/Win Solutions Take Too Long?

Win/win solutions don't ever take too long.

Suppose you have conflicting ideas X and Y. Then you can decide: "this would take too long to sort out whether X or Y is better. so I will just do Z right away b/c it's not worth optimizing". Z can be a win/win.

note: Z could be X or Y, but is more often similar to X or Y but not exactly identical. Z can also be some kinda compromise thing that mixes X and Y. or Z can be something else, like a simple, unambitious alternative.

if doing Z is something that the pro-X and pro-Y factions in your mind can be happy with (since they value saving time and not over-optimizing), then you have a win/win.

so that's why win/wins never take too long. the cases where choosing between X and Y would take too long are addressed in this way.

if you cannot find a Z which is a win/win, you have a problem to address there. it's worth some attention. why does one or both factions in you reject every Z you think of? the reason is worth considering more than zero. it ought to be addressed somehow. you need to know what's going on there and come up with something OK (not terrible) to do about it; don't just ignore the problem.


Elliot Temple | Permalink | Messages (10)

Epistemology

I wrote:

The thing to do [about AI] is figure out what programming constructs are necessary to implement guesses and criticism.

Zyn Evam replied (his comments are green):

Cool. Any leads? Can you tell more? That's is what I have problems with. I cannot think of anything else than evolution to implement guesses and criticism.

the right answer would have to involve evolution, b/c evolution is how knowledge is created. i wonder why you were looking for something else.

one of the hard problems is:

suppose you:

  1. represent ideas in code, in a general way
  2. represent criticism in code (this is actually implied by (1) since criticisms are ideas)
  3. have code which correctly detects which ideas contradict each other and which don't
  4. have code to brainstorm new ideas and variants of existing ideas
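
as a very rough sketch in Python (not a real design -- every name here is hypothetical), the kind of interfaces items 1-4 point at might look like the following. the stubbed-out bodies are exactly where the hard, unsolved work is:

    from dataclasses import dataclass

    @dataclass
    class Idea:
        content: str  # placeholder; a genuinely general representation of ideas is an open problem

    # (2) criticisms are themselves ideas
    Criticism = Idea

    def contradicts(a: Idea, b: Idea) -> bool:
        # (3) correctly detect whether two ideas contradict -- unsolved, so just a stub
        raise NotImplementedError

    def brainstorm(existing: list) -> list:
        # (4) generate new ideas and variants of existing ones -- unsolved, so just a stub
        raise NotImplementedError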

that's all hard. but you still have the following problem:

two ideas contradict. which one is wrong? (or both could be wrong.)

this is a problem which could use better philosophy writing about it, btw. i'd expect that philosophy work to happen before AI gets anywhere. it's related to what's sometimes called the duhem-quine problem, which Popper wrote about too.

one of my own ideas about epistemology is to look at symmetries. two ideas contradicting is symmetric.

what do you mean by symmetries? how two ideas contradicting symmetric? could you give an example?

"X contradicts Y" means that "Y contradicts X". When two ideas contradict, you know at least one of them is mistake, but not which one. (Actually it's harder than that because you could be mistaken that they contradict.)

Criticism fundamentally involves contradiction. Sometimes a criticism is right, and sometimes the idea being criticized is right, and how do you decide which from the mere fact that they contradict each other?

With no additional information beyond "X and Y contradict", you have no way to take sides. And labelling Y a criticism of X doesn't mean you should side with it. X and Y have symmetric (equal) status. In order to decide whether to judge X or Y positively you need some kind of method of breaking the symmetry, some way to differentiate them and take sides.

Arguments are often symmetric too. E.g., "X is right because I said so" can be used equally well to argue for Y. And "X is imperfect" can be used equally well to argue against Y.

How to break this kind of symmetry is a major epistemology problem which is normally discussed in other terms like: When evidence contradicts a hypothesis, it's possible to claim the evidence is mistaken rather than the hypothesis. (And sometimes it is!) How do you decide?

So when two ideas contradict we know one of them at least is mistaken, but not which one. When we have evidence that seems to contradict a hypothesis we can never be sure that it indeed contradicts it. From the mere fact of contradiction, without additional information, we cannot decide which one is false. We need additional information.

Hypotheses are built on other hypotheses. We need to break the symmetry by looking at the hypotheses on which the contradicting ideas depend. And the question is: how would you do that? Is that right?

Mostly right. You can also look at the attributes of the contradicting ideas themselves, gather new observational data, or consider whatever else may be relevant.

And there are two separate questions:

  1. How do you evaluate criticisms at all?

  2. How do you evaluate criticisms formally, in code, for AIs?

I believe I know a lot about (1), and have something like a usable answer. I believe I know only a little about (2) and have nothing like a usable answer to it. I believe further progress on (1) -- refining, organizing, and clarifying the answer -- will help with solving (2).

Below I discuss some pieces of the answer to (1), which is quite complex in full. And there's even more complexity when you consider it as just one piece fitting into an evolutionary epistemology. I also discuss typical wrong answers to (1). Part of the difficulty is that what most people believe they know about (1) is false, and this gets in the way of understanding a better answer.

My answer is in the Popperian tradition. Some bits and pieces of Popper's thinking have fairly widespread influence. But his main ideas are largely misunderstood and consequently rejected.

Part of Popper's answer to (1) is to form critical preferences -- decide which ideas better survive criticism (especially evidentiary criticism from challenging test experiments).

But I reject scoring ideas in general epistemology. That's a pre-Popper holdover which Popper didn't change.

Note: Ideas can be scored when you have an explanation of why a particular scoring system will help you solve a particular problem. E.g. CPU benchmark scores. Scoring works when limited to a context or domain, and when the scores themselves are treated more like a piece of evidence to consider in your explanations and arguments, rather than a final conclusion. This kind of scoring is actually comparable to measuring the length of an object -- you define a measure and you decide how to evaluate the resulting length score. This is different than an epistemology score, universal idea goodness score, or truth score.

I further reject -- with Popper -- attempts to give ideas a probability-of-truth score or similar.

Scores -- like observations -- can be referenced in arguments, but can't directly make our decisions for us. We always must come up with an explanation of how to solve our problem(s) and expose it to criticism and act accordingly. Scores are not explanations.

This all makes the AI project harder than it appears to e.g. Bayesians. Scores would be easier to translate to code than explanations. E.g. you can store a score as a floating point number, but how do you store an explanation in a computer? And you can trivially compare two scores with a numerical comparison, but how do you have a computer compare two explanations?

Well, you don't directly compare explanations. You criticize explanations and give them a boolean score of refuted or non-refuted. You accept and act on a single non-refuted explanation for a particular problem or context. You must (contextually) refute all the other explanations, rather than have one explanation win a comparison against the others.
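
Here's a minimal sketch of that procedure in Python (my own illustration; it assumes representations of explanations and criticisms already exist, and that a refutes judgement is available -- which is where the real difficulty lives):

    def evaluate(explanations, criticisms, refutes):
        # Boolean evaluation: each explanation is either refuted or non-refuted.
        # No scores, no rankings, no direct comparisons between explanations.
        survivors = [e for e in explanations
                     if not any(refutes(c, e) for c in criticisms)]
        if len(survivors) == 1:
            return survivors[0]  # accept and act on the single non-refuted explanation
        # zero survivors: brainstorm more ideas; several survivors: more criticism
        # is needed (e.g. a criticism of acting on any of them given the unsettled dispute)
        return None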

This procedure doesn't need scores or Popper's somewhat vague and score-like critical preferences.

This view highlights the importance of correctly judging whether an idea refutes another idea or not. That's less crucial in scoring systems where criticism adds or subtracts points. If you evaluate one issue incorrectly and give an idea -5 points instead of +5 points, it could still end up winning by 100 points so your mistake didn't really matter. That's actually bad -- it essentially means that issue had no bearing on your conclusion. This allows for glossing over or ignoring criticisms.

A correct criticism says why an idea fails to solve the problem(s) of interest. Why it does not work in context. So a correct criticism entirely refutes an idea! And if a criticism doesn't do that, then it's harmless. Translating this to points, a criticism should either subtract all the points or none, and thus using a scoring system correctly you end up back at the all-or-nothing boolean evaluation I advocate.

This effectively-boolean issue comes up with supporting evidence as well. Suppose some number of points is awarded for fitting with each piece of evidence. The points can even vary based on some judgement of how important each piece of evidence is. The importance judgement can be arbitrary; it doesn't even matter to my point. And take "evidence fitting with or supporting a theory" to mean non-contradiction, since the only known alternatives basically consist of biased human intuition (aka using unstated, ambiguous ideas without figuring out what they are very clearly).

So you have a million pieces of evidence, each worth some points. You may, with me, wish to score an idea at 0 points if it contradicts a single piece of evidence. That implies only two scores are possible: 0 or the sum total of the point value of every piece of evidence.
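
Here's a toy illustration of that collapse (the number of evidence pieces and their point values are made up):

    # Award points per piece of evidence an idea fits, but score 0 if the idea
    # contradicts ANY piece of evidence.
    evidence_points = [3, 7, 2, 5]  # made-up point values for 4 pieces of evidence

    def score(contradicted):  # contradicted: one bool per piece of evidence
        if any(contradicted):
            return 0
        return sum(evidence_points)

    print(score([False, False, False, False]))  # 17 -- the only possible non-zero score
    print(score([False, True, False, False]))   # 0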

But let's look at two ways people try to avoid that.

First, they simply don't add (or subtract) points for contradiction. The result is simple: some ideas get the maximum score, and the rest get a lower score. Only the maximum score ideas are of interest, and the rest can be lumped together as the bad (refuted) category. Since they won't be used at all anyway, it doesn't matter which of them outscore the others.

Second, they score ideas using different sets of evidence. Then two ideas can score maximum points, but one is scored using a larger set of evidence and gets a higher score. This is a really fucked up approach! Why should one rival theory be excluded from being considered against some of the evidence? (The answer is because people selectively evaluate each idea against a small set of evidence deemed relevant. How are the selections made? Biased intuition.)

There's an important fact here which Popper knew and many people today don't grasp. There are infinitely many theories which fit (don't contradict) any finite set of evidence. And these infinitely many theories include ones which offer up every possible conclusion. So there are always max-scoring theories, of some sort, for every position. Which makes this kind of scoring end up equivalent to the boolean evaluations I advocated in the first place. Max-score or not-max-score is boolean.

Most of these infinitely many theories are stupid, which is why people try to ignore them. E.g. some are of the form, "The following set of evidence is all correct, and also let's conclude X." X here is a completely unargued non sequitur conclusion. But this format of theory trivially allows a max-score theory for every conclusion.

The real solution to this problem is that, as Deutsch clearly explained in FoR (with the grass cure for the cold example), most bad ideas are rejected without experimental testing. Most ideas are refuted on grounds like:

  1. bad explanation

I was going to make a longer list, but everything else on my list can be considered a type of bad explanation. The categorizations aren't fundamental anyway, it's just organizing ideas for human convenience. A non sequitur is a type of bad explanation (non explanation). And a self-contradictory idea is a type of bad explanation too. And having a bad explanation (including none) of how it solves the problem it's supposed to solve is another important case. That gets into something else important which is understood by Popper and partly by Rand, but isn't well known:

Ideas are contextual. And the context is, specifically, that they address problems. Whether a criticism refutes an idea has to be evaluated in a particular context. The same idea (as stated in English) can solve one problem and fail to solve another problem. One way to approach this is to bundle ideas with their context and consider that whole thing the idea.

Getting back to the previous point, it's only ideas which survive our initial criticism (including not blatantly contradicting evidence we know offhand) that we take more interest in and start carefully comparing against the evidence and testing experimentally. Testing helps settle a small number of important cases, but isn't a primary method. (Popper only partly understood this, and Deutsch got it right.)

The whole quest -- to judge ideas by how well (degree, score) they fit evidence -- is a mistake. That's a dead end and a distraction. Scores are a bad idea, and evidence isn't the place to focus. The really important thing is evaluating criticism in general, most of which is broadly related to the question: what makes explanations bad?

BTW, what is an explanation? Loosely it's the kind of statement which answers why or how. The word "because" is the most common signal of explanations in English.

Solving problems requires some understanding of 1) how to solve the problem and 2) why that solution will work (so you can judge if the solution is correct). So explanation is required at a basic level.

So, backing up, how do you address all those stupid evidence-fitting rival ideas? You criticize them (by category, not individually) for being bad explanations. In order to fit the evidence and have a dumb conclusion, they have to have a dumb part you can criticize (unless the rival idea actually isn't as dumb as you thought, a case you have to be vigilant for). It's just not an evidence-based criticism (and nor should the criticism be done with unstated, biased commonsense intuitions combined with frustration at the perversity of the person bringing an arbitrary, dumb idea into the discussion). And how do you address the non-evidence-fitting rival ideas? By rejecting them for contradicting the evidence (with no scoring).

Broadly it's important to take seriously that every flaw with an idea (such as contradicting evidence, having a self-contradiction, having a non sequitur, or having no explanation of how or why it solves the problem it claims to solve) either 1) ruins it for the problem context or 2) doesn't ruin it. So every criticism is either decisive or (contextually) a non-criticism. So evaluations of ideas have to be boolean.

There is no such thing as weak criticism. Either the criticism implies the idea doesn't solve the problem (strong criticism), or it doesn't (no criticism). Anything else is, at best, more like margin notes which may be something like useful clues to think about further and may lead to a criticism in the future.

The original question of interest was how to take sides between two contradicting ideas, such as an idea and a criticism of it. The answer requires a lot of context (only part of which I've covered above), but then it's short: reject the bad explanations! (Another important issue I haven't discussed is creating variants of current ideas. A typical reaction to a criticism is to quickly and cheaply make a new idea which is a little different in such a way that the criticism no longer applies to it. If you can do this without ruining the original idea, great. But sometimes attempts to do this run into problems like all the variants with the desired-traits-to-address-the-criticism ruin the explanation in the original idea.)


Elliot Temple | Permalink | Messages (0)

Aristotle (and Peikoff and Popper)

I just listened to Peikoff's lectures on Aristotle. I also reread Popper's WoP introduction about Aristotle. some thoughts:

http://www.peikoff.com/courses_and_lectures/the-history-of-philosophy-volume-1-–-founders-of-western-philosophy-thales-to-hume/

btw notice what's missing from the lecture descriptions: Parmenides and Xenophanes.

this is mostly Peikoff summary until i indicate otherwise later.

Aristotle is a mixed thinker. some great stuff and some bad stuff.

Part of the mix is because it's ancient philosophy. They didn't have modern science and some other advantages back then. It's early thinking. So Aristotle is kinda confused about God and his four causes. It was less clear back then what is magical thinking and what's rational-scientific thinking.

Aristotle is bad on moderation. He thought (not his original idea) that the truth is often found between two extremes.

Aristotle invented syllogism and formal logic. this is a great achievement. very worthwhile. it has a bad side to it which is causing problems today, but i don't blame Aristotle for that. it was a good contribution, a good idea, and it's not his fault that people still haven't fixed some of its flaws. actually it's really impressive he had some great ideas and the flaws are so subtle they are still fooling people today. i'll talk about the bad side later.

it's called formal logic because you can evaluate it based on the form. like:

All M are P.
S is an M.
Therefore, S is P.

this argument works even if you don't know what M, P and S are. (they stand for middle, predicate and subject.) (the classical example is M=man/men, P=mortal, S=Socrates.) Aristotle figured out the types of syllogism (there's 256. wikipedia says only 24 of them are valid though.)
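
a quick sketch in Python (my own illustration, not from Peikoff): you can check the 256 count (4 figures times 4 choices of A/E/I/O for each of the 3 propositions) and model the form above with sets:

    from itertools import product

    # 4 figures x 4^3 moods (each of the 3 propositions is type A, E, I or O)
    print(4 * len(list(product("AEIO", repeat=3))))  # 256

    # the form above, modelled with sets (contents are just a toy example):
    men = {"Socrates", "Plato"}               # M
    mortals = {"Socrates", "Plato", "Rex"}    # P
    s = "Socrates"                            # S
    assert men <= mortals                     # All M are P
    assert s in men                           # S is an M
    assert s in mortals                       # therefore S is P -- follows for any sets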

Aristotle was apparently good on some biology and other science stuff but i don't really know anything about that.

Aristotle started out as a student of Plato but ended up rejecting many of Plato's ideas.

Aristotle didn't say a ton about politics. What he said is mixed. Better than Plato.

Aristotle – like the Greeks in general (as opposed to e.g. pre-modern Christians) – cared about human happiness and life on Earth. and he thought morality was related to human happiness, success, effectiveness, etc. (as opposed to duty moralities from e.g. early Christians and Kant which say morality means doing your duty and this is separate from what makes you happy or makes your life good.)

Aristotle advocated looking at the world, empirical science. he invented induction.

Aristotle was confused about infinity. (Peikoff and some other Objectivists today like Harry Binswanger roughly agree with Aristotle's infinity mistakes.)

Aristotle was generally pro-human and pro-reason. in a later lecture Peikoff says the dark ages were fixed because European Christendom got some copies of Aristotle's writing from the Muslims and Jews (who were trying to reconcile him with their religions) and then Thomas Aquinas attempted to reconcile Aristotle with Christianity and this made it allowable for Christians to read and think about Aristotle which is what got progress going again.


now Popper's perspective, which Peikoff basically agrees with most of the facts about, but evaluates differently.

Popper agrees Aristotle did some great stuff and got a few things wrong. like Peikoff and a ton of other people. But there's a major thing Popper doesn't like. (BTW William Godwin mentioned disliking Aristotle and Plato but didn't say why.)

Aristotle wanted to say I HAVE KNOWLEDGE. this is good as a rejection of skepticism, but bad as a rejection of fallibility. Aristotle and his followers, including Peikoff, equivocate on this distinction.

Part of the purpose of formal logic is an attempt to achieve CERTAINTY – aka infallibility. that's bad and is a problem today.

Objectivism says it uses the word "certain" to refer to fallible knowledge (which they call non-omniscient knowledge. Objectivism says omniscience is impossible and isn't the proper standard for something to qualify as knowledge). and Ayn Rand personally may have been OK about this (despite the bad terminology decision). but more or less all other (non-Popperian) Objectivists equivocate about it.

this confusion traces back to Aristotle who knew induction was invalid and deduction couldn't cover most of his claims. (Hume was unoriginal in saying induction doesn't work, not only because of Aristotle but also because of various others. i don't know why Hume gets so much credit about this from Popper and others. Popper wrote that Aristotle not only invented induction but knew it didn't work.)

and it's not just induction that has these problems and equivocations, it's attempts at proof in general ("prove" is another word, like "certain", which Objectivists use to equivocate about fallibility/infallibility). how do you justify your proof? you use an argument. but how do you justify that argument? another argument. but then you have an infinite regress.

Aristotle knew about this infinite regress problem and invented a bad solution which is still in popular use today including by Objectivism. his solution is self-evident, unquestionable foundations.

Aristotle also has a reaffirmation by denial argument, which Peikoff loves, which has a similar purpose and which, like the self-evident foundations, is sophistry with logical holes in it.

Popper says Aristotle was the first dogmatist in epistemology. (Plato was dogmatic about politics but not epistemology). And Aristotle rejected the prior tradition of differentiating episteme (divine, perfect knowledge) and doxa (opinion which is similar to the truth).

the episteme/doxa categorization was kinda confused. but it had some merit in it. you can interpret it something like this: we don't know the INFALLIBLE PERFECT TRUTH, like the Gods would know (episteme). but we do have fallible human conjectural knowledge which is similar to the truth (doxa).

Aristotle got rid of the two categories, said he had episteme, and equivocated about whether he was a fallibilist or not.

here are two important aspects of the equivocation and confusion.

  1. Aristotle claimed his formal logic could PROVE stuff. (that is itself problematic.) but he knew induction wasn't on the same level of certainty as deduction. so he came up with some hedges, excuses and equivocations to pretend induction worked and could reach his scientific conclusions. Popper thinks there was an element of dishonesty here where Aristotle knew better but was strongly motivated to reach certain conclusions so came up with some bullshit to defend what he wanted to claim. (Popper further thinks Aristotle falsely attributed induction to Socrates because he had a guilty conscience about it and didn't really want the burden of inventing something that doesn't actually work. and also because if Socrates -- the ultimate doubter and questioner -- could accept inductive knowledge then it must be really good and meet a high quality standard!)

  2. I talk about equivocating about fallible vs. infallible because I conceive of it as one or the other, with two options, rather than a continuum. But Peikoff and others usually look at it a different way. instead of asking "fallible or infallible?" they ask something like "what quality of knowledge is it? how good is it? how justified? how proven? how certain?" they see a continuum and treat the issue as a matter of degree. this is perfect for equivocating! it's not INFALLIBLE, it's just 90% infallible. then when i talk about fallible knowledge, they think i'm talking about a point on the continuum and hear like 0% infallible (or maybe 20%) and think it's utter crap and i have low standards. so they accuse me and Popper of being skeptics.

the concept of a continuum for knowledge quality – something like a real number line on which ideas are scored with amount of proof, amount of supporting evidence/arguments, amount of justification, etc, and perhaps subtracting points for criticism – is a very bad idea. and looking at it that way, rather than asking boolean questions like "fallible or not?" and "is there a known refutation of this or isn't there?", is really bad and damaging.

Peikoff refers to the continuum with his position that ideas can be arbitrary (no evidence for it. reject it!), plausible (some evidence, worth some consideration), probable (a fair amount of evidence, pretty good idea), or certain (tons of evidence, reasonable people should accept it, there's no real choice or discretion left). he uses these 4 terms to refer to points on the continuum. and he is clear that it's a continuum, not just a set of 4 options.

But there is nothing more beyond fallible knowledge, short of infallible knowledge. And the ongoing quest for something fundamentally better than unjustified fallible knowledge has been a massive dead end. All we can do is evolve our ideas with criticism – which is in fact good enough for science, economics and every other aspect of life on Earth.
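To make the contrast concrete, here's a toy sketch (just an illustration; the function names and numbers are made up, and neither side's actual reasoning reduces to a few lines of code):

```python
# continuum style: score ideas on a number line, adding points for supporting
# evidence/arguments and subtracting points for criticism, then compare scores
def continuum_score(support_points, criticism_points):
    return support_points - criticism_points

# boolean style: ask yes/no questions about an idea instead of scoring it
def boolean_verdict(has_known_refutation):
    return 'rejected' if has_known_refutation else 'not refuted (tentatively accepted)'

print(continuum_score(95, 5))   # 90 -- a point on the continuum ("90% certain")
print(boolean_verdict(True))    # rejected
print(boolean_verdict(False))   # not refuted (tentatively accepted)
```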


Elliot Temple | Permalink | Message (1)

Reading Recommendations

I made a reading list. If you want to be good at thinking and know much about the world, these are the best books to read by the best thinkers. In particular, if you don't understand Ayn Rand and Karl Popper then you're at a huge disadvantage throughout life. (Almost everyone is at this huge disadvantage. It's a sad state of affairs. You don't have to be, though.) I put lots of effort into selecting the best books and chapters to highlight, and including brief summaries. The selected chapters are especially important for Karl Popper, who I don't think you should read cover-to-cover.

Many other philosophy books, including common recommendations, are actually so bad that people think intellectual books suck and give up on learning. So I want to help point people in the right direction. (If you think my recommendations are bad, speak up and state your judgement and criticisms. Don't silently dismiss the ideas with no possibility of being corrected if you're mistaken.)

Ayn Rand is the best moral philosopher. That covers issues like how to be happy, what is a good life, and how to make decisions. There's no avoiding those issues in your life. Your choice is whether to read the best ideas on the topic or muddle through life with some contradictions you picked up from your culture and never critically considered.

Karl Popper is the best philosopher of knowledge. That covers issues like how to learn, how to come up with solutions to problems (solutions are a type of knowledge, and problem solving is a type of learning), and how to evaluate ideas as good, bad, true or false. Critical thinking skills like this are part of everyone's life. Your choice is whether to use half-remembered half-false critical thinking skills you picked up in school, or to learn from the best humanity has ever had and consciously think things through.

I made a free video presentation covering the reading list. It'll help you understand the authors, find out which books interest you, and read more effectively. Take a look at the reading list, then check out my video overview.

Watch: Elliot presents the reading list. (This video is also a good introduction to philosophy and Fallible Ideas.)

If you have some interest in learning about reason, morality, liberalism, etc, please take a look at the reading list and watch the video. This was a big project to create a helpful resource and I highly recommend at least looking it over.

I also recorded two 3-hour discussions. I talked with other philosophers who are familiar with the material. We talk about what the books say and how they're valuable, who the authors are and what they think, why people have trouble reading, and some philosophical issues and tangents which come up.

If you love reading books, dive right in! But if you're like most people, you'll find podcasts easier. Most people find verbal discussion more fun and engaging than books. The podcasts will help you get information about what the books are like, which can help you become interested in the first place.

Buy: Alan Forrester Discussion

Buy: Justin Mallone Discussion


Elliot Temple | Permalink | Messages (23)

What Philosophy Takes

suppose someone wanted to know what i know today about philosophy.

they better be as smart, honest and good at learning as me or put in as much time/attention/effort as me. if they are way behind on both, how is that going to work?

if you aren't even close in either area, but you pretend you're learning FI, you're being irresponsible and lying to yourself. you don't actually have a plan to learn it which makes sense and which appears workable if you stop and look at it in broad strokes.

consider, seriously, what advantages you have, compared to me, if any. consider your actual, realistic capabilities. if the situation looks bad, that is good information to know, which you can use to formulate a plan for dealing with your actual situation. it's better to have some kind of plan than to ignore the situation and work with no plan or with a plan for a different (more positive) situation you aren't in.

if you're young, this stuff still applies to you. if you aren't doing much to learn philosophy now, when will you? it doesn't get easier if you wait. it gets harder. over time you will get less honest and more tied up in a different non-FI life.

whatever issues you have with FI, they won't go away by themselves. waiting won't fix anything. face them now, or don't pretend you're going to face them at all.

if you're really young, you may find it helpful to do things like learn to read first. there's audiobooks, but it isn't really just about reading, it's also vocabulary and other related skills. putting effort into improving your ability to read is directly related to FI, it's directly working on one of the issues separating you from FI. that's fine.

if you're doing something which isn't directly related, but which you think will help with FI, post it and see if others agree with your plan or think you're fooling yourself. if you're fooling yourself, the sooner you find out the sooner you can fix it. (or do you want to fool yourself?)


Elliot Temple | Permalink | Messages (4)

25 Robert Spillane Replies

Robert Spillane (RS) is a philosopher who worked with Thomas Szasz for decades. He comments on Critical Rationalism (CR) in his books. I think he liked some parts of CR, but he disagrees with CR about induction and some other major issues. Attempting to clear up some disagreements, I sent him a summary of CR I wrote (not published yet).

Previously I criticized a David Stove book he recommended, responded to him about RSI (we agree), replied positively to his article on personality tests, explained a Popper passage RS didn't understand, and wrote some comments about Popper to him.

RS replied to my CR article with 25 points. Here are my replies:

I am reluctant to comment on your article since it is written in a 'popular' style - as you say it is a summary article. Nonetheless, since you ask.......

I think writing in a popular (clear and readable) style is good. I put effort into it.

Speaking of style, I also think heavy use of quoting is important to serious discussions. It helps with responding more precisely to what people said, rather than to the gist of it. And it helps with engaging with people rather than talking past them.

(I've omitted the first point because it was a miscommunication issue where RS didn't receive my Stove reply.)

2. Your summary article is replete with tautologies which, while true, are trivial. The first paragraph is, therefore, trivial. And from trivial tautologies one can only deduce tautologies.

I’m not trying to approach philosophy by deduction (or induction or abduction), which I consider a mistaken approach.

Here's the paragraph RS refers to:

Humans are fallible. That means we’re capable of being mistaken. This possibility of making a mistake applies to everything. There’s no way to get a guarantee that one of your ideas is true (has no mistakes). There’s no guaranteed way to limit where a mistake could be (saying this part of my idea could be mistaken but not that part) or the size a mistake could be.

This makes claims which I believe most people disagree with or don’t understand, so I disagree that it’s trivial. I think it’s an important position statement to differentiate CR’s views from other views. I wish it was widely considered trivial!

I say, "There’s no way to get a guarantee that one of your ideas is true”. I don’t see how that's a tautology. Maybe RS interprets it as being a priori deducible from word definitions? Something like that? That kind of perspective is not how I (or Popper) approach philosophy.

I wrote it as a statement about how reality actually is, not how reality logically must be. I consider it contingent on the laws of physics, not necessary or tautological. I didn’t discover it by deduction, but by critical argument (and even some scientific observations were relevant). And I disagree with and deny the whole approach of a priori knowledge and the analytic/synthetic dichotomy.

3. Why are informal arguments OK? What is an example of an informal argument? It can't be an invalid one since that would not be OK philosophically, unless one is an irrationalist.

An example of an informal argument:

Socialism is a system of price controls. These cause shortages (when price ceilings are too low), waste (when price floors are too high), and inefficient production (when the controlled prices don’t match what market prices would be). Price floors cruelly keep goods out of the hands of people who want to purchase the goods to improve their lives, while denying an income to sellers. Price ceilings prevent the people who most urgently need goods from outbidding others for those goods. This creates a system of first-come-first-serve (rather than allocating goods where they will provide the most benefit), a shadow market system of friendships and favors (to obtain the privilege of buying goods), and a black market. Socialism sacrifices the total amount of wealth produced (which is maximized by market prices), and what do we get in return for a reduction in total wealth? People are harmed!

Szasz’s books are full of informal arguments of a broadly similar nature to this one. He doesn’t write deductions, formal logic, and syllogisms.

Informal arguments are invalid in the sense that they don’t conform to one of the templates for a valid deduction. I don't think that makes them false.

I don’t think it’s irrationalism to think there’s value and knowledge in that price controls argument against socialism, even though it’s not a set of syllogisms and doesn't reduce to a set of syllogisms.

The concept of formal logic means arguments which are correct based on their form, regardless of some of the specifics inserted. E.g. All X are Y. Z is X. Therefore Z is Y.

The socialism argument doesn’t work that way. It depends on the specific terms chosen. If you replace them with other terms, it wouldn’t make sense anymore. E.g. if you swapped each use of "floor" and "ceiling" then the argument would be wrong. Or if you replaced "socialism" with "capitalism" then it'd be wrong because capitalism doesn't include price controls.

The socialism argument is also informal in the sense that it’s fairly imprecise. It omits many details. This could be improved by further elaborations and discussion. It could also be improved with footnotes, e.g. to George Reisman’s book, Capitalism: A Treatise on Economics, which is where I got some of the arguments I used.

Offering finite precision, and not covering every detail, is also something I consider reasonable, not irrationalist. And I’d note Szasz did it in each of his books.

Informal arguments are OK because there’s nothing wrong with them (no criticism refuting their use in general – though some are mistaken). And because informal arguments are useful and effective for human progress (e.g. science is full of them) and for solving problems and creating knowledge.
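As a side note on the economics in the price controls example above: the shortage mechanism can be shown with a toy linear supply and demand model (the numbers are arbitrary, and this isn't part of the informal argument as written, just an illustration of one claim in it):

```python
def quantity_demanded(price):
    # buyers want less as the price rises (toy linear demand curve)
    return max(0, 100 - price)

def quantity_supplied(price):
    # sellers offer more as the price rises (toy linear supply curve)
    return max(0, price - 20)

market_price = 60  # here quantity_demanded(60) == quantity_supplied(60) == 40
ceiling = 40       # a legal maximum price set below the market price

shortage = quantity_demanded(ceiling) - quantity_supplied(ceiling)
print(shortage)    # 60 demanded - 20 supplied = 40 units of unmet demand
```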

4. I wasn't aware that there was A key to philosophy of knowledge (metaphor?). And how is 'fixing' mistakes effective if we are condemned to fallibility?

It's not a metaphor, it’s a dictionary definition. E.g. OED for key (noun): "A means of understanding something unknown, mysterious, or obscure; a solution or explanation.”

What does RS mean “condemned” to fallibility? If one puts effort into detecting and correcting errors, then one can deal with errors effectively and have a nice life and modern science. There’s nothing miserable about the ongoing need for critical consideration of ideas.

In information theory, there are methods of communicating with arbitrarily high (though not 100%) reliability over channels with a permanent situation of random errors. The mathematical theory allows dealing with error rates up to but not including 50%! In practice, error correction techniques do not reach the mathematical limits, but are still highly effective for enabling e.g. the modern world with hard disks and internet communications. (Source: Feynman Lectures On Computation, ch. 4.3, p. 107)
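To make that concrete, here's a minimal simulation of the simplest error-correcting scheme, repetition plus majority vote. (This is just a sketch, not one of the coding schemes Feynman covers, and it doesn't approach the mathematical limits; it only shows the decoded error rate falling as redundancy grows, so long as each copy's error rate is below 50%.)

```python
import random

def transmit(bit, copies, p_err):
    # send one bit as `copies` repetitions over a channel that flips each
    # repetition independently with probability p_err; decode by majority vote
    received = [bit ^ (random.random() < p_err) for _ in range(copies)]
    return int(sum(received) > copies / 2)

def decoded_error_rate(copies, p_err, trials=100_000):
    return sum(transmit(1, copies, p_err) != 1 for _ in range(trials)) / trials

random.seed(0)
for copies in (1, 3, 9, 27):
    # each copy is still corrupted 20% of the time, but the decoded error
    # rate keeps dropping as redundancy is added
    print(copies, decoded_error_rate(copies, p_err=0.2))
```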

The situation is similar in epistemology. Error correction methods like critical discussion don't offer any 100% guarantees, nor any quantifiable guarantees, but are still effective.

5. Critical rationalists leave themselves open to the charge of frivolity if they maintain that the 'sources of ideas aren't very important'. How is scientific progress possible without some 'knowledge' of ideas from the past?

Learning about and building on old ideas is fine.

The basic point here is to judge an idea by what it says, rather than by who said it or how he came up with it.

You may learn about people from the past because you find it interesting or inspiring, or in order to use contextual information to better understand their ideas. For example, I read biographies of William Godwin, his family, and Edmund Burke, in order to better understand Godwin’s philosophy ideas (and because it’s interesting and useful information).

6. Why must we be tolerant with, say, totalitarians? Do you really believe that Hitler could be defeated through argumentation?

I think Hitler could easily have been stopped without violence if various people had better ideas early enough in the process (e.g. starting at the beginning of WWI). And similarly the key to our current struggles with violent Islam is philosophical education – proudly standing up for the right values. The mistaken ideas of our leaders (and most citizens) are what let evil flourish.

7. One of the most tendentious propositions in philosophy is 'There is a real world.' Popper's 'realism' is Platonic.

So what if it's "tendentious"? What's the point of saying that? Is that intended to argue some point?

Popper isn't a Platonist and his position is that there is a real, objective reality and we can know about it. I was merely stating his position. Sample quote (Objective Knowledge, ch. 2.3, p. 36):

And Reid, with whom I share adherence to realism and to common sense, thought that we had some very direct, immediate, and secure perception of external, objective reality.

Popper's view is that there is an external, objective reality, and we can know about it. However, all our observations are theory-laden – we have to think and interpret in order to figure out what exists.

8. How can an idea be a mistake if its source is irrelevant?

Its content can be mistaken. E.g. "2+3=6" is false regardless of who writes it.

RS may be thinking of a statement like, "It is noon now." Whether that's true depends on the context of the statement, such as what time it is and what language it's written in. Using context to understand the meaning/content of a statement, and then judging by the meaning/content, is totally different than judging an idea by its source (such as judging an idea to be true or probably true because an authority said it, or because the idea was created by attempting to follow the scientific method).

9. One of the many stupid things Popper said was 'All Life is Problem Solving'. Is having sexual intercourse problem-solving? Is listening to Mozart problem-solving?

Yes.

RS calls it stupid because he doesn't understand it. He doesn't know what Popper means by the phrase "problem solving". Instead of finding out Popper's meaning, RS interpreted that phrase in his own terminology, found it didn't work, and stopped there. That's a serious methodological error.

Having sex helps people solve problems related to social status and social role, as well as problems related to the pursuit of happiness.

Listening to Mozart helps people solve the problem of enjoying their life.

The terminology issue is why I included multiple paragraphs explaining what CR means in my article. For example, I wrote, "[A problem] can be answering a question, pursuing a goal, or fixing something broken. Any kind of learning, doing, accomplishing or improving. Problems are opportunities for something to be better."

Despite this, RS still interpreted according to his own standard terminology. Understanding other perspectives, frameworks and terminology requires effort but is worthwhile.

The comment RS is replying to comes later and reads:

Solving problems always leads to a new situation where there’s new problems you can work on to make things even better. Life is an infinite journey. There’s no end point with nothing left to do or learn. As Popper titled a book, All Life is Problem Solving.

I brought up All Life is Problem Solving because part of its meaning is that we don't run out of problems.

10. 'All problems can be solved if you know how' is a tautology and has no contingent consequences.

It's not a tautology because there's an alternative view (which is actually far more popular than the CR view). The alternative is that there exist insoluble problems (they couldn't be solved no matter what knowledge you had). If you think that alternative view is wrong on a priori logical grounds, I disagree, I think it depends on the laws of physics.

11. 'Knowledge is power' entails 'power is knowledge' which is clearly false as an empirical generalisation.

"Knowledge is power" is a well known phrase associated with the Enlightenment. It has a non-literal meaning which RS isn't engaging with. See e.g. Wikipedia: Scientia potentia est.

I would be very surprised if RS is unfamiliar with this phrase. I don't know why he chose to split hairs about it instead of responding to what I meant.

12. 'If you have a correct solution, then your actions will work' is a tautology.

It's useful to point out because some people wouldn't think of it. If I omitted that sentence, some readers would be confused.

13. 'Observations play no formal role in creating ideas' is clearly false. Semmelweis based his idea about childbirth fever on observations and inductive inferences therefrom.

RS states the CR view is "clearly false". That's the fallacy of begging the question. Whether it's false is one of the things being debated.

Rather than assume CR is wrong, RS should learn or ask what CR's interpretation of that example is (and more broadly CR's take on scientific discovery). Popper explained this in his books, at length, including going through a variety of examples from the history of science, so there shouldn't be any mystery here about CR's position.

I don't think discussing this example is a good idea because it's full of historical details which distract from explaining issues like why induction is a myth and what can be done instead. If RS understood CR's position on those issues, then he could easily answer the Semmelweis example himself. It poses no particular challenge for CR.

Anyone who can't explain the Semmelweis example in CR terms is not adequately familiar with CR to reject CR. You have to know what CR would say about a scientific discovery like that before you decide CR is "clearly false".

14. 'Knowledge cannot exist outside human minds'. Of course it can if there are no human minds. I agree with Thomas Szasz who, in 'The Meaning of Mind' argued that while we are minded (mind the step) we do not have minds. 'Mind' should only be used as a verb, never as a noun. Popper's mind-body dualism is bad enough, but his pluralism is embarrassing.

I wrote "Knowledge can exist outside human minds." and this changes "can" to "cannot". RS, please use copy/paste for quotes to avoid misquotes.

I'm not a dualist.

It's fine to read my statement as "Knowledge can exist outside human brains" or outside people entirely. The point is knowledge can exist separate from an intelligent or knowing entity.

15. 'A dog's eyes contain knowledge'. I don't understand this since to know x is to know that x is true. Since truth is propositional, dogs don't have to deal with issues of truth. Lucky dogs!

CR disagrees with RS about what knowledge is, and claims e.g. that there is knowledge in books and in genes. Knowledge in genes has nothing to do with a dog knowing anything.

RS, what is your answer to Paley's problem? And what do you think genetic evolution creates?

16. Your use of 'knowledge' is somewhat eccentric if you claim that trees know that x.

I don't claim trees know anything, I claim that the genes in trees have knowledge of how to construct tree cells.

CR acknowledges its view of knowledge is non-standard, but nevertheless considers it correct and important.

17. 'Knowledge is created by evolution' is a tautology if we accept a liberal interpretation of 'created'. If we do not and we assume strict causation, it is false.

That knowledge can be created by evolution is contingent on the laws of physics, not tautological. RS does not state what the "liberal interpretation" he refers to is, nor what "strict causation" refers to, so I don't know how to answer further besides to request that he provide arguments on the matter (preferably arguments that would persuade me that RS understands evolution).

18. Ideas cannot literally replicate themselves.

This is an unargued assertion. Literally, they can. I think RS is simply concluding something is wrong because he doesn't understand it, which is a methodological error.

David Deutsch has explained this matter in The Fabric of Reality, ch. 8:

a replicator is any entity that causes certain environments to copy it.

...

I shall also use the term niche for the set of all possible environments which a given replicator would cause to make copies of it....

Not everything that can be copied is a replicator. A replicator causes its environment to copy it: that is, it contributes causally to its own copying. (My terminology differs slightly from that used by Dawkins. Anything that is copied, for whatever reason, he calls a replicator. What I call a replicator he would call an active replicator.) What it means in general to contribute causally to something is an issue to which I shall return, but what I mean here is that the presence and specific physical form of the replicator makes a difference to whether copying takes place or not. In other words, the replicator is copied if it is present, but if it were replaced by  almost any other object, even a rather similar one, that object would not be copied.

...

Genes embody knowledge about their niches.

...

It is the survival of knowledge, and not necessarily of the gene or any other physical object, that is the common factor between replicating and non-replicating genes. So, strictly speaking, it is a piece of knowledge rather than a physical object that is or is not adapted to a certain niche. If it is adapted, then it has the property that once it is embodied in that niche, it will tend to remain so.

...

But now we have come almost full circle. We can see that the ancient idea that living matter has special physical properties was almost true: it is not living matter but knowledge-bearing matter that is physically special. Within one universe it looks irregular; across universes it has a regular structure, like a crystal in the multiverse.

Add to this that ideas exist physically in brain matter (in the same way data can be stored on computer disks), and they do cause their own replication.

Understanding evolution in a precise, modern way was Deutsch's largest contribution to CR.

I don't expect RS to understand this material from these brief quotes. It's complicated. I'm trying to give an indication that there's substance here that could be learned. If he wants to understand it, he'll have to read Deutsch's books (there's even more material about memes in The Beginning of Infinity) or ask a lot of questions. I do hope he'll stop saying this is false while he doesn't understand it.

19. You claim that CR 'works'. According to what criteria - logical? empirical? pragmatic? If it is pragmatism - or what Stove calls 'the American philosophy of self-indulgence' - then all philosophies, religions and superstitions 'work' (for their believers).

CR works logically, empirically, and practically. That is, there's no logical, empirical or practical refutation of its effectiveness. (I'm staying away from the word "pragmatic" on purpose. No thanks!)

What CR works to do, primarily, is create knowledge. The way I judge that CR works is by looking at the problems it claims to solve, how it claims to solve them, and critically considering whether its methods would work (meaning succeed at solving those problems).

CR offers a conception of what knowledge is and what methods create it (guesses and criticism – evolution). CR offers substantial detail on the matter. I know of no non-refuted criticism of the ability of CR's methods to create knowledge as CR defines knowledge.

There's a further issue of whether CR has the right goals. We can all agree we want "knowledge" in some sense, but is CR's conception of knowledge actually the thing we want? Not for everyone, e.g. infallibilists. But CR explains why conjectural knowledge is the right conception of knowledge to pursue, which I don't know any non-refuted criticism of. Further, there are no viable rival conceptions of knowledge that anyone knows how to pursue. Basically, all other conceptions of knowledge are either vague or wrong (e.g. infallibilist). This claim depends on a bunch of arguments – RS if you state your conception of knowledge then I'll comment on it.

20. You are right to say that '90% certain' is an oxymoron. But so is 'conjectural knowledge'.

Here RS interprets "knowledge" and perhaps also "conjectural" in his own terminology, rather than learning what CR means.

The most important part of CR's conception of knowledge is that fallible ideas can be knowledge. Conjectures are fallible.

"Conjectural knowledge" is also an anti-authoritarian concept. Popper is saying that mere guesses (even myths) can be knowledge (if they solve a problem and are subjected to critical scrutiny). An idea doesn't have to be created by an authority-granting method (e.g. deduction, induction, abduction, "the scientific method", etc) or come from an authority-granting source (e.g. a famous scientist) in order to be knowledge.

21. 'Actually, the possibility for further progress is a good thing' is a value judgement. But how can progress be a feature of CR? Was not Thomas Kuhn right to claim that Popper's position leads to rampant relativism (as Kuhn's does).

No, Popper isn't a relativist about anything. Popper wrote a ton about progress and took the position that progress is possible, objective and desirable. (E.g. "Equating rationality with the critical attitude, we look for theories which, however fallible, progress beyond their predecessors" from C&R.) And Popper thought we have objective knowledge, including about value judgements and morality. Some of Popper's comments on the matter in The World of Parmenides:

Every rational discussion, that is, every discussion devoted to the search for truth, is based on principles, which in actual fact are ethical principles.

...

All this shows that ethical principles form the basis of science. The most important of all such ethical principles is the principle that objective truth is the fundamental regulative idea of all rational discussion. Further ethical principles embody our commitment to the search for truth and the idea of approximation to truth; and the importance of intellectual integrity and of fallibility, which lead us to a self-critical attitude and to toleration. It is also very important that we can learn in the field of ethics.

...

Should this new ethics [that Popper proposes] turn out to be a better guide for human conduct than the traditional ethics of the intellectual professions ... then I may be allowed to claim that new things can be learnt even in the field of ethics.

...

in the field of ethics too, one can put forward suggestions which may be discussed and improved by critical discussion

In CR's view, the ability to learn in a field requires that there's objective knowledge in that field. Under relativism, you can't learn since there's no mistakes to correct and no objective truth to seek. So Popper thinks there is objective ethical knowledge.

22. Your claim that 'induction works by inducing' applies also to 'deduction works by deducing'.

The statement "deduction works by deducing" would be a bad argument for deduction or explanation of how deduction works.

Inductivists routinely state that induction works by generalizing or extrapolating from observation and think they've explained how to do induction (rather than recognizing the relation of their statement to "induction works by inducing").

23. Inductivists do have an answer for you. Stove has argued, correctly in my view, that there are good reasons to believe inductively-derived propositions. I paraphrase from my book 'An Eye for an I' (pp.183-4) for your readers who have no knowledge of my book.

'Hume's scepticism about induction - that it is illogical and hence irrational and unreasonable - is the basis for his scepticism about science. His two main propositions are: inference from experience is not deductive; it is therefore a purely irrational process. The first proposition is irrefutable. 'Some observed ravens are black, therefore all ravens are black' is an invalid argument: this is the 'fallibility of induction.' But the second proposition is untenable since it assumes that all rational inference is deductive. Since 'rational' means 'agreeable to reason', it is obvious that our use of reason often ignores deduction and emphasises the facts of experience and inferences therefrom.

Stove defends induction from Hume's scepticism by arguing that scepticism about induction is the result of the 'fallibility of induction' and the assumption that deduction is the only form of rational argument. The result is inductive scepticism, which is that no proposition about the observed is a reason to believe a contingent proposition about the unobserved. The fallibility of induction, on its own, does not produce inductive scepticism because from the fact that inductive arguments are invalid it does not follow that something we observe gives us no reason to believe something we have not yet observed. If all our experience of flames is that they burn, this does give us a reason for assuming that we will get burned if we put our hand into some as yet unobserved flame. This is not a logically deducible reason but it is still a good reason. But once the fallibility of induction is joined with the deductivist assumption that the only acceptable reasons are deductive ones, inductive scepticism does indeed follow.

Hume's scepticism about science is the result of his general inductive scepticism combined with his commitment to empiricism, which holds that any reason to believe a contingent proposition about the unobserved is a proposition about the observed. So the general proposition about empiricism needs to be joined with inductive scepticism to produce Hume's conclusion because some people believe that we can know the unobserved by non-empirical means, such as faith or revelation. As an empiricist Hume rules these means out as proper grounds for belief. So to assert the deductivist viewpoint is to assert a necessary truth, that is, something that is trivially true not because of any way the world is organised but because of nothing more than the meanings of the terms used in it. When sceptics claim that a flame found tomorrow might not be hot like those of the past, they have no genuine reason for this doubt, only a trivial necessary truth.'

What, then, is the bearing of 'all observed ravens have been black' on the theory 'all ravens are black'? Stove's answer is based on an idea of American philosopher Donald Cary Williams, which is to reduce inductive inference to the inference from proportions in a population. It is a mathematical fact that the great majority of large samples of a population are close to the population in composition. In the case of the ravens, the observations are probably a fair sample of the unobserved ravens. This applies equally in the case where the sample is of past observations and the population includes future ones. Thus, probable inferences are always relative to the available evidence.

The claim "there are good reasons to believe inductively-derived propositions" doesn't address Popper's arguments that inductively-derived propositions don't exist.

Any finite set of facts or observations is compatible with infinitely many different ideas. So which idea(s) does one induce?
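One way to see this concretely (a toy illustration with made-up data, not an argument Popper gives in this form): any finite data set is fit exactly by endlessly many different generalizations, which then disagree about the unobserved cases.

```python
# four observations that look like "y equals x"
observations = [(0, 0), (1, 1), (2, 2), (3, 3)]

def generalization(k):
    # one of infinitely many curves agreeing with every observation:
    # y = x plus k times a polynomial that is zero at each observed x
    return lambda x: x + k * (x - 0) * (x - 1) * (x - 2) * (x - 3)

for k in range(5):  # any k (there are infinitely many choices) fits the data
    f = generalization(k)
    assert all(f(x) == y for x, y in observations)
    print(k, f(4))  # yet each generalization predicts a different next value
```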

Note that this argument is not about the "fallibility of induction". So Stove is mistaken when he says that's the source of skepticism of induction. (No doubt it's a source of some skepticism of induction, but not of CR's.) The claim that deduction is the only form of rational argument is also not CR's position. So Stove isn't answering CR. Yet RS said this was an inductivist answer to me.

This is typical. I had an objection to the first sentence following "Inductivists do have an answer for you." It made an assumption I consider false. It then proceeded to build on that assumption rather than answer me.

Where RS writes, "it is still a good reason", no statement of why it's a good reason or in what sense it's "good" or why being good in that sense matters is given. Avoiding some technical details, CR says approximately that it's a good reason because we don't have a criticism of it, rather than for an inductive reason. Why does no criticism matter? What's good about that? Better an idea you don't see anything wrong with than one you do see something wrong with.

Nothing in the paragraphs answers CR. They just demonstrate unfamiliarity with CR's standard arguments. Consider:

When sceptics claim that a flame found tomorrow might not be hot like those of the past, they have no genuine reason for this doubt, only a trivial necessary truth.

Many things in the future are different than the past. So one has to understand explanations of in what ways the future will resemble the past, and in what ways it won't. Induction offers no help with this project. Induction doesn't tell us in which ways the future will resemble the past and in which ways it won't (or in which ways the unobserved resembles the observed and in which ways it doesn't). But explanations (which can be improved with critical discussion) do tell us this.

For example, modern science has an explanation of what the sun is made of (mostly hydrogen and helium), its mass (4.385e30 lbs), why it burns (nuclear fusion), etc. These explanations let us understand in what respects the sun will be similar and different tomorrow, when it will burn out, what physical processes would change the date it burns out, what will happen when it burns out, and so on. Explanations simply aren't inferences from observations using some kind of inductive principle about the future probably resembling the past while ignoring the "in which respects?" question. And the sort of skeptic being argued with in the quote has nothing to do with CR.

I won't get into probability math here (we could do that in the future if desired), but I will mention that Popper already addressed that stuff. And the object of this exercise was to answer CR, but that would take something like going over Popper's arguments about probability (with quotes) and saying why they are mistaken or how to get around them.

24. You state that Popper invented critical rationalism around 1950. I would have thought it was around the mid-1930s.

Inventing CR was an ongoing process so this is approximate. But here are some of the book publication dates:

Objective Knowledge, 1972. Conjectures and Refutations, 1963. Realism and the Aim of Science, 1983 (circulated privately in 1956). The Logic of Scientific Discovery, 1934 (1959 in English). Since I don't consider LScD to be anything like the whole of CR, I chose a later date.

[25.] Your last paragraph is especially unfortunate because you accuse those philosophers who are not critical rationalists (which is most of them) of not understanding 'it enough to argue with it.' With respect Elliot, this is arrogant and ill-informed. Many philosophers understand it only too well and have written learned books on it. Some are broadly sympathetic but critical (David Miller, Anthony O'Hear) while others (Stove, James Franklin) are critical and dismissive. To acknowledge that CR 'isn't very popular, but it can win any debate' is nonsensical and carries the whiff of the 'true believer', which would seem to be self-contradictory for a critical rationalist.

It may be arrogant, but I don't think it's ill-informed. I've researched the matter and don't believe the names you list are counter-examples.

What's nonsensical about an idea which can win in debate, but which most people don't believe? Many scientific ideas have had that status at some time in their history. Ideas commonly start off misunderstood and unpopular, even if there's an advocate who provides arguments which most people later acknowledge were correct.

I think I'm right about CR. I'm fallible, but I know of no flaws or outstanding criticisms of any part of my take on CR, so I (tentatively) accept it. I have debated the matter with all critics willing to discuss for a long time. I have sought out criticism from people, books, papers, etc. I've made an energetic effort to find out my mistakes. I haven't found that CR is mistaken. Instead, I've found the critics consistently misunderstand CR, do not provide relevant arguments which address my views, do not address key questions CR raises, and also have nothing to say about Deutsch's books.

I run a public philosophy discussion forum. I have visited every online philosophy discussion forum I could find which might offer relevant discussion and criticism. The results were pathetic. I also routinely contact people who have written relevant material or who just seem smart and potentially willing to discuss. For example, I contacted David Miller and invited him to discuss, but he declined.

Calling this arrogant (Because I think I know something important? Because I think many other people are mistaken?) doesn't refute my interpretation of these life experiences. RS, if you have a proposal for what I should do differently (or a different perspective I should use), I'll be happy to consider it. And if you know of any serious critics of CR who will discuss the matter, please tell me who they are.

None of RS's 25 points were difficult for me to answer. If RS knew of any refutation of CR by any author which I couldn't answer, I would have expected him to be able to pose a difficult challenge for me within 25 comments. But, as usual with everyone, so far nothing RS has said gives even a hint of raising an anti-CR argument which I don't have a pre-existing answer for.


Elliot Temple | Permalink | Messages (6)

Reply to Robert Spillane

I'm not trying to make ad hominem remarks. I put effort into avoiding them. It is nevertheless possible that an argument targets idea X, but CR was saying Y, not X. It's also possible that CR makes a statement in its own terminology which is misread by substituting some word meanings with those favored by a rival philosophy. I don't see anything against-the-person about bringing up these issues.

I reject Popper's three worlds. I think there's one world, the physical world. I think minds and ideas have physical existence in that one world, just like running computer software and computer data physically exist. More broadly, the laws of physics say that information exists and specify rules for it (the rules of computation); ideas are a type of information.

I've never selected philosophy ideas by nationality, and never found pragmatism appealing. Nor am I getting material from Quine. And I don't accept the blame for Feyerabend, who made his own bad choices. Here's a list of philosophers I consider especially important: Karl Popper, David Deutsch, Ayn Rand, Ludwig von Mises, William Godwin, Edmund Burke, Thomas Szasz, and some ancient Greeks.

All propositions are synthetic because the laws of logic and math depend on the laws of computation (including information processing) which depend on the laws of physics. Our understanding of physics involves observation, and the particular laws of physics we have are contingent. Epistemology and evolution depend on physics too, via logic and computation, and also because thinking and evolving are physical processes.

Of course I agree with you that the goal is to find truth, not power or bullying or popularity.

Stove on Paley didn't answer my questions, but gave me some indication of some of your concerns, so:

I do not accept any kind of genetic or biological determinism, nor Darwinian "survival of the fittest" morality. Men have free will and are not controlled by a mixture of "influences" like genes, memes, culture, etc. By "influences" I include claims like "that personality trait is under 60% genetic control" – in that way genes are claimed to partially influence, but not fully control, some human behavior.

I have read some of the studies in this field and their quality is terrible. I could tell you how to refute some of their twin studies, heritability claims, etc, but I'm guessing you already know it.

I think "influences" may play a significant role in two ways:

1) A man may like and agree with an "influence", and pursue it intentionally. E.g. his culture praises soldiers, and he finds the profession appealing and chooses to become a soldier. Here the "influence" is actually just an option or piece of information which the man judges.

or

2) "Influences" matter more when a man is irresponsible and passive. If you don't take responsibility for your life, someone or something else may partially fill the void. If you don't actively control your life, then there's room for external control. A man who chooses to play the role of a puppet, and lets "influences" control him, may partially succeed.

Regarding Miller: by your terminology, I'm also a critic of Popper.

When two philosophers cannot agree on basic definitions,

could you give definitions of knowledge and induction? for clarity, i'll be happy to call my different concepts by other words such as CR-knowledge.

You state that 'I disagree with and deny the whole approach of a priori knowledge and the analytic/synthetic dichotomy.' But Popper, as a rationalist, relies on a priori knowledge, i.e. primitive theories which are progressively modified by trial and error elimination.

Inborn theories aren't a priori, they were created by genetic evolution. (They provide a starting point but DO NOT determine people's fate.)

when I try to argue with you, and you disagree with my mode of arguing, which is widely accepted in philosophical circles, it is difficult to know how to respond to your questions.

i think this is important. I have views which disagree with what is, i agree with you, "widely accepted in philosophical circles". it is difficult to understand different frameworks than the standard one, but necessary if you want to e.g. evaluate CR.

For example, with respect to Szasz you write that he 'doesn't write deductions, formal logic and syllogisms'. True, he doesn't use symbolic logic but his life's work was based on the following logic (see Szasz Under Fire, pp.321-2 where he relies on the analytic-synthetic distinction):

"When I [Szasz] assert that (mis)behaviors are not diseases I assert an analytic truth, similar to asserting that bachelors are not married...InThe Myth of Mental Illness, I argued that mental illness does not exist not because no one has yet found such a disease, but because no one can find such a disease: the only kind of disease medical researchers can find is literal, bodily disease."

I acknowledge that I disagree with Szasz about analytic/synthetic. Unfortunately he died before we got to resolve the matter.

However, I think Szasz's main point is that no observations of "patients" could refute him. I agree. Facts about "patients" can't challenge logical arguments.

However, as I explained above, I don't think logic itself is analytic. I think observations which led to a new understanding of physics could theoretically (I don't expect it) play a role in challenging Szasz's logical arguments.

Here is Szasz's logic:

  • Illness affects the human body (by definition);
  • The 'mind' is not a bodily organ;
  • Therefore, the mind cannot be or become ill;
  • Therefore mental illness is a myth.
  • If 'mind' is really the brain or a brain process;
  • Then mental illnesses are brain illnesses.
  • Since brain illnesses are diagnosed by objective medical signs,
  • And mental illnesses are diagnosed by subjective moral criteria;
  • Mental illnesses are not literal illnesses
  • And mental illness is still a myth.

If this is not deductive reasoning, then what is?

That isn't even close to a deductive argument. For example, look how "myth" is used in a conclusion statement (begins with "therefore"), without being introduced previously. You couldn't translate this into symbolic logic and make it work. Deduction has very strict rules, which you haven't followed.

As to what is deductive reasoning: no one does complex, interesting philosophy arguments using only deduction. Deduction is fine but limited.

I do appreciate the argument you present. I think it's well done, valuable, and rational. It's just not pure deduction (nor a combination of deduction and induction).

I would normally just call it an "argument". CR doesn't have some special name for what type of reasoning it is. We could call it a CR-argument or CR-reasoning if you like. You ask what's left for reasoning besides induction and deduction, and I'd point to your example and say that's just the kind of thing I think is a typical argument. (Your argument is written to appear to resemble deduction more than is typical, so the style is a bit uncommon, but the actual way it works is typical.)

'The basic point here is to judge an idea by what it says..." Quite so. But how do you do that?

By arguments like the "mental illness" example you provided, and the socialism and price controls example I provided previously. By using arguments to criticize mistakes in ideas, etc.

You write: 'The claim [Stove's and mine] that "there are good reasons to believe inductively-derived propositions" doesn't address Popper's arguments that inductively-derived propositions don't exist.' This follows more than half a page of reasons why they do exist. And, contrary to your claim, I gave you an example of a good (i.e reasonable, practical, useful) reason to believe an inductively-derived proposition. What more can I say?

You write: 'This is typical. I had an objection to the first sentence following "Inductivists do have answer for you." It made an assumption I consider false. It then proceeds to build on that assumption rather than answer me.' That obnoxious sentence is 'Stove has argued, correctly in my view, that there are good reasons to believe inductively-derived propositions.' What is the assumption you consider false? I then proceed to provide Stove's arguments. Is not this what critical rationalists encourage us to do with their platitudes about fallibility, willingness to argue a point of view? Those arguments, whether valid or invalid, do provide reasons why one might reject Popper's authoritarian pronouncement that inductively-derived propositions don't exist. Of course, they exist, even if Popper does not grant them legitimacy.

We're talking about too many things at once. If you think this is particularly important, I could answer it. I do attempt to continue the discussion of induction below.

You write: 'But as usual with everyone, so far nothing RS has said gives even a hint of raising an anti-CR argument which I don't have a pre-existing answer for.' Well, then future argument is pointless because your 'fallibilism' is specious. If you have already decided in favour of CR, I doubt there are any critical arguments which you will consider. You appear to have developed your personal version of CR and immunised yourself against criticism, a vice which Popper in theory, if not in practice, warned against.

I'm open to changing my mind.

I have discussed these issues in the past and made judgements about some ideas. To change my mind, you'll have to say something new to me. I expect the same the other way around: if I don't have anything to say that you haven't heard before, then I won't change your mind.

I have a lot of previous familiarity with these issues. So far you haven't come near the edges of my pre-existing knowledge. You haven't said something about epistemology which is surprising or new for me (nor has Stove in what I read). Minor details differ, but not main points.

That's OK, I would expect it to take more discussion than we've done so far to get beyond people's already-known arguments.

It's right and proper that we each have initial (level 1) responses ready which cover many issues a critic could raise. And when he responds to one of them, we again have level 2 responses ready for many things he may say next. Very educated, experienced persons may have a dozen levels of responses that they already know. So to change their minds, one has to either say something surprising early on (that they didn't hear or think of previously, so they don't have a pre-existing answer) or else go through the levels and then argue with some of their ideas near the limits of their knowledge.

So far your comments regarding induction have been typical of other inductivists I've spoken with.

A reviewer of Popper's work was published in 1982 in The New York Review (Nov.18 (pp.67-68) and Dec.2 (pp.51-56). I could not express my reservations better than this:

'Popper's philosophy of science is profoundly ambiguous: it is, he says, "empirical", but it is left unclear why scientists should consult experience.

The reason for consulting experience is to criticize ideas which contradict experience (because we want ideas which match reality). That is not "left unclear", it's stated clearly by Popper.

It is called "fallibilism", in which we learn from our mistakes", but it is really an ill-concealed form of skepticism.

The skepticism accusation is an assertion, not an argument.

It claims to surrender the quest for certainty, but it is precisely the standards of this quest - that if one is not certain of a proposition, one can never be rationally justified in claiming it to be true - that underlie Popper's rejection of induction (and the numerous doctrines that stem from this rejection).

Popper did NOT reject induction for being fallible or imperfect; he rejected it for reasons like:

1) Any finite set of data is compatible with infinitely many generalizations, so by what method does induction select which generalizations to induce from those infinite possibilities?

2) How much support does X give Y, in general? And what difference does that make?

Induction fails to meet these challenges, and without answers to those issues induction can't be used at all. These aren't "it's imperfect" type issues; they are things that must be addressed before induction can be used for anything.

There have been some attempts to meet these challenges, but I don't think any succeeded, and Popper pointed out flaws in some of them. If you can answer the questions, or give page numbers where Stove does, I will comment.

If you wish to address (2), note that "in general" includes non-mathematical issues, e.g. the beauty of a piece of music or flower. (And if you think induction can't address those beauty issues, then I'm curious what you propose instead. Deduction? Some third thing which will, on examination, turn out to have a lot in common with CR?)


Elliot Temple | Permalink | Messages (8)

More Robert Spillane Discussion

This reply to Robert Spillane follows up on this previous discussion. Here's a full list of posts related to Spillane.

Thank you for your respectful reply. I think we are making progress.

It has been helpful to have you clarify which parts of Popper you accept.

Great.

I am reminded of an interesting chapter in Ernest Gellner's book Relativism and the Social Sciences, (1985, Ch. 1: 'Positivism and Hegelianism'), where he discusses early versus late Popper, supports the former against the latter, and concludes that Popper is (a sort of) positivist. It is an interesting chapter and one I would happily discuss with you.

Like Gellner, I am sympathetic to Popper's 'positivism' but cannot accept his rejection of inductive reasoning. Like you (and Szasz), I reject his 3 Worlds model.

Popper was an opponent of the standard meaning of positivism. I mean something like this dictionary definition: "a philosophical system that holds that every rationally justifiable assertion can be scientifically verified or is capable of logical or mathematical proof, and that therefore rejects metaphysics and theism."

So what sort of "positivism" are you attributing to Popper?

I've ordered the book.

Re your favourite philosophers: you might read Szasz's critical comments on Rand, Branden, Mises, Hayek, Rothbard and Nozick in Faith in Freedom: Libertarian Principles and Psychiatric Practices, (Transaction Publishers, 2004). Even though I received the Thomas Szasz Award in 2006, I told Tom that I could not commit myself to (economic) libertarianism in the way that he did and you appear to do. I accept the primacy of personal freedom but do not accept the economic freedom favoured by libertarians. Indeed, I would have thought that by now, in the age of huge corporations, neo-liberalism is on its last legs. I respect your position, however.

Yes, I'm fully in favor of capitalism.

Yeah, I discussed Faith in Freedom with Szasz, but I don't have permission to share the discussion. One thing Szasz did in the book was use some criticism of Rand from Rothbard. I could tell you criticism of Rothbard's arguments if you wanted, though I think he's best ignored. I do not consider Rothbard or Justin Raimondo to be decent human beings, let alone reliable narrators regarding Rand. I was also unimpressed by Szasz's criticisms of Rand's personal life in the book, and would prefer to focus on her ideas. And I think Szasz made a mistake by quoting Whittaker Chambers' ridiculous slanders.

FYI I only like Rand and Mises from the list of people you mention, and I agree with Szasz that they were mistaken regarding psychiatry. (Rand didn't say much on psychiatry, and some of what she did say was good, as Szasz discusses. But e.g. she got civil commitment partly wrong.)

You may be interested to know that Rand spoke very critically of libertarians, especially Hayek and Friedman (who both sympathized with socialism, as did Popper). She thought libertarians were harming the causes of liberty and capitalism with their unprincipled, bad philosophy. I agree with her.

Rand did appreciate Mises because he was substantially different than the others: he was an anti-anarchy classical liberal, a consistent opponent of socialism, and he was very good at economics.

We have criticisms of many libertarian ideas from the right.

Let me mention that I'm not an orthodox Objectivist. I do not like the current Objectivist leadership like Peikoff, Binswanger, and the Ayn Rand Institute. I am banned from the main Objectivist forum for dissenting regarding epistemology (especially induction, fallibilism and perception). I also dissented regarding psychiatry, but discussion of psychiatry was banned before much was said.

If you're interested, I wrote about what the disagreements were and the decision to ban me. I pointed out various ways my views and actions are in line with Ayn Rand's philosophy and theirs aren't. It clarifies some of my philosophy positions:

http://curi.us/1930-harry-binswanger-refuses-to-think

There was no reply, no counter-argument. I am aware that they will hold a grudge for life because I wrote that.

I also made a public record of what I said in my discussions with them:

http://curi.us/1921-the-harry-binswanger-letter-posts

Warning: my comments are book length.

I have spent my career in the space between neo-positivism (Hume, Stove) and a critical existentialism (Sartre, Szasz). You might see inconsistencies here but I have always agreed with Kolakowski who wrote in his excellent book Positivist Philosophy, (pp. 242-3):

'The majority of positivists tend to follow Wittgenstein's more radical rule: they do not simply reject the claims of metaphysics to knowledge, they refuse it any recognition whatever. The second, more moderate version is also represented, however, and according to it a metaphysics that makes no scientific claims is legitimate. Philosophers who, like Jaspers, do not look upon philosophy as a type of knowledge but only as an attempt to elucidate Existenz, or even as an appeal to others to make such an attempt, do not violate the positivist code. This attitude is nearly universal in present-day existential phenomenology. Awareness of fundamental differences between 'investigation' and 'meditation', between scientific 'accuracy' and philosophic 'precision', between 'problems' and 'questioning' or 'mystery' is expressed by all existential philosophers...'

I broadly disagree with attempts to separate some thinking or knowledge from reality.

As an aside: I asked Tom Szasz whether, since he has been appropriated by some existentialists, he accepted that label. He thought about it for an hour and said: 'Yes, I'm happy to be included among the existentialists. However, if Victor Frankl is an existentialist, I'm not!' Frankl, despite his reputation as a humanist/existentialist, boasted of having authorised many lobotomies, and conducted a few himself, on people without their consent.

Your criticism of the analytic/synthetic dichotomy reminds me of Quine but expressed differently. I disagree with you (and Quine) and agree with Hume, Stove and Szasz (and many others) on this issue. I am confident that had Szasz lived for another 50 years, you would not have convinced him that all propositions are synthetic and therefore are either true or false. He and I believe that the only necessities (i.e. necessary truths) in the world are those expressed as analytic propositions and these tell us nothing about the world of (empirical) facts.

I don't believe necessary truths like that exist. I think people mistake features of reality (the actual reality they live in) for necessary truths. In our world, logic works a particular way, but it didn't necessarily have to. People fail to imagine how some things could be otherwise because they are used to the laws of physics we live with.

If you have a specific criticism of my view, I'll be happy to consider it.

I think I would have persuaded Szasz in much less than 50 years, if I'm right. Or else Szasz would have persuaded me. I don't think it would have stayed unresolved.

I found Szasz extraordinarily rational and open to criticism, more so than anyone else I've ever discussed with.

I'm delighted that you do not buy into Dawkins' nonsense about 'memes' even if you use 'ideas' as if they are things. Stove on Dawkins hits the mark.

There may be a misunderstanding here. I do buy into David Deutsch's views about memes! I accept memes exist and matter. But I think memes are popularly misunderstood and don't lead to the conclusions others have said they do.

I know that Szasz disagreed with me about memes. He did not, however, provide detailed arguments regarding evolution.

'Knowledge' and 'idea' are abstract nouns and therefore, as a nominalist, I'm bound to say they don't exist, except as names.

I consider them the names of either physical objects (like chairs) or attributes of physical objects (like the color red). Just as a computer hard drive can contain a file, a brain can contain an idea.

I encourage my students to rely less on nouns and more on verbs (from which most nouns originated). You asked for two definitions:

To 'know' means 'to perceive or understand as fact or truth' (Macquarie Dictionary, p.978). Therefore 'conjectural knowledge' is oxymoronic.

This is ambiguous about whether the understanding may be fallible or not.

Do you need a guarantee of truth to have knowledge, or just an educated guess which is correct according to your current best-efforts at understanding?

Why can't one conjecturally (fallibly) understand something to be a fact?

Induction: 'the process of discovering explanations for a set of particular facts, by estimating the weight of observational evidence in favour of a proposition which asserts something about the entire class of facts' (MD, p.904).

Induction: 'a method of reasoning by which a general law or principle is inferred from observed particular instances...The term is employed to cover all arguments in which the truth of the premise, or premises, while not entailing the truth of the conclusion, or conclusions, nevertheless purports to constitute good reasons for accepting it, or them... With the growth of natural science philosophers became increasingly aware that a deductive argument can only bring out what is already implicit in its premises, and hence inclined to insist that all new knowledge must come from some form of induction.' (A Dictionary of Philosophy, Pan Books, 1979, pp.171-2).

I agree that those are typical statements of induction. How do you address questions like:

Which general laws, propositions, or explanations should one consider? How are they chosen or found? (And whatever method you answer, how does it differ from CR's brainstorming and conjecturing?)

When and why is one idea estimated to have a higher weight of observational evidence in favor of it than another idea, given that neither idea is contradicted by any of the evidence?

I think these issues are very important to our disagreement, and to CR's criticism of induction.

You say that 'inborn theories are not a priori'. But a priori means prior to sense experience and so anything 'inborn' must be a priori by definition.

A priori means "relating to or denoting reasoning or knowledge that proceeds from theoretical deduction rather than from observation or experience" (New Oxford American Dictionary)

Inborn theories, which come from genes, don't come from theoretical deduction, nor from observation. Their source is evolution. This definition offers a false dichotomy.

Another definition (OED):

"A phrase used to characterize reasoning or arguing from causes to effects, from abstract notions to their conditions or consequences, from propositions or assumed axioms (and not from experience); deductive; deductively."

That doesn't describe inborn theories from genes.

Inborn theories are like the software that comes pre-installed on your computer, which you can replace with other software if you prefer.

Inborn theories don't control your life; it's just that thinking needs a starting point. It's similar to how your life has a starting time and place, which matters, but doesn't control your fate.

These inborn theories are nothing like analytic ideas or necessary truths. They're just regular ideas. E.g. we might have inborn ideas about the danger of snakes (the details of which ideas are inborn is largely unknown) which were created because of actual encounters with snakes before we were born. But that's still not knowledge created by observation or experience, because genes and evolution can neither observe nor experience.

Spillane wrote previously:

Here is Szasz's logic:

  • Illness affects the human body (by definition);
  • The 'mind' is not a bodily organ;
  • Therefore, the mind cannot be or become ill;
  • Therefore mental illness is a myth.
  • If 'mind' is really the brain or a brain process;
  • Then mental illnesses are brain illnesses.
  • Since brain illnesses are diagnosed by objective medical signs,
  • And mental illnesses are diagnosed by subjective moral criteria;
  • Mental illnesses are not literal illnesses
  • And mental illness is still a myth.

If this is not deductive reasoning, then what is?

I denied that this is deduction, and I pointed out that "myth" is introduced for the first time in a conclusion statement, so it doesn't follow the rules of deduction. Spillane now says:

If the example of Szasz's logic is not deductive - the truth of the conclusion is implicit in the premise - what sort of argument is it? If you remove #4, would you accept it as a deductive argument?

I think it deviates from deduction in dozens of ways, so removing #4 won't help. For example, the terms "objective", "subjective" and "literal" are introduced towards the end without using previous premises and syllogisms to establish anything about them. I also consider it incomplete in dozens of ways (as all complex arguments always are). You could try to write it as formal (deductive) logic, but I think you'd either omit most of the content or fail.
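
For contrast, here's the form of a standard valid syllogism (a stock textbook example, not anything from Szasz or Spillane), in which every term in the conclusion already appears in a premise:

$$\frac{\text{All men are mortal} \qquad \text{Socrates is a man}}{\text{Socrates is mortal}}$$

A syllogism can't conclude anything about "myths" unless some premise about myths is supplied first, and the same goes for "objective", "subjective" and "literal". So turning the argument into a strict deduction would require adding premises to cover those terms, which is part of why I say it's incomplete.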

I don't think the truth of the conclusion is implicit in the premises. I think many philosophers have massively overestimated what they could translate to equivalent formal deductions. So I regard it simply as an "argument", just like most other arguments which don't fall into the categories non-Popperian philosophers are so concerned with.

And even if some arguments could be rewritten as strict deductions, people usually don't do that, and they can still learn and make progress anyway.

Rather than worrying about what category an argument falls into, CR is concerned with whether you have a criticism of it – that is, an argument for why it's false.

I don't think pointing out "that isn't deduction" is a criticism, because being non-deductive is compatible with being true. (The same comment applies to induction.)

I also don't think that pointing out an idea is incomplete is a criticism, without further elaboration. What matters is whether the idea can succeed at its purpose, e.g. solve a problem, answer a question, explain an issue. An idea may do that despite being incomplete in some way, because the incompleteness may be irrelevant.

My epistemological position should be clear from what I have said above - it is consistent with a moderate form of neo-positivism.

That Popper's fallibilism is ill-concealed skepticism has been argued at length, by many Popper scholars, e.g. Anthony O'Hear. It was even argued in the book review mentioned.

I don't care how many people argued something at what length. I only care if there are specific arguments which are correct.

Are you denying that you are fallible (capable of making mistakes)? Do you think you sometimes have 100% guarantees against error?

Or do you just deny the second part of Popper's fallibilism? His claim that, in the world today, mistakes are common even when people feel certain they're right.

If it's neither of those, then I don't know what your issue with fallibilism is.

I have already given you (in a long quote) examples of inductively-derived propositions that are 'reasonable'. Now they may not be reasonable to a deductivist, but that only shows that deductivists have a rigid definition of 'rational', 'reasonable' and 'logical'. Given that a very large number of observations of ravens has found that they are black without exception, I have no good reason to believe the next one will be yellow, even though it is possible. That the next raven may be yellow is a trivial truth since it is a tautology. Accordingly, I have a good reason to believe that the raven in the next room is black.

OK I'll address this topic after you answer my two questions about induction above.


Elliot Temple | Permalink | Messages (0)

Discussing Necessary Truths and Induction with Spillane

You often ask me for information/arguments that I have already given you.

We're partially misunderstanding each other because communication is hard and we have different ways of thinking. I'm trying to be patient, and I hope you will too.

Please address these two questions about induction. Answering with page numbers from a book would be fine if they directly address it.

I've read lots of inductivist explanations and found they consistently don't address these questions in a clear, specific way, with actual instructions one could follow to do induction if one didn't already know how. I've found that sometimes accounts of induction give vague answers, but not actionable details, and sometimes they give specifics unconnected to philosophy. Neither of those is adequate.

1) Which general laws, propositions, or explanations should one consider? How are they chosen or found? (And whatever method you answer, how does it differ from CR's brainstorming and conjecturing?)

2) When and why is one idea estimated to have a higher weight of observational evidence in favor of it than another idea, given that neither idea is contradicted by any of the evidence?

These are crucial questions for pinning down what your theory of induction says. The claimed specifics of induction vary substantially even among people who would agree with the same dictionary definition of "induction".

I've read everything you wrote to me, and a lot more in references, and I don't yet know what your answers are. I don't mind that. Discussion is hard. I think they are key questions for making progress on the issue, so I'm trying again.

As a fallibilist, you acknowledge that the 'real world' is a contingent one and there are no necessary truths. But is not 1+1=2 a necessary truth? Is not 'All tall men are men' a necessary truth since its negation is self-contradictory?

I'll focus on the math question because it's the easier case to discuss first. If we agree on it, then I'll address the A is A issue.

I take it you also think the solution to 237489 * 879234 + 8920343 is a necessary truth, as well as much more complex math. If instead you think that's actually a different case than 1+1, please let me know.

OK, so, how do you know 1+1=2? You have to figure out what 1+1 sums to. You have to calculate it. You have to perform addition.

The only means you have to calculate sums involve physical objects which obey the laws of physics.

You can count on your fingers, with an abacus, or with marbles. You can use a Mac or iPhone calculator. Or you can use your brain to do the calculation.

Your knowledge of arithmetic sums depends on the properties of the objects involved in doing the addition. You believe those objects, when used in certain ways, perform addition correctly. I agree. If the objects had different properties, then they'd have to be used in different ways to perform addition, or might be incapable of it. (For example, imagine an iPhone had the same physical properties as an iPhone-shaped rock. Then the sequences of touches that currently sum 1 and 1 on an iPhone would no longer work.)

Your brain, your fingers, computers, marbles, etc, are all physical objects. The properties of those objects are specified by the laws of physics. The objects have to be used in certain ways, and not other ways, to add 1+1 successfully. What ways work depends on the laws of physics which say that, e.g., marbles don't duplicate themselves or disappear when arranged in piles.

So I don’t think 1+1=2 is a truth independent of the laws of physics. If there's a major, surprising breakthrough in physics and it turns out we're mistaken about the properties of the physical objects used to perform addition, then 1+1=2 might have to be reconsidered because all our ways of knowing it depended on the old physics, and we have to reconsider it using the new physics. So observations which are relevant to physics are also relevant to determining that 1+1=2.
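
Here's a minimal sketch in Python (my own illustration, not taken from Deutsch's chapter) of the marble-counting point: even checking 1+1=2 in software amounts to pooling and counting physical representations, which only gives the right answer if those representations behave the way our current physics says they do.

def add_by_counting(a, b):
    # Model addition the way counting marbles does: pool two piles, then count.
    # The sum comes out right only if the "marbles" (list items here; charges and
    # voltages in a real computer) don't duplicate or vanish while being pooled.
    # That's a fact about how physical objects behave, not about logic alone.
    pile = ["marble"] * a + ["marble"] * b
    return len(pile)

assert add_by_counting(1, 1) == 2  # holds given how objects in our world actually behave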

This is explained in "The Nature of Mathematics", which is chapter 10 of The Fabric of Reality by David Deutsch. If you know of any refutation of Deutsch's explanation, by yourself or others, please let me know. Or if you know of a view on this topic which contradicts Deutsch's, but which his critical arguments don't apply to, then please let me know.

I believe that Einstein is closer to the truth of what you call the real world than was Aristotle. So when I'm told by this type of fallibilist that we don't know any more today than we did 400 years ago, I demur.

Neither Popper nor I believe that "we don't know any more today than we did 400 years ago".

Given your comments on LSD and the a-s dichotomy, after reading this I conclude that you are a fan of late Popper (LP) and I prefer early Popper (EP).

Yes.

You think EP is wrong, and I think LP is right, so I don't see the point of talking about EP.

(I disagree with your interpretation of EP, but that's just a historical issue with no bearing on which philosophy of knowledge ideas are correct. So I'm willing to concede the point for the purpose of discussion.)

Gellner argued that Popper is a positivist in the logical positivist rather than the Comtean positivist sense. His discussion proceeded from the contrasting of positivists and Hegelians and so he put (early) Popper in the positivist camp - Popper was certainly no Hegelian. Of course, Popper never tired of reminding us that he destroyed the positivism of the Vienna Circle and went to great pains to declare himself opposed to neo-positivism. For example, he says that he warmly embraces various metaphysical views which hard positivists would dismiss as meaningless. Moderate positivists, however, accept metaphysical views but deny them scientific status. Does not Popper do this too, even if some of these views may one day achieve scientific status?

Yes: (Late) Popper accepts metaphysical and philosophical views, but doesn't consider them part of science.

CR (late-CR) says non-science has to be addressed with non-observational criticisms, instead of what we do in science, which is a mix of observational and non-observational criticism.

If by fallibilism you mean searching for evidence to support or falsify a theory, I'm a fallibilist. If, however, you mean embracing Popper's view of 'conjectural knowledge' and the inability, even in principle, of arriving at the truth, then I'm not. I believe, against Popper, Kuhn and Feyerabend, that the history of science is cumulative.

No, fallibilism means that (A) there are no guarantees against error. People are capable of making mistakes and there's no way around that. There's no way to know for 100% sure that a proposition is true.

CR adds that (B) errors are common.

Many philosophers accept (A) as technically true on logical grounds they can't refute, but they don't like it, and they deny (B) and largely ignore fallibilism.

I bring this up because, like many definitions of knowing, yours was ambiguous about whether infallibility is a requirement of knowing. So I'm looking for a clear answer about your conception of knowing.


Elliot Temple | Permalink | Messages (0)

Plateauing

I wrote these comments for the Fallible Ideas discussion group:

Plateauing while learning is an important issue. How do people manage that initial burst of progress? Why does it stop? How can they get going again?

This comes up in other contexts too, e.g. professional gamers talk about it. World class players in e.g. Super Smash Bros. Melee talk about how you have to get through several plateaus to get to the top, and have offered thoughts on how to do that. While they’ve apparently successfully done it themselves, their advice to others is usually not very effective for getting others past plateaus.

One good point I’ve heard from skilled gamers is that plateauing is partly just natural: sometimes you learn things with more visible results, and other times you learn more subtle skills. So even if you learn at a constant rate, your game results will appear to plateau anyway. Part of the solution is to be patient, not get disheartened, and keep trying. Persistence is one of the tools for beating plateaus (and persistence is especially effective when part of the plateau is just learning some stuff with less visible benefits – though if you’re stuck on some key point then mere persistence won’t fix that problem).

When gamers talk about “leveling up” their play, or taking their play “to another level”, they're implicitly referring to plateaus. If skill increases were just a straight 45 degree line then there’d be no levels, it’d all just blend together. But with plateaus, there are distinguishable different levels you can reach.

It can be really hard to tell whether people plateau because they’re satisfied and don’t care about making further progress, or because they got stuck and rationalize it that way. That applies both to gamers and to philosophy learners. [A poster] in various ways acted like he was done learning instead of trying to get past his plateau – but was that the cause of him being stuck, or was it a reaction to being stuck?


A while after people plateau, they commonly go downhill. They don’t just stay stable, they fall apart. Elements of this have been seen with many posters. (Often it’s ambiguous because people do things like quit philosophy without explaining why. So one can presume they fell apart in some way, some kind of stress got to them, but who knows, maybe they got hit by a car or got recruited by the CIA.)

In general, stagnation is unstable. This is something BoI talks about. It’s rapid progress or else things fall apart. Why? Problems are inevitable. Either you solve them (progress) or things start falling apart (unsolved problems have harmful consequences).

New problems will come up. If your problem solving abilities are failing, you’re screwed. If your problem solving abilities are working, you’ll make progress. You don’t just get to stand still and nothing happens. There are constantly issues coming up threatening to make things worse, and the only solution is problem solving which actually takes you forward.

So anyway people come to philosophy, make progress, get stuck, then fall apart.

A big part of why this happens is they find some stuff intuitively easy, fun, etc, and they get that far, then get stuck at the part where it requires more “work”, organization, studying books, or whatever else they find hard. People have the same issue in school sometimes – they are smart and coast along and find classes easy, then they eventually run into a class where they find the material hard and it can be a rough transition to deal with that or they can just abruptly fail.

Also people get excited and happy and stuff. Kinda like being infatuated with a new person they are dating. People do that with hobbies too. And that usually only happens once per person per hobby. Usually once their initial burst of energy slows down (even if they didn’t actually get stuck and merely were busy for a month) then they don’t know how to get it back and be super interested again.

After people get stuck, for whatever reason, they have a situation with some unsolved problems. What then happens typically is they try to solve those problems. And fail. Repeatedly. They try to get unstuck a bunch and it doesn’t work (or it does work, and then quite possibly no one even notices what happened or regards it as a plateau or being stuck). Usually if people are going to succeed at solving a problem they do it fast. If you can’t solve a problem within a week, will a month or year help? Often not. If you knew how to solve it, you’d solve it now. So if you’re stuck or plateauing it means all your regular methods of solving problems didn’t work. You had enough time to try everything you know how to do and that still didn’t work. Some significant new idea, new creativity, new method, etc, is needed. And people don’t know how to persistently and consistently pursue that in an organized effective way – they can just wait and hope for a Eureka that usually never comes, or go on with their life and hope somehow, someway, something ends up helping with the problem or they find other stuff to do in life instead.

People will try a bunch of times to solve a problem. They aren’t stuck quietly, passively, inactively. They don’t like the problem(s) they’re stuck on. They try to do something about it. This repeated failure takes a toll on their morale. They start questioning their mental capacity, their prospects for a great life, etc. They get demoralized and pessimistic. Some people last much longer than others, but you can see why this would often happen eventually.

And people who are living with this problem they don’t like, and this recurring failure, often turn to evasion and rationalization. They lie to themselves about it. They say it’s a minor problem, or it’s solved. They find some way not to think about it or not to mind it. But this harms their own integrity, it’s a deviation from reason and it opens the door to many more deviations from reason. This often leads to them falling apart in a big way and getting much worse than they were previously.

And people often want to go do something else where their pre-existing methods of thinking/learning/etc work, so they can have success instead of failure. So they avoid the stuff they are stuck on (after some number of prior failures which varies heavily from just a couple to tons). This is a bad idea when they are stuck on something important to their life and end up avoiding the issue by spending their time on less important stuff.

So there’s a common pattern:

  1. Progress. They use their existing methods of making progress and make some progress.

  2. Stuck. They run into some problems which can't be solved with their pre-existing methods of thinking, learning, problem solving, etc.

  3. Staying stuck. They try to get unstuck a bunch and fail over and over.

  4. Dishonesty. They don’t like chronic unsolved problems, being stuck, failing so much, etc. So they find some other way to think about it, other activities to do, etc. And they don’t like the implications (e.g. that they’ve given up on reason and BoI-style progress) so they are dishonest about that too.

  5. Falling apart. The dishonesty affects lots of stuff and they get worse than when they started in various ways.


Elliot Temple | Permalink | Messages (0)

Lots of Thoughts

BoI is about unbounded progress, and this is very different than what people are used to.

It means any kind of bound – like some topic being off limits – is problematic.

The standard expectation elsewhere is a little of this, a little of that, and great, good job, you’re a success. Around here it’s more like: lots of everything. More, more, more. And it’s hard to find break points for praise and basking in glory b/c there’s always further to go. And anyway were you seeking glory and praise, or interested in learning for its own sake?

What do you want a break for? Don’t you like making progress more than anything else? What else would you want to do? Some rest is necessary, but not resting on one’s laurels.

You’re still at the beginning of infinity. You still have infinite ignorance. Keep going!

People say they want to learn. But how much? How fast? Why not more, faster?

What is there to stop this? To restrain it from intruding on their whole life and disrupting everything? They don’t know, and don’t want to give up or question various things, so, when it comes down to it, they just give up on reason instead.

People expect social structures to determine a lot. If you learn at the pace of your university class, who could ask more of you? If you publish more than enough peer-reviewed papers to keep your job as a professor, aren’t you doing rather well? There are socially approved lifestyles which come with certain expectations for how much you should be learning. Do anything more than that and you’re in extra credit territory – which is awesome but (socially) one can’t be faulted for not getting even more extra credit...

People interact and learn in limited ways. They don’t want to deal with e.g. some philosophy ideas invalidating their whole field – like AGI, psychiatry, most of the social “sciences”. That’s too out of control. People want ideas to be bounded contrary to the inherent reach of the ideas. What an idea applies to is a logical matter, not a human choice, but people aren’t used to that and it disrupts their social structures.


I can break anyone. I can ask questions, criticize errors, and advocate for more progress until they give up and refuse to speak. No one can handle that if I really try. I can bring up enough of people’s flaws that it’s overwhelming and unwanted.

There are limits on what criticism people want to hear, what demons they want to face, what they want to question. Perhaps they’ll expand those limits gradually. But I, in the spirit of BoI, approach things differently. I take all criticism and questions from all comers without limiting rules and without being overwhelmed.

BTW, people, so used to their limits – and to statements like this being lies – still usually won’t ask me much or be very challenging.

I used to be confused by people breaking. I expected people to be more similar to myself. I thought they’d want to know about problems. I thought that of course they’d value truth-seeking above all else. I thought they’d take responsibility for organizing incoming information in good ways instead of being overwhelmed. I thought they’d make rapid progress. Instead, it turns out, people don’t know how to handle such things, and don’t ask, and get progressively more emotional while hiding the problem until they burst.

It’s foreign to me how people are. But it’s pretty predictable now. I stopped giving people unbounded criticism. It’s not appreciated. I just give a small fraction of the criticism I could, to people who come to me – and still that’s usually more than enough that they hate it.

occasionally people here ask for full, maximum criticism. they don’t like the idea that i’m holding back – that i know problems in their lives and their thinking that i’m not telling them, that are going unsolved. (or that i could quickly discover such problems if i thought about them, asked them some questions, etc). i’ve often responded by testing them in some little way which was too much and they didn’t persist in asking for more.

it’s difficult b/c i prefer to be honest and say what i think openly. i generally don’t lie. but i neglect to say lots of things i could. i neglect to energetically pursue things involving other ppl which could/should be pursued if they were better and more capable. i could write 10+ replies each to most posts here with questions and arguments (often conditional on some guesses about incomplete information). there’s so much more to be said, so many connections to other stuff. people don’t want to deal with that. they want bounds on discussion.

they don’t have a grip on Paths Forward. they don’t have a functional home base to avoid Overreaching. they don’t have a beachhead of stuff they’ve gotten right to expand on. whenever they try to deal with unlimited criticism it’s chaos b/c, long story short, they are wrong about everything – both b/c they are at the beginning of infinity and also b/c they aren’t at the cutting edge of what’s already known. and progress from where they are to being way better doesn’t just consist of adding things while keeping what they already know, a ton of it is error correction.

whenever people try to deal with unbounded criticism, everything starts falling apart. their whole childhood and education was a tragic mess and they don’t want to redo it.

people don’t even get started on the Paths Forward project of dealing with all public criticism of ideas. and so basically all their ideas are already refuted and they just think that’s how knowledge is, and if you suddenly demand a new standard of actually getting stuff right – of actually addressing all the problems and criticisms and issues – then they totally lose their footing in the world of ideas b/c they never developed their ideas to that standard. and the project of starting thinking in that way and building up knowledge to that proper standard is fucking daunting and approximately no one else wants to do it.


people don’t like to be picked apart, like DD talking to the cryptoinductivist in FoR ch. 7 and continuing even after the guy conceded (and without even trying to manage the guy’s schedule for him by delaying communications until after he had time to think things over).

FoR and BoI held back 99% of what DD knows.


people want a forum where they go to get small doses of things they already want, while having total control over the whole process. they don’t want to feel bad about some surprise criticism about something they weren’t even trying to talk about.


people all have something they're dishonest about and don't want to talk about.

people all have some anti-rational memes.

and this stuff doesn't stay in neat little boundaries.

all the really powerful, general, abstract, important ideas with tons of reach are threatening to these entrenched no-progress zones.

it doesn't matter if the issue is just some little dumb thing like being scared of spiders. ideas have consequences. how do you feel good about yourself while knowing about some problem and being unwilling/unable to fix it? so you better not know much about spiders – so you better have poor research methods. so you better not know much about memes – so you better not come to understand what the current state of the world is or you'll have questions which memes are part of the answer to.

your best bet is to admit there seems to be a problem there but decide it's a low priority and you're going to do some other stuff and maybe get to it later. that can work with stuff that genuinely isn't very important, like about spiders. then you can learn about memes, and realize maybe you have a nasty meme about spiders, and that isn't destabilizing cuz u already thought there's a problem there, just not one that is affecting your life enough to prioritize over other issues you could work on first.

but what do you do when it isn't a low priority thing? what do you do when it's way harder to isolate than fear of spiders, and has much more immediate and large downsides? like when it's about family, relationships, parenting, your intelligence, your honesty, your rationality?

the more you learn and think and actually start to make some progress with reason, the harder it is to just be a collection of special cases. the more you start to learn and apply some principles and try to be more consistent. and then you run into clashes as you find internal contradictions. and that's not just ignorable, something's gotta give.


people have identities they're attached to. they want to already be wise. if not about everything, about some particular things they think they're good at. that's one of the things people really seem to dislike – when i'm way better at their specialty than they are, when they can't win any arguments with me in their own speciality that i've barely spent time on.

when i found FoR/DD/TCS i was fine with being wrong about more or less everything. i didn't mind. i didn't respect my own existing education in general. i thought school was shit and i'd barely learned anything since elementary school besides some math and programming. i was very good at chess, but i was well aware of the existence of people way better than me at chess – i'd lost tons of chess games and had a positive history of interacting about chess with people i didn't have much chance to beat (both chess friends and chess teachers).

my chess, math and programming have never got especially challenged since finding FoR/etc. but if they were – if there was some whole better way to think about them – i'd like that. i'd be happy. i don't rely on being good at them for identity and self-esteem. my self-esteem comes from more like being rational itself, being interested in learning, being willing to change and fix mistakes, etc. a lot of people actually get some self-esteem along those lines, which makes it all the more problematic for them to try to impose limits on discussion – so they end up twisting themselves up into such dishonest tangles trying to make excuses for why they won't discuss or think anymore in order to end discussion. the internal tangles are so much worse than what you see externally btw. like externally they might just say they are busy and will follow up in a week or two, and then not do that. and then after 3 weeks i write a few paragraphs, and they don't reply, and that's that, externally. but internally it often involves some serious breach of integrity to pull that off, and a whole web of dishonest rationalizations. a lot of these people actually did put a lot of thought into stuff behind the scenes rather than just casually leaving like it's nothing – or suppressed a lot of thought behind the scenes, which has consequences.

i had lefty political views – but they weren't very important to my life. thinking about issues was important to me, but i didn't mind having different thoughts.

lots of people have lots of friends, coworkers, family members, customers, etc, to worry about alienating by changing their mind about politics. i had some of that, but relatively less, and i didn't mind alienating people. if one of my friends doesn't want to reconsider politics and is unwilling to be friends with a right wing person, whatever, i'll just lose respect for them. i don't value people and interactions which are tied to some pre-existing unquestionable conclusions.

happily i haven't lost a job or spouse over my beliefs, but i would be willing to. i have lost potential jobs – e.g. i think it'd be quite hard for me to get hired at Google nowadays given some things i've written in public are the kinds of things Google considers hate speech and fires people for. but on the other hand i also got noticed and got some programming work on account of speaking my mind and having an intelligent blog, so that was good. (i don't do stuff like aggressively bring up politics or philosophy in programming work contexts btw)

you don't need to be popular to have a few friends, coworkers and family members you can get along with. you don't need millions of people to be OK with your beliefs. one job, one spouse and 5 good friends is more than a lot of people have. that's easier to get if you stand out in some ways (so some people like you a lot) than if a lot more people have a very mild positive opinion of you.

anyway lots of people have accomplishments they are proud of. they don't want to change their perspective so their accomplishments are all at the beginning of infinity and could really use as much rapid error-correcting progress as they can manage, which they should continue forever.

people are so used to disliking the journey (like learning or work) and liking the destination. so they don't want criticism of the destinations they already reached and to be told they should journey (make progress, change, improve) continuously forever.


btw people get way more offended if you personalize stuff like this (to criticism of them specifically; talking about yourself is alright). that gets in the way of their ability to pretend they are one of the exceptions. they don't want help connecting all this stuff to actual specific flaws in their life and attitudes (or at least not unbounded help of that type – if they could carefully control all the consequences and what they find out, then they might be willing to open that pandora's box a little. but they can't. even if i was totally obedient and stuff, you just can't control, predict and bound the growth of knowledge. it takes severe fucking limits to avoid what's basically the jump to universal progress-making).

and if you don't personalize and you don't call out individuals, mostly everyone just acts like you're talking to someone else. kinda like if someone is hurt you don't want to shout "someone call 911" to the crowd while you try to perform CPR. it's too likely that no one will do it. it's more effective to pick a random person and tell them personally to call 911.


there are legitimate, important, worthwhile questions about how to change while keeping some stability. you need a mind and life situation which is viable for your life throughout the whole process. it's kinda like patching computer software without being able to shut it down and restart it.

the solution isn't to limit criticism, to block messages, to not find things out. knowing about too many problems to deal with is better than not knowing. it lets you prioritize better. even if you're not very good at prioritizing, you ought to do better with a half-understood list with more stuff on it than with simply less information. (unless the less info is according to a wise design that someone else put effort into. then their knowledge about what you should prioritize could potentially be superior to what overwhelmed-you would come up with initially.)

people need to learn to live with conflict, to live with knowing they are at the beginning of infinity and knowing actual open questions, open leads, open very important things to work on or learn with big consequences.

this is difficult as a practical matter when it comes to emotionally charged issues, identity, self-esteem, major attachments, and stuff with lasting consequences like how one treats one's children. people have a hard time knowing they may well be doing a lot of harm to their child, and then just being emotionally OK with that and proceeding in a calm, reasonable way to e.g. read some relevant books and try to learn more about philosophy of knowledge so they can understand education better so they can later, indirectly, be a better parent. and in the meantime they are doing stuff to their kid which leaves most victims really mentally crippled and irrational for the rest of their lives... and what they are doing violates tons of their own existing values and knowing about that bothers them.

this perspective is wrong though. if they don't hear a damn word about specifically some of their flaws, they should still realize they are at the beginning of infinity and must be doing all sorts of things horribly wrong with all sorts of massive, nasty consequences that are sooooooo far from ideal. not knowing the specific criticisms as applied to their life really shouldn't change their perspective much. but people aren't so good at abstract thinking so they just want to shut up certain messages and not think through or learn all the philosophy of BoI.

BoI (dream of socrates chapter) talks about Hermes' perspective and how tons of stuff the Athenians do looks like the example of stealing and then having disasters and then thinking the solution is even more stealing. that applies to you whether anyone names some of the stuff you're really bad at or not. and hearing some indication of some of the stuff you're fucking up – e.g. using violence and threat of violence against your child, as well as a lot of more subtle but serious stuff – should be purely helpful to deciding what to prioritize, what to do next, and hell it should help with motivation.

i wish i knew some big area(s) i was really bad at and had the option to read stuff about it from people who already put a lot of great thought into it that i don't already know. that'd make things so much easier. i know in theory i must be fucking up all kinds of things, but i don't have a bunch of useful leads being handed to me by others anymore. i used to have that a ton, especially from DD. but also other stuff like i read Szasz and found out about psychiatry – not that i had much in the way of pre-existing views on psychiatry, but still, my little bit of vague thinking on the matter was wrong.

i also never had much of an opinion on induction or economics before learning lots about it. that's something i find kinda weird. how much people who don't know much think they know a bunch and are attached. i usually am good at knowing that i don't know much about something, but when i talk to people about psychiatry i find a large portion of them are like super entrenched with pro-psychiatry views even though they really don't know much about it. same with capitalism/socialism and induction. people who've really never studied the matter have such strong opinions they are so attached to.

an example of something that went less smoothly was Israel. i had picked up some anti-Israel ideas from news articles and i think also from some other TCS discussion people like Justin (i know he had bad views on Israel in the past and changed his mind later than i did and he predated me at the TCS IRC chatroom). anyway DD misidentified me as entrenched with anti-Israel dogma, partly b/c i did know (or thought i knew) a bit about it, and I brought up some information i'd read. but, while i can see how it looked a lot like many other conversations, he was actually mistaken about me and i quickly learned more and changed my mind about Israel (with DD offering guidance like recommending things to read and pointing out a few things).

the misunderstanding is important b/c it lets us examine: what happened when DD thought I was being irrational? he said a few harsh things. which, as a matter of fact, i didn't deserve. but so what? did i spend my time getting offended? no. i just wanted to learn and focused on that. i still expected him to be right about the topic, and just wanted to get info.

i used to say, more or less, that DD was always right about everything. this attitude is important and interesting b/c it appears irrational (deferring to authority). it's also an attitude lots of people would dislike, whereas i enjoyed it – i was thrilled to find a bunch of knowledge (embodied by a particular person – which people find more offensive than books for some reason) better and wiser than myself rather than feeling diminished by comparison.

i was, at the same time, very deferential in some ways and not at all deferential in other ways. this is important and people suck at it.

i did not go "well i lost the last 50 arguments but i bet i'm right about Israel. i bet those dozen articles i read means i know more about it than DD and i'll win the debate this time". that's so typical and so dumb.

but i also did not just accept whatever DD said b/c he said it. i expected him to be right but also challenged his claims. i asked questions and argued, while expecting to lose the debate, to learn more about it. i very persistently brought stuff up again and again until i was fully satisfied. lots of people concede stuff and then think it's done and don't learn more about it, and end up never learning it all that well. sometimes i thought i conceded and said so, but even if i did, i had zero shame about re-opening any topic from any amount of time ago to ask a new question or ask how to address a new argument for any side.

i also fluidly talked about arguments for any side instead of just arguing a particular side. even if i was mostly arguing a particularly side, i'd still sometimes think of stuff for DD's side and say that too. ppl are usually so biased and one-sided with their creativity.

after i learned things from DD i found people to discuss them with, including people who disagreed with them. then if i had any trouble thoroughly winning the debate with zero known flaws on my side, zero open problems, zero unanswered criticisms, etc, then i'd go back to DD and expect more and better answers from him to address everything fully. i figured out lots of stuff myself but also my attitude of "DD is always right and knows everything" enabled me to be infinitely demanding – i expected him to be a perfect oracle and just kept asking questions about anything and everything expecting him to always have great answers to whatever level of precision, thoroughness, etc, i wanted. when i wasn't fully convinced by every aspect of an answer i'd keep trying over and over to bring up the subject in more ways – state different arguments and ask what's wrong with them, state more versions of his position (attempting to fix some problem) and ask if that's right, find different ways to think about a question and express it, etc. this of course was very useful for encouraging DD to create more and better answers than he already knew or already had formulated in English words.

i didn't 100% literally expect him to know everything, but it was a good mantra and was compatible with questioning him, debating him, etc. it's important to be able to expect to be mistaken and lose a debate and still have it, eagerly and thoroughly. and to keep saying every damn doubt you have, every counter-argument you think of, to address all of them, even when you're pretty convinced by some main points that you must be badly wrong or ignorant.

anyway the method of not being satisfied with explanations until i'd explained them myself to teach others and win several debates – with no outstanding known hiccups, flaws, etc – is really good. that's the kind of standard of knowledge people need.

standards for what kind of knowledge quality people should aim for is an important topic, btw. people often think their sloppy knowledge is good enough and that more precision isn't needed. why split hairs? this is badly wrong:

  • we're at the beginning of infinity. there's so much wrong with our knowledge and we should strive to make all the progress we can, make it as great as we can.

  • people's actual current knowledge leads to all kinds of tragedies and misery. disasters happen in people's lives. a lot. our knowledge isn't good enough. there's so much we can see wrong with the world that we should want to be better. not just advanced stuff like what's wrong with parenting, but more blatant stuff like how the citizens of North Korea are treated, the threat of NK or Iranian nukes, our poor ability to create a reasonable consensus about foreign policy. or people having broken hearts and bitter divorces. or people having a "mental illness" like "depression" or "autism" and kids and malcontents being drugged into a stupor. and even if you don't think psychiatrists are doing anything wrong, you can still see that they are dealing with hard problems and there's room for them to develop better medicines. oh and people die of cancer, car accidents, and stuff – and more generally of aging. and we're still a single-planet civilization that could get wiped out if we don't get to other planets soon enough. and it's not really that hard to list a lot more stuff on a big or small scale. people have mini fights with their family and friends all the time. people get fired, programming projects fail, businesses in all industries fail, people make bad decisions and lose a bunch of money, people don't achieve all that they wish to, people feel bad about things that happen to them (e.g. someone said something mean) and have a bad time with it and find it distracting, people are late to stuff, people's cooking comes out bad.


FI is a method of always being right. cuz either ur right now, or u change ur mind and then ur right. other stuff is a method of staying wrong.

first you have some position that, as far as you know is right. you've done nothing wrong. even if you're mistaken, you don't know better and you're making reasonable ongoing efforts to seek out new info, learn new things, etc. then someone challenges you, and you realize there's some issues with your view, so your new position is you're undecided pending further thought and info. (that's your intellectual position, in terms of IRL actions u might be mid-project and decide, at this point, it's best not to disrupt it even given the risk you're mistaken.) and then the moment after you're persuaded, your position is you know enough to be persuaded of this new idea. and so who can fault you at any time? you held the right position to hold, given what you knew, at each step.

when ppl argue with me, either they have yet to provide adequate help for me to understand a better idea (so it's ok i haven't adopted the new view yet), or they have in which case i will have successfully adopted the new view (if i haven't successfully done that then apparently the help was inadequate and either they can try to help more or i can work on it without them more, whatever, i'm blameless regardless).


Elliot Temple | Permalink | Messages (10)

The Four Best Books

The four best books are The Fabric of Reality and The Beginning of Infinity by David Deutsch (DD), and Atlas Shrugged and The Fountainhead by Ayn Rand (AR).

Update: See my unendorsement of the Deutsch books.

Everyone should learn this stuff, but currently only a handful of people in the world know much about all four of these books. This material is life-changing because it deals with broad ideas which are important to most of life, and which challenge many things people currently think they know.

However: they’re way too deep and novel to read once and understand. The ideas are correct to a level of detailed precision that people don't even know is a possible thing to try for. The normal way people read books is inadequate to learn all the wonderful ideas in these books. To understand them, there are two options:

1) be an AR or DD yourself, be on their level or reasonably close, be the kind of person who could invent the ideas in the first place. then you could learn it alone (though it’d still involve many rereadings and piles of supplementary material, unless you were dramatically better than AR or DD.)

this is not intended as an option for people to choose; people like that are one in a billion. and even if one could do it, it’s way harder than (2) so it'd be a dumb approach.

2) get help with error correction from other people who already understand the ideas. realistically, this requires a living tradition of people willing to help with individualized replies. it’s plenty hard enough to learn the ideas even with great resources like that. to last, it has to educate new people faster than existing people stop participating or die. (realistically, this method still involves supplementary material, rereadings, etc, in addition to discussion.)

What is the current situation regarding relevant living traditions?

DD

for the DD stuff, there’s only one living tradition available: the Fallible Ideas community.

the most important parts of the DD material are based on Karl Popper's philosophy, Critical Rationalism (CR). there’s some CR-only stuff elsewhere, but the quality is inadequate.

Fallible Ideas

besides reading the books, it's also important to understand how the DD and AR ideas fit together, and how to apply the cohesive whole to life.

there's lots of written material about this on my websites and in discussion archives. the only available living tradition for this is the Fallible Ideas community.

AR

for the AR stuff, there are two living traditions available which i consider valuable. there are also others like Branden fans, Kelley fans, various unserious fan forums, etc, which i don’t think are much help.

the two valuable Rand living traditions disagree considerably on some topics, but they do also agree a ton on other topics.

they are the Fallible Ideas community and the Peikoff/Ayn Rand Institute/Binswanger community. The Peikoff version of Objectivism doesn’t understand CR; it’s inductivist. There are other significant flaws with it, but there’s also a lot of value there. It has really helpful elaborations of what Rand meant on many topics.


Elliot Temple | Permalink | Messages (9)

Discussion About the Importance of Explanations with Andrew Crawshaw

From Facebook:

Justin Mallone:

The following excerpt argues that explanations are what is absolutely key in Popperian philosophy, and that Popper over-emphasizes the role of testing in science, but that this mistake was corrected by physicist and philosopher David Deutsch (see especially the discussion of the grass cure example). What do people think?
(excerpted from: https://curi.us/1504-the-most-important-improvement-to-popperian-philosophy-of-science)

Most ideas are criticized and rejected for being bad explanations. This is true even in science where they could be tested. Even most proposed scientific ideas are rejected, without testing, for being bad explanations.
Although tests are valuable, Popper's over-emphasis on testing mischaracterizes science and sets it further apart from philosophy than need be. In both science and abstract philosophy, most criticism revolves around good and bad explanations. It's largely the same epistemology. The possibility of empirical testing in science is a nice bonus, not a necessary part of creating knowledge.

In [The Fabric of Reality], David Deutsch gives this example: Consider the theory that eating grass cures colds. He says we can reject this theory without testing it.
He's right, isn't he? Should we hire a bunch of sick college students to eat grass? That would be silly. There is no explanation of how grass cures colds, so nothing worth testing. (Non-explanation is a common type of bad explanation!)
Narrow focus on testing -- especially as a substitute for support/justification -- is one of the major ways of misunderstanding Popperian philosophy. Deutsch's improvement shows how its importance is overrated and, besides being true, is better in keeping with the fallibilist spirit of Popper's thought (we don't need something "harder" or "more sciency" or whatever than critical argument!).

Andrew Crawshaw: I see, but it might turn out that grass cures colds. This would just be an empirical fact, demanding scientific explanation.

TC: Right, and if a close reading of Popper yielded anything like "test every possible hypothesis regardless of what you think of it", this would represent an advancement over Popper's thought. But he didn't suggest that.

Andrew Crawshaw: We don't reject claims of the form indicated by Deutsch because they are bad explanations. There are plenty of dangling empirical claims that we still hold to be true but which are unexplained. Deutsch is mistaking the import of his example.

Elliot Temple:

There are plenty of dangling empirical claims that we still hold to be true but which are unexplained.

That's not the issue. Are there any empirical claims we have criticism of, but which we accept? (Pointing out that something is a bad explanation is a type of criticism.)

Andrew Crawshaw: If you think that my burden is to show that there are empirical claims that are refuted but that we accept, then you have not understood my criticism.

For example

Grass cures colds.

Is of the same form as

aluminium hydroxide contributes to the production of a large quantity of antibodies.

Both are empirical claims, but they are not explanatory. That does not make them bad.

Neither of them are explanations. One is accepted and the other is not.

It's not good saying that the former is a bad explanation.

The latter has not yet been properly explained by science.

Elliot Temple: The difference is we have explanations of how aluminum hydroxide works, e.g. from wikipedia "It reacts with excess acid in the stomach, reducing the acidity of the stomach content"

Andrew Crawshaw: Not in relation to its antibody mechanism.

Elliot Temple: Can you provide reference material for what you're talking about? I'm not familiar with it.

Andrew Crawshaw: I can, but it is still irrelevant to my criticism. Which is that they are both not explanatory claims, but one is held as true while the other not.

They are low-level empirical claims that call out for explanation, they don't themselves explain. Deutsch is misemphasizing.

https://www.chemistryworld.com/news/doubts-raised-over-vaccine-boost-theory/3001326.article

Elliot Temple: your link is broken, and it is relevant b/c i suspect there is an explanation.

Andrew Crawshaw: It's still irrelevant to my criticism. Which is that we often accept things like rules of thumb, even when they are unexplained. They don't need to be explained for them to be true or for us to class them as true. Miller talks about this extensively. For instance strapless evening gowns were not understood scientifically for ages.

Elliot Temple: i'm saying we don't do that, and you're saying you have a counter-example but then you say the details of the counter-example are irrelevant. i don't get it.

Elliot Temple: you claim it's a counter example. i doubt it. how are we to settle this besides looking at the details?

Andrew Crawshaw: My criticism is that calling such a claim a bad explanation is irrelevant to those kinds of claims. They are just empirical claims that beg for explanation.

Elliot Temple: zero explanation is a bad explanation and is a crucial criticism. things we actually use have more explanation than that.

Andrew Crawshaw: So?

Elliot Temple: so DD and I are right: we always go by explanations. contrary to what you're saying.

Andrew Crawshaw: We use aluminium hydroxide for increasing anti-bodies and strapless evening gowns, even before they were explained.

Elliot Temple: i'm saying i don't think so, and you're not only refusing to provide any reference material about the matter but you claimed such reference material (indicating the history of it and the reasoning involved) is irrelevant.

Andrew Crawshaw: I have offered it. I re-edited my post.

Elliot Temple: please don't edit and expect me to see it, it usually doesn't show up.

Andrew Crawshaw: You still have not criticised my claim. The one comparing the two sentences which are of the same form, yet one is accepted and one not.

Elliot Temple: the sentence "aluminium hydroxide contributes to the production of a large quantity of antibodies." is inadequate and should be rejected.

the similar sentence with a written or implied footnote to details about how we know it would be a good claim. but you haven't given that one. the link you gave isn't the right material: it doesn't say what aluminium hydroxide does, how we know it, how it was discovered, etc

Elliot Temple: i think your problem is mixing up incomplete, imperfect explanations (still have more to learn) with non-explanation.

Andrew Crawshaw: No, it does not. But to offer that would be to explain. Which is exactly what I am telling you is irrelevant.

What is relevant is whether the claim itself is a bad explanation. It's just an empirical claim.

The point is just that we often have empirical claims that are not explained scientifically yet we accept them as true and use them.

Elliot Temple: We don't. If you looked at the history of it you'd find there were lots of explanations involved.

Elliot Temple: I guess you just don't know the history either, which is why you don't know the explanations involved. People don't study or try things randomly.

Elliot Temple: If you could pick a better known example which we're both familiar with, i could walk you through it.

Andrew Crawshaw: There was never an explanation of how bridges worked. But there were rules of thumb of how to build them. There are explanations of how to use aluminium hydroxide but its actual mechanism is unknown.

Elliot Temple: what are you talking about with bridges. you can walk on strong, solid objects. what do you not understand?

Andrew Crawshaw: That's not how they work. I am talking about the scientific explanation of forces and tensions. It was not always understood despite the fact that they were built. This is the same with beaver dams, they don't know any of the explanations of how to build dams.

Elliot Temple: you don't have to know everything that could be known to have an explanation. understanding that you can walk on solid objects, and they can be supported, etc, is an explanation, whether you know all the math or not. that's what the grass cure for the cold lacks.

Elliot Temple: the test isn't omniscience, it's having a non-refuted explanation.

Andrew Crawshaw: Hmm, but are you saying then that even bad-explanations can be accepted. Cuz as far as I can tell many of the explanations for bridge building were bad, yet they still built bridges.

Anyway you are still not locating my criticism. You are criticising something I never said it seems. Which is that Grass cures cold has not been explained. But what Deutsch was claiming was that the claim itself was a bad explanation, which is true if bad explanation includes non-explanation, but it is not the reason it is not accepted. As the hydroxide thing suggests.

Elliot Temple: We should only accept an explanation that we don't know any criticism of.

We need some explanation or we'd have no idea if what we're doing would work, we'd be lost and acting randomly without rhyme or reason. And that initial explanation is what we build on – we later improve it to make it more complete, explain more stuff.

Andrew Crawshaw: I think this is incorrect. All animals that can do things refutes your statement.

Elliot Temple: The important thing is the substance of the knowledge, not whether it's written out in the form of an English explanation.

Andrew Crawshaw: Just because there is an explanation of how some physical substrate interacts with another physical substrate, does not mean that you need explanations. Explanations are in language. Knowledge not necessarily. Knowledge is a wider phenomenon than explanation. I have many times done things by accident that have worked, but I have not known why.

Elliot Temple: This is semantics. Call it "knowledge" then. You need non-refuted knowledge of how something could work before it's worth trying. The grass cure for the cold idea doesn't meet this bar. But building a log bridge without knowing modern science is fine.

Andrew Crawshaw: Before it's worth trying? I don't think so, rules of thumb are discovered by accident and then re-used without knowing how or why they could work; they just work and then they try it again and it works again. Are you denying that that is a possibility?

Elliot Temple: Yes, denying that.

Andrew Crawshaw: Well, you are offering foresight to evolution then, it seems.

Elliot Temple: That's vague. Say what you mean.

Andrew Crawshaw: I don't think it is that vague. If animals can build complex things like beaver dams, and they should have had knowledge of how it could work before it was worth trying out, then they had a lot of foresight before they tried them out. Or could it be the fact that it is the other way round: we stumble on rules of thumb, develop them, then come up with explanations about how they possibly work. I am more inclined to the latter. The former is just another version of the argument from design.

Elliot Temple: humans can think and they should think before acting. it's super inefficient to act mindlessly. genetic evolution can't think and instead does things very, very, very slowly.

Andrew Crawshaw: But thinking before acting is true. Thinking is critical. It needs material to work on. Which is guesswork and sometimes, if not often, accidental actions.

Elliot Temple: when would it be a good idea to act thoughtlessly (and which thoughtless action) instead of acting according to some knowledge of what might work?

Elliot Temple: e.g. when should you test the grass cure for cancer, with no thought to whether it makes any sense, instead of thinking about what you're doing and acting according to your rational thought? (which means e.g. considering what you have some understanding could work, and what you have criticisms of)

Andrew Crawshaw: Wait, we often act thoughtlessly whether or not we should do. I don't even think it is a good idea. But we often try to do things and end up somewhere which is different to what we expected, it might be worse or better. For instance, we might try to eat grass because we are hungry and then happen to notice that our cold disappeared and stumble on a cure for the cold.

Andrew Crawshaw: And different to what we expected might work even though we have no idea why.

Elliot Temple: DD is saying what we should do, he's talking about reason. Sometimes people act foolishly and irrationally but that doesn't change what the proper methods of creating knowledge are.

Sometimes unexpected things happen and you can learn from them. Yes. So what?

Andrew Crawshaw: But if Deutsch expects that we can only work with explanations. Then he is mistaken. Which is, it seems, what you have changed your mind about.

Elliot Temple: I didn't change my mind. What?

What non-explanations are you talking about people working with? When an expectation you have is violated, and you investigate, the explanation is you're trying to find out if you were mistaken and figure out the thing you don't understand.

Elliot Temple: what do you mean "work with"? we can work with (e.g. form explanations about) spreadsheet data. we can also work with hammers. resources don't have to be explanations themselves, we just need an explanation of how to get value out of the resource.

Andrew Crawshaw: There is only one method of creating knowledge. Guesswork. Or, if genetically, by mutation. Physical things are often made without know-how and then they are applied in various contexts and they might and might not work, that does not mean we know how they work.

Elliot Temple: if you didn't have an explanation of what actions to take with a hammer to achieve what goal, then you couldn't proceed and be effective with the hammer. you could hit things randomly and pray it works out, but it's not a good idea to live that way.

Elliot Temple: (rational) humans don't proceed purely by guesses, they also criticize the guesses first and don't act on the refuted guesses.

Andrew Crawshaw: Look there are three scenarios

  1. Act on knowledge
  2. Stumble upon solution by accident, without knowing why it works.
  3. Act randomly

Elliot Temple: u always have some idea of why it works or you wouldn't think it was a solution.

Andrew Crawshaw: No, all you need is to recognise that it worked. This is easily done by seeing that what you wanted to happen happened. It is non-sequitur to then assume that you know something of how it works.

Elliot Temple: you do X. Y results. Y is a highly desirable solution to some recurring problem. do you now know that X causes Y? no. you need some causal understanding, not just a correlation. if you thought it was impossible that X causes Y, you would look for something else. if you saw some way it's possible X causes Y, you have an initial explanation of how it could work, which you can and should expose to criticism.

Elliot Temple:

Know all you need is to recognise that it works.

plz fix this sentence, it's confusing.

Andrew Crawshaw: You might guess that it caused it. You don't need to understand it to guess that it did.

Elliot Temple: correlation isn't causation. you need something more.

Elliot Temple: like thinking of a way it could possibly cause it.

Elliot Temple: that is, an explanation of how it works.

Andrew Crawshaw: I am not saying correlation is causation, you don't need to explain guesswork before you have guessed it. You first need to guess that something caused something before you go out and explain it. Otherwise what are you explaining?

Elliot Temple: you can guess X caused Y and then try to explain it. you shouldn't act on the idea that X caused Y if you have no explanation of how X could cause Y. if you have no explanation, then that's a criticism of the guess.

Elliot Temple: you have some pre-existing understanding of reality (including the laws of physics) which you need to fit this into, don't just treat the world as arbitrary – it's not and that isn't how one learns.

Andrew Crawshaw: That's not a criticism of the guess. It's ad hominem and justificationist.

Elliot Temple: "that" = ?

Andrew Crawshaw: I am agreeing totally with you about many things

  1. We should increase our criticism as much as possible.
  2. We do have inbuilt expectations about how the world works.

What We are not agreeing about is the following

  1. That a guess has to be backed up by explanation for it to be true or classified as true. All we need is to criticise the guess. Arguing otherwise seems to me a type of justificationism.

  2. That in order to get novel explanations and creations, this often is done despite the knowledge and necessarily has to be that way otherwise it would not be new.

Elliot Temple:

That's not a criticism of the guess. It's ad hominem and justificationist.

please state what "that" refers to and how it's ad hominem, or state that you retract this claim.

Andrew Crawshaw: That someone does not have an explanation. First, because explanations are not easy to come by and someone not having an explanation for something does not in any way impugn the pedigree of the guess or the strategy etc. Second, explanation is important and needed, but not necessary for trying out the new strategy, y, that you guess causes x. You might develop explanations while using it. You don't need the explanation before using it.

Elliot Temple: Explanations are extremely easy to come by. I think you may be adding some extra criteria for what counts as an explanation.

Re your (1): if you have no explanation, then you can criticize it: why didn't they give it any thought and come up with an explanation? they should do that before acting, not act thoughtlessly. it's a bad idea to act thoughtlessly, so that's a criticism.

it's trivial to come up with even an explanation of how grass cures cancer: cancer is internal, and various substances have different effects on the body, so if you eat it it may interact with and destroy the cancer.

the problem with this explanation is we have criticism of it.

you need the explanation so you can try criticizing it. without the explanation, you can't criticize (except to criticize the lack of explanation).

re (2): this seems to contain typos, too confusing to answer.

Elliot Temple: whenever you do X and Y happens, you also did A, B, C, D. how do you know it was X instead of A, B, C or D which caused Y? you need to think about explanations before you can choose which of the infinite correlations to pay attention to.

Elliot Temple: for example, you may have some understanding that Y would be caused by something that isn't separated in space or time from it by very much. that's a conceptual, explanatory understanding about Y which is very important to deciding what may have caused Y.

Andrew Crawshaw: Again, it's not a criticism of the guess. It's a criticism of how the person acted.

The rest of your statements are compatible with what I am saying. Which is just that it can be done and explanations are not necessary either for using something or creating something. As the case of animals surely shows.

You don't know, you took a guess. You can't know before you guess that your guess was wrong.

Elliot Temple: "I guess X causes Y so I'll do X" is the thing being criticized. If the theory is just "Maybe X causes Y, and this is a thing to think about more" then no action is implied (besides thinking and research) and it's harder to criticize. those are different theories.

even the "Maybe X causes Y" thing is suspect. why do you think so? You did 50 million actions in your life and then Y happened. Why do you think X was the cause? You have some explanations informing this judgement!

Andrew Crawshaw: There is no difference between maybe Y and Y. It's always maybe Y. Unless refuted.

Andrew Crawshaw: You are subjectivist and justificationist as far as I can tell. A guess is objective, and if someone, despite the fact that they have bad judgement, guesses correctly, they still guess correctly. Nothing mitigates the precariousness of this situation. Criticism is the other component.

Elliot Temple: If the guess is just "X causes Y", period, you can put that on the table of ideas to consider. However, it will be criticized as worthless: maybe A, B, or C causes Y. Maybe Y is self-caused. There's no reason to care about this guess. It doesn't even include any mention of Y ever happening.

Andrew Crawshaw: The guess won't be criticised, what will be noticed is that it shouts out for explanation and someone might offer it.

Elliot Temple: If the guess is "Maybe X causes Y because I once saw Y happen 20 seconds after X" then that's a better guess, but it will still get criticized: all sorts of things were going on at all sorts of different times before Y. so why think X caused Y?

Elliot Temple: yes: making a new guess which adds an explanation would address the criticism. people are welcome to try.

Elliot Temple: they should not, however, go test X with no explanation.

Andrew Crawshaw: That's good, but one of the best ways to criticise it, is to try it again and see if it works.

Elliot Temple: you need an explanation to understand what would even be a relevant test.

Elliot Temple: how do you try it again? how do you know what's included in X and what isn't included? you need an explanation to differentiate relevant stuff from irrelevant

Elliot Temple: as the standard CR anti-inductivist argument goes: there are infinite patterns and correlations. how do you pick which ones to pay attention to?

Elliot Temple: you shouldn't pick one thing, arbitrarily, from an INFINITE set and then test it. that's a bad idea. that's not how scientific progress is made.

Elliot Temple: what you need to do is have some conceptual understanding of what's going on. some explanations of what types of things might be relevant to causing Y and what isn't relevant, and then you can start doing experiments guided by your explanatory knowledge of physics, reality, some possible causes, etc

Elliot Temple: i am not a subjectivist or justificationist, and i don't see what's productive about the accusation. i'm willing to ignore it, but in that case it won't be contributing positively to the discussion.

Andrew Crawshaw: I am not saying that we have no knowledge. I am saying that we don't have an explanation of the mechanism.

Elliot Temple: can you give an example? i think you do have an explanation and you just aren't recognizing what you have.

Andrew Crawshaw: For instance, washing hands and its link to mortality rates.

Elliot Temple: There was an explanation there: something like taint could potentially travel with hands.

Elliot Temple: This built on previous explanations people had about e.g. illnesses spreading to nearby people.

Andrew Crawshaw: Right, but the use of soap was not derived from the explanation. And that explanation might have been around before, and no such soap was used because of it.

Elliot Temple: What are you claiming happened, exactly?

Andrew Crawshaw: I am claiming that soap was invented for various reasons and then it turned out that the soap could be used for reducing mortality.

Elliot Temple: That's called "reach" in BoI. Where is the contradiction to anything I said?

Andrew Crawshaw: Reach of explanations. It was not the explanation, it was the invention of soap itself. Which was not anticipated or even encouraged by explanations. Soap is invented, used in a context, and an explanation might be applied to it. Then it is used in another context and again the explanation is retroactively applied to it. The explanation does not necessarily suggest more uses, nor need it.

Elliot Temple: You're being vague about the history. There were explanations involved, which you would see if you analyzed the details well.

Andrew Crawshaw: So, what if there were explanations "involved"? The explanations don't add anything to the discovery of the uses of the soap. These are usually stumbled on by accident. And refinements to soaps as well for those different contexts.

Andrew Crawshaw: I am just saying that explanations of how the soap works very rarely suggest new avenues. It's often a matter of trial and error.

Elliot Temple: You aren't addressing the infinite correlations/patterns point, which is a very important CR argument. Similarly, one can't observe without some knowledge first – all observation is theory laden. So one doesn't just observe that X is correlated to Y without first having a conceptual understanding for that to fit into.

Historically, you don't have any detailed counter example to what I'm saying, you're just speculating non-specifically in line with your philosophical views.

Andrew Crawshaw: It's an argument against induction. Not against guesswork informed by earlier guesswork, which often turns out to be mistaken. All explanations do is rule things out, unless they are rules for use, but these are developed while we try out those things.

Elliot Temple: It's an argument against what you were saying about observing X correlated with Y. There are infinite correlations. You can either observe randomly (not useful, has roughly 1/infinity chance of finding solutions, aka zero) or you can observe according to explanations.

Elliot Temple: You're saying to recognize a correlation and then do trial and error. But which one? Your position has elements of standard inductivist thinking in it.

Andrew Crawshaw: I never said anything about correlation - you did.

What I said was we could guess that x caused y and be correct. That's what I said, nothing more, nothing less.

Andrew Crawshaw: One instance does not a correlation make.

Elliot Temple: You could also guess Z caused Y. Why are you guessing X caused Y? Filling up the potential-ideas with an INFINITE set of guesses isn't going to work. You're paying selective attention to some guesses over others.

Elliot Temple: This selective attention is either due to explanations (great!) or else it's the standard way inductivists think. Or else it's ... what else could it be?

Andrew Crawshaw: Why not? Criticise it. If you have a scientific theory that rules my guess out, that would be interesting. But saying why not this guess and why not that one. Some guesses are not considered by you maybe because they are ruled out by other expectations, or they do not occur to you.

Elliot Temple: The approach of taking arbitrary guesses out of an infinite set and trying to test them is infinitely slow and unproductive. That's why not. And we have much better things we can do instead.

Elliot Temple: No one does this. What they do is pick certain guesses according to unconscious or unstated explanations, which are often biased and crappy b/c they aren't being critically considered. We can do better – we can talk about the explanations we're using instead of hiding them.

Andrew Crawshaw: So, you are basically gonna ignore the fact that I have agreed that expecations and earlier knowledge do create selective attention, but what to isolate is neither determined by theory, nor by earlier perceptions, it is large amount guesswork controlled by criticism. Humans can do this rapidly and well.

Elliot Temple: Please rewrite that clearly and grammatically.

Andrew Crawshaw: It's like you are claiming there is no novelty in guesswork; if we already have that as part of our expectations it was not guesswork.

Elliot Temple: I am not claiming "there is no novelty in guesswork".

Andrew Crawshaw: So we are in agreement, then. Which is just that there are novel situations and our guesses are also novel. How we eliminate them is through other guesses. Therefore the guesses are sui generis and then deselected according to earlier expectations. It does not follow that the guess was positively informed by anything. It was a guess about what caused what.

Elliot Temple: Only guesses involving explanations are interesting and productive. You need to have some idea of how/why X causes Y or it isn't worth attention. It's fine if this explanation is due to your earlier knowledge, or it can be a new idea that is part of the guess.

Andrew Crawshaw: I don't think that's true. Again beavers make interesting and productive dams.

Elliot Temple: Beavers don't choose from infinite options. Can we stick to humans?

Andrew Crawshaw: Humans don't choose from infinite options.... They choose from the guesses that occur to them, which are not infinite. Their perception is controlled by both physiological factors and their expectations. Novel situations require guesswork, because guesswork is flexible.

Elliot Temple: Humans constantly deal with infinite categories. E.g. "Something caused Y". OK, what? It could be an abstraction such as any integer. It could be any action in my whole life, or anyone else's life, or something nature did. There's infinite possibilities to deal with when you try to think about causes. You have to have explanations to narrow things down, you can't do it without explanations.

Elliot Temple: Arbitrary assertions like "The abstract integer 3 caused Y" are not productive with no explanation of how that could be possible attached to the guess. There are infinitely more where that came from. You won't get anywhere if you don't criticize "The abstract integer 3 caused Y" for its arbitrariness, lack of explanation of how it could possibly work, etc

Elliot Temple: You narrow things down. You guess that a physical event less than an hour before Y and less than a quarter mile distant caused Y. You explain those guesses, you don't just make them arbitrarily (there are infinite guesses you could make like that, and also that category of guess isn't always appropriate). You expose those explanations to criticism as the way to find out if they are any good.

Andrew Crawshaw: You are arguing for an impossible demand that you yourself can't meet, even when you have explanations. It does not narrow it down from infinity. What narrows it down is our capacity to form guesses, which is temporal and limited. It's our brain's ability to process and to interpret that information.

Elliot Temple: No, we can deal with infinite sets. We don't narrow things down with our inability, we use explanations. I can and do do this. So do you. Explanations can have reach and exclude whole categories of stuff at once.

Andrew Crawshaw: But it does not reduce it to less than infinite. Explanations allow an infinite amount of things, most of them useless. It's what they rule out, and things they can rule out is guesswork. And this is done over time. So we might guess this and then guess that x caused y, we try it again and it might not work, so we try to vary the situation and in that way develop criticism and more guesses.

Elliot Temple: Let's step back. I think you're lost, but you could potentially learn to understand these things. You think I'm mistaken. Do you want to sort this out? How much energy do you want to devote to this? If you learn that I was right, what will you do next? Will you join my forum and start contributing? Will you study philosophy more? What values do you offer, and what values do you seek?

Andrew Crawshaw: Mostly explanations take time to understand why they conflict with some guess. It might be that the guess only approximates the truth and then we find later that it is wrong because we look more into the explanation of it.

Andrew Crawshaw: Elliot, if you wish to meta, I will step out of the conversation. It was interesting, yet you still refuse to concede my point that inventions can be created without explanations. But yet this is refuted by the creations of animals and many creations of humans. You won't concede this point and then make your claims pretty well trivial. Like you need some kind of thing to direct what you are doing. When the whole point is the genesis of new ideas and inventions and theories which cannot be suggested by earlier explanations. It is true that explanations can help in refining and understanding. But that is not the whole story of human cognition or human invention.

Elliot Temple: So you have zero interest in, e.g., attempting to improve our method of discussion, and you'd prefer to either keep going in circles or give up entirely?

Elliot Temple: I think we could resolve the disagreement and come to agree, if we make an effort to, AND we don't put arbitrary boundaries on what kinds of solutions and actions are allowed to be part of the problem solving process. I think if you make methodology off-limits, you are sabotaging the discussion and preventing its rational resolution.

Elliot Temple: Not everything is working great. We could fix it. Or you could just unilaterally blame me and quit..?

Andrew Crawshaw: Sorry, I am not blaming you for anything.

Elliot Temple: OK, you just don't really care?

Andrew Crawshaw: Wait. I want to say two things.

  1. It's 5 in the morning, and I was working all day, so I am exhausted.

  2. This discussion is interesting, but fragmented. I need to moderate my posts on here, now. And recuperate.

Elliot Temple: I haven't asked for fast replies. You can reply on your schedule.

Elliot Temple: These issues will still be here, and important, tomorrow and the next day. My questions are open. I have no objection to you sleeping, and whatever else, prior to answering.

Andrew Crawshaw: Oh, I know you haven't asked for replies. I just get very involved in discussion. When I do I stop monitoring my tiredness levels and etc.

I know this discussion is important. The issues and problems.

Elliot Temple: If you want to drop it, you can do that too, but I'd want to know why, and I might not want to have future discussions with you if I expect you'll just argue a while and then drop it.

Andrew Crawshaw: Like to know why? I have been up since very early yesterday, like 6. I don't want to drop the discussion I want to postpone it, if you will.

Elliot Temple: That's not a reason to drop the conversation, it's a reason to write your next reply at a later time.

Andrew Crawshaw: I explicitly said: I don't want to drop the discussion.

Your next claim is a non-sequitur. A conversation can be resumed in many ways. I take it you think it would be better for me to initiate it.

Andrew Crawshaw: I will read back through the comments and see where this has led and then I will post something on the Fallible Ideas forum.

Elliot Temple: You wrote:

Elliot, if you wish to meta, I will step out of the conversation.

I read "step out" as quit.

Anyway, please reply to my message beginning "Let's step back." whenever you're ready. Switching forums would be great, sure :)


Elliot Temple | Permalink | Messages (17)

Replies to Gyrodiot About Fallible Ideas, Critical Rationalism and Paths Forward

Gyrodiot wrote at the Less Wrong Slack Philosophy chatroom:

I was waiting for an appropriate moment to discuss epistemology. I think I understood something about curi's reasoning about induction after reading a good chunk of the FI website. Basically, it starts from this:

He quotes from: http://fallibleideas.com/objective-truth

There is an objective truth. It's one truth that's the same for all people. This is the common sense view. It means there is one answer per question.

The definition of truth here is not the same as The Simple Truth as described in LW. Here, the important part is:

Relativism provides an argument that the context is important, but no argument that the truth can change if we keep the context constant.

If you fixate the context around a statement, then the statement ought to have an objective truth value

Yeah. (The Simple Truth essay link.)

In LW terms that's equivalent to "reality has states and you don't change the territory by thinking differently about the map"

Yeah.

From that, FI posits the existence of universal truths that aren't dependent on context, like the laws of physics.

More broadly, many ideas apply to many contexts (even without being universal). This is very important. DD calls this "reach" in BoI (how many contexts does an idea reach to?), I sometimes go with "generality" or "broader applicability".

The ability for the same knowledge to solve multiple problems is crucial to our ability to deal with the world, and for helping with objectivity, and for some other things. It's what enabled humans to even exist – biological evolution created knowledge to solve some problems related to survival and mating, and that knowledge had reach which lets us be intelligent, do philosophy, build skyscrapers, etc. Even animals like cats couldn't exist, like they do today, without reach – they have things like behavioral algorithms which work well in more than one situation, rather than having to specify different behavior for every single situation.

The problem with induction, with this view is that you're taking truths about some contexts to apply them to other contexts and derive truths about them, which is complete nonsense when you put it like that

Some truths do apply to multiple contexts. But some don't. You shouldn't just assume they do – you need to critically consider the matter (which isn't induction).

From a Bayesian perspective you're just computing probabilities, updating your map, you're not trying to attain perfect truth

Infinitely many patterns both do and don't apply to other contexts (such as patterns that worked in some past time range applying tomorrow). So you can't just generalize patterns to the future (or to other contexts more generally) and expect that to work, ala induction. You have to think about which patterns to pay attention to and care about, and which of those patterns will hold in what ranges of contexts, and why, and use critical arguments to improve your understanding of all this.

We do [live in our own map], which is why this mode of thought with absolute truth isn't practical at all

Can you give an example of some practical situation you don't understand how to address with FI thinking, and I'll tell you how or concede? And after we go through a few examples, perhaps you'll better understand how it works and agree with me.

So, if induction is out of the way, the other means to know truth may be by deduction, building on truth we know to create more. Except that leads to infinite regress, because you need a foundation

CR's view is induction is not replaced with more deduction. It's replaced with evolution – guesses and criticism.

So the best we can do is generate new ideas, and put them through empirical test, removing what is false as it gets contradicted

And we can use non-empirical criticism.

But contradicted by what? Universal truths! The thing is, universal truths are used as a tool to test what is true or false in any context since they don't depend on context

Not just contradicted by universal truths, but contradicted by any of our knowledge (lots of which has some significant but non-universal reach). If an idea contradicts some of our knowledge, it should say why that knowledge is mistaken – there's a challenge there. See also my "library of criticism" concept in Yes or No Philosophy (discussed below) which, in short, says that we build up a set of known criticisms that have some multi-context applicability, and then whenever we try to invent a new idea we should check it against this existing library of known criticisms. It needs to either not be contradicted by any of the criticisms or include a counter-argument.
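
Here's a rough sketch, in Python, of the library-of-criticism idea (the class names, function names, and the toy criticism are just my illustration, not FI terminology): a new idea is accepted only if every known criticism that applies to it is answered by a counter-argument.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Criticism:
    name: str
    applies: Callable  # predicate: does this known criticism apply to a given idea?

@dataclass
class Idea:
    text: str
    answered: set = field(default_factory=set)  # names of criticisms this idea counter-argues

def evaluate(idea, library):
    """Check a new idea against the library of known criticisms.
    Returns (accepted, list of unanswered criticisms)."""
    unanswered = [c.name for c in library
                  if c.applies(idea) and c.name not in idea.answered]
    return (not unanswered, unanswered)

# Toy example: a bare assertion with no explanation fails a stock criticism.
library = [Criticism("no explanation of how it could work",
                     lambda idea: "because" not in idea.text.lower())]
print(evaluate(Idea("Eating grass cures colds."), library))
# -> (False, ['no explanation of how it could work'])

The sketch only shows the checking step; building up the library and judging whether a counter-argument actually succeeds are the hard, substantive parts.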

But they are so general that you can't generate new ideas from them easily

The LW view would completely disagree with that: laws of physics are statements like every other, they are solid because they map to observation and have predictive power

CR says to judge ideas by criticism. Failure to map to observation and lack of predictive power are types of criticism (absolutely not the only ones), which apply in some important range of contexts (not all contexts – some ideas are non-empirical).

Prediction is great and valuable but, despite being great, it's also overrated. See chapter 1 of The Fabric of Reality by David Deutsch and the discussion of the predictive oracle and instrumentalism.

http://www.daviddeutsch.org.uk/books/the-fabric-of-reality/excerpt/

Also you can use them to explain stuff (reductionism) and generate new ideas (bottom-up scientific research)

From FI:

When we consider a new idea, the main question should be: "Do you (or anyone else) see anything wrong with it? And do you (or anyone else) have a better idea?" If the answers are 'no' and 'no' then we can accept it as our best idea for now.

The problem is that by having a "pool of statements from which falsehoods are gradually removed" you also build a best candidate for truth. Which is not, at all, how the Bayesian view works.

FI suggests evolution is a reliable way to suggest new ideas. It ties well into the framework of "generate by increments and select by truth-value"

It also highlights how humans are universal knowledge machines, that anything (in particular, an AGI) created by a human would have knowledge that humans can attain too

Humans as universal knowledge creators is an idea of my colleague David Deutsch which is discussed in his book, The Beginning of Infinity (BoI).

http://beginningofinfinity.com

But that's not an operational definition : if an AGI creates knowledge much faster than any human, they won't ever catch up and the point is moot

Yes, AGI could be faster. But, given the universality argument, AGI's won't be more rational and won't be capable of modes of reasoning that humans can't do.

The value of faster is questionable. I think no humans currently maximally use their computational power. So adding more wouldn't necessarily help if people don't want to use it. And an AGI would be capable of all the same human flaws like irrationalities, anti-rational memes (see BoI), dumb emotions, being bored, being lazy, etc.

I think the primary cause of these flaws, in short, is authoritarian educational methods which try to teach the kid existing knowledge rather than facilitate error correction. I don't think an AGI would automatically be anything like a rational adult. It'd have to think about things and engage with existing knowledge traditions, and perhaps even educators. Thinking faster (but not better) won't save it from picking up lots of bad ideas just like new humans do.

That sums up the basics, I think. The Paths Forward thing is another matter... and it is very, very demanding

Yes, but I think it's basically what effective truth-seeking requires. I think most truth-seeking that people do is not very effective, and the flaws can actually be pointed out as not meeting Paths Forward (PF) standards.

There's an objective truth about what it takes to make progress. And separate truths depending on how effectively you want to make progress. FI and PF talk about what it takes to make a lot of progress and be highly effective. You can fudge a lot of things and still, maybe, make some progress instead of going backwards.

If you just wanna make a few tiny contributions which are 80% likely to be false, maybe you don't need Paths Forward. And some progress gets made that way – a bunch of mediocre people do a bunch of small things, and the bulk of it is wrong, but they have some ability to detect errors so they end up figuring out which are the good ideas with enough accuracy to slowly inch forwards. But, meanwhile, I think a ton of progress comes from a few great (wo)men who have higher standards and better methods. (For more arguments about the importance of a few great men, I particularly recommend Objectivism. E.g. Roark discusses this in his courtroom speech at the end of The Fountainhead.)

Also, FYI, Paths Forward allows you to say you're not interested in something. It's just, if you don't put the work into knowing something, don't claim that you did. Also you should keep your interests themselves open to criticism and error correction. Don't be an AGI researcher who is "not interested in philosophy" and won't listen to arguments about why philosophy is relevant to your work. More generally, it's OK to cut off a discussion with a meta comment (e.g. "not interested" or "that is off topic" or "I think it'd be a better use of my time to do this other thing...") as long as the meta level is itself open to error correction and has Paths Forward.

Oh also, btw, the demandingness of Paths Forward lowers the resource requirements for doing it, in a way. If you're interested in what someone is saying, you can be lenient and put in a lot of effort. But if you think it's bad, then you can be more demanding – so things only continue if they meet the high standards of PF. This is win/win for you. Either you get rid of the idiots with minimal effort, or else they actually start meeting high standards of discussion (so they aren't idiots, and they're worth discussing with). And note that, crucially, things still turn out OK even if you misjudge who is an idiot or who is badly mistaken – b/c if you misjudge them all you do is invest less resources initially but you don't block finding out what they know. You still offer a Path Forward (specifically that they meet some high discussion standards) and if they're actually good and have a good point, then they can go ahead and say it with a permalink, in public, with all quotes being sourced and accurate, etc. (I particularly like asking for simple things which are easy to judge objectively like those, but there are other harder things you can reasonably ask for, which I think you picked up on in some ways in your judgement of PF as demanding. Like you can ask people to address a reference that you take responsibility for.)

BTW I find that merely asking people to format email quoting correctly is enough barrier to entry to keep most idiots out of the FI forum. (Forum culture is important too.) I like this type of gating because, contrary to moderators making arbitrary/subjective/debatable judgements about things like discussion quality, it's a very objective issue. Anyone who cares to post can post correctly and say any ideas they want. And it lacks the unpredictability of moderation (it can be hard to guess what moderators won't like). This doesn't filter on ideas, just on being willing to put in a bit of effort for something that is productive and useful anyway – proper use of nested quoting improves discussions and is worth doing and is something all the regulars actively want to do. (And btw if someone really wants to discuss without dealing with formatting they can use e.g. my blog comments which are unmoderated and don't expect email quoting, so there are still other options.)
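
For concreteness, here's a minimal sketch of what an objective formatting gate like that could look like (my illustration only; it assumes the usual '>' email-quoting convention and a simple nesting rule, neither of which is an actual forum spec):

def quoting_ok(message: str) -> bool:
    """Assumed rule: quoted lines start with '>' characters, and quote depth
    never jumps by more than one level from one line to the next."""
    prev_depth = 0
    for line in message.splitlines():
        stripped = line.lstrip()
        depth = len(stripped) - len(stripped.lstrip(">"))
        if depth > prev_depth + 1:
            return False
        prev_depth = depth
    return True

print(quoting_ok("> quoted point\n\nmy reply"))      # True
print(quoting_ok(">>> deep quote with no context"))  # False

The point of a check like this is that it's mechanical and predictable: anyone who cares to post can pass it, and passing it doesn't depend on a moderator's opinion of the ideas.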

It is written very clearly, and also wants to make me scream inside

Why does it make you want to scream?

Is it related to moral judgement? I'm an Objectivist in addition to a Critical Rationalist. Ayn Rand wrote in The Virtue of Selfishness, ch8, How Does One Lead a Rational Life in an Irrational Society?, the first paragraph:

I will confine my answer to a single, fundamental aspect of this question. I will name only one principle, the opposite of the idea which is so prevalent today and which is responsible for the spread of evil in the world. That principle is: One must never fail to pronounce moral judgment.

There's a lot of reasoning for this which goes beyond the one essay. At present, I'm just raising it as a possible area of disagreement.

There are also reasons about objective truth (which are part of both CR and Objectivism, rather than only Objectivism).

The issue isn't just moral judgement but also what Objectivism calls "sanction": I'm unwilling to say things like "It's ok if you don't do Paths Forward, you're only human, I forgive you." My refusal to actively do anti-judgement stuff, and approve of PF alternatives, is maybe more important than any negative judgements I've made, implied or stated.

It hits all the right notes motivation-wise, and a very high number of Rationality Virtues. Curiosity, check. Relinquishment, check. Lightness, check. Argument, triple-check.

Yudkowsky writes about rational virtues:

The fifth virtue is argument. Those who wish to fail must first prevent their friends from helping them.

Haha, yeah, no wonder a triple check on that one :)

Simplicity, check. Perfectionism, check. Precision, check. Scholarship, check. Evenness, humility, precision, Void... nope nope nope PF is much harsher than needed when presented with negative evidence, treating them as irreparable flaws (that's for evenness)

They are not treated as irreparable – you can try to create a variant idea which has the flaw fixed. Sometimes you will succeed at this pretty easily, sometimes it’s hard but you manage it, and sometimes you decide to give up on fixing an idea and try another approach. You don’t know in advance how fixable ideas are (you can’t predict the future growth of knowledge) – you have to actually try to create a correct variant idea to see how doable that is.

Some mistakes are quite easy and fast to fix – and it’s good to actually fix those, not just assume they don’t matter much. You can’t reliably predict mistake fixability in advance of fixing it. Also the fixed idea is better and this sometimes helps lead to new progress, and you can’t predict in advance how helpful that will be. If you fix a bunch of “small” mistakes, you have a different idea now and a new problem situation. That’s better (to some unknown degree) for building on, and there’s basically no reason not to do this. The benefit of fixing mistakes in general, while unpredictable, seems to be roughly proportional to the effort (if it’s hard to fix, then it’s more important, so fixing it has more value). Typically, the small mistakes are a small effort to fix, so they’re still cost-effective to fix.

That fixing mistakes creates a better situation fits with Yudkowsky’s virtue of perfectionism.

(If you think you know how to fix a mistake but it’d be too resource expensive and unimportant, what you can do instead is change the problem. Say “You know what, we don’t need to solve that with infinite precision. Let’s just define the problem we’re solving as being to get this right within +/- 10%. Then the idea we already have is a correct solution with no additional effort. And solving this easier problem is good enough for our goal. If no one has any criticism of that, then we’ll proceed with it...")

Sometimes I talk about variant ideas as new ideas (so the original is refuted, but the new one is separate) rather than as modifying and rescuing a previous idea. This is a terminology and perspective issue – “modifying" and “creating" are actually basically the same thing with different emphasis. Regardless of terminology, substantively, some criticized flaws in ideas are repairable via either modifying or creating to get a variant idea with the same main points but without the flaw.

PF expects to have errors all over the place and act to correct them, but places a burden on everyone else that doesn't (that's for humility)

Is saying people should be rational burdensome and unhumble?

According to Yudkowsky's essay on rational virtues, the point of humility is to take concrete steps to deal with your own fallibility. That is the main point of PF!

PF shifts from True to False by sorting everything through contexts in a discrete way.

The binary (true or false) viewpoint is my main modification to Popper and Deutsch. They both have elements of it mixed in, but I make it comprehensive and emphasized. I consider this modification to improve Critical Rationalism (CR) according to CR's own framework. It's a reform within the tradition rather than a rival view. I think it fits the goals and intentions of CR, while fixing some problems.

I made educational material (6 hours of video, 75 pages of writing) explaining this stuff which I sell for $400. Info here:

https://yesornophilosophy.com

I also have many relevant, free blog posts gathered at:

http://curi.us/1595-rationally-resolving-conflicts-of-ideas

Gyrodiot, since I appreciated the thought you put into FI and PF, I'll make you an offer to facilitate further discussion:

If you'd like to come discuss Yes or No Philosophy at the FI forum, and you want to understand more about my thinking, I will give you a 90% discount code for Yes or No Philosophy. Email curi@curi.us if interested.

Incertitude is lack of knowledge, which is problematic (that's for precision)

The clarity/precision/certitude you need is dependent on the problem (or the context if you don’t bundle all of the context into the problem). What is your goal and what are the appropriate standards for achieving that goal? Good enough may be good enough, depending on what you’re doing.

Extra precision (or something else) is generally bad b/c it takes extra work for no benefit.

Frequently, things like lack of clarity are bad and ruin problem solving (cuz e.g. it’s ambiguous whether the solution means to take action X or action Y). But some limited lack of clarity, lower precision, hesitation, whatever, can be fine if it’s restricted to some bounded areas that don’t need to be better for solving this particular problem.

Also, about the precision virtue, Yudkowsky writes,

The tenth virtue is precision. One comes and says: The quantity is between 1 and 100. Another says: the quantity is between 40 and 50. If the quantity is 42 they are both correct, but the second prediction was more useful and exposed itself to a stricter test.

FI/PF has no issue with this. You can specify required precision (e.g. within plus or minus ten) in the problem. Or you can find you have multiple correct solutions, and then consider some more ambitious problems to help you differentiate between them. (See the decision chart stuff in Yes or No Philosophy.)
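
For example, here's a minimal sketch of treating required precision as part of the problem statement (the numbers and the function name are just illustrative):

def meets_required_precision(low: float, high: float, tolerance: float) -> bool:
    """An answer is acceptable if its interval is no wider than the problem allows."""
    return (high - low) <= 2 * tolerance

# Problem statement: get the quantity right to within plus or minus ten.
print(meets_required_precision(40, 50, 10))   # True  -- precise enough for this problem
print(meets_required_precision(1, 100, 10))   # False -- fails the stated precision requirement

With the precision requirement stated in the problem, the vaguer answer simply fails that problem; there's no need for a separate notion of one correct answer being "more virtuous" than another.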

PF posits time and again that "if you're not achieving your goals, well first that's because you're not faillibilist". Which is... quite too meta-level a claim (that's for the Void)

Please don't put non-quotes in quote marks. The word "goal" isn't even in the main PF essay.

I'll offer you a kinda similar but different claim: there's no need to be stuck and not make progress in life. That's unnecessary, tragic, and avoidable. Knowing about fallibilism, PF, and some other already-known things is adequate that you don't have to be stuck. That doesn't mean you will achieve any particular goal in any particular timeframe. But what you can do is have a good life: keep learning things, making progress, achieving some goals, acting on non-refuted ideas. And there's no need to suffer.

For more on these topics, see the FI discussion of coercion and the BoI view on unbounded progress:

http://beginningofinfinity.com

(David Deutsch, author of BoI, is a Popperian and is a founder of Taking Children Seriously (TCS), a parenting/education philosophy created by applying Critical Rationalism, which is where the ideas about coercion come from. I developed the specific method of creating a succession of meta problems to help formalize and clarify some TCS ideas.)

I don't see how PF violates the void virtue (aspects of which, btw, relate to Popper's comments on Who Should Rule? cuz part of what Yudkowsky is saying in that section is don't enshrine some criteria of rationality to rule. My perspective is, instead of enshrining a ruler or ruling idea, the most primary thing is error correction itself. Yudkowsky says something that sorta sounds like you need to care about the truth instead of your current conception of the truth – which happily does help keep it possible to correct errors in your current conception.)

(this last line is awkward. The rationalist view may consider that rationalists should win, but not winning isn't necessarily a failure of rationality)

That depends on what you mean by winning. I'm guessing I agree with it the way you mean it. I agree that all kinds of bad things can happen to you, and stuff can go wrong in your life, without it necessarily being your fault.

(this needs unpacking the definition of winning and I'm digging myself deeper I should stop)

Why should you stop?


Justin Mallone replied to Gyrodiot:

hey gyrodiot feel free to join Fallible Ideas list and post your thoughts on PF. also, could i have your permission to share your thoughts with Elliot? (I can delete what other ppl said). note that I imagine elliot would want to reply publicly so keep that in mind.

Gyrodiot replied:

@JUSTINCEO You can share my words (only mine) if you want, with this addition: I'm positive I didn't do justice to FI (particularly in the last part, which isn't clear at all). I'll be happy to read Elliot's comments on this and update in consequence, but I'm not sure I will take time to answer further.

I find we are motivated by the same "burning desire to know" (sounds very corny) and disagree strongly about method. I find, personally, the LW "school" more practically useful, strikes a good balance for me between rigor, ease of use, and ability to coordinate around.

Gyrodiot, I hope you'll reconsider and reply in blog comments, on FI, or on Less Wrong's forum. Also note: if Paths Forward is correct, then the LW way does not work well. Isn't that risk of error worth some serious attention? Plus isn't it fun to take some time to seriously understand a rival philosophy which you see some rational merit in, and see what you can learn from it (even if you end up disagreeing, you could still take away some parts)?


For those interested, here are more sources on the rationality virtues. I think they're interesting and mostly good:

https://wiki.lesswrong.com/wiki/Virtues_of_rationality

https://alexvermeer.com/the-twelve-virtues-of-rationality/

http://madmikesamerica.com/2011/05/the-twelve-virtues-of-rationality/

That last one says, of Evenness:

With the previous three in mind, we must all be cautious about our demands.

Maybe. Depends on how "cautious" would be clarified with more precision. This could be interpreted to mean something I agree with, but also there are a lot of ways to interpret it that I disagree with.

I also think Occam's Razor (mentioned in that last link, not explicitly in the Yudkowsky essay), while having some significant correctness to it, is overrated and is open to specifications of details that I disagree with.

And I disagree with the "burden of proof" idea (I cover this in Yes or No Philosophy) which Yudkowsky mentions in Evenness.

The biggest disagreement is empiricism. (See the criticism of that in BoI, and FoR ch1. You may have picked up on this disagreement already from the CR stuff.)


Elliot Temple | Permalink | Messages (2)

Empiricism and Instrumentalism

Gyrodiot commented defending instrumentalism.

I'm going to clarify what I mean about "instrumentalism" and "empiricism". I don't know if we actually disagree or there's a misunderstanding.

FI has somewhat of a mixed view here (reason and observation are both great), and objects to an extreme focus on one or the other. CR and Objectivism both say you don't have to, and should not, choose between reason and observation. We object to the strong "rationalists" who want to sit in an armchair and reason out what reality is like without doing any science, and we object to the strong "empiricists" who want to look at reality and do science without thinking.

Instrumentalism means that theories are only or primarily instruments for prediction, with little or no explanation or philosophical thought. Our view is that observation and prediction are great and valuable, but aren't alone in being so great and valuable. Some important ideas – such as the theory of epistemology itself – are primarily non-empirical.

There's a way some people try to make philosophy empirical. It's: try different approaches and see what the results are (and try to predict the results of acting according to different philosophies of science). But how do you judge the results? What's a good result? More accurate scientific predictions, you say. But which ones? How do you decide which predictions to value more than others? Or do you say every prediction is equal and go for sheer quantity? If quantity, why, and how do you address that with only empiricism and no philosophical arguments? And you want more accurate predictions according to which measures? (E.g. do you value lower error size variance or lower error size mean, or one of the infinitely many possible metrics that counts both of them in some way?)
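
To make the metric point concrete, here's a minimal sketch (my own toy numbers, not from any real study) of two predictors where the ranking flips depending on whether you score them by mean error or by error variance:

```python
# Two predictors' absolute errors on the same five observations (made-up numbers).
errors_a = [1, 1, 1, 1, 10]   # usually very close, occasionally way off
errors_b = [3, 3, 3, 3, 3]    # always moderately off

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(mean(errors_a), mean(errors_b))          # 2.8 vs 3.0 -> A wins by mean error
print(variance(errors_a), variance(errors_b))  # 12.96 vs 0.0 -> B wins by variance
```

Which predictor counts as "more accurate" is decided by the choice of metric, and empiricism alone doesn't make that choice for you.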

How do you know which observations to make, and which portion of the available facts to record about what you observe? How do you interpret those observations? Is the full answer just to predict which way of making observations will lead to the most correct predictions later on? But how do you predict that? How do you know which data will turn out useful to science? My answer is you need explanations of things like which problems science is currently working on, and why, and the nature of those problems – these things help guide you in deciding what observations are relevant.

Here are terminology quotes from BoI:

Instrumentalism   The misconception that science cannot describe reality, only predict outcomes of observations.

Note the "cannot" and "only".

Empiricism   The misconception that we ‘derive’ all our knowledge from sensory experience.

Note the "all" and the "derive". "Derive" refers to something like: take a set of observation data (and some models and formulas with no explanations, philosophy or conceptual thinking) and somehow derive all human knowledge, of all types (even poetry), from that. But all you can get that way are correlations and pattern-matching (to get causality instead of correlation you have to come up with explanations about causes and use types of criticism other than "that contradicts the data"). And there are infinitely many patterns fitting any data set, of which infinitely many both will and won't hold in the finite future, so how do you choose if not with philosophy? By assuming whichever patterns are computable by the shortest computer programs are the correct ones? If you do that, you're going to be unnecessarily wrong in many cases (because that way of prediction is often wrong, not just in cases where we had no clue, but also in cases when explanatory philosophical thinking could have done better). And anyway how do you use empiricism to decide to favor shorter computer programs? That's a philosophy claim, open to critical philosophy debate (rather than just being settled by science), of exactly the kind empiricism was claiming to do without.

Finally I'll comment on Yudkowsky on the virtue of empiricism:

The sixth virtue is empiricism. The roots of knowledge are in observation and its fruit is prediction.

I disagree about "roots" because, as Popper explained, theories are prior to observations. You need a concept of what you're looking for, by what methods, before you can fruitfully observe. Observation has to be selective (like it or not, there's too much data to record literally all of it) and goal-directed (instead of observing randomly). So goals and ideas about observation method precede observation as "roots" of knowledge.

Note: this sense of preceding does not grant debating priority. Observations may contradict preceding ideas and cause the preceding ideas to be rejected.

And note: observations aren't infallible either: observations can be questioned and criticized because, although reality itself never lies, our ideas that precede and govern observation (like about correct observational methods) can be mistaken.

Do not ask which beliefs to profess, but which experiences to anticipate.

Not all beliefs are about experience. E.g. if you could fully predict all the results of your actions, there would still be an unanswered moral question about which results you should prefer or value, which are morally better.

Always know which difference of experience you argue about.

I'd agree with often but not always. Which experience is the debate about instrumentalism and empiricism about?


See also my additional comments to Gyrodiot about this.


Elliot Temple | Permalink | Messages (0)

Accepting vs. Preferring Theories – Reply to David Deutsch

David Deutsch has some misconceptions about epistemology. I explained the issue on Twitter.

I've reproduced the important part below. Quotes are DD, regular text is me.

There's no such thing as 'acceptance' of a theory into the realm of science. Theories are conjectures and remain so. (Popper, Miller.)

We don't accept theories "into the realm of science", we tentatively accept them as fallible, conjectural, non-refuted solutions to problems (in contexts).

But there's no such thing as rejection either. Critical preference (Popper) refers to the state of a debate—often complex, inconsistent, and transient.

Some of them [theories] are preferred (for some purposes) because they seem to have survived criticism that their rivals haven't. That's not the same as having been accepted—even tentatively. I use quantum theory to understand the world, yet am sure it's false.

Tentatively accepting an idea (for a problem context) doesn't mean accepting it as true, so "sure it's false" doesn't contradict acceptance. Acceptance means deciding/evaluating it's non-refuted, rivals are refuted, and you will now act/believe/etc (pending reason to reconsider).

Acceptance deals with the decision point where you move past evaluating the theory, you reach a conclusion (for now, tentatively). you don't consider things forever, sometimes you make judgements and move on to thinking about other things. ofc it's fluid and we often revisit.

Acceptance is clearer word than preference for up-or-down, yes-or-no decisions. Preference often means believing X is better than Y, rather than judging X to have zero flaws (that you know of) & judging Y to be decisively flawed, no good at all (variant of Y could ofc still work)

Acceptance makes sense as a contrast against (tentative) rejection. Preference makes more sense if u think u have a bunch of ideas which u evaluate as having different degrees of goodness, & u prefer the one that currently has the highest score/support/justification/authority.


Update: DD responded, sorta:

You are blocked from following @DavidDeutschOxf and viewing @DavidDeutschOxf's Tweets.


Update: April 2019:

DD twitter blocked Alan, maybe for this blog post critical of LT:

https://conjecturesandrefutations.com/2019/03/16/lulie-tanett-vs-critical-rationalism/

DD twitter blocked Justin, maybe for this tweet critical of LT:

https://twitter.com/j_mallone/status/1107349577538158592


Elliot Temple | Permalink | Messages (8)

Philosophy Side Quests

People get stuck for years on the philosophy main quest while refusing to do side quests. That is not how you play RPGs. Side quests let you get extra levels, gear and practice which make the main quest easier to make progress on.

An example of a side quest would be speedrunning a Mario or Zelda game. That would involve some goal-directed activity and problem solving. It’d be practice for becoming skilled at something, optimizing details, and correcting mistakes one is making.


Elliot Temple | Permalink | Messages (10)

Do Primarily Easy Things – Increasing The Productivity Of Your Intellectual Labor Vs. Consumption

When you do productive labor (like at a job), you are able to use what you produce (or some negotiated amount of payment related to what you produce). How you use your income can be broadly viewed in two ways: investment and consumption.

Investment fundamentally means capital accumulation – putting your income towards the accumulation of capital goods which raise the productivity of labor and thereby create a progressing economy which offers exponentially (like compound interest) more and more production each year. The alternative is to consume your income – spend it on consumers' goods like food, video games, lipstick, cars, etc.

People do a mix of savings/investment and consumption. The proportion of the mix determines how good the future is. A high rate of capital accumulation quickly leads to a much richer world which is able to support far more consumption than before while still maintaining a high rate of investment. (The pie gets larger: instead of consuming 80% of the original pie, one could soon be consuming 20% of a much larger pie which is also growing much faster, and that 20% of the larger pie will be more than 80% of the smaller pie.)
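
A rough arithmetic sketch of the pie point (the growth rates are made up, purely to illustrate the compounding):

```python
# Consume a big share of a slow-growing pie vs. a small share of a fast-growing one.
def consumption_per_year(pie, consume_share, growth_per_year, years):
    out = []
    for _ in range(years):
        out.append(pie * consume_share)
        pie *= 1 + growth_per_year
    return out

slow = consumption_per_year(pie=100, consume_share=0.8, growth_per_year=0.01, years=15)
fast = consumption_per_year(pie=100, consume_share=0.2, growth_per_year=0.15, years=15)

# slow starts at 80/year and barely grows; fast starts at 20/year but, with these
# made-up rates, overtakes it after about a decade.
print(next(year for year, (s, f) in enumerate(zip(slow, fast)) if f > s))  # 11
```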

For more info on the economics of this, see the diagrams on pages 624 and 625 of George Reisman's book Capitalism: A Treatise on Economics and read some of the surrounding text.

The situation with your intellectual labor parallels the situation with laboring at a job for an income. Your intellectual labor is productive and this production can be directed in different ways – towards consumption, towards increasing the productivity of intellectual labor, or a mix. The more the mix favors increasing the productivity of your intellectual labor, the brighter your future.

Consumption in this case refers to things where you aren't investing in yourself and your education – where you aren't learning more and otherwise becoming more able to produce more in the future. For example, you might put a great deal of effort into writing a book which you hope will impress people, which you are just barely capable of writing. It takes a ton of intellectual labor while being only a little bit educational for you. Most of your intellectual labor is consumed and the result is the book. If you had foregone the book in the short term and invested more in increasing your productivity of intellectual labor, you could have written it at a later date while consuming a much smaller proportion of your intellectual output. This is because you'd be outputting more and even more so because your output would be more efficient – you'd be able to get more done per hour of intellectual labor (one of the biggest factors here would be making fewer mistakes, so you'd spend less labor redoing things). A good question to ask is whether you produced an intellectual work in order to practice or if instead you put a lot of work into polishing it so other people would like it more (that polishing is an inefficient way to learn). It's sad when people who don't know much put tons of effort into polishing what little they do know instead of learning more – and this is my description of pretty much everyone. (If you think you already know so much that you're largely done with further educating yourself, or at least ready to make education secondary, please contact me. I expect to be able to point out that you're mistaken, especially if you're under 50 years old.)

Consumption (rather than investment), in the realm of intellectual labor, primarily relates to going out of your way to try to accomplish things, to do things – like persuading people or creating finished works. It is possible to learn by doing, but it's also possible not to learn much by doing. If you're doing for the sake of learning, great. If you're doing for the sake of an accomplishment, that is expensive, especially if you're young, and you may be dramatically underestimating the expense while also fooling yourself about how educational it is (because you do learn something, but much less than you could have learned if you instead studied e.g. George Reisman's Program of Self-Education in the Economic Theory and Political Philosophy of Capitalism or my Fallible Ideas recommended reading list.)

Broadly, I see many people try to produce important intellectual works when they don't know much. They spend a lot of intellectual labor and produce material which is bad. They would have been far better served by learning more now, and producing more output (like essays) later on when they are able to make valuable intellectual products with a considerably lesser effort. This explains the theme I've stated elsewhere and put in the title of this piece: you should intellectually do (consume) when it's relatively easy and cheap, but be very wary of expensive intellectual projects which take tons of resources away from making intellectual progress.

Some people doubt the possibility of an accumulation of intellectual capital or its equivalent. They don't think they can increase the productivity of their intellectual labor substantially. These same people, by and large, haven't learned speed reading (or speed watching or speed listening). Nor have they fully learned the ideas of great intellectuals like Ayn Rand and Ludwig von Mises. Equipped with these great ideas, they'd avoid going down intellectual dead ends, and otherwise create high quality outputs from their intellectual labor. Even if the process of increasing the productivity of one's intellectual labor runs into limits which result in diminishing returns at some point, that is no excuse for stopping such educational self-investment long before reaching any such limits.

In the long run, the ongoing increase in the productivity of one's intellectual labor requires the ongoing creation of new and improved intellectual tools and methods, and supporting technologies. It requires ongoing philosophical progress. I believe philosophical progress can be unbounded if we work at it (without diminishing returns), but regardless of the far future there is massive scope for productive educational self-investment today. Unless you've exhausted what's already known about philosophy – that is, you are at the forefront of the field – and also spent some time unsuccessfully attempting to pioneer new philosophy ... then you have no excuse to stop investing in increasing the productivity of your intellectual labor (primarily with better and better methods of thinking – philosophy – but also with other things like learning to read faster). Further, until you know what is already known about philosophy, you are in no position to judge the far future of philosophical progress and its potential or lack of potential.

Note: the biggest determinants of the productivity of your intellectual labor are your rate of errors and your ability to find and correct errors. Doing activities where your error rate is below your error correction capacity is much more efficient and successful. You can increase your error correction effectiveness by devoting an unusually large amount of resources to it, but there are diminishing returns on that, so it's typically an inefficient (resource expensive) shortcut to doing a slightly more difficult project slightly sooner.


This article is itself an example of what I can write in a few minutes without editing or difficulty. It's the fruits of my previous investment in better writing ability in order to increase the productivity of my intellectual labor. I aim primarily to get better at writing this way (cheaply and efficiently), rather than wishing to put massive polishing effort into a few works.


Update (2018-05-18):

What I say in this post is, to some extent, well known common sense. People get an education first and do stuff like a career second. Maybe they aren't life-long learners, but they have the general idea right (learn how to think/do/problem-solve/etc first, do stuff second after you're able to do it well and efficiently).

What goes wrong then? Parenting and schooling offer a bad, ineffective education. This discourages further education (the painfulness and uselessness of their education is the biggest thing preventing life-long learners). And it routinely puts people in a bad situation: trying to do things which they have been educated to be able to do well, but in fact cannot do well. The solution is not to give up on education, but to figure out how to pursue education effectively. A reasonable place to start would be the books of humanity's best thinkers since the start of western civilization. Some people have been intellectually successful and effective (as you can see from the existence of iPhones); you could look into what they did, how they thought, etc.

FI involves ideas that are actually good and effective, as against rivals offering similar overall stuff (rational ideas) but which are incorrect. FI faces the following major challenges: 1) people are so badly educated they screw up when trying to learn FI ideas 2) people are so badly educated they don't know how to evaluate if FI is working, how it compares to rivals, the state of debate between FI ideas and alternative ideas, etc.


Elliot Temple | Permalink | Messages (13)

Changing Minds About Inequality

people have lots of bad ideas they don’t understand much about, like that “inequality” is a major social problem.

what would it take to change their mind? not books with arguments refuting the books they believe. they didn’t get their ideas from structured arguments in serious books. they don’t have a clear idea in their mind for a refutation to point out the errors in. non-interactive refutation (like a book, essay, article) is very, very hard when you have to first tell people what they think (in a one-size-fits-all way, despite major variance between people) before trying to refute it. Books and essays work better to address clearly defined views, but not so well when you’re trying to tell the other side what they think b/c they don’t even know (btw that problem comes up all the time with induction).

to get someone to change their mind about “inequality”, what’d really help is if they thoughtfully considered things like:

what is “inequality”? why is it bad? are we talking about all cases of inequality being equally bad, or does the degree of badness vary? are we talking about all cases of inequality being bad at all, or are some neutral or even good? if the case against inequality isn’t a single uniform thing, applying equally to all cases, then what is the principle determining which cases are worse and why? what’s the reasoning for some inequality being evaluated differently than other inequality?

whatever one’s answers, what happens if we consider tons of examples? are the evaluations of all the examples satisfactory, do they all make sense and fit your intuitions, and reach the conclusions you intended? (cuz usually when people try to define any kind of general formula that says what they think, it gives answers they do not think in lots of example cases. this shows the formula is ad hoc crap, and doesn’t match their actual reasoning, and therefore they don’t even know what their reasoning is. so they are arguing for reasoning they don’t understand or misunderstand, which must be due to bias and irrationality, since you can’t reach a conscious, rational, positive evaluation of your ideas when you don’t even know what they are. you can sometimes reach a positive meta-evaluation where you acknowledge your confusion about the specifics of the ideas, but that’s different.).

anyway, the point is if people would actually think through the issue of inequality it’d change some of their minds. that’d be pretty effective at improving the situation. what stops this? the minor issue is: there are a lack of discussion partners to ask them good questions, guide them, push them for higher standards of clarity, etc. the major issue is: they don’t want to.

why don’t people want to think about “inequality”? broadly, they don’t want to think. also, more specifically, they accepted anti-inequality ideas for the purpose of fitting in. thinking about it may result in them changing their mind in some ways, big or small, which risks them fitting in less well. thinking threatens their social conformity which is what their “beliefs” about “inequality” are for in the first place.

this relates to overreaching. people’s views on inequality are too advanced for their ability to think through viewpoints. the views have a high error rate relative to their holder’s ability to correct error.


Elliot Temple | Permalink | Message (1)

Passivity as a Strategic Excuse

How much of the "passivity" problems people have – about learning FI and all throughout life elsewhere as well – are that they don't want to do something and don't want to admit that they don't want to? How much is passivity a disguise used to hide disliking things they won't openly challenge?

Using passivity instead of openly challenging stuff is beaten into children. They learn not to say "no" or "I don't want to" to their parents. They learn they are punished less if they "forget" than if they refuse on purpose. They are left alone more if they are passive than if they talk about their reasoning for not doing what the parent wants them to do.

Typical excuses for passivity are being lazy or forgetful. Those are traits which parents and teachers commonly attribute to children who don't do what the parent or teacher wants. Blaming things on a supposed character flaw obscures the intellectual or moral disagreement. (Also, character flaws are a misconception – people don't have an innate character; they have ideas!)

The most standard adult excuse for passivity is being busy. "I'm not passive, I'm actively doing something else!" This doesn't work as well for children because their parents know their whole schedule.

Claiming to be busy is commonly combined with the excuse of privacy to shield what one is busy with from criticism. Privacy is a powerful shield because it's a legitimate, valuable concept – but it can also be used as an anti-criticism tool. It's hard to figure out when privacy is being abused, or expose the abuses, because the person choosing privacy hides the information that would allow evaluating the matter.

Note: Despite people's efforts to prevent judgment, there are often many little hints of irrationality. These are enough for me to notice and judge, but not enough to explain to the person – they don't want to understand, so they won't, plus it takes lots of skill to evaluate the small amount of evidence (because they hid the rest of the evidence). Rather than admit I'm right (they have all the evidence themselves, so they could easily see it if they wanted to), they commonly claim I'm being unreasonable since I didn't have enough information to reach my conclusions (because a person with typical skill at analysis wouldn't be able to do it, not because they actually refute my line of reasoning).

Generic Example

Joe (an adult) doesn't like something about Fallible Ideas knowledge and activities (FI), and doesn't want to say what it is. And/or he likes some other things in life better than FI and wants to hide what they are. Instead of saying why he doesn't pursue FI more (what's bad about it, what else is better), Joe uses the passivity strategy. Joe claims to want to do FI more, get more involved, think, learn, etc, and then just doesn't.

Joe doesn't claim to be lazy or forgetful – some of the standard excuses for passivity which he knows would get criticized. Instead, Joe doesn't offer any explanation for the passivity strategy. Joe says he doesn't know what's going on.

Or, alternatively, Joe says he's busy and that the details are private, and he'd like to discuss it, he just doesn't know how to solve the privacy problem. To especially block progress, Joe might say he doesn't mind having less privacy for himself, but there are other people involved and he couldn't possibly say anything that would reduce their privacy. Never mind that they share far more information with their neighbors, co-workers, second cousins, and Facebook...


Elliot Temple | Permalink | Messages (5)

Backbone, Pushback, Standing Up For Your Ideas

You need to be sturdy to do well in FI philosophy discussions or anywhere. Don’t be pushed around or controlled by people who weren’t even trying to push you around, because you’re so weak and fragile almost anything can boss you around without even trying or intending to.

Broadly, people give advice, ideas, criticism, etc.

Some advice can help you right now. Some of it, you don’t understand, you don’t get it, it doesn’t work for you right now. You could ask a question or follow up and then maybe get more advice so it does work, but you still might not get it. It’s good to follow up sometimes, but that’s another topic.

The point is: you must use your own judgment about which ideas work for you. What do you understand? What makes sense to you?

Filter all the ideas/advice/criticism in this way. Sort it into two categories:

Category 1 (self-ownership and integration of the idea): Do you get it, yourself, in your own understanding, well enough to use it? Are you ready to use it as your own idea, that is yours, that you feel ownership of, and you take full responsibility for the outcome? Would you still use it even if the guy who said it changed his mind (but didn’t tell you why), because it’s now the best idea in your own mind? Would you still use it if all the people advocating it got hit by cars and died, so you couldn't get additional advice?

Category 2 (foreign, non-integrated, confused idea): You don’t get it. Maybe you partly get it, but not fully. Not enough to live it without ever reading FI again, with no followup help. You don’t understand it enough to adapt it when problems come up or the situation changes. You have ideas in your mind which conflict with it. It isn’t natural/intuitive/automated for you. It feels like someone else’s idea, not yours. Maybe you could try doing their advice, but it wouldn’t be your own action.

NEVER EVER EVER ACCEPT OR ACT ON CATEGORY 2 IDEAS.

If you only use category 1, you’re easy to help and safe to talk to. People can give you advice, and there's no danger – if it helps, great, and if it doesn't help, nothing happens. But if you use category 2, you are sabotaging progress and you're hard to deal with.

Note: the standard for understanding ideas needs to be your own standard, not my standard. If you're somewhat confused about all your ideas (by my standards), that doesn't mean everything is category 2 for you. If you learn an idea as well as the rest of your ideas, and you can own it as much as the rest, that's category 1.

Note: Trying out an idea, in a limited way, which you do know how to do (you understand enough to do the trial you have in mind) is a different idea than the original idea. The trial could be category 1 if you know how to do it, know what you're trying to learn, know how to evaluate the results. Be careful though. It's easy to "try" an idea while doing it totally wrong!


But there's a problem here I haven't solved. Most people can't use the two categories because the idea of the two categories itself is in category 2 for them, so it'd be self-contradictory to use it.

To do this categorizing, they'd need to have developed the skill of figuring out what they understand or not. They'd need to be able to tell the difference effectively. But most people don't know how.

They could try rejecting stuff which is category 2 and unconventional, because that's an especially risky pairing. Except they can't effectively judge what's unconventional, and also they don't understand why that pairing matters well enough (so the idea of checking for category-2-and-unconventional is itself a category 2 idea for them; it's also an unconventional suggestion...).


Note: these ideas have been discussed at the FI discussion group. Here’s a good post by Alisa and you can find the rest of the discussion at that link.


Elliot Temple | Permalink | Messages (3)

Discussion Structure

Dagny wrote (edited slightly with permission):

I think I made a mistake in the discussion by talking about more than one thing at once. The problem with saying multiple things is he kept picking some to ignore, even when I asked him repeatedly to address them. See this comment and several comments near it, prior, where I keep asking him to address the same issue. but he wouldn't without the ultimatum that i stop replying. maybe he still won't.

if i never said more than one thing at once, it wouldn't get out of hand like this in the first place. i think.

I replied: I think the structure of conversations is a bigger contributor to the outcome than the content quality is. Maybe a lot bigger.

I followed up with many thoughts about discussion structure, spread over several posts. Here they are:


In other words, improving the conversation structure would have helped with the outcome more than improving the quality of the points you made, explanations you gave, questions you asked, etc. Improving your writing quality or having better arguments doesn't matter all that much compared to structural issues like what your goals are, what his goals are, whether you mutually try to engage in cooperative problem solving as issues come up, who follows whose lead or is there a struggle for control, what methodological rules determine which things are ignorable and which are replied to, and what are the rules for introducing new topics, dropping topics, modifying topics?


it's really hard to control discussion structure. people don't wanna talk about it and don't want you to be in control. they don't wanna just answer your questions, follow your lead, let you control discussion flow. they fight over that. they connect control over the discussion structure with being the authority – like teachers control discussions and students don't.

people often get really hostile, really fast, when it comes to structure stuff. they say you're dodging the issue. and they never have a thought-out discussion methodology to talk about, they have nothing to say. when it comes to the primary topic, they at least have fake or dumb stuff to say, they have some sorta plan or strategy or ideas (or they wouldn't be talking about it). but with stuff about how to discuss, they can't discuss it, and don't want to – it leads so much more quickly and effectively to outing them as intellectual frauds. (doesn't matter if that's your intent. they are outed because you're discussing rationality more directly and they have nothing to say and won't do any of the good ideas and don't know how to do the good ideas and can't oppose them either).

sometimes people are OK with discussion methodology stuff like Paths Forward when it's just sounds-good vague general stuff, but the moment you apply it to them they feel controlled. they feel like you are telling them what to do. they feel pressured, like they have to discuss the rational way. so they rebel. even just direct questions are too controlling and higher social status, and people rebel.


some types of discussion structure. these aren’t about controlling the discussion, they are just different ways it can be organized. some are compatible with each other and some aren’t (you can have multiple from the list, but some exclude each other):

  • asking and answering direct questions
  • addressing unstated, generic questions like “thoughts on what i just said?”
  • one person questioning the other who answers vs. both people asking and answering questions vs. some ppl ignoring questions
  • arguing points back and forth
  • saying further thoughts related to what last person said (relevance levels vary, can be like really talking past each other and staying positive, or can be actual discussion)
  • pursuing a goal stated by one person
  • pursuing a goal stated by two people and mutually agreed on
  • pursuing different and unstated goals
  • 3+ person discussion
  • using quotes of the other discussion participants or not
  • using cites/links to stuff outside the discussion or not
  • long messages, short messages, or major variance in message length
  • talking about one thing at a time
  • trying to resolve issues before moving on vs. just rushing ahead into new territory while there are lots of outstanding unresolved points
  • step by step vs. chaotic
  • people keeping track of the outline or just running down rabbit holes

i’ve been noticing structure problems in discussions more in the last maybe 5 years. Paths Forward and Overreaching address them. lots of my discussions are very short b/c we get an impasse immediately b/c i try to structure the discussion and they resist.

like i ask them how they will be corrected if they’re wrong (what structural mechanisms of discussion do they use to allow error correction) and that ends the discussion.

or i ask like “if i persuade you of X, will you appreciate it and thank me?” before i argue X. i try to establish the meaning X will have in advance. why bother winning point X if they will just deny it means anything once you get there? a better way to structure discussion is to establish some stakes around X in advance, before it’s determined who is right about X.

i ask things like if they want to discuss to a conclusion, or what their goal is, and they won’t answer and it ends things fast

i ask why they’re here. or i ask if they think they know a lot or if they are trying to learn.

ppl hate all those questions so much. it really triggers the fuck out of them

they just wanna argue the topic – abortion or induction or whatever

asking if they are willing to answer questions or go step by step also pisses ppl off

asking if they will use quotes or bottom post. asking if they will switch forums. ppl very rarely venue switch. it’s really rare they will move from twitter to email, or from email to blog comments, or from blog comments to FI, etc

even asking if they want to lead the discussion and have a plan doesn’t work. it’s not just about me controlling the discussion. if i offer them control – with the caveat that they answer some basic questions about how they will use it and present some kinda halfway reasonable plan – they hate that too. cuz they don’t know how to manage the discussion and don’t want the responsibility or to be questioned about their skill or knowledge of how to do it.

structure/rules/organization for discussion suppresses ppl’s bullshit. it gives them less leeway to evade or rationalize. it makes discussion outcomes clearer. that’s why it’s so important, and so resisted.


the structure or organization of a discussion includes the rules of the game, like whether people should reply more tomorrow or whether it's just a single day affair. the rules for what people consider reasonable ways of ending a discussion are a big deal. is "i went to sleep and then chose not to think about it the next day, or the next, or the next..." a reasonable ending? should people actually make an effort to avoid that ending, e.g. by using software reminders?

should people take notes on the discussion so they remember earlier parts better? should they quote from old parts? should they review/reread old parts?

a common view of discussion is: we debate issue X. i'm on side Y, you're on side Z. and ppl only say stuff for their side. they only try to think about things in a one-sided, biased way. they fudge and round everything in their favor. e.g. if the number is 15, they will say "like 10ish" or "barely over a dozen" if a smaller number helps their side. and the other guy will call it "around 20" or "nearly 18".

a big part of structure is: do sub-plots resolve? say there's 3 things. and you are trying to do one at a time, so you pick one of the 3 and talk about that. can you expect to finish it and get back to the other 2 things, or not? is the discussion branching to new topics faster than topics are being resolved? are topics being resolved at a rate that's significantly different from zero, or is approximately nothing being resolved?

another part of structure is how references/cites/links are used. are ideas repeated or are pointers to ideas used? and do people try to make stuff that is suitable for reuse later (good enough quality, general purpose enough) or not? (a term similar to suitable for reuse is "canonical").


I already knew that structural knowledge is the majority of knowledge. Like a large software project typically has much more knowledge in the organization than the “payload” (aka denotation aka direct purpose). “refactoring" refers to changing only the structure while keeping the function/content/payload/purpose/denotation the same. refactoring is common and widely known to be important. it’s an easy way for people familiar with the field to see that significant effort goes into software knowledge structure cuz that is effort that’s pretty much only going toward structure. software design ideas like DRY and YAGNI are more about structure than content. how changeable software is is a matter of structure ... and most big software projects have a lot more effort put into changes (like bug fixes, maintenance and new features) than into initial development. so initial development should focus more effort on a good structure (to make changes easier) than on the direct content.

it does vary by software type. games are a big exception. most games have most of their sales near release. most games aren’t updated or changed much after release. games still need pretty good structure though or it’d be too hard to fix enough of the bugs during initial development to get it shippable. and they never plan the whole game from the start, they make lots of changes during development (like they try playing it and think it’s not fun enough, or find a particular part works badly, and change stuff to make it better), so structure matters. wherever you have change (including error correction), structure is a big deal. (and there’s plenty of error correction needed in all types of software dev that make substantial stuff. you can get away with very little when you write one line of low-risk code directly into a test-environment console and aren’t even going to reuse it.)
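
here's a tiny refactoring sketch (my own toy example) of the structure-vs-payload distinction: the output is identical before and after, all the change is in the organization.

```python
# before: the payload (what gets printed) works, but the knowledge of how to
# format a price is copy-pasted in two places.
def print_receipt_v1(items):
    for name, cents in items:
        print(f"{name}: ${cents // 100}.{cents % 100:02d}")
    total = sum(cents for _, cents in items)
    print(f"total: ${total // 100}.{total % 100:02d}")

# after refactoring: same output for every input, but the formatting knowledge
# lives in one place (DRY), so later changes (currency, rounding) are one edit
# instead of a hunt for every copy.
def format_price(cents):
    return f"${cents // 100}.{cents % 100:02d}"

def print_receipt_v2(items):
    for name, cents in items:
        print(f"{name}: {format_price(cents)}")
    print(f"total: {format_price(sum(cents for _, cents in items))}")
```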

it makes sense that structure related knowledge is the majority of the issue for discussion. i figured that was true in general but hadn’t applied it enough. knowledge structure is hard to talk about b/c i don’t really have people who are competent to discuss it with me. it’s less developed and talked through than some other stuff like Paths Forward or Overreaching. and it’s less clear in my mind than YESNO.

so to make this clearer:

structure is what determines changeability. various types of change are high value in general, including especially error correction. wherever you see change, especially error correction, it will fail without structural knowledge. if it’s working ok, there’s lots of structural knowledge.

it’s like how the capacity to make progress – like being good at learning – is more important than how much you know now or how good something is now. like how a government that can correct mistakes without violence is better than one with fewer mistakes today. (in other words, the structure mistake of needing violence to correct some categories of mistake is a worse mistake than the non-structure mistake of taxing cigarettes and gas. the gas tax doesn’t make it harder to make changes and correct errors, so it’s less bad of a mistake in the long run.)


Intro to knowledge structure (2010):

http://fallibleideas.com/knowledge-structure

Original posts after DD told me about it (2003):

http://curi.us/988-structural-epistemology-introduction-part-1
http://curi.us/991-structural-epistemology-introduction-part-2

The core idea of knowledge structure is that you can do the same task/function/content in different ways. You may think it doesn’t matter as long as the result is (approximately) the same, but the structure matters hugely if you try to change it so it can do something else.

“It” can be software, an object like a hammer, ideas, or processes (like the processes factory workers use). Different software designs are easier to add features to than others. You can imagine some hammer designs being easier to convert into a shovel than others. Some ideas are easier to change than others. Or imagine two essays arguing equally effectively for the same claim, and your task is to edit them to argue for a different conclusion – the ease of that depends on the internal design of the essays. And for processes, for example the more the factory workers have each memorized a single task, and don’t understand anything, the more difficult a lot of changes will be (but not all – you could convert the factory to build something else if you came up with a way to build it with simple, memorizable steps). Also note the ease of change often depends on what you want to change to. Each design makes some sets of potential changes harder or easier.
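
Here's a small code sketch of that (my own illustration): two designs with the same task/function/content today, and very different ease of change tomorrow.

```python
# Design A: the result is one memorized lump, like the factory worker who has
# memorized a single task.
def greet_a():
    return "Hello, Alice! You have 3 new messages."

# Design B: the same result, assembled from parts that can vary.
def greet_b(name="Alice", count=3):
    return f"Hello, {name}! You have {count} new messages."

assert greet_a() == greet_b()  # identical content right now

# Change the task -- greet Bob, report 0 messages, translate it -- and Design B
# bends easily while Design A has to be rewritten.
print(greet_b("Bob", 0))
```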

Back to the ongoing discussion (which FYI is exploratory rather than having a clear conclusion):

“structure” is the word DD used. Is it the right word to use all the time?

Candidate words:

  • structure (DD’s word)
  • design
  • organization
  • internal design
  • internal organization
  • form
  • layout
  • style
  • plan
  • outline

I think “design” and “organization” are good words. “Form” can be good contextually.

What about words for the non-structure part?

  • denotation (DD’s word)
  • content
  • function
  • payload
  • direct purpose
  • level one purpose
  • task
  • main point
  • subject matter

The lists help clarify the meaning – all the words together are clearer than any particular one.


What does a good design offer besides being easier to change?

  • Flexibility: solves a wider range of relevant problems (without needing to change it, or with a smaller/easier change). E.g. a car that can drive in the snow or on dry roads, rather than just one or the other.

  • Easier to understand. Like computer code that’s easier to read due to being organized well.

  • Made up of somewhat independent parts (components) which you can separate and use individually (or in smaller groups than the original total thing). The parts being smaller and more independent has advantages but also often involves some downsides (like you need more connecting “glue” parts and the attachment of components is less solid).

  • Easier to reuse for another purpose. (This is related to changeability and to components. Some components can be reused without reusing others.)

  • Internal reuse (references, pointers, links) rather than new copies. (This is usually but not always better. In general, it means the knowledge is present that two instances are actually the same thing instead of separate. It means there’s knowledge of internal groupings.)

Good structures are set up to do work (in a certain somewhat generic way), and can be told what type of work, what details. Bad structures fail to differentiate what is parochial details and what is general purpose.

The more you treat something as a black box (never take it apart, never worry about the details of how it works, never repair it, just use it for its intended purpose), the less structure matters.

In general, the line between function and design is approximate. What about the time it takes to work, or the energy use, or the amount of waste heat? What are those? You can do the same task (same function) in different ways, which is the core idea of different structures, and get different results for time, energy and heat use. They could be considered to be related to design efficiency. But they could also be seen as part of the task: having to wait too long, or use too much energy, could defeat the purpose of the task. There are functionality requirements in these areas or else it would be considered not to work. People don’t want a car that overheats – that would fail to address the primary problem of getting them from place to place. It affects whether they arrive at their destination at all, not just how the car is organized.

(This reminds me of computer security. Sometimes you can beat security mechanisms by looking at timing. Like imagine a password checking function that checks each letter of the password one by one and stops and rejects the password if a letter is wrong. That will run more slowly based on getting more letters correct at the start. So you can guess the password one letter at a time and find out when you have it right, rather than needing to guess the whole thing at once. This makes it much easier to figure out the password. Measuring power usage or waste heat could work too if you measured precisely enough or the difference in what the computer does varied a large enough amount internally. And note it’s actually really hard to make the computer take exactly the same amount of time, and use exactly the same amount of power, in different cases that have the same output like “bad password”.)
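
A minimal sketch of that leaky comparison (toy code to show the idea; real systems should rely on a vetted constant-time comparison such as Python's hmac.compare_digest rather than anything hand-rolled):

```python
import hmac

def check_password_leaky(guess, secret):
    # Stops at the first wrong character, so a guess with a longer correct
    # prefix takes measurably longer -- an attacker can use the timing to
    # discover the password one character at a time.
    if len(guess) != len(secret):
        return False
    for g, s in zip(guess, secret):
        if g != s:
            return False
    return True

def check_password_safer(guess, secret):
    # Does the same comparison work regardless of where the first mismatch is.
    return hmac.compare_digest(guess.encode(), secret.encode())
```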

Form and function are related. Sometimes it’s useful to mentally separate them but sometimes it’s not helpful. When you refactor computer code, that’s about as close to purely changing the form as it gets. The point of refactoring is to reorganize things while making sure it still does the same thing as before. But refactoring sometimes makes code run faster, and sometimes that’s a big deal to functionality – e.g. it could increase the frame rate of a game from non-playable to playable.

Some designs actively resist change. E.g. imagine something with an internal robot that goes around repairing any damage (and it’s programmed to see any deviation or difference as damage – it tries to reverse all change). The human body is kind of like this. It has white blood cells and many other internal repair/defense mechanisms that (imperfectly) prevent various kinds of changes and repair various damage. And a metal hammer resists being changed into a screwdriver; you’d need some powerful tools to reshape it.


The core idea of knowledge structure is that you can do the same task/function/content in different ways. You may think it doesn’t matter as long as the result is (approximately) the same, but the structure matters hugely if you try to change it so it can do something else.

Sometimes programmers make a complicated design in anticipation of possible future changes that never happen (instead it's either no changes, other changes, or just replaced entirely without any reuse).

It's hard to predict in advance which changes will be useful to make. And designs aren't just "better at any and all changes" vs. "worse at any and all changes". Different designs make different categories of changes harder or easier.

So how do you know which structure is good? Rules of thumb from past work, by many people, doing similar kinds of things? Is the software problem – which is well known – just some bad rules of thumb (that have already been identified as bad by the better programmers)?

  • Made up of somewhat independent parts (components) which you can separate and use individually (or in smaller groups than the original total thing). The parts being smaller and more independent has advantages but also often involves some downsides (like you need more connecting “glue” parts and the attachment of components is less solid).

this is related to the desire for FI emails to be self-contained (have some independence/autonomy). this isn't threatened by links/cites cuz those are a loose coupling, a loose way to connect to something else.

  • Easier to reuse for another purpose. (This is related to changeability and to components. Some components can be reused without reusing others.)

but, as above, there are different ways to reuse something and you don't just optimize all of them at once. you need some way to judge what types of reuse are valuable, which partly seems to depend on having partial foresight about the future.

The more you treat something as a black box (never take it apart, never worry about the details of how it works, never repair it, just use it for its intended purpose), the less structure matters.

sometimes the customer treats something as a black box, but the design still matters a lot for:

  • warranty repairs (made by the company, not by the customer)
  • creating the next-generation product
  • fixing problems during development of the thing
  • the ability to pivot into other product lines (additionally, or instead of the current one) and reuse some stuff (be it manufacturing processes, components from this product, whatever)
  • if it's made out of components which can be produced independently and are useful in many products, then you have the option to buy these "commodity parts" instead of making your own, or you can sell your surplus parts (e.g. if your factory manager finds a way to be more efficient at making a particular part, then you can either just not produce your new max capacity, or you could sell them if they are useful components to others. or you could use the extra parts in a new product. the point was you can end up with extra capacity to make a part even if you didn't initially design your factory that way.)

In general, the line between function and design is approximate.

like the line between object-discussion and meta-discussion is approximate.

as discussion structure is crucial (whether you talk about it or not), most stuff has more meta-knowledge than object-knowledge. here's an example:

you want to run a small script on your web server. do you just write it and upload? or do you hook it into existing reusable infrastructure to get automatic error emails, process monitoring that'll restart the script if it's not running, automatic deploys of updates, etc?

you hook it into the infrastructure. and that infrastructure has more knowledge in it than the script.
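
a toy sketch of that (the names here are made up for illustration; real infrastructure would be a process supervisor, deploy tooling, alerting services, etc., not a hand-rolled loop):

```python
import time
import traceback

def report_error(text):
    # stand-in for real infrastructure: email alerts, a logging service, a pager
    print("ERROR REPORT:", text)

def run_forever(job, delay_after_crash=60):
    # generic supervisor: restart the job if it dies, report what went wrong
    while True:
        try:
            job()
        except Exception:
            report_error(traceback.format_exc())
            time.sleep(delay_after_crash)

def my_small_script():
    # the topic-specific "payload" -- tiny compared to the reusable machinery around it
    print("doing the one small task")
    time.sleep(3600)

if __name__ == "__main__":
    run_forever(my_small_script)
```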

when proceeding wisely, it's rare to create a ton of topic-specific knowledge without the project also using general purpose infrastructure stuff.

Form and function are related.

A lot of the difference between a smartphone and a computer is the shape/size/weight. That makes them fit different use cases. An iPhone and iPad are even more similar, besides size, and it affects what they're used for significantly. And you couldn't just put them in an arbitrary form factor and get the same practical functionality from them.

Discussion and meta-discussion are related too. No one ever entirely skips/omits meta discussion issues. People consider things like: what statements would the other guy consent to hear and what would be unwanted? People have an understanding of that and then don't send porn pics in the middle of a discussion about astronomy. You might complain "but that would be off-topic". But understanding what the topic is, and what would be on-topic or off-topic is knowledge about the discussion, rather than directly being part of the topical discussion. "porn is off topic" is not a statement about astronomy – it is itself meta discussion which is arguably off topic. you need some knowledge about the discussion in order to deal with the discussion reasonably well.

Some designs actively resist change.

memes resist change too. rational and static memes both resist change, but in different ways. one resists change without reasons/arguments, the other resists almost all change.


Discussion and meta-discussion are related too.

Example:

House of Sunny podcast. This episode was recommended for Trump and Putin info at http://curi.us/2041-discussion#c10336

https://youtu.be/Id2ZH_DstyY

  • starts with music
  • then radio announcer voice
  • voice says various introductory stuff. it’s not just “This is the house of Sunny podcast.” It says some fluff with social connotations about the show style, and gives a quick bio of the host (“comedian and YouTuber”)
  • frames the purpose of the upcoming discussion: “Wanna know what Sunny and her friends are thinking about this week?”
  • tries to establish Sunny as a high status person who is worthy of an introduction that repeats her name like 4 times (as if her name matters)
  • applause track
  • Sunny introduces herself, repeating lots of what the intro just said
  • Sunny uses a socially popular speaking voice with connotations of: young, pretty, white, adult, female. Hearing how she speaks, for a few seconds, is part of the introduction. It’s information, and that information is not about Trump and Putin.
  • actual content starts 37 seconds in

This is all meta so far. It’s not the information the show is about (Trump and Putin politics discussion). It’s about the show. It’s telling you what kind of show it’s going to be, and who the host is. That’s just like discussing what kind of discussion you will have and the background of a participant.

The intro also links the show to a reusable show structure that most listeners are familiar with. People now know what type of show it is, and what to expect. I didn’t listen to much of the episode, but for the next few minutes the show does live up to genre expectations.

I consider the intro long, heavy-handed and blatant. But most people are slower and blinder, so maybe it’s OK. I dislike most show intros. Offhand I only remember liking one on YouTube – and he stopped because more fans disliked it than liked it. It’s 15 seconds and I didn’t think it had good info.

KINGmykl intro: https://www.youtube.com/watch?v=TrN5Spr1Q4A

One thing I notice, compared to the Sunny intro, is it doesn’t pretend to have good info. It doesn’t introduce mykl, the show, or the video. (He introduces his videos non-generically after the intro. He routinely asks how your day is going, says his is going great, and quickly outlines the main things that will be in the video cuz there’s frequently multiple separate topics in one video. Telling you the outline of the upcoming discussion is an example of useful meta discussion.)

The Sunny intro is so utterly generic I found it boring the first time I heard it. I’ve heard approximately the same thing before from other shows! I saw the mykl intro dozens of times, and sure I skipped it sometimes but not every time, and I remember it positively. It’s more unique, and I don’t understand it as well (it has some meaning, but the meaning is less clear than in the Sunny intro.) I also found the Sunny intro to scream “me too, I’m trying hard to fit in and do this how you’re supposed to” and the mykl intro doesn’t have that vibe to me. (I could pretty easily be wrong though, maybe they both have a fake, tryhard social climber vibe in different ways. Maybe I’m just not familiar enough with other videos similar to mykl’s and that’s why I don’t notice. I’ve watched lots of gaming video content, but a lot of that was on Twitch so it didn’t have a YouTube intro. I have seen plenty of super bland gamer intros. mykl used to script his videos and he recently did a review of an old video. He pointed out ways he was trying to present himself as knowing what he’s talking about, and found it cringey now. He mentioned he stopped scripting videos a while ago.)

Example 2: Chef Heidi Teaches Hoonmaru to Cook Korean Short Rib

https://www.youtube.com/watch?v=EwosbeZSSvY

  • music
  • philly fusion overwatch league team intro (FYI hoonmaru is a fusion twitch streamer, not a pro player)
  • slow mo arrival
  • hoonmaru introducing what’s going on (i think he lied when he said that he thought of this activity)
  • hoonmaru talking about his lack of cooking experience
  • hoonmaru says he’ll answer fan questions while cooking
  • says “let’s get started”
  • music and scene change
  • starts introducing the new scene by showing you visuals of hoonmaru in an apron
  • now we see Chef Heidi and she does intro stuff, asks if he’s ready to cook, then says what they’ll be doing.

The last three are things after “let’s get started” that still aren’t cooking. Cooking finally starts at 48s in. But after a couple seconds of cooking visuals, hoonmaru answers an offtopic fan question before finally getting some cooking instruction. Then a few seconds later hoonmaru is neglecting his cooking, and Heidi fixes it while he answers more questions. Then hoonmaru says he thinks the food looks great so far but that he didn’t do much. This is not a real cooking lesson, it’s just showing off Heidi’s cooking for the team and entertaining hoonmaru fans with his answers to questions that aren’t really related to overwatch skill.

Tons of effort goes into setting up the video. It’s under 6 minutes and spends 13.5% on the intro. I skipped ahead and they also spend 16 seconds (4.5%) on the ending, for a total of 18% on intro and ending. And there’s also structural stuff in the middle, like saying now they will go cook the veggies while the meat is cooking – that isn’t cooking itself, it’s structuring the video and activities into defined parts to help people understand the content. And they asked hoonmaru what he thought of the meat on the grill (looks good... what a generic question and answer) which was ending content for that section of the video.

off topic, Heidi blatantly treats hoonmaru like a kid. at 4:45 she’s making a dinner plate combining the foods. then she asks if he will make it, and he takes that as an order (but he hadn’t realized in advance he’d be doing it, he just does whatever he’s told without thinking ahead). and then the part that especially treats him like a kid is she says she’s taking away the plate she made so he can’t copy it, he has to try to get the right answer (her answer) on his own, she’s treating it like a school test. then a little later he’s saying his plating sucks and she says “you did a great job, it’s not quite restaurant”. there’s so much disgusting social from both of them.


Elliot Temple | Permalink | Message (1)

Project Planning Discussion

This is a discussion about rational project planning. The major theme is that people should consider what their project premises are. Which claims are they betting the success of their project on? And why? This matter requires investigation and consideration, rather than being ignored.

By project I mean merely a goal-directed activity. It can be, but doesn't have to be, a business project or multi-person project. My primary focus is on larger projects, e.g. projects that take more than one day to finish.

The first part is discussion context. You may want to skip to the second part where I write an article/monologue with no one else talking. It explains a lot of important stuff IMO.


Gavin Palmer:

The most important problem is The Human Resource Problem. All other problems depend on the human resource problem. The Human Resource Problem consists of a set of smaller problems that are related. An important problem within that set is the communication problem: an inability to communicate. I classify that problem as a problem related to information technology and/or process. If people can obtain and maintain a state of mind which allows communication, then there are other problems within that set related to problems faced by any organization. Every organization is faced with problems related to hiring, firing, promotion, and demotion.

So every person encounters this problem. It is a universal problem. It will exist so long as there are humans. We each have the opportunity to recognize and remember this important problem in order to discover and implement processes and tools which can facilitate our ability to solve every problem which is solvable.

curi:

you haven't explained what the human resource problem is, like what things go in that category

Gavin Palmer:

The thought I originally had long ago - was that there are people willing and able to solve our big problems. We just don't have a sufficient mechanism for finding and organizing those people. But I have discovered that this general problem is related to ideas within any organization. The general problem is related to ideas within a company, a government, and even those encountered by each individual mind. The task of recruiting, hiring, firing, promoting, and demoting ideas can occur on multiple levels.

curi:

so you mean it like HR in companies? that strikes me as a much more minor problem than how rationality works.

Gavin Palmer:

If you want to end world hunger it's an HR problem.

curi:

it's many things including a rationality problem

curi:

and a free trade problem and a governance problem and a peace problem

curi:

all of which require rationality, which is why rationality is central

Gavin Palmer:

How much time have you actually put into trying to understand world hunger and the ways it could end?

Gavin Palmer:

How much time have you actually put into building anything? What's your best accomplishment as a human being?

curi:

are you mad?

GISTE:

so to summarize the discussion that Gavin started. Gavin described what he sees as the most important problem (the HR problem), where all other problems depend on it. curi disagreed by saying that how rationality works is a more important problem than the HR problem, and he gave reasons for it. Gavin disagreed by saying that for the goal of ending world hunger, the most important problem is the HR problem -- and he did not address curi's reasons. curi disagreed by saying that the goal of ending world hunger is many problems, all of which require rationality, making rationality the most important problem. Then Gavin asked curi about how much time he has spent on the world hunger problem and asked if he built anything and what his best accomplishments are. Gavin's response does not seem to connect to any of the previous discussion, as far as I can tell. So it's offtopic to the topic of what is the most important problem for the goal of ending world hunger. Maybe Gavin thinks it is on topic, but he didn't say why he thinks so. I guess that curi also noticed the offtopic thing, and that he guessed that Gavin is mad. then curi asked Gavin "are you mad?" as a way to try to address a bottleneck to this discussion. @Gavin Palmer is this how you view how the discussion went or do you have some differences from my view? if there are differences, then we could talk about those, which would serve to help us all get on the same page. And then that would help serve the purpose of reaching mutual understanding and agreement regarding whether or not the HR problem is the most important problem on which all other problems depend.

GISTE:

btw i think Gavin's topic is important. as i see it, it's goal is to figure out the relationships between various problems, to figure out which is the most important. i think that's important because it would serve the purpose of helping one figure out which problems to prioritize.

Gavin Palmer:

Here is a google doc linked to a 1-on-1 I had with GISTE (he gave me permission to share). I did get a little angry and was anxious about returning here today. I'm glad to see @curi did not get offended by my questions and asked a question. I am seeing the response after I had the conversation with GISTE. Thank you for your time.

https://docs.google.com/document/d/1XEztqEHLBAJ39HQlueKX3L4rVEGiZ4GEfBJUyXEgVNA/edit?usp=sharing

GISTE:

to be clear, regarding the 1 on 1 discussion linked above, whatever i said about curi are my interpretations. don't treat me as an authority on what curi thinks.

GISTE:

also, don't judge curi by my ideas/actions. that would be unfair to him. (also unfair to me)

JustinCEO:

Curi's response tells me he does not know how to solve world hunger.

JustinCEO:

Unclear to me how that judgment was arrived at

JustinCEO:

I'm reading

JustinCEO:

Lowercase c for curi btw

JustinCEO:

But I have thought about government, free trade, and peace very much. These aren't a root problem related to world hunger.

JustinCEO:

curi actually brought those up as examples of things that require rationality

JustinCEO:

And said that rationality was central

JustinCEO:

But you don't mention rationality in your statement of disagreement

JustinCEO:

You mention the examples but not the unifying theme

JustinCEO:

GISTE says

curi did not say those are root problems.

JustinCEO:

Ya 🙂

JustinCEO:

Ya GISTE got this point

JustinCEO:

I'm on phone so I'm pasting less than I might otherwise

JustinCEO:

another way to think about the world hunger problem is this: what are the bottlenecks to solving it? first name them, before trying to figure out which one is like the most systemic one.

JustinCEO:

I think the problem itself could benefit from a clear statement

GISTE:

That clear statement would include causes of (world) hunger. Right? @JustinCEO

JustinCEO:

I mean a detailed statement would get into that issue some GISTE cuz like

JustinCEO:

You'd need to figure out what counts and what doesn't as an example of world hunger

JustinCEO:

What is in the class of world hunger and what is outside of it

JustinCEO:

And that involves getting into specific causes

JustinCEO:

Like presumably "I live in a first world country and have 20k in the bank but forgot to buy groceries this week and am hungry now" is excluded from most people's definitions of world hunger

JustinCEO:

I think hunger is basically a solved problem in western liberal capitalist democracies

JustinCEO:

People fake the truth of this by making up concepts called "food insecurity" that involve criteria like "occasionally worries about paying for groceries" and calling that part of a hunger issue

JustinCEO:

Thinking about it quickly, I kinda doubt there is a "world hunger" problem per se

GISTE:

yeah before you replied to my last comment, i immediately thought of people who choose to be hungry, like anorexic people. and i think people who talk about world hunger are not including those situations.

JustinCEO:

There's totally a Venezuela hunger problem or a Zimbabwe hunger problem tho

JustinCEO:

But not really an Ohio or Kansas hunger problem

JustinCEO:

Gavin says

I try to be pragmatic. If your solution depends on people being rational, then the solution probably will not work. My solution does depend on rational people, but the number of rational people needed is very small

GISTE:

There was one last comment by me that did not get included in the one on one discussion. Here it is. “so, you only want people on your team that already did a bunch of work to solve world hunger? i thought you wanted rational people, not necessarily people that already did a bunch of work to solve world hunger.”

JustinCEO:

What you think being rational is and what it involves could probably benefit from some clarification.

Anyways I think society mostly works to the extent people are somewhat rational in a given context.

JustinCEO:

I regard violent crime for the purpose of stealing property as irrational

JustinCEO:

For example

JustinCEO:

Most people agree

JustinCEO:

So I can form a plan to walk down my block with my iPhone and not get robbed, and this plan largely depends on the rationality of other people

JustinCEO:

Not everyone agrees with my perspective

JustinCEO:

The cop car from the local precinct that is generally parked at the corner is also part of my plan

JustinCEO:

But my plan largely depends on the rationality of other people

JustinCEO:

If 10% or even 5% of people had a pro property crime perspective, the police could not really handle that and I would have to change my plans

Gavin Palmer:

World hunger is just an example of a big problem which depends on information technology related to the human resource problem. My hope is that people interested in any big problem could come to realize that information technology related to the human resource problem is part of the solution to the big problem they are interested in as well as other big problems.

Gavin Palmer:

So maybe "rationality" is related to what I call "information technology".

JustinCEO:

the rationality requirements of my walking outside with phone plan are modest. i can't plan to e.g. live in a society i would consider more moral and just (where e.g. a big chunk of my earnings aren't confiscated and wasted) cuz there's not enough people in the world who agree with me on the relevant issues to facilitate such a plan.

JustinCEO:

anyways regarding specifically this statement

JustinCEO:

If your solution depends on people being rational, then the solution probably will not work.

JustinCEO:

i wonder if the meaning is If your solution depends on [everyone] being [completely] rational, then the solution probably will not work.

Gavin Palmer:

There is definitely some number/percentage I have thought about... like I only need 10% of the population to be "rational".

GISTE:

@Gavin Palmer can you explain your point more? what i have in mind doesn't seem to match your statement. so like if 90% of the people around me weren't rational (like to what degree exactly?), then they'd be stealing and murdering so much that the police couldn't stop them.

JustinCEO:

@Gavin Palmer based on the stuff you said so far and in the google doc regarding wanting to work on important problems, you may appreciate this post

JustinCEO:

https://curi.us/2029-the-worlds-biggest-problems

JustinCEO:

Gavin says

A thing that is sacred is deemed worthy of worship. And worship is based in the words worth and ship. And so a sacred word is believed to carry great worth in the mind of the believer. So I can solve world hunger with the help of people who are able and willing. Solving world hunger is not an act done by people who uphold the word rationality above all other words.

JustinCEO:

the word doesn't matter but the concept surely does for problem-solving effectiveness

JustinCEO:

people who don't value rationality can't solve much of anything

nikluk:

Re rationality. Have you read this article and do you agree with what it says, @Gavin Palmer ?
https://fallibleideas.com/reason

GISTE:

So maybe "rationality" is related to what I call "information technology".
can you say more about that relationship? i'm not sure what you have in mind. i could guess but i think it'd be a wild guess that i'm not confident would be right. (so like i could steelman your position but i could easily be adding in my own ideas and ruin it. so i'd rather avoid that.) @Gavin Palmer

Gavin Palmer:

so like if 90% of the people around me weren't rational (like to what degree exactly?), then they'd be stealing and murdering so much that the police couldn't stop them.
I think the image of the elephant rider portrayed by Jonathan Haidt is closer to the truth when it comes to some word like rationality and reason. I actually value something like compassion above a person's intellect: and I really like people who have both. There are plenty of idiots in the world who are not going to try and steal from you or murder you. I'm just going to go through these one by one when able.

Gavin Palmer:

https://curi.us/2029-the-worlds-biggest-problems
Learning to think is very important. There were a few mistakes in that article. The big one in my opinion is the idea that 2/3 of the people can change things. On the contrary our government systems do not have any mechanism in place to learn what 2/3 of the people actually want nor any ability to allow the greatest problem solvers to influence those 2/3 of the people. We aren't even able to recognize the greatest problem solvers. Another important problem is technology which allows for this kind of information sharing so that we can actually know what the people think and we can allow the greatest problem solvers to be heard. We want that signal to rise above the noise.

The ability to solve problems is like a muscle. For me - reading books does not help me build that muscle - they only help me find better words for describing the strategies and processes which I have developed through trial and error. I am not the smartest person - I learn from trial and error.

curi:

To answer the questions: I have thought about many big problems, such as aging death, AGI, and coercive parenting/education. Yes I've considered world hunger too, though not as a major focus. I'm an (experienced) intellectual. My accomplishments are primarily in philosophy research re issues like how learning and rational discussion work. I do a lot of educational writing and discussion. https://elliottemple.com

curi:

You're underestimating the level of outlier you're dealing with here, and jumping to conclusions too much.

Gavin Palmer:

https://fallibleideas.com/reason
It's pretty good. But science without engineering is dead. That previous sentence reminds me of "faith without works is dead". I'm not a huge fan of science for the sake of science. I'm a fan of engineering and the science that helps us do engineering.

curi:

i don't think i have anything against engineering.

Gavin Palmer:

I'm just really interested in finding people who want to help do the engineering. It's my bias. Even more - it's my passion and my obsession.

Gavin Palmer:

Thinking and having conversations is fun though.

Gavin Palmer:

But sometimes it can feel aimless if I'm not building something useful.

curi:

My understanding of the world, in big picture, is that a large portion of all efforts at engineering and other getting-stuff-done type work are misdirected and useless or destructive.

curi:

This is for big hard problems. The productiveness of practical effort is higher for little things like making dinner today.

curi:

The problem is largely not the engineering itself but the ideas guiding it – the goals and plan.

Gavin Palmer:

I worked for the Army's missile defense program for 6 years when I graduated from college. I left because of the reason you point out. My hope was that I would be able to change things from within.

curi:

So for example in the US you may agree with me that at least around half of political activism is misdirected to goals with low or negative value. (either the red tribe or blue tribe work is wrong, plus some of the other work too)

Gavin Palmer:

Even the ones I agree with and have volunteered for are doing a shit job.

curi:

yeah

curi:

i have found a decent number of people want to "change the world" or make some big improvement, but they can't agree amongst themselves about what changes to make, and some of them are working against others. i think sorting that mess out, and being really confident the projects one works on are actually good, needs to come before implementation.

curi:

i find most people are way too eager to jump into their favored cause without adequately considering why people disagree with it and sorting out all the arguments for all sides.

Gavin Palmer:

There are many tools that don't exist which could exist. And those tools could empower any organization and their goal(s).

curi:

no doubt.

curi:

software is pretty new and undeveloped. adequate tools are much harder to name than inadequate ones.

Gavin Palmer:

adequate tools are much harder to name than inadequate ones.
I don't know what that means.

curi:

we could have much better software tools for ~everything

curi:

"~" means "approximately"

JustinCEO:

Twitter can't handle displaying tweets well. MailMate performance gets sluggish with too many emails. Most PDF software can't handle super huge PDFs well. Workout apps can't use LIDAR to tell ppl if their form is on point

curi:

Discord is clearly a regression from IRC in major ways.

Gavin Palmer:

🤦‍♂️

JustinCEO:

?

JustinCEO:

i find your face palm very unclear @Gavin Palmer; hope you elaborate!

Gavin Palmer:

I find sarcasm very unclear. That's the only way I know how to interpret the comments about Twitter, MailMate, PDF, LIDAR, Discord, IRC, etc.

curi:

I wasn't being sarcastic and I'm confident Justin also meant what he said literally and seriously.

Gavin Palmer:

Ok - thanks for the clarification.

JustinCEO:

ya my statements were made earnestly

JustinCEO:

re: twitter example

JustinCEO:

twitter makes it harder to have a decent conversation cuz it's not good at doing conversation threading

JustinCEO:

if it was better at this, maybe people could keep track of discussions better and reach agreement more easily

Gavin Palmer:

Well - I have opinions about Twitter. But to be honest - I am also trying to look at what this guy is doing:
https://github.com/erezsh/portal-radar

It isn't a good name in my opinion - but the idea is related to having some bot collect discord data so that there can be tools which help people find the signal in the noise.

curi:

are you aware of http://bash.org ? i'm serious about major regressions.

JustinCEO:

i made an autologging system to make discord chat logs on this server so people could pull information (discussions) out of them more easily

JustinCEO:

but alas it's a rube goldberg machine of different tools running together in a VM, not something i can distribute

Gavin Palmer:

Well - it's a good goal. I'm looking to add some new endpoints in a pull request to the github repo I linked above. Then I could add some visualizations.

Another person has built a graphql backend (which he isn't sharing open source) and I have created some of my first react/d3 components to visualize his data.
https://portal-projects.github.io/users/

Gavin Palmer:

I think you definitely want to write the code in a way that it can facilitate collaboration.

curi:

i don't think this stuff will make much difference when people don't know what a rational discussion is and don't want one.

curi:

and don't want to use tools that already exist like google groups.

curi:

which is dramatically better than twitter for discussion

Gavin Palmer:

I'm personally interested in something which I have titled "Personality Targeting with Machine Learning".

Gavin Palmer:

My goal isn't to teach people to be rational - it is to try and find people who are trying to be rational.

curi:

have you identified which philosophical schools of thought it's compatible and incompatible with? and therefore which you're betting on being wrong?

curi:

it = "Personality Targeting with Machine Learning".

Gavin Palmer:

Ideally it isn't hard coded or anything. I could create multiple personality profiles. Three of the markets I have thought about using the technology in would be online dating, recruiting, and security/defense.

curi:

so no?

Gavin Palmer:

If I'm understanding you - a person using the software could create a personality that mimics a historical person for example - and then parse social media in search of people who are saying similar things.
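
To illustrate the kind of similarity matching Gavin describes, here is a minimal sketch (mine, not his actual software) under assumed details: a TF-IDF bag-of-words representation, invented example texts, and a plain cosine-similarity ranking.

```python
# Minimal sketch (not Gavin's software): rank posts by textual similarity to a
# reference "personality profile" built from a person's writings, using TF-IDF
# vectors and cosine similarity. All example texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

profile_texts = [
    "Reason is man's only means of knowledge.",
    "Ideas should be judged by argument, not by who holds them.",
]
candidate_posts = [
    "I judge ideas on their merits, not on the status of the speaker.",
    "Check out my new workout routine!",
]

vectorizer = TfidfVectorizer()
# Fit on all texts so the profile and the posts share one vocabulary.
matrix = vectorizer.fit_transform(profile_texts + candidate_posts)
profile_vecs = matrix[: len(profile_texts)]
post_vecs = matrix[len(profile_texts):]

# Score each post by its best similarity to any profile excerpt.
scores = cosine_similarity(post_vecs, profile_vecs).max(axis=1)
for post, score in sorted(zip(candidate_posts, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {post}")
```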

Gavin Palmer:

But I'm not exactly sure what point you are trying to make.

curi:

You are making major bets while being unaware of what they are. You may be wrong and wasting your time and effort, or even be doing something counterproductive. And you aren't very interested in this.

Gavin Palmer:

Well - from my perspective - I am not making any major bets. What is the worst case scenario?

curi:

An example worst case scenario would be that you develop an AGI by accident and it turns us all into paperclips.

Gavin Palmer:

I work with a very intelligent person that would laugh at that idea.

curi:

That sounds like an admission you're betting against it.

curi:

You asked for an example seemingly because you were unaware of any. You should be documenting what bets you're making and why.

Gavin Palmer:

I won't be making software that turns us all into paperclips.

curi:

Have you studied AI alignment?

Gavin Palmer:

I have been writing software for over a decade. I have been using machine learning for many months now. And I have a pretty good idea of how the technology I am using actually works.

curi:

So no?

Gavin Palmer:

No. But if it is crap - do you want to learn why it is crap?

curi:

I would if I agreed with it, though I don't. But a lot of smart people believe it.

curi:

They have some fairly sophisticated reasons, which I don't think it's reasonable to bet against from a position of ignorance.

Gavin Palmer:

Our ability to gauge if someone has understanding on a given subject is relative to how much understanding we have on that subject.

curi:

Roughly, sure. What's your point?

Gavin Palmer:

First off - I'm not sure AGI is even possible. I love to play with the idea. And I would love to get to a point where I get to help build a god. But I am not even close to doing that at this point in my career.

curi:

So what?

Gavin Palmer:

You think there is a risk I would build something that turns humans into paperclips.

curi:

I didn't say that.

Gavin Palmer:

You said that is the worst case scenario.

curi:

Yes. It's something you're betting against, apparently without much familiarity with the matter.

curi:

Given that you don't know much about it, you aren't in a reasonable position to judge how big a risk it is.

curi:

So I think you're making a mistake.

curi:

The bigger picture mistake is not trying to figure out what bets you're making and why.

curi:

Most projects have this flaw.

Gavin Palmer:

My software uses algorithms to classify input data.

curi:

So then, usually, somewhere on the list of thousands of bets being made, are a few bad ones.

curi:

Does this concept make sense to you?

Gavin Palmer:

Love is most important in my hierarchy of values.

Gavin Palmer:

If I used the word in a sentence I would still want to capitalize it.

curi:

is that intended to be an answer?

Gavin Palmer:

Yes - I treat Love in a magical way. And you don't like magical thinking. And so we have very different world views. They might even be incompatible. The difference between us is that I won't be paralyzed by my fears. And I will definitely make mistakes. But I will make more mistakes than you. The quality and quantity of my learning will be very different than yours. But I will also be reaping the benefits of developing new relationships with engineers, learning new technology/process, and building up my portfolio of open source software.

curi:

You accuse me of being paralyzed by fears. You have no evidence and don't understand me.

curi:

Your message is not loving or charitable.

curi:

You're heavily personalizing while knowing almost nothing about me.

JustinCEO:

i agree

JustinCEO:

also, magical thinking can't achieve anything

curi:

But I will also be reaping the benefits of developing new relationships with engineers

curi:

right now you seem to be trying to burn a bridge with an engineer.

curi:

you feel attacked in some way. you're experiencing some sort of conflict. do you want to use a rational problem solving method to try to address this?

curi:

J, taking my side here will result in him feeling ganged up on. I think it will be counterproductive psychologically.

doubtingthomas:

J, taking my side here will result in him feeling ganged up on. I think it will be counterproductive psychologically.
Good observation. Are you going to start taking these considerations into account in future conversations?

curi:

I knew that years ago. I already did take it into account.

curi:

please take this tangent to #fi

GISTE:

also, magical thinking can't achieve anything
@JustinCEO besides temporary nice feelings. Long term it's bad though.

doubtingthomas:

yeah sure

JustinCEO:

ya sure GISTE, i meant achieve something in reality

curi:

please stop talking here. everyone but gavin

Gavin Palmer:

You talked about schools of philosophy, AI alignment, and identifying the hidden bets. That's a lot to request of someone.

curi:

Thinking about your controversial premises and civilizational risks, in some way instead of ignoring the matter, is too big an ask to expect of people before they go ahead with projects?

curi:

Is that what you mean?

Gavin Palmer:

I don't see how my premises are controversial or risky.

curi:

Slow down. Is that what you meant? Did I understand you?

Gavin Palmer:

I am OK with people thinking about premises and risks of an idea and discussing those. But in order to have that kind of discussion you would need to understand the idea. And in order to understand the idea - you have to ask questions.

curi:

it's hard to talk with you because of your repeated unwillingness to give direct answers or responses.

curi:

i don't know how to have a productive discussion under these conditions.

Gavin Palmer:

I will try to do better.

curi:

ok. can we back up?

Thinking about your controversial premises and civilizational risks, in some way instead of ignoring the matter, is too big an ask to expect of people before they go ahead with projects?

did i understand you, yes or no?

Gavin Palmer:

no

curi:

ok. which part(s) is incorrect?

Gavin Palmer:

The words controversial and civilizational are not conducive to communication.

curi:

why?

Gavin Palmer:

They indicate that you think you understand the premises and the risks and I don't know that you understand the idea I am trying to communicate.

curi:

They are just adjectives. They don't say what I understand about your project.

Gavin Palmer:

Why did you use them?

curi:

Because you should especially think about controversial premises rather than all premises, and civilizational risks more than all risks.

curi:

And those are the types of things that were under discussion.

curi:

A generic, unqualified term like "premises" or "risks" would not accurately represent the list of 3 examples "schools of philosophy, AI alignment, and identifying the hidden bets"

Gavin Palmer:

I don't see how schools of philosophy, AI alignment, and hidden bets are relevant. Those are just meaningless words in my mind. The meaning of those words in your mind may contain relevant points. And I would be willing to discuss those points as they relate to the project. But (I think) that would also require that you have some idea of what the software does and how it is done. To bring up these things before you understand the software seems very premature.

curi:

the details of your project are not relevant when i'm bringing up extremely generic issues.

curi:

e.g. there is realism vs idealism. your project takes one side, the other, or is compatible with both. i don't need to know more about your project to say this.

curi:

(or disagrees with both, though that'd be unusual)

curi:

it's similar with skepticism or not.

curi:

and moral relativism.

curi:

and strong empiricism.

curi:

one could go on. at length. and add a lot more using details of your project, too.

curi:

so, there exists some big list. it has stuff on it.

curi:

so, my point is that you ought to have some way of considering and dealing with this list.

curi:

some way of considering what's on it, figuring out which merit attention and how to prioritize that attention, etc.

curi:

you need some sort of policy, some way to think about it that you regard as adequate.

curi:

this is true of all projects.

curi:

this is one of the issues which has logical priority over the specifics of your project.

curi:

there are generic concepts about how to approach a project which take precedence over jumping into the details.

curi:

do you think you understand what i'm saying?

Gavin Palmer:

I think I understand this statement:

there are generic concepts about how to approach a project which take precedence over jumping into the details.

curi:

ok. do you agree with that?

Gavin Palmer:

I usually jump into the details. I'm not saying you are wrong though.

curi:

ok. i think looking at least a little at the big picture is really important, and that most projects lose a lot of effectiveness (or worse) due to failing to do this plus some common errors.

curi:

and not having any conscious policy at all regarding this issue (how to think about the many premises you are building on which may be wrong) is one of the common errors.

curi:

i think being willing to think about things like this is one of the requirements for someone who wants to be effective at saving/changing/helping the world (or themselves individually)

Gavin Palmer:

But I have looked at a lot of big picture things in my life.

curi:

cool. doesn't mean you covered all the key ones. but maybe it'll give you a head start on the project planning stuff.

Gavin Palmer:

So do you have an example of a project where it was done in a way that is satisfactory in your mind?

curi:

hmm. project planning steps are broadly unpublished and unavailable for the vast majority of projects. i think the short answer is no one is doing this right. this aspect of rationality is ~novel.

curi:

some ppl do a more reasonable job but it's really hard to tell what most ppl did.

curi:

u can look at project success as a proxy but i don't think that'll be informative in the way you want.

Gavin Palmer:

I'm going to break soon, but I would encourage you to think about some action items for you and I based around this ideal form of project planning. I have real-world experience with various forms of project planning to some degree or another.

curi's Monologue

curi:

the standard way to start is to brainstorm things on the list

curi:

after you get a bunch, you try to organize them into categories

curi:

you also consider what is a reasonable level of overhead for this, e.g. 10% of total project resource budget.

curi:

but a flat percentage is problematic b/c a lot of the work is general education stuff that is reusable for most projects. if you count your whole education, overhead will generally be larger than the project. if you only count stuff specific to this project, you can have a really small overhead and do well.

curi:

stuff like reading and understanding/remembering/taking-notes-on/etc one overview book of philosophy ideas is something that IMO should be part of being an educated person who has appropriate background knowledge. but many ppl haven't done it. if you assign the whole cost of that to one project it can make the overhead ratio look bad.
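
A made-up numerical illustration of that ratio effect (the hours are invented):

```latex
% Invented numbers: charging reusable education to a single project
% distorts the overhead ratio.
\frac{15\ \text{hrs project-specific planning}}{200\ \text{hrs project work}} = 7.5\%
\qquad \text{vs.} \qquad
\frac{15\ \text{hrs} + 300\ \text{hrs general education}}{200\ \text{hrs}} \approx 158\%
```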

curi:

unfortunately i think a lot of what's in that book would be wrong and ignore some more important but less famous ideas. but at least that'd be a reasonable try. most ppl don't even get that far.

curi:

certainly a decent number of ppl have done that. but i think few have ever consciously considered "which philosophy schools of thought does my project contradict? which am i assuming as premises and betting my project success on? and is that a good idea? do any merit more investigation before i make such a bet?" ppl have certainly considered such things in a disorganized, haphazard way, which sometimes manages to work out ok. idk that ppl have done this by design in that way i'm recommending.

curi:

this kind of analysis has large practical consequences, e.g. > 50% of "scientific research" is in contradiction to Critical Rationalist epistemology, which is one of the more famous philosophies of science.

curi:

IMO, consequently it doesn't work and the majority of scientists basically waste their careers.

curi:

most do it without consciously realizing they are betting their careers on Karl Popper being wrong.

curi:

many of them do it without reading any Popper book or being able to name any article criticizing Popper that they think is correct.

curi:

that's a poor bet to make.

curi:

even if Popper is wrong, one should have more information before betting against him like that.

curi:

another thing with scientists is the majority bet their careers on a claim along the lines of "college educations and academia are good"

curi:

this is a belief that some of the best scientists have disagreed with

curi:

a lot of them also have government funding underlying their projects and careers without doing a rational investigation of whether that may be a really bad, risky thing.

curi:

separate issue: broadly, most large projects try to use reason. part of the project is that problems come up and people try to do rational problem solving – use reason to solve the problems as they come up. they don't expect to predict and plan for every issue they're gonna face. there are open controversies about what reason is, how to use it, what problem solving methods are effective or ineffective, etc.

curi:

what the typical project does is go by common sense and intuition. they are basically betting the project on whatever concept of reason they picked up here and there from their culture being adequate. i regard this as a very risky bet.

curi:

and different project members have different conceptions of reason, and they are also betting on those being similar enough that things don't fall apart.

curi:

commonly without even attempting to talk about the matter or put their ideas into words.

curi:

what happens a lot when people have unverbalized philosophy they picked up from their culture at some unknown time in the past is ... BIAS. they don't actually stick to any consistent set of ideas about reason. they change it around situationally according to their biases. that's a problem on top of some of the ideas floating around our culture being wrong (which is well known – everyone knows that lots of ppl's attempts at rational problem solving don't work well)

curi:

one of the problems in the field of reason is: when and how do you rationally end (or refuse to start) conversations without agreement. sometimes you and the other guy agree. but sometimes you don't, and the guy is saying "you're wrong and it's a big deal, so you shouldn't just shut your mind and refuse to consider more" and you don't want to deal with that endlessly but you also don't want to just be biased and stay wrong, so how do you make an objective decision? preferably is there something you could say that the other guy could accept as reasonable? (not with 100% success rate, some people gonna yell at you no matter what, but something that would convince 99% of people who our society considers pretty smart or reasonable?)

curi:

this has received very little consideration from anyone and has resulted in countless disputes when people disagree about whether it's appropriate to stop a discussion without giving further answers or arguments.

curi:

lots of projects have lots of strife over this specific thing.

curi:

i also was serious about AI risk being worth considering (for basically anything in the ballpark of machine learning, like classifying big data sets) even though i actually disagree with that one. i did consider it and think it merits consideration.

curi:

i think it's very similar to how physicists in 1940 were irresponsible if they were doing work anywhere in the ballpark of nuclear stuff and didn't think about potential weapons.

curi:

another example of a project management issue is how does one manage a schedule? how full should a schedule be packed with activities? i think the standard common sense ways ppl deal with this are wrong and do a lot of harm (the basic error is overfilling schedules in a way which fails to account for variance in task completion times, as explained by Eliyahu Goldratt)

curi:

i meant there an individual person's schedule
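
As a toy illustration of the variance point (my own made-up model and numbers, not how Goldratt presents it): if you pack a day with tasks sized at their average durations, late tasks push everything after them back, while finishing early just means waiting for the next slot, so the end of the day tends to slip. A sketch:

```python
# Toy model of schedule slip from task-time variance. Durations and slot
# sizes are invented; this is an illustration, not Goldratt's own example.
import random

random.seed(0)

def simulate_day(num_tasks: int, slot_minutes: float, trials: int = 10000) -> float:
    """Average slip (minutes) past the end of the last scheduled slot."""
    total_slip = 0.0
    for _ in range(trials):
        clock = 0.0
        for i in range(num_tasks):
            scheduled_start = i * slot_minutes
            start = max(clock, scheduled_start)  # can't start before the slot opens
            duration = random.triangular(40, 120, 55)  # mean ~72 minutes
            clock = start + duration
        scheduled_end = num_tasks * slot_minutes
        total_slip += max(0.0, clock - scheduled_end)
    return total_slip / trials

# Packed schedule: slots sized to the average task time.
print("packed slots:", simulate_day(num_tasks=8, slot_minutes=72))
# Roughly a third of each slot left unscheduled as buffer.
print("with slack:  ", simulate_day(num_tasks=8, slot_minutes=108))
```

With these invented numbers the packed schedule ends, on average, noticeably behind its last slot, while the buffered one barely slips at all.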

curi:

similarly there is problem of organizing the entire project schedule and coordinating people and things. this has received a ton of attention from specialists, but i think most ppl have an attitude like "trust a standard view i learned in my MBA course. don't investigate rival viewpoints". risky.

curi:

a lot of other ppl have no formal education about the matter and mostly ... don't look it up and wing it.

curi:

even riskier!

curi:

i think most project managers couldn't speak very intelligently about early start vs. late start for dependencies off the critical path.

curi:

and don't know that Goldratt answered it. and it does matter. bad decisions re this one issue results in failed and cancelled projects, late projects, budget overruns, etc.

curi:

lots of ppl's knowledge of decision making processes extends about as far as pro/con lists and ad hoc arguing.

curi:

so they are implicitly betting a significant amount of project effectiveness on something like "my foundation of pro/con lists and ad hoc arguing is adequate knowledge of decision making processes".

curi:

this is ... unwise.

curi:

another generic issue is lying. what is a lie? how do you know when you're lying to yourself? a lot of ppl make a bet roughly like "either my standard cultural knowledge + random variance about lying is good or lying won't come up in the project".

curi:

similar with bias instead of lying.

curi:

another common, generic way projects go wrong is ppl never state the project goal. they don't have clear criteria for project success or failure.

curi:

related, it's common to make basically no attempt to estimate the resources needed to complete the project successfully and estimating the resources available and comparing those two things.

curi:

goals and resource budgeting are things some ppl actually do. they aren't rare. but they're often omitted, especially for more informal and non-business projects.

curi:

including some very ambitious change-the-world type projects, where considering a plan and what resources it'll use is actually important. a lot of times ppl do stuff they think is moving in the direction of their goal without seriously considering what it will take to actually reach their goal.

curi:

e.g. "i will do X to help the environment" without caring to consider what breakpoints exist for helping the environment that make an important difference and how much action is required to reach one.

curi:

there are some projects like "buy taco bell for dinner" that use low resources compared to what you have available (for ppl with a good income who don't live paycheck to paycheck), so you don't even need to consciously think through resource use. but for a lot of bigger ones one ought to estimate e.g. how much time it'll take for success and how much time one is actually allocating to the project.

curi:

often an exploratory project is appropriate first. try something a little and see how you like it. investigate and learn more before deciding on a bigger project or not. ppl often don't consciously separate this investigation from the big project or know which they are doing.

curi:

and so they'll do things like switch to a big project without consciously realizing they need to clear up more time on their schedule to make that work.

curi:

often they just don't think clearly about what their goals actually are and then use bias and hindsight to adjust their goals to whatever they actually got done.

curi:

there are lots of downsides to that in general, and it's especially bad with big ambitious change/improve the world goals.

curi:

one of the most egregious examples of the broad issues i'm talking about is political activism. so many people are working for the red or blue team while having done way too little to find out which team is right and why.

curi:

so they are betting their work on their political team being right. if their political team is wrong, their work is not just wasted but actually harmful. and lots of ppl are really lazy and careless about this bet. how many democrats have read one Mises book or could name a book or article that they think refutes a major Mises claim?

curi:

how many republicans have read any Marx or could explain and cite why the labor theory of value is wrong or how the economic calculation argument refutes socialism?

curi:

how many haters of socialism could state the relationship of socialism to price controls?

curi:

how many of them could even give basic economic arguments about why price controls are harmful in a simple theoretical market model and state the premises/preconditions for that to apply to a real situation?

curi:

i think not many even when you just look at people who work in the field professionally. let alone if you look at people who put time or money into political causes.

curi:

and how many of them base their dismissal of solipsism and idealism on basically "it seems counterintuitive to me" and reject various scientific discoveries about quantum mechanics for the same reason? (or would reject those discoveries if they knew what they were)

curi:

if solipsism or idealism were true it'd have consequences for what they should do, and people's rejections of those ideas (which i too reject) are generally quite thoughtless.

curi:

so it's again something ppl are betting projects on in an unreasonable way.

curi:

to some extent ppl are like "eh i don't have time to look into everything. the experts looked into it and said solipsism is wrong". most such ppl have not read a single article on the topic and could not name an expert on the topic.

curi:

so their bet is not really on experts being right – which if you take that bet thousands of times, you're going to be wrong sometimes, and it may be a disaster – but their bet is actually more about mainstream opinion being right. whatever some ignorant reporters and magazine writers claimed the experts said.

curi:

they are getting a lot of their "expert" info fourth hand. it's filtered by mainstream media, talking heads on TV, popular magazines, a summary from a friend who listened to a podcast, and so on.

curi:

ppl will watch and accept info from a documentary made by ppl who consulted with a handful of ppl who some university gave expert credentials. and the film makers didn't look into what experts or books, if any, disagree with the ones they hired.

curi:

sometimes the info presented disagrees with a majority of experts, or some of the most famous experts.

curi:

sometimes the film makers have a bias or agenda. sometimes not.

curi:

there are lots of issues where lots of experts disagree. these are, to some rough approximation, the areas that should be considered controversial. these merit some extra attention.

curi:

b/c whatever you do, you're going to be taking actions which some experts – some ppl who have actually put a lot of work into studying the matter – think is a bad idea.

curi:

you should be careful before doing that. ppl often aren't.

curi:

politics is a good example of this. whatever side you take on any current political issue, there are experts who think you're making a big mistake.

curi:

but it comes up in lots of fields. e.g. psychiatry is much less of an even split but there are a meaningful number of experts who think anti-psychotic drugs are harmful not beneficial.

curi:

one of the broad criteria for areas you should look into some before betting your project on are controversial areas. another is big risk areas (it's worse if you're wrong, like AI risk or e.g. there's huge downside risk to deciding that curing aging is a bad cause).

curi:

these are imperfect criteria. some very unpopular causes are true. some things literally no one currently believes are true. and you can't deal with every risk that doesn't violate the laws of physics. you have to estimate plausibility some.

curi:

one of the important things to consider is how long does it take to do a good job? could you actually learn about all the controversial areas? how thoroughly is enough? how do you know when you can move on?

curi:

are there too many issues where 100+ smart ppl or experts think ur initial plan is wrong/bad/dangerous, or could you investigate every area like that?

curi:

relying on the opinions of other ppl like that should not be your whole strategy! that gives you basically no chance against something your culture gets systematically wrong. but it's a reasonable thing to try as a major strategy. it's non-obvious to come up with way better approaches.

curi:

you should also try to use your own mind and judgment some, and look into areas you think merit it.

curi:

another strategy is to consider things that people say to you personally. fans, friends, anonymous ppl willing to write comments on your blog... this has some merits like you get more customized advice and you can have back and forth discussion. it's different to be told "X is dangerous b/c Y" from a book vs. a person where you can ask some clarifying questions.

curi:

ppl sometimes claim this strategy is too time consuming and basically you have to ignore ~80% of all criticism you're aware of, according to your judgment, with no clear policies or principles to prevent biased judgments. i don't agree and have written a lot about this matter.

curi:

i think this kind of thing can be managed with reasonable, rational policies instead of basically giving up.

curi:

some of my writing about it: https://elliottemple.com/essays/using-intellectual-processes-to-combat-bias

curi:

most ppl have very few persons who want to share criticism with them anyway, so this article and some others have talked more about ppl with a substantial fan base who actually want to say stuff to them.

curi:

i think ppl should write down what their strategy is and do some transparency so they can be held accountable for actually doing it in addition to the strategy itself being something available for ppl to criticize.

curi:

a lot of times ppl's strategy is roughly "do whatever they feel like" which is such a bias enabler. and they don't even write down anything better and claim to do it. they will vaguely, non-specifically say they are doing something better. but no actionable or transparent details.

curi:

if they write something down they will want it to actually be reasonable. a lot of times they don't even put their policies into words, even in their own head. when they try to use words, they will see some stuff is unreasonable on their own.

curi:

if you can get ppl to write anything down what happens next is a lot of times they don't do what they said they would. sometimes they are lying pretty intentionally and other times they're just bad at it. either way, if they recognize their written policies are important and good, and then do something else ... big problem, even in their own view.

curi:

so what they really need are policies with some clear steps and criteria where it's really easy to tell if they are being done or not. not just vague stuff about using good judgment or doing lots of investigation of alternative views that represent material risks to the project. actual specifics like a list of topic areas to survey the current state of expert knowledge in with a blog post summarizing the research for each area.

curi:

as in they will write a blog post that gives info about things like what they read and what they think of it, rather than them just saying they did research and their final conclusion.

curi:

and they should have written policies about ways critics can get their attention, and for in what circumstances they will end or not start a conversation to preserve time.

curi:

if you don't do these things and you have some major irrationalities, then you're at high risk of a largely unproductive life. which is IMO what happens to most ppl.

curi:

most ppl are way more interested in social status hierarchy climbing than taking seriously that they're probably wrong about some highly consequential issues.

curi:

and that for some major errors they are making, better ideas are actually available and accessible right now. it's not just an error where no one knows better or only one hermit knows better.

curi:

there are a lot of factors that make this kind of analysis much harder for ppl to accept. one is they are used to viewing many issues as inconclusive. they deal with controversies by judging one side seems somewhat more right (or sometimes: somewhat higher social status) instead of actually figuring out decisive, clear cut answers.

curi:

and they think that's just kinda how reason works. i think that's a big error and it's possible to actually reach conclusions. and ppl actually do reach conclusions. they decide one side is better and act on it. they are just doing that without having any reason they regard as adequate to reach that conclusion...

curi:

some of my writing about how to actually reach conclusions re issues http://curi.us/1595-rationally-resolving-conflicts-of-ideas

curi:

this (possibility of reaching actual conclusions instead of just saying one side seems 60% right) is a theme which is found, to a significant extent, in some of the other thinkers i most admire like Eliyahu Goldratt, Ayn Rand and David Deutsch.

curi:

Rand wrote this:

curi:

Now some of you might say, as many people do: “Aw, I never think in such abstract terms—I want to deal with concrete, particular, real-life problems—what do I need philosophy for?” My answer is: In order to be able to deal with concrete, particular, real-life problems—i.e., in order to be able to live on earth.
You might claim—as most people do—that you have never been influenced by philosophy. I will ask you to check that claim. Have you ever thought or said the following? “Don’t be so sure—nobody can be certain of anything.” You got that notion from David Hume (and many, many others), even though you might never have heard of him. Or: “This may be good in theory, but it doesn’t work in practice.” You got that from Plato. Or: “That was a rotten thing to do, but it’s only human, nobody is perfect in this world.” You got it from Augustine. Or: “It may be true for you, but it’s not true for me.” You got it from William James. Or: “I couldn’t help it! Nobody can help anything he does.” You got it from Hegel. Or: “I can’t prove it, but I feel that it’s true.” You got it from Kant. Or: “It’s logical, but logic has nothing to do with reality.” You got it from Kant. Or: “It’s evil, because it’s selfish.” You got it from Kant. Have you heard the modern activists say: “Act first, think afterward”? They got it from John Dewey.
Some people might answer: “Sure, I’ve said those things at different times, but I don’t have to believe that stuff all of the time. It may have been true yesterday, but it’s not true today.” They got it from Hegel. They might say: “Consistency is the hobgoblin of little minds.” They got it from a very little mind, Emerson. They might say: “But can’t one compromise and borrow different ideas from different philosophies according to the expediency of the moment?” They got it from Richard Nixon—who got it from William James.

curi:

which is about how ppl are picking up a bunch of ideas, some quite bad, from their culture, and they don't really know what's going on, and then those ideas affect their lives.

curi:

and so ppl ought to actually do some thinking and learning for themselves to try to address this.

curi:

broadly, a liberal arts education should have provided this to ppl. maybe they should have had it by the end of high school even. but our schools are failing badly at this.

curi:

so ppl need to fill in the huge gaps that school left in their education.

curi:

if they don't, then to some extent they are at the mercy of the biases of their teachers. not even their own biases or the mistakes of their culture in general.

curi:

schools are shitty at teaching ppl abstract ideas like an overview of the major philosophers and shitty at teaching practical guidelines like "leave 1/3 of your time slots unscheduled" and "leave at least 1/3 of your income for optional, flexible stuff. don't take on major commitments for it"

curi:

(this is contextual. like with scheduling, if you're doing shift work and you aren't really expected to think, then ok the full shift can be for doing the work, minus some small breaks. it's advice more for ppl who actually make decisions or do knowledge work. still applies to your social calendar tho.)

curi:

(and actually most ppl doing shift work should be idle some of the time, as Goldratt taught us.)

curi:

re actionable steps, above i started with addressing the risky bets / risky project premises. with first brainstorming things on the list and organizing into categories. but that isn't where project planning starts.

curi:

it starts with more like

curi:

goal (1 sentence). how the goal will be accomplished (outline. around 1 paragraph worth of text. bullet points are fine)

curi:

resource usage for major, relevant resource categories (very rough ballpark estimates, e.g. 1 person or 10 or 100 ppl work on it. it takes 1 day, 10 days, 100 days. it costs $0, $1000, $1000000.)

curi:

you can go into more detail, those are just minimums. often fine to begin with.
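
for example, a minimal sketch of what those minimums could look like written down (made-up placeholder values, just illustrating the format, not a real plan):

    # hypothetical minimal project plan: goal, outline, rough resource ballparks
    project_plan = {
        "goal": "publish a blog post series summarizing expert views on topic X",  # 1 sentence
        "how": [  # outline, around 1 paragraph worth; bullet points are fine
            "list the main topic areas",
            "survey the current state of expert knowledge in each area",
            "write one summary blog post per area",
        ],
        "resources": {  # very rough ballpark estimates
            "people": 1,
            "days": 10,
            "dollars": 0,
        },
    }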

curi:

for big, complicated projects you may need a longer outline to say the steps involved.

curi:

then once u have roughly a goal and a plan (and the resource estimates help give concrete meaning to the plan), then you can look at risks, ways it may fail.

curi:

the goal should be clearly stated so that someone could clearly evaluate potential outcomes as "yes that succeeded" or "no, that's a failure"

curi:

if this is complicated, you should have another section giving more detail on this.

curi:

and do that before addressing risks.

curi:

another key area is prerequisites. can do before or after risks. skills and knowledge you'll need for the project. e.g. "i need to know how to wash a test tube". especially notable are things that aren't common knowledge and you don't already know or know how to do.

curi:

failure to succeed at all the prerequisites is one of the risks of a project. the prerequisites can give you some ideas about more risks in terms of intellectual bets being made.

curi:

some prerequisites are quite generic but merit more attention than they get. e.g. reading skill is something ppl take for granted that they have, but it's actually an area where most ppl could get value from improving. and it's pretty common ppl's reading skills are low enough that it causes practical problems if they try to engage with something. this is a common problem with intellectual writing but it comes up plenty with mundane things like cookbooks or text in video games that provides information about what to do or how an ability works. ppl screw such things up all the time b/c they find reading burdensome and skip reading some stuff. or they read it fast, don't understand it, and don't have the skill to realize they missed stuff.

curi:

quite a few writers are not actually as good at typing as they really ought to be, and it makes their life significantly worse and less efficient.

curi:

and non-writers. cuz a lot of ppl type stuff pretty often.

curi:

and roughly what happens is they add up all these inefficiencies and problems, like being bad at typing and not knowing good methods for resolving family conflicts, and many others, and the result is they are overwhelmed and think it'd be very hard to find time to practice typing.

curi:

their inefficiencies take up so much time they have trouble finding time to learn and improve.

curi:

a lot of ppl's lives look a lot like that.


Elliot Temple | Permalink | Messages (53)

What Is an Impasse?

An impasse is a reason (from the speaker’s pov (point of view)) that the discussion isn’t working.

Impasses take logical priority over continuing the discussion. It doesn’t make sense to keep talking about the original topic when someone thinks that isn’t working.

An impasse chain is an impasse about a discussion of an impasse. The first impasse, about the original topic, is impasse 1. If discussion of impasse 1 reaches an impasse, that’s impasse 2. If discussion of impasse 2 reaches an impasse, that’s impasse 3. And so on.

A chain of impasses is different than multiple separate impasses. In a chain, each link is attached to the previous link. By contrast, multiple separate impasses would be if someone gives several reasons that the original discussion isn’t working. Each of those impasses is about the original discussion, rather than being linked to each other.

When there is a chain of impasses, the most recent (highest number) impasse takes priority over the previous impasses. Impasse 2 is a reason, from the speaker’s pov, that discussion of impasse 1 isn’t working. Responding about impasse 1 at that point doesn’t make sense from his pov. It comes off as trying to ignore him and his pov.

Sometimes people try to solve a problem without saying what they’re doing. Instead of discussing an impasse, they try to continue the prior discussion but make changes to fix the problem. But they don’t acknowledge the problem existed, say what they’re doing to fix it, ask if that is acceptable from the other person’s pov, etc. From the pov of the person who brought up the impasse, this looks like being ignored because the person doesn’t communicate about the impasse and tries to continue the original topic. The behavior looks very similar to a person who thinks the impasse is stupid and wants to ignore it for that reason. And usually when people try to silently solve the problem, they don’t actually know enough about it (since they asked no clarifying questions) in order to get it right on the first try (even if they weren’t confusing the other person by not explaining what they were doing, usually their first guess at a solution to the impasse won’t work).

This non-communicated-problem-solving-attempt problem is visible when people respond at the wrong level of discussion. Call the original topic level 0, the first impasse level 1, the second impasse level 2, the third impasse level 3, and so on. If level 3 has been reached and then someone responds to level 2, 1 or 0, then they’re not addressing the current impasse. They either are ignoring the problem or trying to solve it without explaining what they’re doing. Similarly, if the current level is 1, and someone responds at level 0, they’re making this error.
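
Here’s a small sketch of that level bookkeeping (my own illustration; the function names are made up, not from the article): level 0 is the original topic, each new impasse becomes the next level, and a response only engages the current state of the discussion if it’s aimed at the highest level.

    # Illustrative only: model the discussion as a list of levels.
    levels = ["original topic"]  # index 0 = level 0

    def raise_impasse(description):
        # An impasse about the current level becomes the next, higher level.
        levels.append(description)
        return len(levels) - 1  # the new current level number

    def addresses_current_level(response_level):
        return response_level == len(levels) - 1

    raise_impasse("impasse 1: the discussion of the topic isn't working")
    raise_impasse("impasse 2: the discussion of impasse 1 isn't working")
    print(addresses_current_level(2))  # True: responds to the latest impasse
    print(addresses_current_level(0))  # False: ignores impasses 1 and 2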

The above is already explained, in different words with more explanation, in my article Debates and Impasse Chains.


Elliot Temple | Permalink | Messages (7)

IGCs

IGCs are a way of introducing Yes or No Philosophy and Critical Fallibilism. I'm posting this seeking feedback. Does this make sense to you so far? Any objections? Questions? Doubts? Ideas that are confusing?


Ideas cannot be judged in isolation. We must know an idea’s goal or purpose. What problem is it trying to solve? What is it for? And what is the context?

So we should judge IGCs: {idea, goal, context} triples.

The same idea, “run fast”, can succeed in one context (a foot race) and fail in another context (a pie eating contest). And it can succeed at one goal (win the race) but fail at another goal (lose the race to avoid attention).

Think in terms of evaluating IGCs not ideas. A core question in thinking is: Does this idea succeed at this goal in this context? If you change any one of those parts (idea, goal or context) then it’s a different question and you may get a different result.
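
As a rough sketch (my own illustration, not part of the original post), an IGC can be written as a simple triple and evaluated as a whole, using the “run fast” example from above:

    # Illustrative sketch: evaluate {idea, goal, context} triples, not bare ideas.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class IGC:
        idea: str
        goal: str
        context: str

    # Hypothetical evaluation table: same idea, different goals/contexts, different results.
    evaluations = {
        IGC("run fast", "win the race", "foot race"): True,
        IGC("run fast", "win the contest", "pie eating contest"): False,
        IGC("run fast", "lose the race to avoid attention", "foot race"): False,
    }

    def succeeds(igc):
        # Binary judgment: does this idea succeed at this goal in this context?
        return evaluations.get(igc, False)

    print(succeeds(IGC("run fast", "win the race", "foot race")))  # True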

There are patterns in IGC evaluations. Some ideas succeed at many similar goals in a wide variety of contexts. Good ideas usually succeed at broad categories of goals and are robust enough to work in a fairly big category of contexts. However, a narrow, fragile idea can be valuable sometimes. (Narrow means that the idea applies to a small range of goals, and fragile means that many small adjustments to the context would cause the idea to fail.)

There are infinitely many logically possible goals and contexts. Every idea is in infinitely many IGCs that don’t work. Every idea, no matter how good, can be misused – trying to use it for a goal it can’t accomplish or in a context where it will fail.

Whether there are some universal ideas (like arithmetic) that can work in all contexts is an open question. Regardless, all ideas fail at many goals. And there are many more ways to be wrong than right. Out of all possible IGCs, most won’t work. Totally random or arbitrary IGCs are very unlikely to work (approximately a zero percent chance of working).

Truth is IGC success – the idea works at its purpose. Falsehood or error is an IGC that won’t work. Knowledge means learning about which IGCs work, and why, and the patterns of IGC success and failure.

So far, this is not really controversial. IGCs are not a standard way of explaining these issues, but they’re reasonably compatible with many common views. Many people would be able to present their beliefs using IGC terminology without changing their beliefs. I’ve talked about IGCs because they’re more precise than most alternatives and make it easier to understand my main point.

People believe that we can evaluate both whether an idea succeeds at a goal (in a context) and how well it does. There’s binary success or failure and also degree of success. Therefore, it’s believed, we should reject ideas that will fail and then, among the many that can succeed, choose an idea that will bring a high degree of success and/or a high probability of success.

I claim that this approach is fundamentally wrong. We can and should use only decisive, binary judgments of success or failure.

The main cause of degree evaluations of ideas is vagueness, especially vague goals.


I'll stop there for now. Please post feedback on what it says so far (rather than on e.g. me not yet explaining vague goals).


Elliot Temple | Permalink | Messages (9)

Some Thoughts on Learning Philosophy

You need to know why you’re learning something in order to know when you’re done. What level of perfection does it need to be learned to? Which details should be learned and which skipped? That depends on its purpose.

At first, you can go by intuition or conventional defaults for how well to learn something, but it’s important at some point to start getting some control over this and making it more intentional and chosen.

To get a grasp on the purpose of learning, you need a tree (or graph). Writing it down helps clarify it in your mind. If you think about it without writing it down, there’s still information in your head that is logically equivalent to a tree (or graph) structure. If you have a goal and something you plan to do that’s related to the goal, then that is a tree: the goal is the root node and the relevant action is a descendant.

A tree can indicate some things you’re hoping to build up to. E.g. the root node is “write well” and then “learn grammar” is one of its descendants. But those aren’t specific. How will you know when you succeeded?

It’s OK to sketch out trees with blank parts. You have the root node, then don’t specify everything, and then you get to the grammar node. You don’t have to know exactly what’s in between to know there’s a connection there. Figuring it out is useful though. It’s better to have something pretty generic like “learn mechanics of writing” in between instead of leaving it blank.

If you want to be able to write an article sharing your ideas about dinosaurs so that three of your friends can understand it, that’s more specific. That clearer root node gives more meaning to the “learn grammar” node below it. You can learn just the grammar that’s relevant to the goal. It helps you know when to move on. For example, you can write understandably to your three friends without using any colons or semi-colons. But you will need to understand periods, and you’ll probably want to use a few commas and question marks. And you’ll need to understand what a sentence is – not in full detail but at least the basics.
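
As a sketch (my own example nodes, not from the article), that tree could be written down something like this, with the specific root goal giving meaning to the nodes below it:

    # Illustrative goal tree: root goal at the top, sub-goals as children.
    goal_tree = {
        "write a dinosaur article 3 friends can understand": {
            "learn mechanics of writing": {
                "learn grammar": {
                    "periods": {},
                    "basic commas and question marks": {},
                    # colons/semi-colons omitted: not needed for this goal
                },
                "learn vocabulary": {
                    "dinosaur words (e.g. Cretaceous)": {},
                    # harder general vocabulary omitted: not needed for this goal
                },
            },
        },
    }

    def print_tree(node, depth=0):
        for name, children in node.items():
            print("  " * depth + name)
            print_tree(children, depth + 1)

    print_tree(goal_tree)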

Another descendant node is “learn vocabulary”. Since the goal relates to dinosaurs, you’ll need some uncommon words like “Cretaceous”, but you won’t need to know “sporadically” or “perplexity” (which are sometimes called “SAT words” due to showing up on the SAT college-entrance test – if your goal were to get into more prestigious colleges then you’d need to learn different vocabulary).

Bottlenecks and breakpoints are important too. Which areas actually deserve much of your attention? Which are important to your goal and should be focused on? Which aren’t? Why? Usually you can get most stuff to a “good enough” level with little attention and then focus most of your attention on a few areas that will make a big difference to the outcome. If you can’t do that – if there are a lot of hard parts – then the project as a whole is too advanced for you and therefore needs to be divided into more manageable sub-projects. The number of sub-projects you end up with gives you a decent indication of project difficulty. If you have to divide it up into 500 parts to get them into manageable chunks, then it’s a big, hard project overall! If it’s 3 chunks then it’s harder than the average project but not too bad.

A bottleneck is a limiting factor, aka a constraint. If you do better in that area, it translates to a better outcome on the final goal. Most things aren’t bottlenecks. E.g. consider a chain. If you reinforce most links, it won’t make the overall chain stronger, because they weren’t the weakest link anyway. Doing better in that area (that link is stronger) doesn’t translate to more success at the goal (chain holds more weight). But if you find the weakest link – the bottleneck – and reinforce that link, then you’ll actually have a positive impact on the goal.
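
A tiny sketch of the weakest-link point (made-up numbers, just my illustration): the chain only gets stronger if you reinforce the bottleneck link.

    # Illustrative: chain strength is set by its weakest link.
    def chain_strength(link_strengths):
        return min(link_strengths)

    print(chain_strength([100, 80, 30, 90]))   # 30
    print(chain_strength([100, 80, 30, 200]))  # still 30: reinforced a non-bottleneck link
    print(chain_strength([100, 80, 60, 90]))   # 60: reinforcing the weakest link helped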

A breakpoint is a significant, distinguishable improvement. It makes some kinda meaningful difference instead of just being 0.003% better (who cares?). For example, I want to buy something that costs $20. Then there’s a breakpoint at $20. If I have $19 or less, I can’t buy it. If I have $20 or more, I can buy it. The incremental change of gaining $1 from $19 to $20 crosses the breakpoint and makes a big difference (buy instead of can’t buy). But any other $1 doesn’t matter so much. If I go from $15 to $16 or $33 to $34 it doesn’t change the outcome. More resources is generally a good thing, and money is generic enough to use on some other project later, but it’s important to figure out what will make important differences and pursue that. If we optimize things that don’t matter much, we can spend our whole lives without achieving much. There are so many details that we could pay attention to that they could consume all our time if we let them.
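
And a similar sketch for the breakpoint idea, using the $20 example from the paragraph above (the numbers are from the text; the code is just my illustration):

    # Illustrative: the outcome only changes when a breakpoint is crossed.
    PRICE = 20  # the thing I want to buy costs $20

    def can_buy(dollars):
        return dollars >= PRICE

    for dollars in (15, 16, 19, 20, 33, 34):
        print(dollars, can_buy(dollars))
    # Only the step from $19 to $20 changes the outcome; $15->$16 and $33->$34 don't.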

More specific goals are easier to achieve. More organized approaches are easier to succeed with. Some amount of organized planning – like connecting something to a clearer goal or sub-goal – helps you figure out what’s important and what’s “good enough”.

If you want to learn much philosophy or be much of a general intellectual, you need to be a decent reader and a decent writer so communication to and from you can happen in writing. And you need some ability to organize ideas and organize your life/time. It doesn’t have to be perfect but it has to work OK. And you need some general competence at most of the common knowledge that most people in our society have. And you need some interest in understanding things and some curiosity. And you need some ability to judge stuff for yourself: Does this make sense to you? Are you satisfied? And you need some ability to change and to consider negative things without getting too emotional. Those things are general purpose enough that it doesn’t really matter what specific types of ideas interest you the most, e.g. epistemology, science or economics, they’re going to be useful regardless.


Elliot Temple | Permalink | Messages (0)

Fallible Justificationism

This is adapted from a Feb 2013 email. I explain why I don't think all justificationism is infallibilist. Although I'm discussing directly with Alan, this issue came up because I'm disagreeing with David Deutsch (DD). DD claims in The Beginning of Infinity that the problem with justificationism is infallibilism:

To this day, most courses in the philosophy of knowledge teach that knowledge is some form of justified, true belief, where ‘justified’ means designated as true (or at least ‘probable’) by reference to some authoritative source or touchstone of knowledge. Thus ‘how do we know . . . ?’ is transformed into ‘by what authority do we claim . . . ?’ The latter question is a chimera that may well have wasted more philosophers’ time and effort than any other idea. It converts the quest for truth into a quest for certainty (a feeling) or for endorsement (a social status). This misconception is called justificationism.

The opposing position – namely the recognition that there are no authoritative sources of knowledge, nor any reliable means of justifying ideas as being true or probable – is called fallibilism.

DD says fallibilism is the opposing position to justificationism and that justificationists are seeking a feeling of certainty. And when I criticized this, DD defended this view in discussion emails (rather than saying that's not what he meant or revising his view). DD thinks justificationism necessarily implies infallibilism. I disagree. I believe that some justificationism isn't infallibilist. (Note that DD has a very strong "all" type claim and I have a weak "not all" type claim. If only 99% of justificationism is infallibilist, then I'm right and DD is wrong. The debate isn't about what's common or typical.)

Alan Forrester wrote:

[Justification is] impossible. Knowledge can't be proven to be true since any argument that allegedly proves this has to start with premises and rules of inference that might be wrong. In addition, any alleged foundation for knowledge would be unexplained and arbitrary, so saying that an idea is a foundation is grossly irrational.

I replied:

But "justified" does not mean "proven true".

I agree that knowledge cannot be proven true, but how is that a complete argument that justification is impossible?

And Alan replied:

You're right, it's not a complete explanation.

Justified means shown to be true or probably true. I didn't cover the "probably true" part. The case in which something is claimed to be true is explicitly covered here. Showing that a statement X is probably true either means (1) showing that "statement X is probably true" is true, or it means that (2) X is conjectured to be probably true. (1) has exactly the same problem as the original theory.

In (2) X is admitted to be a conjecture and then the issue is that this conjecture is false, as argued by David in the chapter of BoI on choices. I don't label that as a justificationist position. It is mistaken but it is not exactly the same mistake as thinking that stuff can be proved true or probably true.

In parallel, Alan had also written:

If you kid yourself that your ideas can be guaranteed true or probably true, rather than admitting that any idea you hold could be wrong, then you are fooling yourself and will spend at least some of your time engaged in an empty ritual of "justification" rather than looking for better ideas.

I replied:

The basic theme here is a criticism of infallibilism. It criticizes guarantees and failure to admit one's ideas could be wrong.

I agree with this. But I do not agree that criticizing infallibilism is a good reply to someone advocating justificationism, not infallibilism. Because they are not the same thing. And he didn't say anything glaringly and specifically infallibilist (e.g. he never denied that any idea he has could turn out to be a mistake), but he did advocate justificationism, and the argument is about justification.

And Alan replied:

Justificationism is inherently infallibilist. If you can show that some idea is true or probably true, then when you do that you can't be mistaken about it being true or probably true, and so there's no point in looking for criticism of that idea.

My reply below responds to both of these issues.


Justificationism is not necessarily infallibilist. Justification does not mean guaranteeing ideas are true or probably true. The meaning is closer to: supporting some ideas as better than others with positive arguments.

This thing -- increasing the status of ideas in a positive way -- is what Popper calls justificationism and criticizes in Realism and the Aim of Science.

I'll give a quote from my own email from Jan 2013, which begins with a Popper quote, and then I'll continue my explanation below:

Realism and the Aim of Science, by Karl Popper, page 19:

The central problem of the philosophy of knowledge, at least since the Reformation, has been this. How can we adjudicate or evaluate the far-reaching claims of competing theories and beliefs? I shall call this our first problem. This problem has led, historically, to a second problem: How can we justify our theories or beliefs? And this second problem is, in turn, bound up with a number of other questions: What does a justification consist of? and, more especially: Is it possible to justify our theories or beliefs rationally: that is to say, by giving reasons -- 'positive reasons' (as I shall call them), such as an appeal to observation; reasons, that is, for holding them to be true, or at least 'probable' (in the sense of the probability calculus)? Clearly there is an unstated, and apparently innocuous, assumption which sponsors the transition from the first to the second question: namely, that one adjudicates among competing claims by determining which of them can be justified by positive reasons, and which cannot.

Now Bartley suggests that my approach solves the first problem, yet in doing so changes its structure completely. For I reject the second problem as irrelevant, and the usual answers to it as incorrect. And I also reject as incorrect the assumption that leads from the first to the second problem. I assert (differing, Bartley contends, from all previous rationalists except perhaps those who were driven into scepticism) that we cannot give any positive justification or any positive reason for our theories and our beliefs. That is to say, we cannot give any positive reasons for holding our theories to be true. Moreover, I assert that the belief we can give such reasons, and should seek for them is itself neither a rational nor a true belief, but one that can be shown to be without merit.

(I was just about to write the word 'baseless' where I have written 'without merit'. This provides a good example of just how much our language is influenced by the unconscious assumptions that are attacked within my own approach. It is assumed, without criticism, that only a view that lacks merit must be baseless -- without basis, in the sense of being unfounded, or unjustified, or unsupported. Whereas, on my view, all views -- good and bad -- are in this important sense baseless, unfounded, unjustified, unsupported.)

In so far as my approach involves all this, my solution of the central problem of justification -- as it has always been understood -- is as unambiguously negative as that of any irrationalist or sceptic.

If you want to understand this well, I suggest reading the whole chapter in the book. Please don't think this quote tells all.

Some takeaways:

  • Justificationism has to do with positive reasons.

  • Positive reasons and justification are a mistake. Popper rejects them.

  • The right approach to epistemology is negative, critical. With no compromises.

  • Lots of language is justificationist. It's easy to make such mistakes. What's important is to look out for mistakes and try to correct them. ("Solid", as DD recently used, was a similar mistake.)

  • Popper writes with too much fancy punctuation which makes it harder to read.

A key part of the issue is the problem situation:

How can we adjudicate or evaluate the far-reaching claims of competing theories and beliefs?

Justificationism is an answer to this problem. It answers: the theories and beliefs with more justification are better. Adjudicate in their favor.

This is not an inherently infallibilist answer. One could believe that his conception of which theories have how much justification is fallible, and still give this answer. One could believe that his adjudications are final, or one could believe that his adjudications could be overturned when new justifications are discovered. Infallibilism is not excluded nor required.


Looking at the big picture, there is the critical approach to evaluating ideas and the justificationist or "positive" approach.

In the Popperian critical approach, we use criticism to reject ideas. Criticism is the method of sorting out good and bad ideas. (Note that because this is the only approach that actually works, everyone does it whenever they think successfully, whether they realize it or not. It isn't optional.) The ideas which survive criticism are the winners.

In the justificationist approach, rather than refuting ideas with negative criticism, we build them up with positive arguments. Ideas are supported with supporting evidence and arguments. The ones we're able to support the most are the winners. (Note: this doesn't work, no successful thinking works this way.)

These two rival approaches are very different and very important. It's important to differentiate between them and to have words for them. This is why Popper named the justificationist approach, which had gone without a name because everyone took it for granted and didn't realize it had any rival or alternative approaches.

Both approaches are compatible with both infallibilism and fallibilism. They are metaphorically orthogonal to the issue of fallibility. In other words, fallibilism and justificationism are separate issues.

Fallibilism is about whether or not our evaluations of ideas should be subjected to revision and re-checking, or whether anything can be established with finality so that we no longer have to consider arguments on the topic, whether they be critical or justifying arguments.

All four combinations are possible:

Infallible critical approach: you believe that once socialist criticisms convince you capitalism is false, no new arguments could ever overturn that.

Infallible justificationist approach: you believe that once socialist arguments establish the greatness of socialism, then no new arguments could ever overturn that.

Fallible critical approach: you believe that although you currently consider socialist criticisms of capitalism compelling, new arguments could change your mind.

Fallible justificationist approach: you believe that although you currently consider socialist justifying arguments compelling (at establishing the greatness and high status of the socialism, and therefore its superiority to less justified rivals), you are open to the possibility that there is a better system which could be argued for even more strongly and justified even more and better than socialism.


BTW, there are some complicating factors.

Although there is an inherent asymmetry between positive and negative arguments (justifying and critical arguments), many arguments can be converted from one type to the other while retaining some of the knowledge.

For example, someone might argue that the single particle two slit experiment supports (justifies) the many-worlds interpretation of quantum physics. This can be converted into criticisms of rivals which are incompatible with the experiment. (You can convert the other way too, but the critical version is better.)

Another complicating factor is that justificationists typically do allow negative arguments. But they use them differently. They think negative arguments lower status. So you might have two strong positive arguments for an idea, but also one mild negative argument against it. This idea would then be evaluated as a little worse than a rival idea with two strong positive arguments but no negative arguments against it. But the idea with two strong positive arguments and one weak criticism would be evaluated above an idea with one weak positive argument and no criticism.

This is easier to express in numbers, but usually isn't expressed that way. E.g. one argument might add 100 justification and another adds 50, and then a minor criticism subtracts 10 and a more serious criticism subtracts 50, for a final score of 90. Instead, people say things like "strong argument" and "weak argument" and it's ambiguous how many weak arguments add up to the same positive value as a strong argument.

In justification, arguments need strengths. Why? Because simply counting up how many arguments each idea has for it (and possibly subtracting the number of criticisms) is too open to abuse by using lots of unimportant arguments to get a high count. So arguments must be weighted by their importance.

If you try to avoid this entirely, then justificationism stops functioning as a solution to the problem of evaluating competing ideas. You would have many competing ideas, each with one or more arguments on their side, and no way to adjudicate. To use justificationism, you have to have a way of deciding which ideas have more justification.

The critical approach, properly conceived, works differently than that. Arguments do not have strengths or weights, nor do we count them up. How can that be? How can we adjudicate between competing ideas without that? Because one criticism is decisive. What we seek are ideas we don't have any criticisms of. Those receive a good evaluation. Ideas we do have criticisms of receive a bad evaluation. (These evaluations are open to revision as we learn new things.) (Also there are only two possible evaluations in this system. The ideas we do have criticisms of, and the ideas we don't. If you don't do it that way, and you follow the logic of your approach consistently, you end up with all the problems of justificationism. Unless perhaps you have a new third approach.)
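
To make the contrast concrete, here's a small sketch (my own illustration; the weights are the example numbers from above): the justificationist approach adds up weighted positive and negative arguments into a score, while the critical approach allows only two evaluations, refuted or non-refuted.

    # Justificationist adjudication (illustrative): weighted positives minus weighted criticisms.
    def justification_score(positive_weights, criticism_weights):
        return sum(positive_weights) - sum(criticism_weights)

    print(justification_score([100, 50], [10, 50]))  # 90, as in the example above

    # Critical approach (illustrative): no weights, no counting; one criticism is decisive.
    def critical_evaluation(criticisms):
        return "refuted" if criticisms else "non-refuted"

    print(critical_evaluation([]))                      # non-refuted
    print(critical_evaluation(["one decisive flaw"]))   # refuted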


Elliot Temple | Permalink | Messages (0)

Problem Solving While Reading

I'd urge anyone who has trouble reading something to stop and do problem solving instead of ignoring the problem or giving up. This kind of thing is an opportunity to practice and improve.

You could e.g. take a paragraph you have trouble with and analyze it, possibly with a paragraph tree.

If you do that kind of activity many times, you will get better at reading that type of material and reading in general. You can automatize some of the analysis steps so, in the future, you automatically know some of the results without having to go through all the steps. A way to look at it is if you do those activities enough, you'll get faster at it, and also some of the conclusions will become predictable to you before you've consciously/explicitly done all the steps.

When stuff is hard, slow down and figure out the correct answer – the way you ideally want to do it – so you end up forming good habits (a habit of doing what you think is best when you go slowly and put in more effort) instead of bad habits.

This is the same as improving at other kinds of things, e.g. typing. If you’re typing incorrectly (e.g. hitting a key with the wrong finger, or looking at the keyboard while typing), you should slow down, fix the problems, then speed up only when you’re doing it the way you want to. It’s hard to fix errors while going fast. And you should avoid habit-forming amounts of repetition of the activity until you’re satisfied with the way you’re doing it.

You can never be perfect. It’s also important to sometimes change your habits after they’re formed. Sometimes you’ll learn something new and realize a habit or subconscious automatization should be changed. But forming habits/automatizations and then changing them soon after is inefficient; it’s more efficient to make a serious effort to get them right in the first place so you can reduce the need to change habits. You don’t want to form a habit that is worse than your current knowledge.


If you do this text analysis stuff consistently whenever there are hard parts, it will be disruptive to reading the book. It'll slow you way down and spread your reading out due to taking many breaks to practice. You won’t get much reading flow due to all the interruptions. Here are some options for dealing with that problem:

  1. It doesn't matter. Improving skills is the priority, not understanding the book. You can read the book later including rereading the sections you had many stops during.
  2. Read something else where you run into harder parts infrequently so stopping for every hard part isn't very disruptive.
  3. Make trees, outlines or other notes covering everything so you get an understanding of the book that way rather than from direct reading. E.g. do paragraph trees for every paragraph and then make section trees that put the paragraphs together, and then do trees that put the sections together, and keep doing higher level trees until you cover the whole book.
  4. Read a section at a time then go back and do the analysis and practice after finishing the section but before reading the next section, rather than stopping in the middle of a section. That'll let you read and understand a whole chunk at once (to your current standards). Analyzing/practicing/etc. in between sections shouldn't be very disruptive.

With option 4, it’s very important not to cheat and read multiple sections in a row while planning to go back to stuff eventually. Even if you try to go back later, the hard stuff won’t be fresh enough in your mind anymore. If you’re procrastinating on doing any analysis, it’s because you don’t actually want to do it. In that case you need to do problem solving about that. Why are you conflicted? Why does part of you want to improve intellectually and do learning activities, etc., while part of you doesn’t? What part doesn’t and what is its motivation?

Also how big a section should you use? It depends on the book (does it have natural break points often?) and your memory (if a section is too big you’ll forget stuff from the earlier parts) and your skill level. If a section is too big, you’ll also have too many hard parts you need to do (e.g. 20) which may be overwhelming or seem like too much work. Also by the time you analyze the first 19 hard parts, you won’t remember the 20th one because it’s been so long since you read the end of the section. And if you’re trying to analyze and revise how you understood 20 parts at once, it’s hard to take those all into account at once to update your understanding of what the book said. Doing it closer to “read something, analyze it right away to understand it correctly, keep reading” has clear advantages like letting you actually use your analysis to help your reading instead of the analysis being tacked on later and not actually being used. So you might need to use sections that are pretty short, like 2 or 3 pages long, which could give you more uninterrupted reading flow without being too much to deal with at once. You could do it based on reading time too, like maybe 5 or 10 minutes would be a reasonable chunk to read at once before you stop to analyze (depending on how many problems you’re having). Also if you have a big problem, like you’re really extra confused about a sentence, paragraph or argument, you may want to stop early.

Also, it’s important to analyze and practice regarding small problems and slightly hard parts, not just major problems. Some people only want to focus on the really visible problems, but optimizing smaller stuff will help you get really good at what you’re doing. Also if something is actually a small difficulty then working on it should go fast. If it takes a long time and seems like a hassle, then you needed the practice and it wasn’t that small for you after all. Though if it feels like a hassle, that means you’re conflicted and should investigate that conflict.

If you’re conflicted, here are relevant articles by me:

And I wrote a part 2 for this post:

Subconscious Reading; Conscious Learning; Getting Advanced Skills

And recorded a podcast:

Reading, Learning and the Subconscious | Philosophy Podcast


Elliot Temple | Permalink | Messages (0)

Subconscious Reading; Conscious Learning; Getting Advanced Skills

Yesterday I wrote about practicing when you find any hard parts while reading. I have more to say.

First, noticing it was hard is a visible problem. What you noticed is usually under 10% of the actual problem(s). The problem is probably at least 10x larger than you initially think. So don’t ignore it. When you find visible problems you should be really glad they weren’t hidden problems, and assume they might be the visible tip of an iceberg of problems, and investigate to see if there are more hard-to-find problems near the visible problem. A visible problem is a warning something is wrong that lets you know where to investigate. That’s really useful. Sometimes things go badly wrong and you get no warning and have no idea what’s going on. Lots of people react to visible problems by trying to get rid of them, which is shooting the messenger and making any other related problems harder to find. If you have a habit of “solving” problems just enough that you no longer see the problem, then you’re hiding all the evidence of your other less visible problems and entrenching them, and you’ll have chronic problems in your life without any idea about the causes because you got rid of all the visible clues that you could.

Second, if people practiced hard reading once a day (or once per reading session) regardless of how many hard parts they ran into, they would make progress. That would be good enough in some sense even though they ignored a bunch of problems. But why would you want to do that? What is the motivation there? What part of you wants to ignore a problem, keep going, and never analyze it? What do you think you’re getting out of getting more reading done and less problem solving done?

Are you reading a book that you believe will help you with other urgent problems even if you understand it poorly? Is finishing the book faster going to be more beneficial than understanding it well due to an urgent situation? Possible but uncommon. And if you’re in that situation where you urgently need to read a book and also your reading skill is inadequate to understand the book well, you have other problems. How did you get in that situation? Why didn’t you improve at reading sooner? Or avoid taking on challenges you wouldn’t be able to do with your current skills?

Do you think your current reading, when you find stuff hard to read, is actually adequate and fine? You just think struggling while reading – enough to notice it – is part of successful reading and the solution is extra work and/or a “nobody’s perfect” attitude? Your knowledge can never be perfect so what does it matter if there were visible flaws? It could be better! You could have higher standards.

If you notice reading being hard, your subconscious doesn’t fully know how to read it. Your reading-related habits and automatizations are not good enough. There are three basic ways to deal with that:

  1. Ignore the problem.
  2. Read in a more conscious way. Try to use extra effort to succeed at reading.
  3. Improve your automatizations so your subconscious can get better at reading.

I think a ton of people believe if they can consciously read it, with a big effort, then they do know how to read it, and they have nothing more to learn. They interpret it being hard as meaning they have to try harder, not as indicating they need better skills.

What are the problems with using conscious effort to read?

First, your subconscious isn’t learning what you read well in that case. So you won’t be able to implement it in your life. People have so many problems with reading something then not using it. There are two basic ways to use something in your life:

  1. You can use it by conscious effort. You can try extra hard every time you use it.
  2. You can learn it subconsciously and then use it in a natural, intuitive, normal way. This is how we use ideas the vast majority of the time.

We don’t have the energy and conscious attention to use most of our ideas consciously. Our subconscious has 99% of our mental resources. If you try to learn something in a conscious-effort-only way, you’re unlikely to get around to ever using it, because your conscious attention is already overloaded. It’s already a bottleneck. You’re already using it around maximum capacity. Your subconscious attention is a non-bottleneck. Teaching your subconscious to do things is the only way to get more done. If you learn something so you can only do/use it by conscious effort, then you will never do/use it unless you stop doing/using some other idea. You will have to cut something out to make room for it. But if you learn something subconsciously, then you can use it without cutting anything out. Your subconscious has excess capacity.

So if reading takes conscious effort, you’ll do way less of that reading. And then every idea and skill you learn from that conscious reading will require conscious effort to use, so the reading won’t change your life much. The combination of reading not improving your life, plus taking a lot of precious conscious effort, will discourage you from reading.

It’s possible to read with conscious effort, then do separate practice activities to teach your subconscious. Even if your subconscious doesn’t learn something by reading, it can still learn it in other ways. But people usually don’t do that. And it’s better if your subconscious can learn as much as possible while you read, so less practice is needed later. That’s more efficient. It saves time and effort.

Also you can’t read in a fully conscious way. You always use your subconscious some. If your subconscious is making lots of mistakes, you’re going to make more conscious mistakes. Your conscious reading will be lower quality than when your subconscious is supporting you better. You’ll have more misunderstandings. You can try to counter that by even more conscious effort, but ultimately your conscious mind is too limited and you need to use your subconscious as an effective ally. There is an upper limit on what you can do using only your conscious mental resources plus a little of your subconscious. If you add in effective use of your subconscious, the ceiling of your capabilities rises dramatically.

Also, if you’re reading by conscious effort, you might as well use it as practice and teach your subconscious. The right way to read by conscious effort involves things like making tree diagrams. If you do that a bunch, your subconscious can learn a lot of what you’re doing so that in the future you’ll sometimes intuitively know answers before you make the diagrams.

What people do with high-effort conscious reading often involves avoiding tree diagrams, outlines, or even notes. It’s like saying “I find this math problem hard, so I’m going to try really hard … but only using mental math.” Why!? I think they often just don’t know how to explicitly and consciously break it down into parts, organize the information, process it, etc. If you can’t write down what’s going on in a clear way – if you can’t get the information out of your head onto paper or computer – then the real problem is you don’t know how to read it consciously either. If you could correctly read it in a conscious way, you could write it down. If you had a proper explicit understanding of what you read, what would stop you from putting it into words and speaking them out loud, writing them down, communicating with others, etc? It’s primarily when we’re relying on our subconscious – or just failing – that we struggle to communicate.

People don’t do tree diagrams and other higher-effort conscious analysis mostly because they don’t know how. When they try to do higher effort conscious reading, they don’t actually know what they’re doing. They just muddle through and ignore lots of problems. They weren’t just having and ignoring subconscious reading problems. They were also having and ignoring conscious reading problems. Their conscious understanding is also visibly flawed.

What should be done? You need to figure out how to get it right consciously as step one of learning a skill. Then once you’re satisfied with how you do it consciously, you practice that and form good habits/automatizations in your subconscious. This is the general, standard pattern of how learning works.

If you just keep reading a bunch while being consciously confused, you’re forming bad subconscious habits and failing to make progress. You’re missing out on the opportunity to improve your reading skills. You’re a victim of your own low standards or pessimism. If you want to be a very good, rational thinker you need to get good at reading, both consciously and subconsciously. If you don’t do that, you’ll get stuck regarding fields like critical thinking and you’ll run into chronic problems with learning, with not using and acting on what you read and “learn” (because you can’t act on what you never learned properly – or even if you managed to learn it consciously that won’t work because your conscious is already too busy – to actually do something you have to either stop doing something else or else use your more plentiful subconscious resources).

If you want to get better at reading beyond whatever habits you picked up from our culture, school, childhood, etc., you have two basic options.

Option 1: Read a huge amount and you might very gradually get better. That works for some people but not everyone. It often has diminishing returns. If you’re bad at reading and rarely read, then reading 50 novels has a decent chance to help significantly. If you’re already experienced at reading novels, then you might see little to no improvements after reading more of them. This strategy is basically hoping your subconscious will figure out how to improve if you give it lots of opportunities.

Option 2: Consciously try to improve your reading. This means explicitly figuring out how reading works, breaking it down into parts, and treating it as something that you can analyze. This is where things like outlines, grammar, sentence trees, paragraph trees, and section trees come in. Those are ways of looking at text and ideas in a more conscious, intentional, organized, explicit way.

I think people resist working on conscious reading because it’s a hassle. They read mostly in a subconscious, automatic way. Their conscious mind is actually bad at reading and unable to help much. So when they first start trying to do conscious reading, they actually get worse at reading. They have to catch their conscious reading abilities up to their subconscious reading level before they can actually take the lead with their conscious reading and then start teaching their subconscious some improvements. I suspect people don’t like getting temporarily worse at reading when trying to do it more consciously so they avoid that approach and give up fast. They don’t consciously know what the problem is but they intuitively didn’t like an approach where they’re less able to read and actually quite bad at it. Their conscious reading is a mess so they’d rather stick with their current subconscious reading automatizations – but then it’s very hard to improve much.

The only realistic way to make a lot of progress and intentionally get really good at this stuff is to figure out how to approach reading and textual analysis consciously, gain conscious competence, then gain conscious higher skill level, then teach that higher skill level to your subconscious. If you just stick with your subconscious competence, it works better in the short term but isn’t a path to making much progress. If you’re willing to face your lack of conscious reading skills and you see the value in creating those skills, then you can improve. It’s very hard to learn and improve without doing it consciously. When you originally learned to read, your conscious reading ability was at least as good as your subconscious reading ability. But then you forgot a lot of your conscious reading skill after many years of reading mostly subconsciously. You don’t remember how you thought about reading when you were first learning it and were making a big conscious effort.

You do remember some things. You could probably consciously sound out the letters in a word if you wanted to. But you don’t need to. Your reading problems are more related to reading comprehension, not about reading individual words or letters. Doing elementary school reading comprehension homework is a perfectly reasonable place to start working on your conscious reading skills again. Maybe you’d quickly move up to harder stuff and maybe not and it’s OK either way. I’ve seen adults make errors when trying to read a short story aimed at third graders and then correctly answer some questions about what happened in the story. It’s good to test yourself in some objective ways. You need an answer key or some other people who can catch errors you miss. They don’t necessarily have to be better at reading than you. If you have a group of ten people who are similarly smart and skilled to you, you can all correct each other’s work. That will work reasonably well because you have different strengths and weaknesses. You’ll make some mistakes that other people don’t, and vice versa, even though on average your skill levels are similar. There will also be some systemic mistakes everyone in your group makes, but you can improve a lot even if you don’t address that.

Doing grammar and trees is a way to try to get better at reading than most people. It’s part of the path to being an advanced reader who knows stuff that most people don’t. But a lot of people should do some more standard reading comprehension work too, which is aimed at reducing reading errors you make and getting more used to practicing reading skills, but which isn’t aimed at being an especially advanced reader. I think a lot of people don’t want to do that because of their ego, their desire to appear and/or already be clever, and their focus on advanced skills. But you’re never going to be great at advanced skills unless you go back through all your beginner and intermediate skills and fix errors. You need a much higher level of mastery than is normal at the beginner and intermediate stuff in order to be able to build advanced skills on top of them. The higher you want to build skills above a level, the lower error rate you need at that level. The bigger your aspirations for advanced stuff, the more perfect you need your foundational knowledge to be to support a bunch of advanced knowledge built on top of it.

You can think of it in terms of software functions which call other functions (sub-functions) which call other functions (sub-sub-functions). The lower level functions, like sub-sub-sub-functions, are called more times. For every high level function you call, many many lower level functions are called. So the error rate of the lower level functions needs to be very very low or else you’ll get many, many errors because they’re used so much. This is approximate in some ways but the basic concept is the more you build on something – the more you’re relying on it and repeatedly reusing it – the more error-free it needs to be. If something gets used once a month, maybe it’s OK if it screws up 1% of the time and then you have to do problem solving. If something is used 10,000 times a day, and it’s a basic thing you never want to be distracted by, then it better have a very low error rate – an error rate below 1 in 100,000 per use is needed for it to cause a problem less often than once every 10 days on average.
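
The arithmetic behind that last claim, as a quick sketch (the 10,000 uses per day figure is from the paragraph above):

    # How often does a given per-use error rate cause a problem at 10,000 uses per day?
    uses_per_day = 10_000

    def days_between_errors(error_rate):
        errors_per_day = uses_per_day * error_rate
        return 1 / errors_per_day

    print(days_between_errors(1 / 100_000))    # ~10 days between problems
    print(days_between_errors(1 / 1_000_000))  # ~100 days between problems
    # So the error rate needs to be below 1 in 100,000 per use to cause a problem
    # less often than once every 10 days on average.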

So don’t lose self-esteem over needing to improve your basic or intermediate skills, knowledge and ideas. If you’re improving them to higher standards (lower error rates) than normal, then you aren’t just going back to school like a child due to incompetence. You’re trying to do something that most people can’t do. You’re trying to be better in a way that is relevant to gaining advanced skills that most people lack. You’re not just relearning what you should have learned in school. School teaches those ideas to kinda low standards. School teaches the ideas with error rates like 5%, and if you’re a smart person reading my stuff you’re probably already doing better than that at say a 1% error rate but now you need to revisit that stuff to get the error rate down to 0.0001% so it can support 10+ more levels of advanced knowledge above it.

For more information, see Practice and Mastery.

And I recorded a podcast: Reading, Learning and the Subconscious | Philosophy Podcast


Elliot Temple | Permalink | Messages (0)

Visible and Hidden Problems

Some problems are easier to see than others. If you look for problems, there are some that are pretty easy to find, and some that are hard to find. Some problems are so easy to find that you’ll find them without even looking for problems. Other problems are so hard to find that you could fail to find them after a lifetime of searching.

There are often many related problems. Having no money is a problem that’s easy to notice. But it’s not the whole story. What’s the cause or underlying issue? Maybe it’s because you disliked the jobs you tried and spend most of your time intentionally unemployed. That’s not very hidden either. Why do you dislike those jobs? Maybe you didn’t like being bossed around to do things that you consider unwise. Maybe you didn’t like being bossed around because you’d rather boss others around. Maybe you’re lazy. There are lots of potential problems here.

There can be many, many layers of problems, and the deeper layers are often harder to analyze, so the problems there are more hidden.

Hard to find problems can be impactful. People often see negative consequences in their lives but don’t understand enough about what is causing those consequences.

Like maybe you don’t have many friends and you want more. But you keep not really getting along with people. But you don’t know much about what’s going wrong. Or you might think you know what the problems are, but be wrong – it’s actually mainly something else you never thought of. People often try to solve the wrong problem.


One problem solving strategy people have is to find all the most visible, easy-to-find problems they can and solve them.

This is like going around and cutting off the tips of icebergs. You have these problem-icebergs and you get rid of the visible part and leave the hidden part as a trap. That actually makes things worse and will lead to more boats crashing because now the icebergs are still there but are harder to see. (Actually I’m guessing if you cut the tip of the iceberg off, then the rest would float up a little higher and become visible. But pretend it wouldn’t.)

Your visible problems are your guide to where your hidden problems are. They’re not a perfect, reliable or complete guide. But they give pretty good hints. Lots of your invisible problems are related to your visible problems. If you get rid of the visible problems and then start looking for more problems, it’ll be hard to find anything. You basically went around and destroyed most of your evidence about what invisible problems you have.


What should you do instead?

Don’t rush to make changes. Do investigations and post mortems when you identify problems. Look for related problems. Take your time and try to understand root causes more deeply.

Once you have a deeper understanding of the situation, you can try to come up with the right high-power solutions that will solve many related problems at once.

If you target a solution at one problem, you’re likely to fix it in a parochial, unprincipled way – put a band-aid on it.

If you figure out ten problems including some that were harder to see, and you come up with ten solutions, then each of the solutions is likely to be superficial.

But if you figure out ten problems and come up with one solution to address all ten at once, then that solution has high leverage. There’s some conceptual reasoning for how it works. It involves a good explanation. It has wider reach or applicability. It’s more principled or general purpose.

So, not only will this solution solve ten problems at once, it will probably solve twenty more you didn’t know about. It works on some whole categories of problems, not just one or a few specific problems. So it’ll also solve many similar problems that you didn’t even realize you had.


Elliot Temple | Permalink | Messages (0)

Big Picture Reasons People Give Up on Learning Philosophy

Why might people give up on learning philosophy or learning to be a great (critical) thinker?

I think maybe no one has ever quit my community while making rapid progress.

Maybe people only quit when they get fully stuck or progress gets too slow.

How/why do they get stuck?

People are very resistant to doing easy/childish/basic stuff. They want to do complex stuff which they think is more interesting, less boring, more impressive, more important, etc. When they do harder and more complicated stuff, I regard it as skipping steps/prerequisites which leads directly to an overwhelmingly high error rate. They may experience their high error rate as e.g. me having 10 criticisms for each of their posts, which they can't deal with so they might blame the messenger, me. They may be blind to their high error rate because they don't understand what they're doing enough to spot or understand the errors (due to the missing prerequisites, skipped steps) or because they have low standards (they're used to being partially confused and calling that success and moving on – that's how they have dealt with everything complicated since age 5).

People may be disorganized. If you successfully do many tiny projects which don't skip steps, that will only translate into substantive progress if you are following some plan/path towards more advanced stuff and/or you integrate multiple smaller things into more complex stuff.

People may have some hangup/bias and be unwilling to question/reconsider some particular idea.

People are often very hostile to meta discussion. This prevents a lot of problem solving, like doing workarounds. Like if they are biased about X, you could have a meta discussion about how to make progress in a way that avoids dealing with X. It’s completely reasonable to claim “You may be biased about X. I think you are. If you are and we ignore it and assume you aren’t, that could make you stuck. So let’s come up with a plan that works if you are biased about X and also works if you aren’t biased about X.” In other words, we disagree about something (whether you’re biased or wrong about X) and can’t easily agree, so we can come up with a plan that works regardless of who is right about the disagreement. People have trouble treating some of their knowledge as unreliable when it feels reliable to them. Their subconscious intuitions treat it as reliable, and they are bad at temporarily turning those off (in a selective way for just X) or relying on conscious thought processes for dealing with this specific thing. They’re also bad at quickly (and potentially temporarily) retraining their subconscious intuitions.

More broadly if there is any impasse in a discussion, you could meta-discuss a way to proceed productively that avoids assuming a conclusion about the impasse, but people tend to be unwilling to engage in that sort of (meta) problem solving. You can keep going productively in discussions, despite disagreements, if you are willing to come up with neutral plans for continuing that can get a good result regardless of who was right about the disagreement. But people usually won’t do that kind of meta planning and seem unwilling to take seriously that they might be wrong unless you actually convince them that they are wrong. They just want to debate the issue directly, and if that gets stuck, then there’s no way to make progress because they won’t do the meta technique. Or if they will do a meta level, they probably won’t do 5 meta levels to get past 5 disagreements (even with no nesting – just 5 separate, independent disagreements, which is easier than nested disagreements), so you’ll get stuck later.

The two big themes here are that people get stuck because they try to build advanced knowledge on an inadequate foundation and don’t want to work on the foundation, and that they have issues with problem solving: they get stuck on problems and won’t meta discuss the problems (talking about the problem itself, rather than continuing the original discussion).

Lots of this stuff happens alone. Like biased people might get stuck because they’re biased. And even if they realize they might be wrong or biased about a specific thing, they can still get stuck, just like they would if I had pointed out a potential error or bias.

One pattern I’ve seen is people make progress at first, and then the first time they run into a problem that they get stuck on for a week, they never solve it. That can lead to quitting pretty quickly or sometimes many months later if they keep trying other stuff. When trying other stuff, they will occasionally run into another problem they don’t solve, so the list of unsolved problems grows. They made initial progress by solving problems they found easy (ones their intuitions were good at or whatever), but unless they can solve problems they find hard, they are screwed in the long run.

Regarding going back to less complex stuff to improve knowledge quality, sometimes people try that but run into a few problems. One, they go back to stuff that’s a lot more basic than they’re used to and still make tons of errors, and they don’t want to go back way further. Two, they do some basic stuff but are not able to connect it to the more advanced stuff and use it – they aren’t organized enough, don’t integrate enough, do conscious review but don’t change their subconscious, or don’t understand the chain of connections from the basic stuff to the advanced stuff well enough.


Elliot Temple | Permalink | Messages (0)