IQ

This is a reply to Ed Powell writing about IQ.

I believe IQ tests measure a mix of intelligence, culture and background knowledge.

That's useful! Suppose I'm screening employees to hire. Is a smart employee the only thing I care about? No. I also want him to fit in culturally and be knowledgeable. Same thing with immigrants.

The culture and background knowledge measured by IQ tests isn't superficial. It's largely learned in early childhood and is hard, though possible, to change. I would expect assimilating to raise IQ scores on many IQ tests, just as learning arithmetic raises scores on many IQ tests for people who didn't know it before.

Many IQ test questions are flawed. They have ambiguities. But this doesn't make IQ tests useless. It just makes them less accurate, especially for people who are smarter than the test creators. Besides, task assignments from your teacher or boss contain ambiguities too, and you're routinely expected to know what they mean anyway. So it matters whether you can understand communications in a culturally normal way.

Here's a typical example of a flawed IQ test question. We could discuss the flaws if people are interested in talking about it. And I'm curious what people think the answer is supposed to be.

IQ tests don't give perfect foresight about an individual's future. So what? You don't need perfectly accurate screening for hiring, college admissions or immigration. Generally you want pretty good screening which is cheap. If someone comes up with a better approach, more power to them.

Would it be "unfair" to some individual that they aren't hired for a job they'd be great at because IQ tests aren't perfect? Sure, sorta. That sucks. The world is full of things going wrong. Pick yourself up and keep trying – you can still have a great life. You have no right to be treated "fairly". The business does have a right to decide who to hire or not. There's no way to making hiring perfect. If you know how to do hiring better, sell them the method. But don't get mad at hiring managers for lacking omniscience. (BTW hiring is already unfair and stupid in lots of ways. They should use more work sample tests and less social metaphysics. But the problems are largely due to ignorance and error, not conscious malice.)


Ed Powell writes:

Since between 60% and 80% of IQ is heritable, it means that their kids won't be able to read either. Jordan Peterson in one of his videos claims that studies show there are no jobs at all in the US/Canadian economies for anyone with an IQ below about 83. That means 85% of the Somalian immigrants (and their children!) are essentially unemployable. No immigration policy of the US should ignore this fact.

I've watched most of Jordan Peterson's videos. And I know, e.g., that the first video YouTube sandboxed in their new censorship campaign was about race and IQ.

I agree that it's unrealistic for a bunch of low IQ Somalians to come here and be productive in U.S. jobs. I think we agree on lots of conclusions.

But I don't think IQ is heritable in the normal sense of the word "heritable", meaning that it's controlled by genes passed on by parents. (There's also a technical definition of "heritable", which basically means correlation.) For arguments, see: Yet More on the Heritability and Malleability of IQ.
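To be concrete about that technical usage (this is the standard textbook definition from behavioral genetics, not anything specific to Powell's post; V_G and V_P are the conventional symbols for genetic and total phenotypic variance): heritability is a population variance ratio, estimated from correlations between relatives such as twins.

H^2 = V_G / V_P (broad-sense heritability; the narrow-sense version counts only additive genetic variance)

A figure like "60% to 80% heritable" is a statistic about variance in a particular population and environment. By itself it doesn't say what causes or controls intelligence in an individual, which is the point at issue below.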

I don't think intelligence is genetic. The studies claiming it's (partly) genetic basically leave open the possibility that it's a gene-environment interaction of some kind, which leaves open the possibility that intelligence is basically due to memes. Suppose parents in our culture give worse treatment to babies with black skin, and this causes lower intelligence. That's a gene-environment interaction. In this scenario, would you say that the gene for black skin is a gene for low intelligence? Even partly? I wouldn't. I'd say genes aren't controlling intelligence in this scenario, culture is (and, yes, our culture has some opinions about some genetic traits like skin color).

When people claim intelligence (or other traits) is due to ideas, they usually mean it's easy to change. Just use some willpower and change your mind! But memetic traits can actually be harder to change than genetic traits. Memes evolve faster than genes, and some old memes are very highly adapted to prevent themselves from being changed. Meanwhile, it's pretty easy to intervene to change your genetic hair color with dye.

I think intelligence is a primarily memetic issue, and the memes are normally entrenched in early childhood, and people largely don't know how to change them later. So while the mechanism is different, the conclusions are still similar to if it were genetic. One difference is that I'm hopeful that dramatically improved parenting practices will make a large difference in the world, including by raising people's intelligence.

Also, if memes are crucial, then current IQ score correlations may fall apart if there's a big cultural shift of the right kind. IQ test research only holds within some range of cultures, not in all imaginable cultures. But so what? It's not as if we're going to wake up in a dramatically different culture tomorrow...


I don't believe that IQ tests measure general intelligence – which I don't think exists as a single, well-defined thing. I have epistemological reasons for this which are complicated and differ from Objectivism on some points. I do think that some people are smarter than others. I do think there are mental skills, which fall under the imprecise term "intelligence", and have significant amounts of generality.

Because of arguments about universality (which we can discuss if there's interest), I think all healthy people are theoretically capable of learning anything that can be learned. But that doesn't mean they will! What stops them isn't their genes, it's their ideas. They have anti-rational memes from early childhood which are very strongly entrenched. (I also think people have free will, but often choose to evade, rationalize, breach their integrity, etc.)

Some people have better ideas and memes than others. So I share a conclusion with you: some people are dumber than others in important very-hard-to-change ways (even if it's not genetic), and IQ test scores do represent some of this (imperfectly, but meaningfully).

For info about memes and universality, see The Beginning of Infinity.

And, btw, of course there are cultural and memetic differences correlated with e.g. race, religion and nationality. For example, on average, if you teach your kids not to "act white" then they're going to turn out dumber.

So, while I disagree about many of the details regarding IQ, I'm fine with a statement like "criminality is mainly concentrated in the 80-90 IQ range". And I think IQ tests could improve immigration screening.


Read my followup post: IQ 2


Elliot Temple | Permalink | Messages (2)

Banned from "Critical Rationalist" Facebook Group

Matt Dioguardi owns a Facebook group with around 5000 members. The membership believes it's an open discussion forum with relaxed rules (just post all you want that's related to Popper "in some manner"), because that's what it publicly states, in writing.

However, I was banned because I didn't like some of Matt's friends' comments and blocked them on Facebook to stop seeing their messages. I don't need toxic people in my life.

I would never dream of banning someone from the Fallible Ideas forum because they set up a mail rule to block posts by my friends Justin and Alan. Some of Matt's friends, like Justin and Alan, were moderators – so what?

Prior to that I had some posts blocked for reasons like mentioning Ayn Rand (in addition to Popper) or mentioning parenting and education (from a Popperian perspective, and in addition to talking about how to spread Critical Rationalist ideas). Discussing the moderation had been unproductive (they refused to answer clarifying questions about the policies or update the stated rules to the actual rules). Some of the forum discussions had also been unproductive (e.g. I repeatedly asked some flamers to stop harassing me, and they did the passive-aggressive version of telling me to go fuck myself – then redoubled their efforts to harass me). I didn't flame anyone.

So I decided it was time to stop engaging with the toxic people. I knew I was at risk of being banned if I did some further action that wasn't appreciated and there was no problem-solving discussion to address it. I decided to risk this because I thought talking with the toxic people wouldn't solve problems and could actually cause problems. But they wouldn't just leave me alone. For my decision to refocus on productive discussion, and ignore everything else, I was banned. (Dioguardi stated the reason for the ban, it's not speculation.)

Some of them clearly didn't like me (e.g. one of the moderators was also one of the repeat flamers) and wanted an excuse to get rid of me. But what kind of excuse is this? Nothing was wrong with anything I posted, and they banned me anyway!

Update: They also banned anyone from posting a link to anything I wrote.


Elliot Temple | Permalink | Messages (21)

Discussion About the Importance of Explanations with Andrew Crawshaw

From Facebook:

Justin Mallone:

The following excerpt argues that explanations are what is absolutely key in Popperian philosophy, and that Popper over-emphasizes the role of testing in science, but that this mistake was corrected by physicist and philosopher David Deutsch (see especially the discussion of the grass cure example). What do people think?
(excerpted from: https://curi.us/1504-the-most-important-improvement-to-popperian-philosophy-of-science)

Most ideas are criticized and rejected for being bad explanations. This is true even in science where they could be tested. Even most proposed scientific ideas are rejected, without testing, for being bad explanations.
Although tests are valuable, Popper's over-emphasis on testing mischaracterizes science and sets it further apart from philosophy than need be. In both science and abstract philosophy, most criticism revolves around good and bad explanations. It's largely the same epistemology. The possibility of empirical testing in science is a nice bonus, not a necessary part of creating knowledge.

In [The Fabric of Reality], David Deutsch gives this example: Consider the theory that eating grass cures colds. He says we can reject this theory without testing it.
He's right, isn't he? Should we hire a bunch of sick college students to eat grass? That would be silly. There is no explanation of how grass cures colds, so nothing worth testing. (Non-explanation is a common type of bad explanation!)
Narrow focus on testing -- especially as a substitute for support/justification -- is one of the major ways of misunderstanding Popperian philosophy. Deutsch's improvement shows how its importance is overrated and, besides being true, is better in keeping with the fallibilist spirit of Popper's thought (we don't need something "harder" or "more sciency" or whatever than critical argument!).

Andrew Crawshaw: I see, but it might turn out that grass cures cold. This would just be an empirical fact, demanding scientific explanation.

TC: Right, and if a close reading of Popper yielded anything like "test every possible hypothesis regardless of what you think of it", this would represent an advancement over Popper's thought. But he didn't suggest that.

Andrew Crawshaw: We don't reject claims of the form indicated by Deutsch because they are bad explanations. There are plenty of dangling empirical claims that we still hold to be true but which are unexplained. Deutsch is mistaking the import of his example.

Elliot Temple:

There are plenty of dangling empirical claims that we still hold to be true but which are unexplained.

That's not the issue. Are there any empirical claims we have criticism of, but which we accept? (Pointing out that something is a bad explanation is a type of criticism.)

Andrew Crawshaw: If you think that my burden is to show that there are empirical claims that are refuted but that we accept, then you have not understood my criticism.

For example

Grass cures colds.

Is of the same form as

aluminium hydroxide contributes to the production of a large quantity of antibodies.

Both are empirical claims, but they are not explanatory. That does not make them bad

Neither of them are explanations. One is accepted and the other is not.

It's not good saying that the former is a bad explanation.

The latter has not yet been properly explained by science.

Elliot Temple: The difference is we have explanations of how aluminum hydroxide works, e.g. from wikipedia " It reacts with excess acid in the stomach, reducing the acidity of the stomach content"

Andrew Crawshaw: Not in relation to its antibody mechanism.

Elliot Temple: Can you provide reference material for what you're talking about? I'm not familiar with it.

Andrew Crawshaw: I can, but it is still irrelevant to my criticism. Which is that they are both not explanatory claims, but one is held as true while the other not.

They are low-level empirical claims that call out for explanation, they don't themselves explain. Deutsch is misemphasising.

https://www.chemistryworld.com/news/doubts-raised-over-vaccine-boost-theory/3001326.article

Elliot Temple: your link is broken, and it is relevant b/c i suspect there is an explanation.

Andrew Crawshaw: It's still irrelevant to my criticism. Which is that we often accept things like rules of thumb, even when they are unexplained. They don't need to be explained for them to be true or for us to class them as true. Miller talks about this extensively. For instance, strapless evening gowns were not understood scientifically for ages.

Elliot Temple: i'm saying we don't do that, and you're saying you have a counter-example but then you say the details of the counter-example are irrelevant. i don't get it.

Elliot Temple: you claim it's a counter example. i doubt it. how are we to settle this besides looking at the details?

Andrew Crawshaw: My criticism is that calling such a claim a bad explanation is irrelevant to those kinds of claims. They are just empirical claims that beg for explanation.

Elliot Temple: zero explanation is a bad explanation and is a crucial criticism. things we actually use have more explanation than that.

Andrew Crawshaw: So?

Elliot Temple: so DD and I are right: we always go by explanations. contrary to what you're saying.

Andrew Crawshaw: We use aluminium hydroxide for increasing antibodies, and strapless evening gowns, even before they were explained.

Elliot Temple: i'm saying i don't think so, and you're not only refusing to provide any reference material about the matter but you claimed such reference material (indicating the history of it and the reasoning involved) is irrelevant.

Andrew Crawshaw: I have offered it. I re-edited my post.

Elliot Temple: please don't edit and expect me to see it, it usually doesn't show up.

Andrew Crawshaw: You still have not criticised my claim. The one comparing the two sentences which are of the same form, yet one is accepted and one not.

Elliot Temple: the sentence "aluminium hydroxide contributes to the production of a large quantity of antibodies." is inadequate and should be rejected.

the similar sentence with a written or implied footnote to details about how we know it would be a good claim. but you haven't given that one. the link you gave isn't the right material: it doesn't say what aluminium hydroxide does, how we know it, how it was discovered, etc

Elliot Temple: i think your problem is mixing up incomplete, imperfect explanations (still have more to learn) with non-explanation.

Andrew Crawshaw: No, it does not. But to offer that would be to explain. Which is exactly what I am telling is irrelevant.

What is relevant is whether the claim itself is a bad explanation. It's just an empirical claim.

The point is just that we often have empirical claims that are not explained scientifically yet we accept them as true and use them.

Elliot Temple: We don't. If you looked at the history of it you'd find there were lots of explanations involved.

Elliot Temple: I guess you just don't know the history either, which is why you don't know the explanations involved. People don't study or try things randomly.

Elliot Temple: If you could pick a better known example which we're both familiar with, i could walk you through it.

Andrew Crawshaw: There was never an explanation of how bridges worked. But there were rules of thumb for how to build them. There are explanations of how to use aluminium hydroxide but its actual mechanism is unknown.

Elliot Temple: what are you talking about with bridges. you can walk on strong, solid objects. what do you not understand?

Andrew Crawshaw: That's not how they work. I am talking about the scientific explanation of forces and tensions. It was not always understood despite the fact that they were built. This is the same with beavers' dams; they don't know any of the explanations of how to build dams.

Elliot Temple: you don't have to know everything that could be known to have an explanation. understanding that you can walk on solid objects, and they can be supported, etc, is an explanation, whether you know all the math or not. that's what the grass cure for the cold lacks.

Elliot Temple: the test isn't omniscience, it's having a non-refuted explanation.

Andrew Crawshaw: Hmm, but are you saying then that even bad-explanations can be accepted. Cuz as far as I can tell many of the explanations for bridge building were bad, yet they stil built bridges.

Anyway you are still not locating my criticism. You are criticising something I never said it seems. Which is that Grass cures cold has not been explained. But what Deutsch was claiming was that the claim itself was a bad explanation, which is true if bad explanation includes non-explanation, but it is not the reason it is not accepted. As the hydroxide thing suggests.

Elliot Temple: We should only accept an explanation that we don't know any criticism of.

We need some explanation or we'd have no idea if what we're doing would work, we'd be lost and acting randomly without rhyme or reason. And that initial explanation is what we build on – we later improve it to make it more complete, explain more stuff.

Andrew Crawshaw: I think this is incorrect. All animals that can do things refutes your statement.

Elliot Temple: The important thing is the substance of the knowledge, not whether it's written out in the form of an English explanation.

Andrew Crawshaw: Just because there is an explanation of how some physical substrate interacts with another physical substrate, does not mean that you need explanations. Explanations are in language. Knowledge not necessarily. Knowledge is a wider phenomenon than explanation. I have many times done things by accident that have worked, but I have not known why.

Elliot Temple: This is semantics. Call it "knowledge" then. You need non-refuted knowledge of how something could work before it's worth trying. The grass cure for the cold idea doesn't meet this bar. But building a log bridge without knowing modern science is fine.

Andrew Crawshaw: Before it's worth trying? I don't think so, rules of thumb are discovered by accident and then re-used without knowing how or why they could work; it just works, and then they try it again and it works again. Are you denying that that is a possibility?

Elliot Temple: Yes, denying that.

Andrew Crawshaw: Well, you are offering foresight to evolution then, it seems.

Elliot Temple: That's vague. Say what you mean.

Andrew Crawshaw: I don't think it is that vague. If animals can build complex things like beehives and they should have had knowledge of how it could work before it was worth trying out, then they had a lot of foresight before they tried them out. Or could it be the fact that it is the other way round: we stumble on rules of thumb, develop them, then come up with explanations about how they possibly work. I am more inclined to the latter. The former is just another version of the argument from design.

Elliot Temple: humans can think and they should think before acting. it's super inefficient to act mindlessly. genetic evolution can't think and instead does things very, very, very slowly.

Andrew Crawshaw: But thinking before acting is true. Thinking is critical. It needs material to work on. Which is guesswork and sometimes, if not often, accidental actions.

Elliot Temple: when would it be a good idea to act thoughtlessly (and which thoughtless action) instead of acting according to some knowledge of what might work?

Elliot Temple: e.g. when should you test the grass cure for cancer, with no thought to whether it makes any sense, instead of thinking about what you're doing and acting according to your rational thought? (which means e.g. considering what you have some understanding could work, and what you have criticisms of)

Andrew Crawshaw: Wait, we often act thoughtlessly whether or not we should do. I don't even think it is a good idea. But we often try to do things and end up somewhere which is different to what we expected, it might be worse or better. For instance, we might try to eat grass because we are hungry and then happen to notice that our cold disappeared and stumble on a cure for the cold.

Andrew Crawshaw: And different to what we expected might work even though we have no idea why.

Elliot Temple: DD is saying what we should do, he's talking about reason. Sometimes people act foolishly and irrationally but that doesn't change what the proper methods of creating knowledge are.

Sometimes unexpected things happen and you can learn from them. Yes. So what?

Andrew Crawshaw: But if Deutsch expects that we can only work with explanations, then he is mistaken. Which is, it seems, what you have changed your mind about.

Elliot Temple: I didn't change my mind. What?

What non-explanations are you talking about people working with? When an expectation you have is violated, and you investigate, the explanation is you're trying to find out if you were mistaken and figure out the thing you don't understand.

Elliot Temple: what do you mean "work with"? we can work with (e.g. form explanations about) spreadsheet data. we can also work with hammers. resources don't have to be explanations themselves, we just need an explanation of how to get value out of the resource.

Andrew Crawshaw: There is only one method of creating knowledge. Guesswork. Or, if genetically, by mutation. Physical things are often made without know-how and then they are applied in various contexts and they might or might not work; that does not mean we know how they work.

Elliot Temple: if you didn't have an explanation of what actions to take with a hammer to achieve what goal, then you couldn't proceed and be effective with the hammer. you could hit things randomly and pray it works out, but it's not a good idea to live that way.

Elliot Temple: (rational) humans don't proceed purely by guesses, they also criticize the guesses first and don't act on the refuted guesses.

Andrew Crawshaw: Look there are three scenarios

  1. Act on knowledge
  2. Stumble upon solution by accident, without knowing why it works.
  3. Act randomly

Elliot Temple: u always have some idea of why it works or you wouldn't think it was a solution.

Andrew Crawshaw: No, all you need is to recognise that it worked. This is easily done by seeing that what you wanted to happen happened. It is non-sequitur to then assume that you know something of how it works.

Elliot Temple: you do X. Y results. Y is a highly desirable solution to some recurring problem. do you now know that X causes Y? no. you need some causal understanding, not just a correlation. if you thought it was impossible that X causes Y, you would look for something else. if you saw some way it's possible X causes Y, you have an initial explanation of how it could work, which you can and should expose to criticism.

Elliot Temple:

Know all you need is to recognise that it works.

plz fix this sentence, it's confusing.

Andrew Crawshaw: You might guess that it caused it. You don't need to understand it to guess that it did.

Elliot Temple: correlation isn't causation. you need something more.

Elliot Temple: like thinking of a way it could possibly cause it.

Elliot Temple: that is, an explanation of how it works.

Andrew Crawshaw: I am not saying correlation is causation, you don't need to explain guesswork before you have guessed it. You first need to guess that something caused something before you go out and explain it. Otherwise what are you explaining?

Elliot Temple: you can guess X caused Y and then try to explain it. you shouldn't act on the idea that X caused Y if you have no explanation of how X could cause Y. if you have no explanation, then that's a criticism of the guess.

Elliot Temple: you have some pre-existing understanding of reality (including the laws of physics) which you need to fit this into, don't just treat the world as arbitrary – it's not and that isn't how one learns.

Andrew Crawshaw: That's not a criticism of the guess. It's ad hominem and justificationist.

Elliot Temple: "that" = ?

Andrew Crawshaw: I am agreeing totally with you about many things

  1. We should increase our criticism as much as possible.
  2. We do have inbuilt expectations about how the world works.

What We are not agreeing about is the following

  1. That a guess has to be backed up by explanation for it to be true or classified as true. All we need is to criticise the guess. Arguing otherwise seems to me a type of justificationism.

  2. That in order to get novel explanations and creations, this often is done despite the knowledge and necessarily has to be that way otherwise it would not be new.

Elliot Temple:

That's not a criticism of the guess. It's ad hominem and justificationist.

please state what "that" refers to and how it's ad hominem, or state that you retract this claim.

Andrew Crawshaw: That someone does not have an explanation. First, because explanations are not easy to come by and someone not having an explanation for something does not in any way impugn the pedigree of the guess or the strategy etc. Second, explanation is important and needed, but not necessary for trying out the new strategy, y, that you guess causes x. You might develop explanations while using it. You don't need the explanation before using it.

Elliot Temple: Explanations are extremely easy to come by. I think you may be adding some extra criteria for what counts as an explanation.

Re your (1): if you have no explanation, then you can criticize it: why didn't they give it any thought and come up with an explanation? they should do that before acting, not act thoughtlessly. it's a bad idea to act thoughtlessly, so that's a criticism.

it's trivial to come up with even an explanation of how grass cures cancer: cancer is internal, and various substances have different effects on the body, so if you eat it it may interact with and destroy the cancer.

the problem with this explanation is we have criticism of it.

you need the explanation so you can try criticizing it. without the explanation, you can't criticize (except to criticize the lack of explanation).

re (2): this seems to contain typos, too confusing to answer.

Elliot Temple: whenever you do X and Y happens, you also did A, B, C, D. how do you know it was X instead of A, B, C or D which caused Y? you need to think about explanations before you can choose which of the infinite correlations to pay attention to.

Elliot Temple: for example, you may have some understanding that Y would be caused by something that isn't separated in space or time from it by very much. that's a conceptual, explanatory understanding about Y which is very important to deciding what may have caused Y.

Andrew Crawshaw: Again, it's not a criticism of the guess. It's a criticism of how the person acted.

The rest of your statements are compatible with what I am saying. Which is just that it can be done and explanations are not necessary either for using something or creating something. As the case of animals surely shows.

You don't know, you took a guess. You can't know before you guess that your guess was wrong.

Elliot Temple: "I guess X causes Y so I'll do X" is the thing being criticized. If the theory is just "Maybe X causes Y, and this is a thing to think about more" then no action is implied (besides thinking and research) and it's harder to criticize. those are different theories.

even the "Maybe X causes Y" thing is suspect. why do you think so? You did 50 million actions in your life and then Y happened. Why do you think X was the cause? You have some explanations informing this judgement!

Andrew Crawshaw: There is no difference between maybe Y and Y. It's always maybe Y. Unless refuted.

Andrew Crawshaw: You are subjectivist and justificationist as far as I can tell. A guess is objective, and if someone, despite having bad judgement, guesses correctly, they still guess correctly. Nothing mitigates the precariousness of this situation. Criticism is the other component.

Elliot Temple: If the guess is just "X causes Y", period, you can put that on the table of ideas to consider. However, it will be criticized as worthless: maybe A, B, or C causes Y. Maybe Y is self-caused. There's no reason to care about this guess. It doesn't even include any mention of Y ever happening.

Andrew Crawshaw: The guess won't be criticised, what will be noticed is that it shouts out for explanation and someone might offer it.

Elliot Temple: If the guess is "Maybe X causes Y because I once saw Y happen 20 seconds after X" then that's a better guess, but it will still get criticized: all sorts of things were going on at all sorts of different times before Y. so why think X caused Y?

Elliot Temple: yes: making a new guess which adds an explanation would address the criticism. people are welcome to try.

Elliot Temple: they should not, however, go test X with no explanation.

Andrew Crawshaw: That's good, but one of the best ways to criticise it, is to try it again and see if it works.

Elliot Temple: you need an explanation to understand what would even be a relevant test.

Elliot Temple: how do you try it again? how do you know what's included in X and what isn't included? you need an explanation to differentiate relevant stuff from irrelevant

Elliot Temple: as the standard CR anti-inductivist argument goes: there are infinite patterns and correlations. how do you pick which ones to pay attention to?

Elliot Temple: you shouldn't pick one thing, arbitrarily, from an INFINITE set and then test it. that's a bad idea. that's not how scientific progress is made.

Elliot Temple: what you need to do is have some conceptual understanding of what's going on. some explanations of what types of things might be relevant to causing Y and what isn't relevant, and then you can start doing experiments guided by your explanatory knowledge of physics, reality, some possible causes, etc

Elliot Temple: i am not a subjectivist or justificationist, and i don't see what's productive about the accusation. i'm willing to ignore it, but in that case it won't be contributing positively to the discussion.

Andrew Crawshaw: I am not saying that we have no knowledge. I am saying that we don't have an explanation of the mechanism.

Elliot Temple: can you give an example? i think you do have an explanation and you just aren't recognizing what you have.

Andrew Crawshaw: For instance, washing hands and its link to mortality rates.

Elliot Temple: There was an explanation there: something like taint could potentially travel with hands.

Elliot Temple: This built on previous explanations people had about e.g. illnesses spreading to nearby people.

Andrew Crawshaw: Right, but the use of soap was not derived from the explanation. And that explanation might have been around before, and no such soap was used because of it.

Elliot Temple: What are you claiming happened, exactly?

Andrew Crawshaw: I am claiming that soap was invented for various reasons and then it turned out that the soap could be used for reducing mortality.

Elliot Temple: That's called "reach" in BoI. Where is the contradiction to anything I said?

Andrew Crawshaw: Reach of explanations. It was not the explanation, it was the invention of soap itself. Which was not anticipated or even encouraged by explanations. Soap is invented, used in a context, and an explanation might be applied to it. Then it is used in another context and again the explanation is retroactively applied to it. The explanation does not necessarily suggest more uses, nor need it.

Elliot Temple: You're being vague about the history. There were explanations involved, which you would see if you analyzed the details well.

Andrew Crawshaw: So, what if there were explanations "involved"? The explanations don't add anything to the discovery of the uses of the soap. These are usually stumbled on by accident. And refinements to soaps as well, for those different contexts.

Andrew Crawshaw: I am just saying that explanations of how the soap works very rarely suggest new avenues. It's often a matter of trial and error.

Elliot Temple: You aren't addressing the infinite correlations/patterns point, which is a very important CR argument. Similarly, one can't observe without some knowledge first – all observation is theory laden. So one doesn't just observe that X is correlated to Y without first having a conceptual understanding for that to fit into.

Historically, you don't have any detailed counter example to what I'm saying, you're just speculating non-specifically in line with your philosophical views.

Andrew Crawshaw: It's an argument against induction. Not against guesswork informed by earlier guesswork, that often turns out to be mistaken. All explanations do is rule things out. unless they are rules for use, but these are developed while we try out those things.

Elliot Temple: It's an argument against what you were saying about observing X correlated with Y. There are infinite correlations. You can either observe randomly (not useful, has roughly 1/infinity chance of finding solutions, aka zero) or you can observe according to explanations.

Elliot Temple: You're saying to recognize a correlation and then do trial and error. But which one? Your position has elements of standard inductivist thinking in it.

Andrew Crawshaw: I never said anything about correlation - you did.

What I said was we could guess that x caused y and be correct. That's what I said, nothing more nothing less.

Andrew Crawshaw: One instance does not a correlation make.

Elliot Temple: You could also guess Z caused Y. Why are you guessing X caused Y? Filling up the potential-ideas with an INFINITE set of guesses isn't going to work. You're paying selective attention to some guesses over others.

Elliot Temple: This selective attention is either due to explanations (great!) or else it's the standard way inductivists think. Or else it's ... what else could it be?

Andrew Crawshaw: Why not? Criticise it. If you have a scientific theory that rules my guess out, that would be interesting. But saying why not this guess and why not that one. Some guesses are not considered by you maybe because they are ruled out by other expectations, or they do not occur to you.

Elliot Temple: The approach of taking arbitrary guesses out of an infinite set and trying to test them is infinitely slow and unproductive. That's why not. And we have much better things we can do instead.

Elliot Temple: No one does this. What they do is pick certain guesses according to unconscious or unstated explanations, which are often biased and crappy b/c they aren't being critically considered. We can do better – we can talk about the explanations we're using instead of hiding them.

Andrew Crawshaw: So, you are basically gonna ignore the fact that I have agreed that expecations and earlier knowledge do create selective attention, but what to isolate is neither determined by theory, nor by earlier perceptions, it is large amount guesswork controlled by criticism. Humans can do this rapidly and well.

Elliot Temple: Please rewrite that clearly and grammatically.

Andrew Crawshaw: It's like you are claiming there is no novelty in guesswork; if we already have that as part of our expectations it was not guesswork.

Elliot Temple: I am not claiming "there is no novelty in guesswork".

Andrew Crawshaw: So we are in agreement, then. Which is just that there are novel situations and our guesses are also novel. How we eliminate them is through other guesses. Therefore the guesses are sui generis and then deselected according to earlier expectations. It does not follow that the guess was positively informed by anything. It was a guess about what caused what.

Elliot Temple: Only guesses involving explanations are interesting and productive. You need to have some idea of how/why X causes Y or it isn't worth attention. It's fine if this explanation is due to your earlier knowledge, or it can be a new idea that is part of the guess.

Andrew Crawshaw: I don't think that's true. Again beavers make interesting and productive dams.

Elliot Temple: Beavers don't choose from infinite options. Can we stick to humans?

Andrew Crawshaw: Humans don't choose from infinite options... They choose from the guesses that occur to them, which are not infinite. Their perception is controlled by both physiological factors and their expectations. Novel situations require guesswork, because guesswork is flexible.

Elliot Temple: Humans constantly deal with infinite categories. E.g. "Something caused Y". OK, what? It could be an abstraction such as any integer. It could be any action in my whole life, or anyone else's life, or something nature did. There's infinite possibilities to deal with when you try to think about causes. You have to have explanations to narrow things down, you can't do it without explanations.

Elliot Temple: Arbitrary assertions like "The abstract integer 3 caused Y" are not productive with no explanation of how that could be possible attached to the guess. There are infinitely more where that came from. You won't get anywhere if you don't criticize "The abstract integer 3 caused Y" for its arbitrariness, lack of explanation of how it could possibly work, etc

Elliot Temple: You narrow things down. You guess that a physical event less than an hour before Y and less than a quarter mile distant caused Y. You explain those guesses, you don't just make them arbitrarily (there are infinite guesses you could make like that, and also that category of guess isn't always appropriate). You expose those explanations to criticism as the way to find out if they are any good.

Andrew Crawshaw: You are arguing for an impossible demand that you yourself can't meet, even when you have explanations. It does not narrow it down from infinity. What narrows it down is our capacity to form guesses, which is temporal and limited. It's our brain's ability to process and to interpret that information.

Elliot Temple: No, we can deal with infinite sets. We don't narrow things down with our inability, we use explanations. I can and do do this. So do you. Explanations can have reach and exclude whole categories of stuff at once.

Andrew Crawshaw: But it does not reduce it to less than infinite. Explanations allow an infinite amount of things, most of them useless. It's what they rule out, and which things they can rule out is guesswork. And this is done over time. So we might guess this and then guess that x caused y, we try it again and it might not work, so we try to vary the situation and in that way develop criticism and more guesses.

Elliot Temple: Let's step back. I think you're lost, but you could potentially learn to understand these things. You think I'm mistaken. Do you want to sort this out? How much energy do you want to devote to this? If you learn that I was right, what will you do next? Will you join my forum and start contributing? Will you study philosophy more? What values do you offer, and what values do you seek?

Andrew Crawshaw: Mostly explanations take time to understand why they conflict with some guess. It might be that the guess only approximates the truth and then we find later that it is wrong because we look more into the explanation of it.

Andrew Crawshaw: Elliot, if you wish to meta, I will step out of the conversation. It was interesting, yet you still refuse to concede my point that inventions can be created without explanations. But yet this is refuted by the creations of animals and many creations of humans. You won't concede this point and then make your claims pretty well trivial. Like you need some kind of thing to direct what you are doing. When the whole point is the genesis of new ideas and inventions and theories which cannot be suggested by earlier explanations. It is true that explanations can help in refining and understanding. But that is not the whole story of human cognition or human invention.

Elliot Temple: So you have zero interest in, e.g., attempting to improve our method of discussion, and you'd prefer to either keep going in circles or give up entirely?

Elliot Temple: I think we could resolve the disagreement and come to agree, if we make an effort to, AND we don't put arbitrary boundaries on what kinds of solutions and actions are allowed to be part of the problem solving process. I think if you make methodology off-limits, you are sabotaging the discussion and preventing its rational resolution.

Elliot Temple: Not everything is working great. We could fix it. Or you could just unilaterally blame me and quit..?

Andrew Crawshaw: Sorry, I am not blaming you for anything.

Elliot Temple: OK, you just don't really care?

Andrew Crawshaw: Wait. I want to say two things.

  1. It's 5 in the morning, and I was working all day, so I am exhausted.

  2. This discussion is interesting, but fragmented. I need to moderate my posts on here, now. And recuperate.

Elliot Temple: I haven't asked for fast replies. You can reply on your schedule.

Elliot Temple: These issues will still be here, and important, tomorrow and the next day. My questions are open. I have no objection to you sleeping, and whatever else, prior to answering.

Andrew Crawshaw: Oh, I know you haven't asked for replies. I just get very involved in discussion. When I do I stop monitoring my tiredness levels and etc.

I know this discussion is important. The issues and problems.

Elliot Temple: If you want to drop it, you can do that too, but I'd want to know why, and I might not want to have future discussions with you if I expect you'll just argue a while and then drop it.

Andrew Crawshaw: Like to know why? I have been up since very early yesterday, like 6. I don't want to drop the discussion I want to postpone it, if you will.

Elliot Temple: That's not a reason to drop the conversation, it's a reason to write your next reply at a later time.

Andrew Crawshaw: I explicitly said: I don't want to drop the discussion.

Your next claim is a non-sequitur. A conversation can be resumed in many ways. I take it you think it would be better for me to initiate it.

Andrew Crawshaw: I will read back through the comments and see where this has led and then I will post something on the Fallible Ideas forum.

Elliot Temple: You wrote:

Elliot, if you wish to meta, I will step out of the conversation.

I read "step out" as quit.

Anyway, please reply to my message beginning "Let's step back." whenever you're ready. Switching forums would be great, sure :)


Elliot Temple | Permalink | Messages (17)

Yes or No Philosophy Discussion with Andrew Crawshaw

From Facebook:

Alan Forrester: https://curi.us/1963-can-winwin-solutions-take-too-long

Assigning weights to ideas never really fitted very well with critical rationalism. Evolution doesn't assign points to genes: they either survive and get copied or they don't. The same is true for an idea: it either solves a problem or it doesn't. This post is relevant to whether there is always a solution to a problem or if we have to weigh ideas to avoid throwing away conflicting ideas that might be okay.

BC: "The same is true for an idea: it either solves a problem or it doesn't." quote

Well who determines whether a problem is solved or not or even what is the problem? The problem of the basis, empirical or otherwise? The search for the algorithm to end all algorithms?

Elliot Temple: problems are solved, or not, in objective reality. people try to understand this with guesses and criticism, as always. there's no authorities. "who determines...?" is begging for an authoritarian answer just like "who should rule?"

BC: "A problem is perceived as such when the progress to a goal by an obvious route is impossible and when an automatism does not provide an effective answer." (W D Wall) What determines the goal?

Elliot Temple: people are free to determine their own goals, by thinking (guesses and criticism).

BC: So what point is being made?

Elliot Temple: you asked tangential questions. i answered. it was your responsibility for them to have a point.

Andrew Crawshaw: I think, Bruce, that the point is that CR should be about either-or claims about truth and falsity. What I don't understand is why this would be incompatible with measures of verisimilitude. I do not know if either Forrester or Temple are averse to verisimilitude per se. I think they are critical of the idea that we can build a theory of critical preference on top of this, which was Popper's hope.

Am I right in suggesting, Elliot, that you think that we should only act under the circumstance that there is a single exit strategy, as it is called, and if there is not a single exit strategy that there are ways of making the circumstance such that there is a single exit strategy, therefore getting rid of the need for critical preferences?

Elliot Temple: Ideas either solve a problem or they don't solve it. A criticism either explains why an idea doesn't solve a problem, or fails to. There's no room here for amounts of goodness of ideas, which is a core idea of justificationism. Yes I think critical preferences are a mistake. See:

https://yesornophilosophy.com/argument

http://curi.us/1585-critical-preferences

http://curi.us/1917-rejecting-gradations-of-certainty

Andrew Crawshaw: Yes, I have read that. Are you saying that, given that I have a cold, and that there are two ways of alleviating it but they are incompatible solutions to alleviating this cold, ie they cannot be taken together. Say they are both to hand and both are explained as being effective by the scientific theories we have at our disposal. Would you say then that it is not right to take either?

Elliot Temple: What does "that" refer to? I gave 3 links.

Elliot Temple: > Would you say then that it is not right to take either?

no. i don't know where that's coming from.

Andrew Crawshaw: There is only one link showing. And it says Fallible ideas - Yes or No Philosophy.

Elliot Temple: all 3 links are showing, please look in the text of the post.

Andrew Crawshaw: Okay, I was just clearing up whether I might have misinterpreted you. So your theory applies only to what theories we should act on?

Elliot Temple: No. I don't know where you're getting that interpretation either. I think it would help if you quoted the text you're talking about

Andrew Crawshaw: I am responding to your reply to my comment. I asked about single exit strategies, the scenario I gave was not a single exit strategy, I was wondering how you would answer it.

Elliot Temple: Come up with a theory about what to do that you don't have a criticism of. E.g. "I should take medicine A now b/c i don't have a better idea and it's way better than nothing and it's not worthwhile to spend more time deciding". You can form an idea like that and see if you have a criticism of it or not.

Andrew Crawshaw: But you could substitute Medicine B in your theory and the situation would still be symmetrical.

Elliot Temple: So what?

Elliot Temple: If your theory is that it's best to take one medicine, but not both or neither, and it doesn't matter which one then it's ok to choose arbitrarily or randomly. you don't have a criticism of doing so.

Andrew Crawshaw: Now, you might think my question peculiar. Say I have medicine A and medicine B, everything is exactly the same as it is in the previous scenario, except that medicine B is in the bathroom and medicine A is to hand. Could this be part of preferential decision in favour of A? Even though it's not a criticism of it as a solution?

Elliot Temple: Yes. "Why would I want to go walk to the bathroom for no reason?" is a criticism. Everything else being equal (which it usually isn't), in general I'd rather not go walk to get something.

Andrew Crawshaw: But there is a difference between the two types of criticism, one is of the solution whether it would actually solve it if carried out and the other to do with whether there are other factors. The other factors being about preference.

Elliot Temple: The idea "medicine B as a solution to problem 1" and "medicine B as a solution to problem 2" are different ideas. A criticism may apply to only one of them. The criticism that i don't want to walk and get B doesn't matter for B as a solution to problem 1 (cure my illness), but does criticize choosing B for problem 2 (what action should i take in my life right now, with the situation that A and B medicines are equally good, and the only difference is one is further away and i'd rather not go get it).

This is explained at length in my Yes or No Philosophy.

Andrew Crawshaw: Isn't it slightly unhelpful to add your preference to the formulation of the problem? I mean, in other words, that you can just keep extending the formulation of the problem as you think about how to carry it out. It seems to me no different than weighing up preferences.

Elliot Temple: Preferences need to be dealt with by critical thinking, not weighing. Weighing doesn't work. Also explained in my Yes or No Philosophy.

Elliot Temple: Weighing is also criticized in BoI and in various blog posts. Did you read the 3 I linked you? You can find more relevant posts e.g. here which is linked at the bottom of a link i gave you: http://curi.us/1595-rationally-resolving-conflicts-of-ideas

Andrew Crawshaw: Maybe I did not communicate properly. The problem is that I want to administer medicine. I have a preference...I would rather not walk. Therefore I go for medicine A. What's changed by reformulating the problem to contain the preference?

Elliot Temple: The point isn't where you notionally put the preference – it's part of the situation in any case. The point is you have a criticism of one option (walking is too hard) and not the other.

Elliot Temple: So one always can and should act on a single, non-refuted idea.

Elliot Temple: You never have to act on a refuted idea, or try to choose between non-refuted ideas by a method other than conjectures and criticism. Such an alternative method would actually be a huge problem for epistemology and basically destroy CR.

Andrew Crawshaw: The administering of medicine B has not been refuted qua alleviating my headache.

Elliot Temple: Right, I said that too.

Andrew Crawshaw: I am not sure of the difference between critical preference and your theory. Seems to be the same theory redescribed. I will have to think about it a little.

Andrew Crawshaw: Thanks for the links, I will read them more carefully over the next week.

Andrew Crawshaw: Oh, Elliot, could you give me the chapter of BoI, where weighing is criticised.

Elliot Temple: 13. Choices

Andrew Crawshaw: Thanks


Elliot Temple | Permalink | Messages (0)

Do Thousands of Error Corrections

This is from a Fallible Ideas email.

I wrote (Sept 2017):

but i also did NOT just accept whatever DD said b/c he said it. i expected him to be right but ALSO challenged his claims. i asked questions and argued, while expecting to lose the debate, to learn more about it. i very persistently brought stuff up again and again until i was FULLY satisfied. lots of people concede stuff and then think it's done and don't learn more about it, and end up never learning it all that well. sometimes i thought i conceded and said so, but even if i did, i had zero shame about re-opening any topic from any amount of time ago to ask a new question or ask how to address a new argument for any side.

i also fluidly talked about arguments for ANY side instead of just arguing a particular side. even if i was mostly arguing a particular side, i'd still sometimes think of stuff for DD's side and say that too. ppl are usually so biased and one-sided with their creativity.

after i learned things from DD i found people to discuss them with, including people who disagreed with them. then if i had any trouble thoroughly winning the debate with zero known flaws on my side, zero open problems, zero unanswered criticisms, etc, then i'd go back to DD and expect more and better answers from him to address everything fully. i figured out lots of stuff myself but also my attitude of "DD is always right and knows everything" enabled me to be INFINITELY DEMANDING – i expected him to be a perfect oracle and just kept asking questions about anything and everything expecting him to always have great answers to whatever level of precision, thoroughness, etc, i wanted. when i wasn't fully convinced by every aspect of an answer i'd keep trying over and over to bring up the subject in more ways – state different arguments and ask what's wrong with them, state more versions of his position (attempting to fix some problem) and ask if that's right, find different ways to think about a question and express it, etc. this of course was very useful for encouraging DD to create more and better answers than he already knew or already had formulated in English words.

i didn't 100% literally expect him to know everything, but it was a good mantra and was compatible with questioning him, debating him, etc. it's important to be able to expect to be mistaken and lose a debate and still have it, eagerly and thoroughly. and to keep saying every damn doubt you have, every counter-argument you think of, to address ALL of them, even when you're pretty convinced by some main points that you must be badly wrong or ignorant.

anyway the method of not being satisfied with explanations until i'd explained them myself to teach others and win several debates – with NO outstanding known hiccups, flaws, etc – is really good. that's the kind of standard of knowledge people need.

Anne B replied (Sept 2017):

Is this a model you recommend for the rest of us to learn? I can give it a try but I don't think it'll be easy for me for two reasons.

1) I've spent decades trying to be a person who DOESN'T argue. What I usually do when someone says something I don't agree with is stop talking about it. I don't want to rock any boats or get anyone mad at me, especially if I'm wrong.

2) I don't really believe that I could very often reach a point of understanding something so well that I could easily refute any competing arguments. I picture myself asking a question here, someone giving an answer I don't fully believe or understand, then doing a bit of arguing back and forth but never reaching a point where we both understand and agree. I'd give up long before that, not wanting to press the issue, and just "agree to disagree" in my mind. Out loud I might concede. Do you really think I could succeed at this kind of arguing? (By succeed I mean fully convince myself of anything?)

Why can I write decent sentences but Kate and most people are bad at it? (See the "Running your own life" discussion from today.)

Because I found thousands of flaws with my writing in the past (including by listening to criticism) and made efforts to fix those flaws.

I did thousands of error corrections. That's what it takes to be good at something which is moderately difficult.

doing thousands of error corrections requires an attitude towards life and learning. you have to be interested in mistakes, including small mistakes, and make changes to address them.

it also requires being able to make changes without it being a huge cost. if changing anything is super expensive, you'll only do it for BIG fixes. you need changing to be cheap to do it thousands of times.

there's no other way to build up skill. you need to be able to make changes cheaply and do thousands of them. and the changes should focus on error correction.

anyone could do this but most people don't want to. and many people have lots of anti-change stuff in their minds getting in the way. but the disinterest in error correction is problem number one. if people cared enough, then they could start a series of enthusiastic attempts to do something about their change-is-expensive problem.


Elliot Temple | Permalink | Messages (3)

Human Problems and Abstract Problems

This is an email I wrote in July 2013. I'm replying to David Deutsch (Feb 2001), who is in regular yellow quotes and was addressing the topic, Are common preferences always possible? Two quote levels is Demosthenes (Feb 2001).

Susan Ramirez asked (Feb 1997):

Why do you believe that it is always possible to create a common preference?

Sarah Lawrence replied (Jan 2001):

This question is important because it is the same as - Are there some problems which in principle cannot be solved? Or, when applied to human affairs: - Is coercion (or even force, or the threat of force) an objectively inevitable feature of certain situations, or is it always the result of a failure to find the solution which, in principle, exists?

David Deutsch begins his reply:

I think that both Sarah and Demosthenes (below) somewhat oversimplify when they identify 'avoiding coercion' with 'problem-solving'. For instance, Sarah says "This question ... Is the same as[:] Are there some problems

Let's watch out for different uses of the word "problem". [This unquoted material is Elliot writing.]

which in principle cannot be solved?" Well, in a sense it is the same issue. But due to the imprecision of everyday language, this also gives the impression that avoiding coercion depends on everyone adopting the same theory (the solution, the common preference) about whatever was at issue. In fact, that is seldom literally the case, because the parties' conceptions of what is 'at issue' typically change quite radically during common-preference finding. All that is necessary is that the participants change to states of mind which (1) they prefer to their previous states, and (2) no longer cause them to hurt each other.

In other words, common preferences can often be much narrower than it may first appear. You needn't agree about everything, or even everything relevant, but only enough to proceed without hurting (TCS-coercing) each other (or oneself in the case of self-conflicts).

[This next section has two levels of quoting and is Demosthenes. The black bar indicates an additional level of quoting. Two levels means that I'm quoting David Deutsch quoting it.]

I agree that this question is important, though I would offer instead the following two elucidating questions:

In the sphere of human affairs:

  1. Are there any problems that would remain unavoidably insoluble even if they could be worked on without any time and resource limits?

  2. Are there any problems that are unavoidably insoluble within the time and resource limits of the real life situations in which they arise?

The word "problem" in both of these is ambiguous.

Problem-1: (we might call it a "human problem"): "a matter or situation regarded as unwelcome or harmful and needing to be dealt with and overcome"

Problem-2: (we might call it an "abstract problem"): "a thing that is difficult to achieve or accomplish"

There are problems, notionally, like going to the moon. But no one gets hurt unless a person has the problem of going to the moon. Problem-1 involves preferences, and the possibility of harm and TCS-coercion. And it is the type of problem which is solved by common preferences.

Problem-2, inherently, does not have time or resource limits, because the universe is not in a hurry, only people are.

So, are there any problems which are insoluble with the time and resource limits of real life situations? Not problem-2 type, because those do not arise in people's life situations, and they do not have time or resource limits.

And as for problem-1 type problems, those are always soluble (within time/resource constraints), possibly involving changing preferences. (BTW, as a general rule of thumb, in non-trivial common preference finding, all parties always change their initial preferences.)

An example:

problem-2: adding 2+2 (there is no time limit, no resource limit -- btw time is a type of resource)

problem-1: adding 2+2 within the next hour for this math test (now there are resource issues, preferences are involved)

Another way to make the distinction is:

problem-1: any problem which could TCS-coerce (hurt) someone

problem-2: any problem which could not possibly ever TCS-coerce (hurt) anyone

problem-2s are not bad. Not even potentially. Problem-1s are bad if and only if they TCS-coerce anyone. A problem like 2+2=? cannot TCS-coerce anyone, ever. There's just no way. It takes a different problem like, "A person asked me what 2+2 is, and I wanted to answer" to have the potential for TCS-coercion.

Notice solving this different problem does not necessarily require figuring out what 2+2 is. Solving problem-1s never requires solving any associated problem-2s, though that is often a good approach. But it's not necessary. So the fact that various problem-2s won't be solved this year need not hurt anyone or cause any problem-1s -- with their time limits and potential for harm -- to go unsolved.

I believe that the answer to question (1) is, no -- there are no human problems that are intrinsically insoluble, given unbounded resources.

This repeated proviso "given unbounded resources" indicates a misconception, I think. The answer to (2) is, uncontroversially, yes. Of course there exist disagreements -- both between people and within a person -- that take time to resolve, and many will not be resolved in any of our lifetimes.

I think this is unclear about the two types of problems. While it agrees with me in substance, it defers to ambiguous terminology that basically uses unsolved problem-2s to say there are insoluble problems and tries to imply it's now talking about problem-1s.

There is a mix-up between failure to solve an abstract problem, like figuring out the right theory of physics (which two friends might disagree about), and failure to solve human problems, like the kind that make those friends hurt each other.

It's harmless to have some disagreements that you "agree to disagree" about, for example. But if you can't agree to disagree, then the problem is more dangerous and urgent.

It's uncontroversial that people have unsolved abstract problems for long periods of time, e.g. they might be working on a hard math problem and not find the answer for a decade. And their friend might disagree with them about the best area to look for a solution.

But so what?

Human problems are things like, "I want to solve the problem this week" (maybe you should change your preference?) or "I want to work on the math problem and find good states of mind in regard to it, and enjoy making progress" (this human problem can easily be solved while not solving the harmless abstract problem).

But that has nothing to do with the question being discussed here.

Right because of the confusion over different meanings of "problem".

The fact that after 25 years of almost daily attention to the conflict between quantum theory and general relativity I have failed to discover a theory that I prefer to both (or indeed to either), does not indicate that I have "failed to find a common preference"

Right. Common preferences do not even apply to problem-2s, only problem-1s.

either within myself, or with other proponents of those theories, in the sense that interested Susan Ramirez. I have not found a preferred theory of physics, but I have found successively better states of mind in regard to that problem, each the result of successive failures to solve it.

However this view is only available to those of us who believe that for all moral problems there exists, in principle, a unique, objectively right solution. If you are any kind of moral relativist, or a moral pluralist (as many people seem to be) then you can have no grounds for arguing that all human disputes are in principle soluble.

It is only in spheres where the objective truth of the matter exists and is in principle discoverable, that the possibility of converging on the truth guarantees that all problems are, in principle, soluble.

I agree that for all moral problems

No clear statement of which meaning of problem this refers to.

there exists an objectively right solution, and that this is why consensual relationships -- and indeed all liberal institutions of human cooperation, including science -- can work. The mistake is to suppose that if one does not believe this, it will cease to be true. For people to be able to reach agreement, it suffices that, for whatever reason, they seek agreement in a way that conforms to the canons of rationality and are, as a matter of fact, converging on a truth. Admittedly it is a great impediment if they think that agreement is not possible, and very helpful if they think that it is, but that is certainly not essential: many a cease-fire has evolved into a peace without a further shot being fired. It is also helpful if they see themselves as cooperating in discovering an objective truth, and not merely an agreement amongst themselves, but that too is far from essential: plenty of moral relativists have done enormous good, and made enormous moral progress -- for instance towards creating institutions and traditions of tolerance -- without ever seeking an objective truth, or realising that they were finding one. In fact many did not realise that they were creating agreement at all, merely a tolerance of disagreement. And incidentally, they were increasing the number of unsolved problems in society by promoting dissent and diversity.

Increasing the number of unsolved problem-2s, but decreasing the number of unsolved problem-1s.

What we need to avoid, both in society and in our own minds, is not unsolved problems,

Ambiguous between problem-1s and problem-2s.

not even insoluble problems,

Ambiguous between problem-1s and problem-2s.

Also doesn't seem to be counting preference changing as a solution, contrary to the standard TCS attitude which regards preference changing as a normal part of common preference finding, and part of problem solving.

but a state in which our problems are not being solved

But this time it means problem-1s.

-- where thinking is occurring but none of our theories are changing.

I believe that the answer to question (2) is yes -- human problems that cannot be solved even in principle, given the prevailing time and resource constraint, are legion. Albeit, nowhere near as legion as non-TCS believers would have it. My main argument in support of this thesis is based on introspection: Let him or her who is without ongoing inner conflict proffer the first refutation.

This is a bit like saying, at the time of the Renaissance, that science is impossible because "let him who is without superstition proffer the first refutation". The whole point about reason is that it does not require everything to be right before it can work. That is just another version of the "who should rule?" error in politics. The important thing is not to start out right, but to try to set things up in such a way that what is wrong can be altered. The object of the exercise is not to create a chimerical (and highly undesirable!) problem-free state,

A problem-2-free state is bad. As in, not having any problems we might like to work on. This is bad because it creates a very hard problem-1: the problem of boredom (having no problem-2s to work on, while wanting some, will cause TCS-coercion).

A problem-1-free state is ... well there is another ambiguity. Problem-1s are fine if one is rationally coping with them. It's not bad to have human problems and deal with them. What's bad is failure to cope with them, i.e. TCS-coercion.

How can we tell which/when problem-1s get bad? When they do harm (TCS-coercion).

To put it another way: problem-1s are bad when one acts on an idea while having a criticism of it. But if it's just the potential for such a thing in the future, that's part of normal life and fine.

but simply to embark upon actually solving problems rather than being stuck not solving any (or not solving one's own, anyway). Happiness is solving one's problems, not 'being without problems'.

"one's problems" refers only to problem-1s, but "being without problems" and "actually solving problems" are ambiguous.

In other words, I suggest that there isn't a person alive whose creativity is not diminished in some significant way by the existence of inner conflict. Or rather dozens, if not hundreds or thousands, of inner conflicts.

Yes. But having diminished creativity (compared to what is maximally possible, presumably) is and always will be the human condition. Minds are fallible. Fortunately, it is not one's distance from the ideal state that makes one unhappy, but an inability to move towards it.

And if you cannot find a common preference for all the problems that arise within your own mind, it is a logical absurdity to expect to be able always to find a common preference with another, equally conflicted, mind.

Just as well, really. If you found a common preference for all the problems within your own mind, you'd be dead. If you found a common preference for all the problems you have with another person with whom you interact closely, you'd be the same person.

[SNIP]

However, and it is an important however, to approach this goal we must dare to face the inescapable facts that, in practice, it is by no means always possible to find a common preference; that therefore it is not always possible to avoid coercion;

This does not follow, or at least, not in any useful sense. Demosthenes could just as well have made the identical comments about science:

[Demosthenes could have written:]

In the sphere of science:

  1. Are there any problems that would remain unavoidably insoluble even if they could be worked on without any time and resource limits?

  2. Are there any problems that are unavoidably insoluble within the time and resource limits of the real life situations in which they arise?

I believe that the answer to question (1) is, no -- there are no scientific problems that are intrinsically insoluble, given unbounded resources.

Right. And why should it follow from this that a certain minimum of superstition is unavoidable in any scientific enterprise, and that people who try to reject superstition on principle will undergo "intellectual and moral corrosion" if, as is inevitable, they fail to achieve this perfectly -- or even if they fail completely?

As Bronowski stressed and illustrated in so many ways, doing science depends on adopting a certain morality: a desire for truth, a tolerance, an openness to change, an awareness of one's own fallibility and the fallibility of authority, yet also a respect and understanding for tradition ... (It's the same morality as TCS depends on.) And yes, no scientist has ever been entirely free from irrationality, superstition, dogma and all the things that the canons of rationality say are supposed to be absent from a true scientist's mind. Yet none of that provides the slightest argument that a person entering upon a life of science is likely to become unhappy

Tangent: this is a misuse of probability. Whether that happens depends on human choices, not chance.

in their work, is likely to find their enterprise ruined either because they encounter a scientific problem that they never solve, or because they fail to rid their own minds of certain superstitions that prevent them from solving anything.

The thing is, all these sweeping statements about insoluble problems

Ambiguous.

and unlimited resources, though true (some of them trivially, some because of fallibilism) are irrelevant to the issue here, of whether a lifestyle that rejects coercion is possible and practical in the here and now. A TCS family can and should reject coercion in exactly the same sense, and by the same means, and for the same reason, as a scientist can and should reject superstition. And to the same extent: utterly. In neither case can the objective ever be achieved perfectly, with finite resources. In neither case can any guarantee be given about what the outcome will be. Will they be happier than if they become astrologers instead? Who knows? And certainly good intentions alone can guarantee nothing. In neither case can the enterprise be without setbacks and failures, perhaps disasters. And in neither case is any of this important, because ... well, whatever goes wrong, however badly, superstition is going to make it worse.

-- David Deutsch

http://www.qubit.org/people/david/David.html

Josh Jordan wrote:

I think it makes sense to proceed according to the best plan you have, even if you know of flaws in it.

What if those flaws are superstition? Or TCS-coercion?

Whatever happens, acting against one's best judgment -- e.g. by disregarding criticisms of flaws one knows -- is only going to make things worse.


Elliot Temple | Permalink | Messages (0)

Discussion: Politicizing the Las Vegas Tragedy

From Facebook:

Evan O'Leary:

What is with people who don't like things to be "politicized"? Do you not want people you tribally dislike to say reasonable things because then you'll have to disagree with them because you were born with nothing but an amygdala for a brain?
EDIT: good point made in the comments, exploiting people's emotions to manipulate their political beliefs while they're in a less rational state is bad

Elliot Temple:
i take it you're insulting right wingers including classical liberals who believe in freedom regarding the issue of gun control. i'd suggest being more clear about what your point is in the future.

so, regarding gun control: instead of insulting people, i think it'd be better to try to investigate, in an objective, scholarly way, whether the factual claims in this book are correct or incorrect:
https://www.amazon.com/War-Guns-Yourself-Against-Control-ebook/dp/B01HH5HN8W/

Evan O'Leary:
 I'd suggest being less paranoid, you're wrong about what I'm arguing

Elliot Temple:
 then clarify

Evan O'Leary:
 There's nothing in my post that needs clarification, people on the left get mad at the NRA for "politicizing" shootings too when they say less people would have died if one of the hostages was carrying a gun

Elliot Temple:
 do you have an example of that? for example, Hillary chose to politicize the shooting rather than accuse the NRA of politicizing. By contrast, I saw many right wingers complaining about Hillary politicizing it.

Evan O'Leary:
 Sure, let me find it. There was some hostage situation in recent years when people said open carry would have prevented it

Elliot Temple:
 Hold on, let's stick to the Vegas shooting and representative examples! I'm sure somewhere in history you'll find one example.

Evan O'Leary:
 Not just open carry but also when refugees commit shootings the right politicizes it with immigration

Elliot Temple:
 Are you in favor of gun control or against it?

Evan O'Leary:
 Can't find the hostage situation rn, do you disagree with the immigration point?

I'm not sure what to think about gun control

Elliot Temple:
 I agree that the right sometimes politicizes shootings, but in my understanding the dominant trend after the Vegas shooting – which is the context of your post – was the left politicizing it and the right criticizing the politicization. If I'm mistaken because I didn't see a broad enough sample of political messaging, I'd appreciate the correction. If you saw it similarly, then wasn't your post a reaction to some right winger comments?

Evan O'Leary:
 It was caused by me seeing right winger comments and seeing a problem with the "don't politicize" part of the argument, not the "gun control has downsides" part

Elliot Temple:
 views on gun control are relevant here. e.g. consider Hillary's pivot to bringing up silencers. was that relevant and reasonable, or just unreasonably trying to use the tragedy in an unrelated way? people who have knowledge about silencers and gun rights are going to have a different perspective on Hillary's comments than someone who is neutral. Part of their reaction – which you took issue with – was due to knowledge of the issues, not tribalism and amygdalae.

Elliot Temple:
 Hunters want suppressors to prevent damage to their ears and their dogs' ears, and to be better able to hear each other and prevent dangerous hunting miscommunications. That's what Hillary pivoted to from the tragedy.

https://www.frontpagemag.com/point/268035/how-hillary-clintons-tweet-showcases-cynicism-gun-daniel-greenfield

Elliot Temple:
 A reasonable response would be to call Hillary Clinton dishonest, because her comments were an attempt to shoehorn an unrelated agenda where it didn't fit and mislead the public. The discussion is ready to go straight into the mud. But do we want a bunch of mud slinging and character attacks and typical political dirty fighting to be the centerpiece of the national discussion of the Vegas tragedy? As much as I'm personally pretty willing to debate anything, I do see why people could object to this!


Elliot Temple:
 and the reason some people don't want a bunch of murder to be politicized is because of their respect for life and human dignity.

Evan O'Leary:
 What about politics inherently lacks respect for that

Elliot Temple:
 many political discussions aren't respectful of the gravity of mass murder, as i'm sure you've observed

Evan O'Leary:
 Is that because they're political?

Elliot Temple:
 Partly, yes. Some types of discussions are more known for human decency than others.

Evan O'Leary:
 The only political discussions which lack respect for life and dignity are the ones with bad political arguments

Any solution to this issue is going to be one of policy, so even if politics causes irrationality in humans, our other choice is having murder problems which don't seem less important than irrationality

Elliot Temple:
 "The only political discussions which lack respect for life and dignity are the ones with bad political arguments"

So, most of them? Do you see the problem?

Elliot Temple:
 No one is objecting to debating the issues at some point, and trying to make the discussions civil. But there are questions about the appropriate immediate comments from public figures. Should they prioritize attempting a dirty political sound bite, or perhaps is it better to begin by saying something about their respect for human life and how sad they are about the tragedy, and then try to debate gun control issues in the normal ways afterwards?

Evan O'Leary:
 The better explanation is irrationality, not politics

Evan O'Leary:
 "Don't politicize" is a problematic criterion, and we have a better criterion, "don't be irrational"

Elliot Temple:
 People debate what is irrational or not. Being more specific is good sometimes.

Elliot Temple:
 Of course it's a problematic criterion. They aren't having extensive serious discussions with both sides engaging with each other. It's not a very intellectual forum.

Justin Mallone:
some on the left have definitely taken the tone of "fuck talking about respect for human life. now is the time for drastic political action." one example is literally not attending a moment of silence as a political protest due to insufficient gun control: http://www.washingtontimes.com/news/2017/oct/5/jackie-speier-congressional-moment-silence-shootin/

Evan O'Leary:
 A better criterion would be "don't politicize too soon after tragedies", but even that creates problems that aren't clearly improvements, because people lose political motivation after tragedies

Elliot Temple:
 that's roughly what lots of them meant, though the issue isn't entirely a matter of time. part of the issue is what you say in the time before the political debate. and your actual attitudes, not just statements.

Elliot Temple:
 and btw they primarily meant for the anti-politicization comments to apply to public figures, and people participating in the hashtags/slogans/yelling kind of politics, not discussions on serious debating forums.

Justin Mallone:
I saw a formulation of don't politicize idea from a right-winger (FYI Elliot, it was Tracinski) that just said wait 72 hours after tragedy. very modest standard but people couldn't even come close to that

Elliot Temple:
 some major voices on the left are really eager to proclaim that they know the solution to tragedies like this. some major voices on the right disagree, and think they have better solutions, but are more willing to try to set that disagreement aside briefly to have some unity in mourning.

Elliot Temple:
 can we pray together and try to think things over for a few days before we go back to squabbling over the same bitter disagreement we've been fighting about for decades?

^ I think that's a reasonable attitude.

Elliot Temple:
 can we, in the wake of the tragedy, use it as a reminder that we're on the same side, instead of using it as leverage to be divisive?

Elliot Temple:
 unfortunately i honestly don't think Hillary Clinton is on the same side as the rest of us. but i can sympathize with people who take the above kind of attitude, and i think most of the left are reasonably decent people.

Elliot Temple:
https://www.youtube.com/watch?v=slDjxJMWJn4


Elliot Temple | Permalink | Messages (0)

Comments on Behavioral Genetics Lecture

These are my comments on the first 49 minutes of Behavioral Genetics II, a 2010 lecture from Robert Sapolsky at Stanford.

Around 30 seconds in, the foxes thing is wrong. He says fox breeding shows evolution moving really fast. But it's not evolution of new traits, it's just adjusting the parameters for traits which evolved in the past. Dawkins made the same mistake. See:

https://groups.yahoo.com/neo/groups/fabric-of-reality/conversations/topics/16068

The takeaway is the video lecturer and Dawkins are not philosophers and they routinely get things wrong when they stray into philosophy issues without realizing it. To understand knowledge creation correctly you have to study epistemology. Evolution itself is a theory of epistemology, and many people trying to talk about it don't even know what field it belongs to. The application of evolution to biology and genes is just one implication of the more general epistemological theory.

Also, regarding fox comparisons: humans are fundamentally different than animals because humans have intelligence software (universal knowledge creation software) and animals don't.

Around 12min, the lecturer talks about genetic markers. Note those are correlations. At 14min he says people carefully checked their statistics to decide how certain they were because things like terminating pregnancies were at stake. But no amount of statistics can ever turn correlations into causations. Before advising a single person to terminate a pregnancy, you must have discussions, with arguments and criticism, that try to understand the causality. The video doesn't attempt to discuss how to do this well, or mention the necessity of it. Again this is running into a philosophical issue (how to have a productive debate to seek the truth) and these people aren't philosophers and don't know what they're doing (they don't even realize when they stray out of their field, into a different field they are bad at).
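To make the correlation-vs-causation point concrete, here's a minimal simulation sketch. Everything in it is made up for illustration (it's not from the lecture or any real study): a confounding factor produces a strong marker–outcome correlation even though the marker has zero causal effect, and collecting more data only makes the spurious correlation more statistically certain, not more causal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # lots of data: the statistics get more *confident*, not more causal

# Hypothetical confounder (e.g. some environmental factor) that affects
# both whether a marker shows up in the sample and the measured outcome.
confounder = rng.normal(size=n)

# The marker is more common when the confounder is high...
marker = (confounder + rng.normal(size=n)) > 0

# ...and the outcome depends only on the confounder plus noise.
# There is no causal arrow from marker to outcome in this model.
outcome = 2 * confounder + rng.normal(size=n)

r = np.corrcoef(marker.astype(float), outcome)[0, 1]
print(f"marker-outcome correlation: {r:.2f}")  # clearly nonzero, yet purely spurious
```

No significance threshold applied to that correlation tells you whether intervening on the marker would change the outcome. That takes a causal explanation, argued about and criticized.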

Around 21min the lecturer suggests that genes can control human behavior, with no awareness that memetic evolution and intelligent decision making are the dominant issues for human behavior. He brings up extrapolating from animal genes to humans, including in the case of behavior, without realizing this huge difference. (Extrapolations from animals like that can be reasonable guesses for non-behavior issues like hair color.)

So far the lecturer hasn't said a word about gene-environment interactions or about memes. But once memes existed, they evolved faster than genes and therefore outraced genes to meet lots of selection pressures and therefore there are memes instead of genes for lots of human behaviors.

At 24:15 he brags about how a paper was in a "very prestigious" journal. He's interested in social status instead of truth. The study he talks about is just a correlation study, so who cares? And he didn't name it and they didn't bother putting a citation for it in the YouTube description. Then he talks about a second study, and it's the same thing: he just summarizes the conclusions and expects people to accept these claims without any arguments that they are true. He's just completely ignoring the gene-environment interaction issue, and memes, and it makes what he's saying misleading and unproductive.

At 26min he talks about the amygdala having to do with fear and anxiety. He buys into the standard belief about specialized brain regions for different functions. That is contradicted by the universality view. How can such a disagreement be settled? By debate. David Deutsch, myself and others have debated anyone who was willing to have a serious discussion for many years. And we've sought out people and asked if they had any criticisms of our arguments. There is no one from the other side who is able to win this debate against us. This is partly because, again, they aren't philosophers and knowing how to judge ideas in a debate is a philosophy skill. They don't know how to argue well, which is why they've accepted the wrong ideas and are unwilling to deal with criticism.

Where's the "behavioral memetics" lecture? It's not on the playlist.

I have nothing against this particular lecturer. Everything he said is standard and normal. That doesn't prevent it from containing major errors, which are known, and which a lot of people don't want to hear about. I will debate this lecturer, or whoever else, in writing, with no time limits, in a serious, scholarly way. I will continue the discussion to a conclusion instead of giving up and trying to "agree to disagree" and refusing to answer further criticisms and questions. But he won't do it.

At 40min the lecturer brings up heritability. He correctly says that people misunderstand heritability. That's typical. Experts in the field often do know what "heritability" means (they defined "heritability" totally differently than the regular word so that it'd be easier to study), but then the media misreports all the heritability studies. A great source on heritability is Yet More on the Heritability and Malleability of IQ. It has important points that the lecture leaves out.
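For readers who haven't seen the technical definition, here's a rough sketch of one standard way heritability gets estimated, Falconer's twin-correlation formula (the numbers below are hypothetical, purely for illustration). The point is that it's a statistic computed from correlations in a particular population, which is a very different claim than the everyday meaning of "inherited from your parents":

```python
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Falconer's classic heritability estimate: h^2 = 2 * (r_MZ - r_DZ),
    where r_MZ and r_DZ are the trait correlations for identical and
    fraternal twin pairs. It's built out of correlations and rests on
    strong assumptions (e.g. equal environments for both twin types),
    so by itself it says nothing about causation or malleability."""
    return 2 * (r_mz - r_dz)

# Hypothetical twin correlations, made up for illustration:
print(falconer_h2(0.85, 0.60))  # -> 0.5, i.e. "50% heritable" in the technical sense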

Around 46:45 the lecturer uses the word "explained" to mean "correlated with". That's so typical and bad.
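A small illustration of why that wording is misleading (made-up data, just a sketch): for a simple linear fit, the "variance explained" (R²) is literally the squared correlation coefficient, so saying a factor "explains" X% of the variance is restating a correlation, not explaining what causes what.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
y = 0.5 * x + rng.normal(size=10_000)  # hypothetical data

r = np.corrcoef(x, y)[0, 1]
print(f"r = {r:.2f}, 'variance explained' R^2 = {r**2:.2f}")
```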


Elliot Temple | Permalink | Messages (7)