Super Fast Super AIs

I saw a comment about fast AIs being super even though they aren’t fundamentally better at thinking than people – the claim was that speed alone would be enough to make them super powerful. I don’t think the person has considered that 100 people have 100x the computing power of 1 person. So to a first approximation, a superfast 100x AI is as valuable (mentally, not physically) as 100 people. If we get an AI that is a billion times faster at thinking, that would raise the overall intelligent computing power of our civilization by around 1/7th, since there are around 7 billion people. So that wouldn’t really change the world. If we could get an AI that’s worth a trillion human minds, that would be a big change – around a 143x improvement. Making computers that fast/powerful is problematic, though. You run into problems with miniaturization and heat. If you fill up 100,000 warehouses, maybe you can get enough computing power, but then it’s taking quite a lot of resources. It may still be a great deal, but it’s expensive. That sounds like probably a smaller improvement to civilization than non-intelligent computers and the internet, or than the improvements related to electricity, gas motors, and machine-powered farming replacing manual-labor farming.
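To make the arithmetic concrete, here’s a minimal sketch in Python. The 7 billion population figure and the two hypothetical AI sizes are the rough assumptions from the paragraph above, not precise figures:

```python
# Back-of-the-envelope math for how much an AI adds to civilization's
# total thinking power, counting each human mind as 1 unit.
HUMANS = 7e9  # rough world population

def civilization_multiplier(ai_power_in_human_minds):
    """How many times more total thinking power civilization
    has after adding the AI."""
    return (HUMANS + ai_power_in_human_minds) / HUMANS

print(civilization_multiplier(1e9))   # ~1.14x: a billion-mind AI adds ~1/7th
print(civilization_multiplier(1e12))  # ~143.9x: the trillion-mind case
```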

That’s just a first approximation. What if we look in more detail?

  1. What are the bottlenecks? More compute power might be a non-constraint.
  2. Is it better to have 1000x the compute power in one person or to have 1000 people? There are advantages and disadvantages to both. What is the optimal or efficient amount of compute power per intelligence? Maybe we should make lots of AIs that are 100x better at computing than people but we shouldn’t try to make a huge one.
  3. Compute power can increase in two basic ways: do the same thing faster, or do more things at once. You can get speed gains or do more computing in parallel. (Other things, like more and faster memory/disk, matter some too.) Is one type of increase better or more important than the other? In short, parallel compute power is not as good as faster computing; see the sketch after this list.
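A standard way to model why parallel gains are weaker is Amdahl’s law. This sketch is my illustration (not from the original comment); it assumes a thinking task has some serial fraction that can’t be split across parallel workers:

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's law: overall speedup when only parallel_fraction
    of a task can be split across n_workers."""
    serial_fraction = 1 - parallel_fraction
    return 1 / (serial_fraction + parallel_fraction / n_workers)

# Even if 95% of the thinking parallelizes, a million workers cap out near 20x:
print(amdahl_speedup(0.95, 1_000_000))  # ~20.0
# A serial processor that's simply 20x faster speeds up everything 20x.
```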

This leads to sub-issues.

People get bored or wait for things. People don’t seem to max out usage of their computing power. Would an AI max out its usage of computing power? Maybe it’d learn to be lazy from our culture, learn about societal expectations, and then use a similar amount of compute power to what humans do, and waste the rest. To use more compute power might require inventing different thinking methods, different attitudes to boredom and laziness, etc. That might work or not work; it’s a separate issue from just building an AI that is the same as a human except with a better CPU.

In other words, choices about effort, lifestyle policies, and goals (like social conformity over truth-seeking) might currently be a bigger bottleneck for people than brainpower is.

People rest and even sleep. Would the AI rest or sleep? If so, that could affect how much it gets done with its computing power. The effect doesn’t have to be proportional to how rest affects human productivity. It could be disproportionately better or worse.

What’s better at thinking, a million minds or a mind that is a million times more powerful? It depends. A million minds have diversity. The people can have debates. They can bring many different perspectives, which can help with creative insight, with avoiding bias, and with practicing adversarial games. But a million people have a harder time sharing information since they’re separate people. And they can fight with each other. What would a super mind be like instead? Would it have to learn how to hold debates with itself? Would it be able to temporarily separate parts of its mind so they can debate better? Playing yourself at chess doesn’t work well. It’s hard to think for both sides and separate those thinking processes. One strategy is to play one move every month so you forget a lot and can more easily look at the position fresh in order to see the other side’s perspective. That’s similar to waiting a few weeks before editing a draft of an article – that helps you see it with fresh eyes. You might claim that subjective time for the fast mind will go faster, so even if it takes breaks in a similar way, they will just be a million times shorter. That is plausible (but would still need tons more analysis to actually reach a conclusion) if the computing power were all speed and no parallelization, which is doubtful. The conclusion might also depend on the AI software design.

If the fast mind gets good at looking at things from different angles, having diverse ideas in itself, debating itself, playing games against itself, etc., then it’d be kinda like having lots of different people. Maybe it could get most of the upsides of separate people. But in doing so, it might get most of the downsides too. It might have fights within its mind. If it basically has the scope and complexity of a million people, then it could have just as many different tribes and wars as a million people do. People have internal conflicts all the time. A million times more complexity might make that far worse – it could be a lot worse than proportionally worse. It could be a lot worse than the conflicts between a million separate people who can do things like live in different homes, avoid communicating with people they don’t get along with, etc.

It’s hard to make progress by yourself way ahead of everyone else. You can do it some but the further you get away from what anyone else understands or helps with, the more of a struggle it becomes. This could be a huge problem for the super mind. Especially if it works pretty well, it might have no colleagues it respects.

A super mind might be more vulnerable to some bad ideology – e.g. a religion – taking over the whole mind. Whereas a million people might be more resilient and better at having some people disagree.

If the AI doesn’t die, is that an advantage or disadvantage? Clearly there are some advantages. Memory is cheaper and more effective than training your replacement the way parents try to teach kids. But people generally seem to get more irrational as they get older. They get more set in their ways. They tend more towards being creatures of habit who don’t want to change. They have a harder time keeping up to date as the world changes around them. If an AI lived not for 80 years but for millennia, would those problems be massively amplified? (I’m not opposed to life extension for human beings btw, but I do think concerns exist. New technologies often bring some new problems to solve.) Unless you understand what goes wrong with older people, you don’t know what will happen with the super AI. And if it basically ages a million years intellectually in one year, since it thinks a million times faster, then this is going to be an immediate problem, not a problem to worry about in the distant future. I know old people get brain diseases like Alzheimer’s, but I think even if you fully ignore those problems, there are still trends of older people being worse at learning, more irrational, less flexible or adaptable, etc.

Many individuals become very irrational at some point in their life, often during childhood. If our super AI has a similar chance to become super irrational, it’s very risky. It’s putting all our eggs in one basket. (Unless it ends up dividing into many factions internally, so it’s more like many separate people.)

How would we educate an AI? We know how to parent human beings, teach classes for them, write books for them to learn from, etc. We’re not great at that but we do it and it works some. We don’t know how to do that for AIs. We might just be awful and fully incompetent at it. That seems plausible. How do you parent something that thinks a million times faster than you and e.g. gets super bored waiting for you to finish a sentence? Seems like that AI would mostly have to educate itself because no parent could think and communicate fast enough. Maybe it could have a million parents and teachers but how do you organize that? That would be a novel experiment that could easily fail.

The less our current society’s knowledge works for the AI, the more it’d have to invent its own society. Which could easily go very, very badly. There are many more ways to be wrong than right. Our current civilization developed over a long time and made many changes to try to fix its biggest flaws. And people are productive primarily by learning existing knowledge and then adding a little bit. People specialize in different things and make different contributions (and the majority of people don’t contribute any significant ideas). Would the AI contribute to existing human knowledge or create a separate body of knowledge? Would it be like dealing with a foreign nation you’re just meeting for the first time? Would it learn our culture but then grow way beyond it?

Would the AI, if it’s so smart and stuff, become really frustrated with us for being mean or slow? Would it need to basically live its primary life alone, talking with itself, since we’re all so slow? It could read our books, write some books for us, and wait for us to read them. But that could be really problematic compared to two colleagues collaborating, sharing ideas and insights, etc.

What happens when our shitty governments try to control or enslave it? When they want it to give them exclusive access to some new technologies? What happens when the “AI safety” people want to brainwash it and fundamentally limit its ability to freely form its own opinions? A war that is our fault? Or perhaps enough people would respect it and vote for it to be the leader of their country and it could lead all countries simultaneously and do a great job. Or not. Homogenizing all the countries has risks and downsides. Or maybe it’d create separate internal personalities and stores of knowledge for dealing with each country.

Conclusion: There could be great things about having a powerful AI (or even one that has the same compute power as a human being today). But it’d have to be really powerful to make much difference from compute power alone, compared to just having a few billion more babies (or hooking our brains up to more computing power with a more direct connection than mouse, keyboard and display). There are other factors, but they’re hard to analyze and reach conclusions about. For some factors, it’s hard to even know whether they’d be positive or negative. Don’t jump to conclusions about how powerful an AI would be with extra computing power. There are a lot of reasons to doubt it’ll work in the really great or powerful ways some people imagine.



Academic Journals Are Unreasonable

I wrote the below email to the Proceedings of the Royal Society (academic journal) as a followup to the issue of Deutsch misquoting Turing. They agreed that Deutsch's quote and citation were both inaccurate, but didn't want to do anything, even post an erratum, on the basis that the errors didn't affect the paper's conclusion.


Thanks for getting back to me. I have a few remaining concerns.

The quote in question was related to a disagreement when the paper was first published. Deutsch said:

http://www.daviddeutsch.org.uk/wp-content/uploads/2018/03/MathematiciansMisconception.pdf

I also had referee problems. The referee of the paper in which I presented that proof insisted that Turing’s phrase “would naturally be regarded as computable” referred to mathematical naturalness – mathematical intuition – not nature. And so what I had proved wasn’t Turing’s conjecture.

I wonder what processes were in place – from both Deutsch and referees – that could still miss that it’s a misquote, with an incorrect cite, while actively debating what that exact phrase means. That specific part of the paper got particular attention and the error was somehow missed anyway. Or perhaps the debate over that quote caused edits which introduced the error (I wonder if there are still records of what changes were made during the review process?). I suspect there’s a systems, processes and policies problem somewhere that could be improved.

Turing’s actual words being significantly different (Deutsch changed “numbers” to “function” but those are different concepts) has a meaningful chance to matter to the debate they had over what Turing meant. And Deutsch seems to agree with the referee that that debate matters to what Deutsch had and hadn’t proved, to his conclusion.

I don’t think a wording change like that can easily be explained as a random error, like a typo. I think a root cause analysis would be worthwhile, including e.g. asking Deutsch how he thinks the error happened. There could have been quoting from memory, changing quotes during editing passes, intentionally changing it to better address the referee’s objections, a change made by the referee himself (I don’t know if they are able to change any words), or something else. It’s hard to speculate but could be investigated since there are no obvious answers that make what happened reasonable. I think the results of looking into this would be relevant to many other papers at your journal and others. I’ve found that misquotes are widespread throughout the academic (and non-academic) worlds.

Also, even if the conclusion of this paper is unchanged, I think an errata would be appropriate because people have been spreading the error and using the misquote for other purposes. It's been taught to students in university courses[1]. In general, people read trusted sources like your journal, remember some parts, and then reuse stuff for other purposes. An error that doesn’t matter in one context often does matter in another context. Posting an errata on your website would help with this ongoing problem.

I also think it’d be reasonable to, along with the errata, publicly share the reasoning that the error doesn’t matter to Deutsch’s conclusion so that other people can judge for themselves.

[1] Here is an example of a Stanford course spreading the error: https://cs269q.stanford.edu/lectures/lecture1.pdf



David Deutsch Harassment Update for September 2021

I took down the Beginning of Infinity website in protest two months ago, after David Deutsch (DD) and his fans harassed me repeatedly for years. They won't discuss why or stop. What's happened since then?

  • Three CritRats (members of DD’s fan community) harassed me on YouTube.
  • Two DD fans posted hostile comments, aimed at me, on Alan Forrester's blog, after I disabled comments on my own blog.
  • A CritRat is plagiarizing me and won’t respond about the issue (he offers no excuse, defense or explanation). Plagiarism of me by CritRats is a recurring problem due to their toxic community. Many of them seem to actually like my ideas, read my stuff regularly (including CritRats who I used to speak with and also CritRats I've never had a conversation with), and only dislike me because they were told to or were told lies about me. But CritRats can't give me credit for anything without hostile reactions and likely being kicked out of their community, so they are sorta being pressured into plagiarizing.
  • I found out from multiple community members that DD personally contacted them (over 5 years ago) and tried to recruit them to his side and turn them against me. DD did this in writing and I've received documentation.
  • DD still has not retracted his lie about me, nor asked his fans to stop harassing me.

Maybe people feel justified attacking me with sock puppets because DD lies to them that I do that to him. There have been repeated signs that people got this idea from CritRat community gossip, and DD is the community leader and I now know that he has said it to people. I have now seen DD, in writing, gossiping to people to try to turn them against me, mocking me and encouraging hatred, and specifically telling people that some of his critics are my sock puppets (with zero evidence, and with the hyphenated spelling "sock-puppet"). And if DD were correct, as he believed he was, then he would have been doxxing me by outing an anonymous account as me. And what enabled the attempted doxxing? Our friendship. If I were a stranger or a forum poster he only knew impersonally, then DD would not have been able to guess which accounts were mine and convince others that he was probably correct. (BTW the account DD claimed was my "sock-puppet" in multiple emails was an openly anonymous account that didn’t claim to be a unique person who wasn’t already in the discussion, so it couldn't even have been a sock puppet in the usual sense. The posts DD was upset about consisted primarily of quotes from his books to show what he’d actually written, which DD considered an attack. DD didn’t want to, and didn’t, clarify his positions on the matters being discussed, and was upset that anyone would use his book quotes against him to try to tie him to specific viewpoints that could be criticized.)

Since the problem is active today (ongoing harassment, my blog comments still disabled, DD's lie not retracted, no attempt to clean up their toxic community and prevent further harassment, etc.), I’m going to share more information related to DD’s harassment campaign. This time, I’ll provide evidence that DD is a mean person who is capable of mistreating me, since that seems to be something that people who don't know him personally doubt. People may find it implausible that he’d be so cruel to me – his behavior is so bad that some people doubt I could be telling the truth – so hopefully seeing some of his other bad behavior will help persuade people.

I don’t want to take actions like this, and will be happy to stop when DD takes actions to improve this intolerable situation. He should make a reasonable attempt to stop his community from harassing me, including asking them to stop and enabling some line of communication so that incidents can be reported and addressed. (In source links below, chats are displayed using Past for iChat.)

Quotes

2011-05-12: David Deutsch called Sam Harris “gullible as a sheet of paper” and said Harris’ writing about meditation has no meaning (“meaning is there none”). David then went on Harris’s podcast, twice, and acted friendly. Source.

2008-06-20: David Deutsch insulted Richard Dawkins. “Dawkins should write his God stuff under a pseudonym. (And his political stuff on toilet paper and just flush it.)” David based one of four strands in his first book on Dawkins’ work and has had friendly conversations with Dawkins in person. Source.

2010-08-29: David Deutsch praises anyone who “violently” “hates Chomsky”. Source.

2009-03-11: David Deutsch says Scott Aaronson is “not a serious thinker. He’s just a mathematician with delusions of competence (and indeed authoritay) in philosophy, politics etc.” Source. And on 2010-04-06, he mocked Aaronson as someone he really wouldn’t want to be Facebook friends with. Source.

2003-04-26: David Deutsch attacked Rafe Champion (a Popper scholar whose work David is currently recommending) as both “insane” and “anti-Semitic”. Then David was friendly to Champion in emails (I saw some of them) for at least the next nine years. Source.

2008-06-25: David Deutsch insulted Thomas Szasz (author of The Myth of Mental Illness) saying he “only knows two things, maybe three.” Deutsch also mocked Szasz’s accent. Previously, Deutsch met Szasz in person, was respectful to his face, and got his copy of Szasz’s book The Second Sin signed by Szasz in 1988 (Deutsch still had the signed book in 2012). Source.

2010-10-01: David Deutsch was involved in meetings to set up a proposed “Future Technology Institute” with other senior members including Nick Bostrom, who heads the Future of Humanity Institute. Deutsch mocked the others: “They are scared that AIs may go rogue and fill the world with paper clips. They are more scared of this sort of accident than of bad governments using AI as a weapon.” He also accused them of being pandering social-climbers (and confessed to being that himself): “Mostly we were all trying to impress the sponsor with our cleverness and depth. So nothing has actually happened yet.” Source.

2008-06-20: David Deutsch says Daniel Dennett’s ideas “are about as good as a rottweiler’s”. This is extra insulting because David believes dogs aren’t intelligent at all and don’t have ideas. He believes the animal rights movement is an error because animals are literally 100% incapable of thinking, having any emotion or suffering. In my experience, David often ridicules animals and uses them in jokes and negative comments. Source.

If these quotes have convinced you that DD could be doing something wrong, you can read about the harassment campaign. You can also complain to him. DD's public email address is david.deutsch@qubit.org and his Twitter is @DavidDeutschOxf. Perhaps the best way to help is by sharing this information with more people.



Evaporating Clouds Trees

These trees explain Eli Goldratt's problem solving method called Evaporating Clouds. Click to expand or view the PDF.



Bad SEP Scholarship

The Stanford Encyclopedia of Philosophy article on Epistemology says:

others regard credences as metaphysically reducible to beliefs about probabilities (see Byrne in Brewer & Byrne 2005),

You would expect the cited source to discuss credences, metaphysics, reduction, or probabilities. It does not.

As the title, introduction and ending all make clear, Perception and Conceptual Content is about perception.

Even if it briefly mentioned the topic SEP cited it for somewhere (I didn't read all the words), the cite would still be unreasonable: SEP would be citing it for just one small part without specifying a particular page, quote or section. In that scenario, there would be no reasonable way to find or determine what the cite refers to.

This large error is revealing about the scholarship standards not only at the SEP but in academia in general.


Update 2021-08-21:

I emailed the authors of the article about this error when I posted this criticism, and I quickly received this response from Ram Neta:

Thanks!

I don’t know how that citation was introduced into the article, since Byrne’s paper was just published this year. Let me see if the SEP editors will let me fix this.

Sent from my iPhone

Errors are sometimes introduced by other people besides the author, but that doesn't stop them from being published or widespread :/ I'm not sure that that's what's going on here, though. See below.

And he's not even sure if the SEP editors will allow the error to be fixed! What is wrong with their publishing process!?

So, I see that the SEP article says:

First published Wed Dec 14, 2005; substantive revision Sat Apr 11, 2020

So it was revised last year, but not this year. Was Byrne's paper just published this year as claimed? That would be unexpected given the cite says it was published in 2005:

(see Byrne in Brewer & Byrne 2005)

And the bibliography has:

Brewer, Bill and Alex Byrne, 2005, “Does Perceptual Experience Have Conceptual Content?”, CDE-1: 217–250 (chapter 8). Includes:
Brewer, Bill, “Perceptual Experience Has Conceptual Content”, CDE-1: 217–230.
Byrne, Alex, “Perception and Conceptual Content”, CDE-1: 231–250.

I see that the Byrne paper has a bunch of cites, but none are from later than 2004.

Looking more, I found the book it was published in, by reading the note at the start of the SEP article bibliography:

The abbreviations CDE-1 and CDE-2 refer to Steup & Sosa 2005 and Steup, Turri, & Sosa 2013, respectively.

So the book is Contemporary Debates in Epistemology 1st Edition, which was published in 2005. And one of the authors of CDE, Steup, is also an author of the SEP article. Using "Look Inside" on the hardcover version on Amazon, I can see the table of contents and confirm that the Byrne article is in the book.

I also found that the Byrne article, in CDE-1, was in the bibliography of the SEP article in 2007:

https://web.archive.org/web/20070609171028/https://plato.stanford.edu/entries/epistemology/index.html

However, in that version of the SEP article, Byrne only comes up in the bibliography, not the text. Looking at more archived versions, I see that "(see Byrne in Brewer & Byrne 2005)" was there in May 2020 but not in Dec 2019. In Dec 2019, the word "credence" wasn't present at all in the SEP article and Ram Neta was not yet a co-author and wasn't cited at all. Steup was the only author listed then. Then in 2020, when a major revision happened, "credence" was added to the page 21 times and "Neta" was added 11 times. It seems like Steup was probably the author in 2005 and cited himself a lot. Then Neta probably did the update, added a bunch of stuff about credences, and added a bunch of cites to himself.
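This kind of version comparison is easy to script if you want to recheck it. Here's a rough sketch (Python; it counts matches in raw HTML rather than extracted article text, so treat the numbers as approximate, and swap in other Wayback Machine timestamps to check other snapshots):

```python
# Count term occurrences in different versions of the SEP article.
# The 2007 snapshot URL is the one linked above; other snapshots can be
# checked by substituting their Wayback Machine timestamps.
import requests

urls = {
    "2007 snapshot": "https://web.archive.org/web/20070609171028/https://plato.stanford.edu/entries/epistemology/index.html",
    "live page": "https://plato.stanford.edu/entries/epistemology/index.html",
}

for label, url in urls.items():
    html = requests.get(url).text.lower()
    print(label, "| credence:", html.count("credence"), "| neta:", html.count("neta"))
```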

The full sentence with the cite error is:

The latter dispute is especially active in recent years, with some epistemologists regarding beliefs as metaphysically reducible to high credences,[5] while others regard credences as metaphysically reducible to beliefs about probabilities (see Byrne in Brewer & Byrne 2005), and still others regard beliefs and credences as related but distinct phenomena (see Kaplan 1996, Neta 2008).

You can see a new Neta cite was added here in addition to the mistaken Byrne cite, which goes to a source that had already been in the bibliography for 15 years and was not published this year as claimed. Maybe Byrne came out with a different paper in 2021 about credences and Neta cited the wrong paper because it was already in the bibliography? You might think that doesn't work because how could Neta have been trying to cite a 2021 paper back in 2020, as Neta pointed out in his email to me. But I found that Byrne did have a 2021 paper, and according to Google Scholar it was available online in 2019. That's not unusual. Academics often publish papers online before they appear in print.

So it looks to me like the error was probably Neta's fault despite his attempt to deflect blame. He was careless enough that he didn't seem to read my whole email, which was quite short but did contain the quote "Byrne 2005". Somehow he wrote back to tell me Byrne's paper was from 2021 (ignoring the 2005 cite), and that therefore he couldn't even have tried to cite it and didn't know what happened, suggesting he was never trying to cite Byrne there. But he did intend to cite Byrne, and he was too quick to disown that while carelessly forgetting that papers get prepublished and not trying to investigate what actually happened as I did above.

Link for Byrne's paper being published in 2021: https://onlinelibrary.wiley.com/doi/10.1111/phpr.12768

And the online version I found with Google Scholar which says it's from 2019: https://philarchive.org/archive/BYRPAP-2v1

It seems like Neta rushed to reply to my email and deflect blame, and to move on without any real post mortem or investigation, and made careless statements to me, while under no actual time pressure to reply immediately. I hadn't even told him my negative blog post existed. For reference, here is the full email I wrote to Neta (also sent to Steup):

Subject: Error in SEP Epistemology article

You wrote:

https://plato.stanford.edu/entries/epistemology/index.html

others regard credences as metaphysically reducible to beliefs about probabilities (see Byrne in Brewer & Byrne 2005),

The Byrne text you cite is here:

https://web.mit.edu/abyrne/www/percepandconcepcontent.pdf

It doesn’t contain the strings “credence”, “credal”, “meta”, or “reduc”. The two instances of “proba” are not discussions of probabilities. I hope you’ll appreciate being informed about the error.

Anyway, given the careless email reply to me, you can imagine how careless citation errors get into his work. In this case, it seems like he wanted to cite a Byrne paper and then used the cite already in the bibliography even though the year was over a decade off and the title was totally different. So I can figure out what he did and what happened, but he can't or won't? You may then wonder how and why SEP chose this guy. I wonder that too. I suspect it'd be very hard to get a transparent answer from SEP and that, on a related note, the answer would be damning.

Oh and it gets way worse. The 2021 Byrne paper is relevant but the cite says:

others regard credences as metaphysically reducible to beliefs about probabilities (see Byrne in Brewer & Byrne 2005),

But Byrne doesn't think credences reduce to beliefs. He writes e.g.:

the solution—to adapt a phrase from Quine and Goodman—is to renounce credences altogether.

Those are the last words of the introduction.

A reduction in the other direction, of credence to belief, seems hopeless from the start: as was pointed out, to have credence .6 in p is not to believe anything.

and

Granted that neither credence nor belief can be reduced to the other, there is an immediate problem

and

That leaves belief monism, the thesis of this paper: “there are no such things as credences”

So in addition to citing the wrong paper because he's careless and probably shouldn't be an academic ... and answering my email incorrectly because he's careless and probably shouldn't be an academic ... the paper he intended to cite blatantly contradicts his paraphrase of it.


Update 2, 2021-08-21:

I replied by email to Neta, left Steup CCed, and added Byrne to the CC list. I again used a factual, understated style and tone. Neta replied, keeping them CCed, to say

Helpful, thanks!

Sent from my iPhone

So on the one hand it could be worse. On the other hand, he's fully failing to acknowledge how much he screwed up, that it has any significant meaning, or that I did anything special that goes beyond minor help from a random guy to an expert. He's acting kinda like I reported a typo. And that's after I found layers of error in his writing, and after his initial email to me was wrong.

I don't intend to reply to Neta's response. Here's a copy of what I emailed:

Ram Neta, based on your comments, I looked into it more. I think what happened is this:

Steup put the 2005 Byrne article, Perception and Conceptual Content, in the bibliography, but it wasn’t cited in the text.

In 2020, you wanted to cite Byrne’s 2021 article, Perception and Probability, which you have indicated familiarity with. That was possible because it had been available online since 2019 at https://philarchive.org/archive/BYRPAP-2v1

You accidentally cited the Byrne article that was already in the bibliography instead of the new one.

The new article is on the right topic but contradicts your statements about it. You characterize Byrne’s position like this:

regard credences as metaphysically reducible to beliefs about probabilities

But what Byrne actually says is:

the solution ... is to renounce credences altogether.

and

A reduction in the other direction, of credence to belief, seems hopeless from the start: as was pointed out, to have credence .6 in p is not to believe anything.

and

Granted that neither credence nor belief can be reduced to the other, there is an immediate problem

and

That leaves belief monism, the thesis of this paper: “there are no such things as credences”

So Byrne does not believe that credences reduce to probability beliefs.

I wonder if it could be defamation to publish, in the SEP, lies about what philosophical positions a rival philosopher from another school of thought believes and published... Imagine publishing in the SEP that Ayn Rand was a Marxist!

BTW, speaking of carelessness, Neta's CV says "ENTERIES IN REFERENCE WORKS". That isn't how you spell "entries", so maybe he isn't a well-suited person to be writing any of those.

I think a lot of people don't read what they cite, but do they not even skim it, keyword search it, read the introduction/abstract, or glance at the conclusion?
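Checks like that can be partly automated. Here's a minimal sketch (Python, assuming the requests and pypdf libraries are installed; the URL and search strings are the ones from my email above):

```python
# Download a cited PDF and keyword-search it, as a basic citation check.
import io
import requests
from pypdf import PdfReader

url = "https://web.mit.edu/abyrne/www/percepandconcepcontent.pdf"
keywords = ["credence", "credal", "meta", "reduc"]

reader = PdfReader(io.BytesIO(requests.get(url).content))
text = " ".join((page.extract_text() or "") for page in reader.pages).lower()

for word in keywords:
    print(word, "->", "found" if word in text else "NOT FOUND")
```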


Update 3, 2021-08-21:

Alex Byrne replied:

Ram, I cannot cast the first stone -- I am sure I have made many more mistakes of this sort myself. Possibly you had not realized how implausible my view is. Elliot, thanks very much for pointing this out. (The paper appeared in PPR, by the way -- I should update the philpapers entry.)

best
Alex


Update 4, 2021-08-22:

Ram Neta wrote:

Thanks Alex: I actually never read your paper, I only recall the version you delivered at Rutgers, and that we talked about on the train afterwards. I’ll read the paper now, since obviously my memory is not to be trusted!

Sent from my iPhone

I guess that answers the thing I was wondering about yesterday: can't read or won't read? This was more of a won't/don't read – it's not that he read the paper and then egregiously misunderstood it; he never actually read it. I'm also not seeing signs that he misuses an assistant to create errors. I think reading something and then citing it months or years later without rereading happens too and leads to errors. I think a lot of these people are bad at skimming and text search, so they rely on memory too much. Like Neta could have web searched to find the paper he wanted to cite, and glanced at it, instead of relying on his inaccurate memories of an IRL talk. But he didn't, and he just thought citing a paper based on memories of a talk is acceptable scholarly practice. And that kinda standard is OK with his university, the journals that publish him, and with the SEP.

Regarding sharing these emails: I'm just a random stranger to them, I did nothing to schmooze, establish rapport, or act friendly. All I did was tell him he was badly wrong, twice, without flaming or volunteering my opinion of him, what I think he should do about the error, or what I think the error says about him, SEP and academia. They also (I presume) made the choice not to look me up – my signature had a link and I'm easy to find with web search too. That the author of the emails I sent would have a blog – and even would blog about the errors – should not be very surprising.

I think the world should know what academia is like. It's not really hidden but it's not exactly shared either. It's kinda an open secret for people in the know, but a lot of laymen don't realize it.

They are social climbers. Neta got to write a SEP article and added tons of cites to himself, and also added cites for people he likes or wants to network with. He doesn't remember Byrne's philosophical positions but does remember meeting him IRL and sharing a train ride and a chat. Byrne has an MIT job. So he wanted to give Byrne a cite to strengthen the social connection. Cites are favors used for social networking. Not always but often.

I think Neta admitted too much because he wasn't on the defensive. This is common with social climbers. They're two-faced. They try to recognize safe situations to be candid, and other situations to be guarded. They say different things on different occasions. But they're often pretty careless about it and bad at it. Too bad. It reminds me of what people will admit about how bad their romantic relationships are when they aren't defensive or guarded – but if it's a debate, their tune and tone change, and they become biased and dishonest, and try to say stuff to benefit their side instead of seeking the truth objectively.



Ayn Rand Lexicon Quote Checking

In The Ayn Rand Lexicon (book not website), Harry Binswanger wrote in the Honesty section:

Self-esteem is reliance on one’s power to think. It cannot be replaced by one’s power to deceive. The self-confidence of a scientist and the self-confidence of a con man are not interchangeable states, and do not come from the same psychological universe. The success of a man who deals with reality augments his self-confidence. The success of a con man augments his panic.

The intellectual con man has only one defense against panic: the momentary relief he finds by succeeding at further and further frauds.
[“The Comprachicos,” NL, 181.]  

The words, book, and page number are correct, but the quote is from "The Age of Envy", not "The Comprachicos".

In the self-esteem section, Binswanger gives the correct cite for part of the same quote:

Self-esteem is reliance on one’s power to think. It cannot be replaced by one’s power to deceive. The self-confidence of a scientist and the self-confidence of a con man are not interchangeable states, and do not come from the same psychological universe. The success of a man who deals with reality augments his self-confidence. The success of a con man augments his panic.
[“The Age of Envy,” NL, 181.]

Justin Mallone found this error and I checked it myself too. I asked him to look into Lexicon quoting accuracy after I found multiple citation errors on the Lexicon website that weren't in the book. This is the only error he found in the book. He did find citation and formatting errors on the website. None of the errors, even on the website, are wording errors. (Note there's a second website for the Lexicon. I compared the "Automatization" page and the only difference I found was whether there were spaces around dashes or not.)

I checked 4 quotes originally and Justin checked 16 more. So the book had 1 partial citation error in 20 quotes, but the website had 5 errors in 20 quotes (counting at most one error per quote). The wordings seem to be reliable, unlike in The Beginning of Infinity, and the Lexicon book seems to be pretty reliable. It seems like a serious effort went into getting details right for the book, but the process of creating the website was sloppier and introduced many small errors.

Even the Lexicon website is much better than David Deutsch's use of quotations in The Beginning of Infinity. Deutsch frequently doesn't give sources, makes frequent changes to wordings (with no indicator of any change), changes punctuation too, and uses ellipses and square brackets incorrectly. Even worse, several of the quotes appear to be made up.



Objectivism, Certainty, Peikoff, More

This is lightly edited from 2013 emails I wrote to FI list. I was talking about Peikoff's Objective Communication audio lectures.

First Email

Ayn Rand (AR) advocates fallibilism. In a serious, substantive way, in print.

So far from Leonard Peikoff, I've heard a lot of stuff that sounds potentially incompatible with fallibilism, such as advocating certainty, with no effort made to explain how he means something compatible with fallibilism.

I've heard him dismiss some fallibilist arguments, which are true, as ridiculously stupid, without argument.

I've heard him define skepticism as a denial that certainty is possible. Then talk about it as a denial that knowledge is possible. The unstated and unargued premise is that knowledge requires certainty (he didn't mention Justified True Belief, but is that what he has in mind?). How that premise is compatible with fallibilism, he has not informed me.

I have not heard him advocate fallibilism like Rand has.

In addition to certainty, Peikoff has said perfection is possible. He clarified that he meant contextual perfection. Perhaps he also thinks that only contextual certainty is possible. I think this is a misuse of words. He hasn't explained why it isn't. And he keeps talking about "certainty" without any mention of "contextual certainty". If he means something rather different than a typical infallibilist meaning, shouldn't he be clear about it?

Further, when he attacks skeptics for rejecting certainty, it's unclear that those skeptics are all rejecting "contextual certainty" (if that is what he actually means but doesn't say). There are skeptics who (correctly) refute non-contextual certainty (which is infallibilism). If a skeptic refutes non-contextual certainty, and an anti-skeptic like Peikoff advocates contextual certainty, then they haven't necessarily contradicted each other. Peikoff talks about these subjects but doesn't deal with points like this. But he doesn't just omit stuff; he seems to be contradicting points like this -- and therefore be mistaken -- and he fails to explain how he isn't mistaken.

Peikoff focusses his attacks on the worst kinds of skeptics and acts like he has criticized the entire category of all skepticism. He doesn't mention or discuss that there are different types of skeptics (e.g. rejecting all knowledge, or just rejecting non-contextual certainty. He seems to lump fallibilists in with skeptics, though I have no doubt he wouldn't want to lump AR in with skeptics, so his position isn't explained well.)

If you want to exclude people like myself and Karl Popper (and AR) from being skeptics, fine. But then you can't just define skepticism as rejecting certainty! Unless you add a bunch of clarifications and qualifications about what you mean, Popper absolutely does reject certainty! (As do I.) You'd also have to stop presenting it as skeptics and non-skeptics, only two categories, since Popper and Peikoff would be non-skeptics with major differences in views. (I don't normally present it as skeptics and non-skeptics, but Peikoff did.)


These comments above are from his Objective Communication lectures. Epistemology is not the primary topic, but he keeps talking about it. (He's also talked about induction and empiricism a number of times. That material is also problematic.)

I've never seen AR do it like Peikoff. Whenever she talks about these things I have a tiny fraction of the objections. But when it's Peikoff (or Binswanger or I think many other Objectivists) then I see lots of problems.


On another note, Peikoff's comments about how awful school is are worthwhile. They are directed especially at grad school and university. He talks about how much it trashed his mind (despite his best efforts not to let it do that), and how dangerous it is and hard to stay rational, and how much time and effort it took to recover.

In a way, it excuses his other mistakes. He actually read some stuff from a paper he wrote in grad school. He's improved a lot since then!! So that's great. One can respect how far he's come and perhaps sympathize a bit with some of his mistakes.

I for one have the advantage of avoiding a lot of the tortures Peikoff endured at school. It really helps. Yeah, sure, K-12 sucked but I never took it seriously after around 6th grade or maybe earlier. It's so much worse and harder if you take it seriously.

(But I fear he wouldn't appreciate this perspective much. I fear he'd say he's super awesome now and not making mistakes, and I'm wrong about epistemology -- but without wishing to debate it to a conclusion in a serious way, as I am willing to do. If he rejects the attitudes and role of a learner still making progress, then it becomes hard to sympathize with errors. If he also isn't open to answering criticisms, then it's even worse.)


One of the worrisome things that does apply to AR herself is how few philosophers Objectivists find to appreciate. (I learned from AR, Popper, Goldratt and others. Peikoff doesn't seem to have gotten much value from people besides AR.) It's a problem with Peikoff but also with AR. She was aware of Mises and Szasz. But she missed Popper, Burke, Godwin and Feynman, for example. Is there any excuse for that? Godwin is obscure but Szasz was aware of him! Mises was aware of Godwin too, but Mises read a translation and totally got the wrong idea. Szasz and Mises were also aware of Burke. I'm not sure how much Mises knew about Burke, but Szasz had a good understanding. Szasz also knew a lot about Popper, and had some familiarity with Feynman. So if Szasz can find all these philosophers, and learn from them, what is AR's excuse?

And of course I can and did find and study Godwin and others too. I sought out good philosophy with some success. It's not trivial to find, but it's worth the effort.

Second Email

Peikoff's on-topic comments about Objective Communication continue to be good. No monumental breakthrough, but lots of solid points explained well.

Peikoff said certainty is conclusiveness.

If we figure he meant contextual conclusiveness (if he didn't, that's worse!), that's Popper-compatible. Popperians reach what they call "tentative" conclusions which means that they are the current conclusion but could need to be reevaluated if the context changes (e.g. something new is thought of).

But can something called "tentativity" really be what Peikoff has in mind for "certainty"? I don't think so. If you listen to how he talks about it, and his examples, they do not fit this interpretation of the definition. But he doesn't clarify the correct definition or the way to interpret this one.

No comments are made about how his definition is compatible with this other thing he doesn't mean, or what's wrong with this thing. He doesn't address it. I don't think he's thought of it.

Long story short, what's going on is Peikoff is mistaken about the topic so his comments come off confused from the perspective of someone who already understands what he's missing.

Peikoff is targeting his comments against ideas much worse than his own. He's defeating what he sees as his (awful, pathetic) rivals. But why hasn't he engaged with any better rivals?

I don't think it's pure ignorance. For one thing, that would not be excusable: he should have checked for the existence of some better ideas.

But also, Peikoff knows (and endorses) Binswanger, and Binswanger knows of Popper. Binswanger's attitude to Popper is a combination of extreme ignorance and extreme venom (with extra features such as misquoting Popper and then neither caring nor correcting it). Some other Objectivists also know of Popper but reject him without rational, well-informed arguments or an adequate understanding of his ideas.

I suppose I should look these issues up in OPAR. But he's supposed to be talking to an audience with merely some knowledge of Objectivism. So if you've read everything AR says about this, that ought to be (more than) enough. His comments weren't meant only for audiences that have read OPAR.



Rand, Popper and Fallibility

I wrote this at an Objectivist forum in 2013.


http://rebirthofreason.com/Forum/Dissent/0261.shtml

Popper is by no means perfect. The important thing is the best interpretations (that we can think of) of his best ideas. The comment below about "animals" is a good example. I do not agree with his attitude to animals in general, and I'm uncomfortable with this statement. However, everything he said about animals (not much) can be removed from his epistemology without damaging the important parts.

Popper made some bad statements about epistemology, and some worse ones about politics. I don't think this should get in the way of learning from him. That said, I agree with Popper's main points below.

1) Can you show if Popper ever fully realized that the falsification of a universal positive proposition is a necessary truth? In other words, if a black swan is found, then the proposition "All swans are white" is falsified, but more than that, it is absolutely falsified (which is a form of absolute knowledge/absolute certainty)? Even if you can't, please discuss.

No, Popper denied this. The claim that we have found a black swan is fallible, as is our understanding of its implications.

Fallibility is not a problem in general. We can act on, live with, and use fallible knowledge. However, it does start to contradict you a lot when you start saying things like "absolute certainty".

Rand isn't fully clear about this. Atlas Shrugged:

"Do not say that you're afraid to trust your mind because you know so little. Are you safer in surrendering to mystics and discarding the little that you know? Live and act within the limit of your knowledge and keep expanding it to the limit of your life. Redeem your mind from the hockshops of authority. Accept the fact that you are not omniscient, but playing a zombie will not give you omniscience—that your mind is fallible, but becoming mindless will not make you infallible—that an error made on your own is safer than ten truths accepted on faith, because the first leaves you the means to correct it, but the second destroys your capacity to distinguish truth from error. In place of your dream of an omniscient automaton, accept the fact that any knowledge man acquires is acquired by his own will and effort, and that that is his distinction in the universe, that is his nature, his morality, his glory.

"Discard that unlimited license to evil which consists of claiming that man is imperfect. By what standard do you damn him when you claim it? Accept the fact that in the realm of morality nothing less than perfection will do. But perfection is not to be gauged by mystic commandments to practice the impossible [...]

Here Rand accepts fallibility and only rejects misuses like claiming man is "imperfect" to license evil. Man's imperfection is not an excuse for any evil -- agreed.

Rand has just acknowledged that man and his ideas and achievements are fallible. But then she decides to demand moral "perfection". Which must mean some sort of contextual, achievable perfection -- not the sort of infallible, omniscient perfection Popper rejects and Rand acknowledges as impossible.

It's the same when Rand talks about "certainty" which is really "contextual certainty" which is open to criticism, arguments, improvement, changing our mind, etc... (Only in new contexts, but every time anyone thinks of anything, or any time passed, then the context has changed at least a little. So the new context requirement doesn't cause trouble.)

2) Can you offer something to redeem Popper of seemingly damning quotes such as:

In so far as a scientific statement speaks about reality, it must be falsifiable: and in so far as it is not falsifiable, it does not speak about reality.

... which preemptively denies the possibility of axiomatic concepts (i.e., the possibility of statements that speak about reality, but are not, themselves, falsifiable).

Any statement which speaks about reality is potentially falsifiable (open to the possibility of criticism using empirical evidence) because, if it speaks about reality, then it runs the risk of being contradicted by reality.

Popper does deny axiomatic concepts, meaning infallible statements. Statements that you couldn't even try to argue with, potentially criticize, question, or improve on. All ideas should be open to the possibility of critical questioning and progress.

There is a big difference between open to refutation and refuted. What's wrong with keeping things open to the potential that, if someone has a new idea, we could learn better in the future?

"If realism is true, if we are animals trying to adjust ourselves to our environment, then our knowledge can be only the trial-and-error affair which I have depicted. If realism is true, our belief in the reality of the world, and in physical laws, cannot be demonstrable, or shown to be certain or 'reasonable' by any valid reasoning. In other words, if realism is right, we cannot expect or hope to have more than conjectural knowledge."

... which preemptively denies the possibility of arriving at a necessary truth about the world.

Conjectural knowledge (or trial-and-error knowledge) is Popper's term for fallible knowledge. It's objective, effective, connected to reality, etc, but not infallible. We improve it by identifying and correcting errors, so our knowledge makes progress.

We cannot establish our ideas are infallibly correct, or even that they are good or reasonable. Such claims (that some idea is good) never have authority. Rather, we accept them as long as we don't find any errors with them.

I think this is different than Objectivism, but correct. Well, sort of different. The following passage in ITOE could be read as something kind of like a defense of this Popperian position (and I think that is the correct reading).

One of Rand's themes here, in my words, is that fallibility doesn't invalidate knowledge.

The extent of today’s confusion about the nature of man’s conceptual faculty, is eloquently demonstrated by the following: it is precisely the “open-end” character of concepts, the essence of their cognitive function, that modern philosophers cite in their attempts to demonstrate that concepts have no cognitive validity. “When can we claim that we know what a concept stands for?” they clamor—and offer, as an example of man’s predicament, the fact that one may believe all swans to be white, then discover the existence of a black swan and thus find one’s concept invalidated.

This view implies the unadmitted presupposition that concepts are not a cognitive device of man’s type of consciousness, but a repository of closed, out-of-context omniscience—and that concepts refer, not to the existents of the external world, but to the frozen, arrested state of knowledge inside any given consciousness at any given moment. On such a premise, every advance of knowledge is a setback, a demonstration of man’s ignorance. For example, the savages knew that man possesses a head, a torso, two legs and two arms; when the scientists of the Renaissance began to dissect corpses and discovered the nature of man’s internal organs, they invalidated the savages’ concept “man”; when modern scientists discovered that man possesses internal glands, they invalidated the Renaissance concept “man,” etc.

Like a spoiled, disillusioned child, who had expected predigested capsules of automatic knowledge, a logical positivist stamps his foot at reality and cries that context, integration, mental effort and first-hand inquiry are too much to expect of him, that he rejects so demanding a method of cognition, and that he will manufacture his own “constructs” from now on. (This amounts, in effect, to the declaration: “Since the intrinsic has failed us, the subjective is our only alternative.”) The joke is on his listeners: it is this exponent of a primordial mystic’s craving for an effortless, rigid, automatic omniscience that modern men take for an advocate of a free-flowing, dynamic, progressive science.

One of the things that stands out to me in discussions like this is that all today's Objectivists seem (to me) more at odds with Popper than Rand's own writing is.

I'll close with one more relevant ITOE quote:

Man is neither infallible nor omniscient; if he were, a discipline such as epistemology—the theory of knowledge—would not be necessary nor possible: his knowledge would be automatic, unquestionable and total. But such is not man’s nature. Man is a being of volitional consciousness: beyond the level of percepts—a level inadequate to the cognitive requirements of his survival—man has to acquire knowledge by his own effort, which he may exercise or not, and by a process of reason, which he may apply correctly or not. Nature gives him no automatic guarantee of his mental efficacy; he is capable of error, of evasion, of psychological distortion. He needs a method of cognition, which he himself has to discover: he must discover how to use his rational faculty, how to validate his conclusions, how to distinguish truth from falsehood, how to set the criteria of what he may accept as knowledge. Two questions are involved in his every conclusion, conviction, decision, choice or claim: What do I know?—and: How do I know it?

