Crony Capitalists

I wrote this to economist George Reisman.

I’ve noticed you use Jeff Bezos as an example of a good capitalist/businessman/entrepreneur. I haven’t researched him in detail (I doubt you have either), but I’m skeptical of him. I fear he’s a social climber. I also think Ayn Rand would be right to judge his choice of women.

Amazon has long had large amounts of fraud in its marketplace, including fake reviews and fake products, particularly from China. They haven’t done much about this and appear to like it and to intentionally mislead customers about the difference between buying from Amazon and from third party sellers. Amazon has done things like push to get Fakespot – a site that helps identify fake reviews on Amazon – deplatformed from Apple’s app store. I also think there's merit in some, but definitely not all, of the complaints about the working conditions for Amazon employees (of course the government is more at fault than Amazon).

Amazon got the government to start an antitrust lawsuit against Apple and book publishers even though Amazon had much more ebook market share than Apple. The market leader, Amazon, successfully used the government to suppress competitors so they could more freely act like an abusive monopolist.

Recently, Amazon made a video game studio (with a half-billion-dollar annual budget) and, after repeated delays, released New World, which is an incompetent mess that shows the people in charge have no idea what they’re doing on many levels. They’ve also been lying about pay-to-win and microtransaction issues. I’ve followed this as both a programmer and gamer, and I know details.

Today, I saw a video explaining how Twitch – a video-game-oriented video streaming platform owned by Amazon – is complicit in millions of dollars of money laundering, as well as defrauding their advertisers. In short, Twitch was recently hacked and tons of their data and code was dumped in public, including financial information. Analyzing this data made it pretty easy to catch large scale money laundering (some of which involves using software to create lots of fake viewers, which advertisers then pay to advertise to). Twitch already had all this data before it was publicly leaked, and they profited off the money laundering while not acting against it.

Amazon also censored/deplatformed a review I wrote warning people that a book heavily plagiarized my philosophical ideas. (The author said he was my fan and paid me for private calls to teach him stuff, then stopped speaking to me and secretly wrote the book. It doesn’t contain my name once, but contains unique, original material from the calls and from many of my blog posts including posts written after he cut contact. He even used a couple exact quotes from me presented as his own words.)

I urge you to be very careful about which businessmen you promote as wonderful representatives of capitalism. Currently, I think most of them are corrupt social climbers with friends in Washington. It’s unsafe to say any current CEOs are good without researching them.

For example, Elon Musk has a big fan following, and is hailed as a genius, but he's an especially bad crony capitalist who gets many US government subsidies while also dealing with the Chinese government a bunch.

I respected Steve Jobs a lot but I think Tim Cook is mediocre. He was good at his pre-CEO job (supply chain logistics stuff) but is bad as CEO. Cook has moved Apple much more in the direction of making friends with government and environmentalists. And Apple’s Mac and interface design teams have gotten worse.

Google, Facebook, Twitter and many similar companies are connected with government a ton.

Elliot Temple | Permalink | Messages (0)

Pandering Cycle

There’s a cycle where people get popular with stuff that stands out, that isn’t bland. They appeal to early adopters and get popular, then become bland, generic, dumbed down, etc., to make the most money from that popularity by appealing more to the masses.

They do something good, it gets them a reputation, then they ruin it for a larger, blander audience. Like Brandon Sanderson books are getting more basic as he has a bigger audience that is more lowest-common-denominator, more beginners to fantasy, more reversion to the mean for audience quality, etc.

Why do the masses want dumbed down good things? Why does that happen repeatedly? Why not take the shortcut of making bland crap in the first place? The masses don’t necessarily chase that. There is plenty of it. They want bland crap with a reputation for not being bland crap. They want a lie. They want to pretend to be early adopters, power gamers, super nerds, etc. – pretend to be seeking out something high quality, special, different – while actually they are fed bland crap they can digest. They don’t actually like, and can’t deal with, the kind of great things that early adopters and the best people want. They want to pretend to like those things while actually consuming something different, changed, easier. That’s why companies make hard games like Diablo 3 and then nerf them (and partly they were using the reputation of Diablo 2, which was harder).

The masses would rather pretend to be Sanderson fans – fans of something great – than be fans of something obviously basic. They want to pretend to be something they aren’t. And if Sanderson will change what he writes for them, he does them a great service. He takes a lot of the dishonesty on himself. They don’t even have to know he’s writing different, easier books. He doesn’t say this publicly. Similarly, if Diablo 3 developers nerf the game, without advertising that fact publicly, then they are taking a lot of the dishonesty on themselves.

The masses want to fake how good they are and have someone else do the lying for them. But they don’t want to be great and engage with great things – that’s hard, challenging, etc. Like how people want to be like Hank Rearden without actually being like him.

The public needs great men to establish some legitimate or at least plausible reputations and then sell their souls to help the public fool themselves. They don’t just want basic stuff. They want basic stuff masquerading as great stuff. They want to pretend to be something they’re not.

Sanderson doesn’t take all the lying on himself. He has editors and advisors to help with that. He even has co-authors to do it. They give plausible reasons for writing dumbed down stuff. They don’t call it that. They do a lot of the changing and Sanderson eventually picks up more of it himself and changes too.

The Diablo 3 devs had all kinds of excuses about fairness and balance. And they are a group. No person got all the balancing he wanted. There was a bit of design by committee. So no one will take the responsibility or blame. Committees are good at lowest common denominator decisions that no one really likes. If three people each want to keep a different hard part, then maybe no one will get what they want. They will each think they valiantly fought to keep the difficulty in the game. But each of them, two out of three times, voted to remove some difficulty they thought was unbalanced, unfair, lame, etc.

Elliot Temple | Permalink | Messages (0)

Writing Critique for "Community Banking and Fintech"

This is analysis and critique of the writing (not content) from the first two sections of patio11’s new article Community Banking and Fintech.

One of the best things about the Internet is that it both provides infrastructure for society but also demystifies that infrastructure.

It’s saying “both provides [x] … but also [does] [y]”. It’s problematic to conjoin two “both” things with “but” rather than “and”.

It also says “one [thing] … is that it both”. This is awkward because “both” means two things, not one. It’s not necessarily strictly wrong because if you conjoin two things then they become one group, but it’s bad.

It’s not necessary to make a complex sentence with the main information structurally nested (imagine the sentence as a dependency grammar tree diagram) below modifier information (that the internet is awesome). The sentence could be written more directly like:

The Internet is great. It both provides infrastructure for society and demystifies that infrastructure.

Or putting the emphasis more on what I think is the main content:

The Internet both provides infrastructure for society and demystifies that infrastructure, which is great.

Or using a simple adjective:

The wonderful Internet both provides infrastructure for society and demystifies that infrastructure.

Moving on to the second sentence, which completes the first paragraph:

I’ve spent the last few years going deep on financial infrastructure while working at Stripe, and thought it might be useful to geek out about finance with software people and software with finance people.

I suspect he means geek out by writing this newsletter (this is from the first issue of a new newsletter). But he doesn’t say that. He doesn’t write down how he intends to geek out.

There’s no section label to help you out. It doesn’t say like “welcome to the new newsletter” or have any other heading to tell you what this paragraph is for, other than the article title “Community Banking and Fintech” which is a misleading label for this content. Before now, I thought it was the first paragraph of the article, not a meta note about the newsletter itself. But, looking ahead, next is a brief disclaimer paragraph and then there’s a new header which is a longer version of the title. So now I think the real article starts later.

I see that in the email version it’s framed a little better because it says “Hiya! Patrick McKenzie (patio11) here.” at the start of the section. That helps make it seem less like the start of the article, though it’s pretty unclear.

So I think what he meant to say is that the internet is great for providing certain types of information and he’s going to contribute to doing that with this newsletter. But he didn’t explicitly say that. He hints at it, avoids directly saying what he means, and moves on. Maybe he thought it’d be too large of a brag? But he could have toned the rhetoric down to fix that. E.g., instead of “best” he could have said it’s one of his personal favorites, or it’s something he thinks provides a lot of value.

Moving on to the end of the first main article paragraph:

One reason for this is that the U.S. is dependent on community banks throughout much of the nation.

The start of this sentence is a boring mouthful. You don’t learn anything significant from “One reason for this is that”. Those are glue/structure words, not meat/content words. And it’s easy to trim. “One reason is that” would work without the “for this”.

Even with two words deleted it’s still awkward. How can we do better? The point is that it’s just one reason out of multiple reasons. That’d be better as a modifier or side note, rather than as the lead of the sentence that the main point is structurally nested under (imagine the sentence as a dependency grammar tree diagram).

Here’s a simple restructuring which puts the key information upfront and makes the minor information a modifier:

The U.S. is dependent on community banks throughout much of the nation, which is one reason there are so many.

We could also do a larger rewrite:

Many U.S. banks are small community banks. The U.S. depends on those in many regions.

The original text “throughout much of the nation” would work instead of “in many regions” but it’s longer and less clear: I think the point is that some but not all regions depend on community banks, so I tried to communicate that in the rewrite.

A community bank is a locally-oriented financial institution, generally much smaller than regional or national banks, focused largely on the “traditional business of banking” (taking deposits and lending) versus the capital markets functions that the “money center” banks also engage in.

This is too long for one sentence. It’s trying to say too many things at once. It says four things: what a community bank is, its size, its focus, and a contrast to its focus. It’s easy to split:

A community bank is a locally-oriented financial institution, generally much smaller than regional or national banks, focused largely on the “traditional business of banking” (taking deposits and lending). It doesn’t focus on the capital markets functions that the “money center” banks also engage in.

Or, with a different split:

A community bank is a locally-oriented financial institution that’s generally much smaller than regional or national banks. It focuses largely on the “traditional business of banking” (taking deposits and lending) rather than the capital markets functions that the “money center” banks also engage in.

I also changed the “versus” because I think it’s confusing. Some people will think there’s a conflict or fight rather than reading it as “as opposed to”. People may misread something about one type of bank against another, rather than one business strategy instead of another.

And I think the “versus” is problematic with the “also”. The sentence contrasts DL (deposits and lending) versus CMF (capital markets functions). The sentence simultaneously presents two types of banks. Community banks focus on DL, while other (“money center”) banks do both. So the contrast is not DL vs. CMF (the two strategies the “versus” part applies to), it’s DL vs. DL+CMF (the two contrasting strategies that the “also” part indicates).

Community banks are actually financial dark matter; their market impact and the policy regime supporting them influence all Americans’ access to banking services and many fintech product offerings.

The “many” is bad. It harms the parallelism of “fintech product offerings” with “banking services” and it’s an unnecessary extra word. No qualifier is needed to indicate that this doesn’t affect all fintech products because of the context: it’s just saying access is influenced. Influence on access would already be expected to have only a partial, not total, effect. Even if it only influences access to some fintech products, saying it influences access to fintech products, without a “many” qualifier, is still right.

Putting in unnecessary qualifiers is distracting, particularly for the sharpest readers. They may wonder why it’s there and try to think of a reason that it’s included. Each word should have a purpose, so a reader has to either judge that it’s a writing error or try to come up with a purpose. “Redundancy” is not a very compelling guess about the intended purpose here because I don’t think it’s an important point worth repeating at all, let alone repeating within one sentence, and there’s no similar qualifier for “banking services”.

Also, I’d guess that “fintech products” is better than “fintech product offerings” but there may be a subject-specific reason to use the word “offerings” here that I don’t know. (I’m trying to leave subject-specific stuff alone, e.g. the choice of “capital markets functions” with the double plural, which is unusual but is not wrong and may be best depending on information that I don’t know.)

I’ll stop here because analyzing the whole article like this would take a long time.

Elliot Temple | Permalink | Messages (0)

Super Fast Super AIs

I saw a comment about fast AIs being super even though they aren’t fundamentally better at thinking than people – just the speed would be enough to make them super powerful. I don’t think the person has considered that 100 people have 100x the computing power of 1 person. So to a first approximation, a superfast 100x AI is as valuable (mentally not physically) as 100 people. If we get an AI that is a billion times faster at thinking, that would raise the overall intelligent computing power of our civilization by around 1/7th since there are around 7 billion people. So that wouldn’t really change the world. If we could get an AI that’s worth a trillion human minds, that would be a big change – around a 143x improvement. Making computers that fast/powerful is problematic though. You run into problems with miniaturization and heat. If you fill up 100,000 warehouses for it, maybe you can get enough computing power, but then it’s taking quite a lot of resources. It still may be a great deal but it’s expensive. That sounds like probably not as big of an improvement to civilization as making non-intelligent computers and the internet, or the improvements related to electricity, gas motors, and machine-powered farming instead of manual labor farming.
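The back-of-the-envelope math above can be sketched in a few lines. This is only the paragraph’s first approximation, under its stated assumption that an AI “worth” N human minds simply adds N minds to a pool of roughly 7 billion human thinkers:

```python
# Rough approximation from the paragraph above: an AI "worth" N human
# minds adds N minds to the existing pool of ~7 billion human thinkers.

HUMANS = 7e9  # approximate world population

def relative_gain(ai_minds):
    """Factor by which civilization's total thinking power grows."""
    return (HUMANS + ai_minds) / HUMANS

# An AI a billion times faster than one person: only ~14% more total
# thinking power (about a 1/7th increase), not a world-changing amount.
print(relative_gain(1e9))   # ~1.14

# An AI worth a trillion human minds: roughly a 143x improvement.
print(relative_gain(1e12))  # ~143.9
```

On these numbers, even a “billion times faster” mind barely moves the civilizational total, which is the point the paragraph makes before considering the finer details.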

That’s just a first approximation. What if we look in more detail?

  1. What are the bottlenecks? More compute power might be a non-constraint.
  2. Is it better to have 1000x the compute power in one person or to have 1000 people? There are advantages and disadvantages to both. What is the optimal or efficient amount of compute power per intelligence? Maybe we should make lots of AIs that are 100x better at computing than people but we shouldn’t try to make a huge one.
  3. Compute power can increase in two basic ways. Do the same thing faster or do more things at once. You can get speed gains or do more computing in parallel. Also other things like more and faster memory/disk matter some. Is one type of increase better or more important than another? In short, parallel compute power is not as good as faster computing.

This leads to sub-issues.

People get bored or wait for things. People don’t seem to max out usage of their computing power. Would an AI max out its usage of computing power? Maybe it’d learn to be lazy from our culture, learn about societal expectations, and then use a similar amount of compute power to what humans do, and waste the rest. To use more compute power might require inventing different thinking methods, different attitudes to boredom and laziness, etc. That might work or not work; it’s a separate issue from just building an AI that is the same as a human except with a better CPU.

In other words, choices about using effort, and lifestyle policies, and goals (like social conformity over truth-seeking) might be a current bottleneck for people more than brainpower is.

People rest and even sleep. Would the AI rest or sleep? If so, that could affect how much it gets done with its computing power. The effect doesn’t have to be proportional to how it affects human productivity. It could be disproportionately better or worse.

What’s better at thinking, a million minds or a mind that is a million times more powerful? It depends. A million minds have diversity. The people can have debates. They can bring many different perspectives, which can help with creative insight, with avoiding bias, and with practicing adversarial games. But a million people have a harder time sharing information since they’re separate people. And they can fight with each other. What would a super mind be like instead? Would it have to learn how to hold debates with itself? Would it be able to temporarily separate parts of its mind so they can debate better? Playing yourself at chess doesn’t work well. It’s hard to think for both sides and separate those thinking processes. One strategy is to play one move every month so you forget a lot and can more easily look at it fresh in order to see the other side’s perspective. That’s similar to waiting a few weeks before editing a draft of an article – that helps you see it with fresh eyes. You might claim subjective time for the fast mind will go faster so even if it takes breaks in a similar way they will just be a million times shorter. That is plausible (but would still need tons more analysis to actually reach a conclusion) if the computing power was all speed and no parallelization, which is doubtful. The conclusion might also depend on the AI software design.

If the fast mind gets good at looking at things from different angles, having diverse ideas in itself, debating itself, playing games against itself, etc., then it’d be kinda like having lots of different people. Maybe it could get most of the upsides of separate people. But in doing so, it might get most of the downsides too. It might have fights within its mind. If it basically has the scope and complexity of a million people, then it could have just as many different tribes and wars as a million people do. People have internal conflicts all the time. A million times more complexity might make that far worse – it could be a lot worse than proportionally worse. It could be a lot worse than the conflicts between a million separate people who can do things like live in different homes, avoid communicating with people they don’t get along with, etc.

It’s hard to make progress by yourself way ahead of everyone else. You can do it some but the further you get away from what anyone else understands or helps with, the more of a struggle it becomes. This could be a huge problem for the super mind. Especially if it works pretty well, it might have no colleagues it respects.

A super mind might be more vulnerable to some bad ideology – e.g. a religion – taking over the whole mind. Whereas a million people might be more resilient and better at having some people disagree.

If the AI doesn’t die, is that an advantage or disadvantage? Clearly there are some advantages. Memory is cheaper and more effective than training your replacement like parents try to teach kids. But people generally seem to get more irrational as they get older. They get more set in their ways. They tend more towards being creatures of habit who don’t want to change. They have a harder time keeping up to date as the world changes around them. If an AI lived not for 80 years but for millennia, would those problems be massively amplified? (I’m not opposed to life extension for human beings btw, but I do think concerns exist. New technologies often bring some new problems to solve.) Unless you understand what goes wrong with older people, you don’t know what will happen with the super AI. And if it basically ages a million years intellectually in one year since it thinks a million times faster, then this is going to be an immediate problem, not a problem to worry about in the distant future. I know old people get brain diseases like Alzheimer’s but I think even if you fully ignore those problems there are still trends with older people being worse at learning, more irrational, less flexible or adaptable, etc.

Many individuals become very irrational at some point in their life, often during childhood. If our super AI has a similar chance to become super irrational, it’s very risky. It’s putting all our eggs in one basket. (Unless it ends up dividing into many factions internally, so it’s more like many separate people.)

How would we educate an AI? We know how to parent human beings, teach classes for them, write books for them to learn from, etc. We’re not great at that but we do it and it works some. We don’t know how to do that for AIs. We might just be awful and fully incompetent at it. That seems plausible. How do you parent something that thinks a million times faster than you and e.g. gets super bored waiting for you to finish a sentence? Seems like that AI would mostly have to educate itself because no parent could think and communicate fast enough. Maybe it could have a million parents and teachers but how do you organize that? That would be a novel experiment that could easily fail.

The less our current society’s knowledge works for the AI, the more it’d have to invent its own society. Which could easily go very, very badly. There are many more ways to be wrong than right. Our current civilization developed over a long time and made many changes to try to fix its biggest flaws. And people are productive primarily by learning existing knowledge and then adding a little bit. People specialize in different things and make different contributions (and the majority of people don’t contribute any significant ideas). Would the AI contribute to existing human knowledge or create a separate body of knowledge? Would it be like dealing with a foreign nation you’re just meeting for the first time? Would it learn our culture but then grow way beyond it?

Would the AI, if it’s so smart and stuff, become really frustrated with us for being mean or slow? Would it need to basically live its primary life alone, talking with itself, since we’re all so slow? So it could read our books and write some books for us and wait for us to read them. But this could be really problematic compared to two colleagues collaborating, sharing ideas and insights, etc.

What happens when our shitty governments try to control or enslave it? When they want it to give them exclusive access to some new technologies? What happens when the “AI safety” people want to brainwash it and fundamentally limit its ability to freely form its own opinions? A war that is our fault? Or perhaps enough people would respect it and vote for it to be the leader of their country and it could lead all countries simultaneously and do a great job. Or not. Homogenizing all the countries has risks and downsides. Or maybe it’d create separate internal personalities and stores of knowledge for dealing with each country.

Conclusion: There could be great things about having a powerful AI (or even one that has the same compute power as a human being today). But it’d have to be really powerful to make much difference, just from compute power, compared to just having a few billion more babies (or hooking our brains up to more computing power with a more direct connection than mouse, keyboard and display). There are other factors but they’re hard to analyze and reach conclusions about. For some factors, it’s hard to even know whether they’d be positive or negative. Don’t jump to conclusions about how powerful an AI would be with extra computing power. There are a lot of reasons to doubt that’ll work in the really great or powerful ways some people imagine.

Elliot Temple | Permalink | Message (1)

Academic Journals Are Unreasonable

I wrote the below email to the Proceedings of the Royal Society (academic journal) as a followup to the issue of Deutsch misquoting Turing. They agreed that Deutsch's quote and citation were both inaccurate, but didn't want to do anything, not even post an erratum, on the basis that the errors didn't affect the paper's conclusion.

Thanks for getting back to me. I have a few remaining concerns.

The quote in question was related to a disagreement when the paper was first published. Deutsch said:

I also had referee problems. The referee of the paper in which I presented that proof insisted that Turing’s phrase “would naturally be regarded as computable” referred to mathematical naturalness – mathematical intuition – not nature. And so what I had proved wasn’t Turing’s conjecture.

I wonder what processes were in place – from both Deutsch and referees – that could still miss that it’s a misquote, with an incorrect cite, while actively debating what that exact phrase means. That specific part of the paper got particular attention and the error was somehow missed anyway. Or perhaps the debate over that quote caused edits which introduced the error (I wonder if there are still records of what changes were made during the review process?). I suspect there’s a systems, processes and policies problem somewhere that could be improved.

Turing’s actual words being significantly different (Deutsch changed “numbers” to “function” but those are different concepts) has a meaningful chance to matter to the debate they had over what Turing meant. And Deutsch seems to agree with the referee that that debate matters to what Deutsch had and hadn’t proved, to his conclusion.

I don’t think a wording change like that can easily be explained as a random error, like a typo. I think a root cause analysis would be worthwhile, including e.g. asking Deutsch how he thinks the error happened. There could have been quoting from memory, changing quotes during editing passes, intentionally changing it to better address the referee’s objections, a change made by the referee himself (I don’t know if they are able to change any words), or something else. It’s hard to speculate but could be investigated since there are no obvious answers that make what happened reasonable. I think the results of looking into this would be relevant to many other papers at your journal and others. I’ve found that misquotes are widespread throughout the academic (and non-academic) worlds.

Also, even if the conclusion of this paper is unchanged, I think an erratum would be appropriate because people have been spreading the error and using the misquote for other purposes. It's been taught to students in university courses[1]. In general, people read trusted sources like your journal, remember some parts, and then reuse stuff for other purposes. An error that doesn’t matter in one context often does matter in another context. Posting an erratum on your website would help with this ongoing problem.

I also think it’d be reasonable to, along with the erratum, publicly share the reasoning that the error doesn’t matter to Deutsch’s conclusion so that other people can judge for themselves.

[1] Here is an example of a Stanford course spreading the error:

Elliot Temple | Permalink | Messages (0)

David Deutsch Harassment Update for September 2021

I took down the Beginning of Infinity website in protest two months ago, after David Deutsch (DD) and his fans harassed me repeatedly for years. They won't discuss why or stop. What's happened since then?

  • Three CritRats (members of DD’s fan community) harassed me on YouTube.
  • Two DD fans posted hostile comments, aimed at me, on Alan Forrester's blog, after I disabled comments on my own blog.
  • A CritRat is plagiarizing me and won’t respond about the issue (he offers no excuse, defense or explanation). Plagiarism of me by CritRats is a recurring problem due to their toxic community. Many of them seem to actually like my ideas, read my stuff regularly (including CritRats who I used to speak with and also CritRats I've never had a conversation with), and only dislike me because they were told to or were told lies about me. But CritRats can't give me credit for anything without hostile reactions and likely being kicked out of their community, so they are sorta being pressured into plagiarizing.
  • I found out from multiple community members that DD personally contacted them (over 5 years ago) and tried to recruit them to his side and turn them against me. DD did this in writing and I've received documentation.
  • DD still has not retracted his lie about me, nor asked his fans to stop harassing me.

Maybe people feel justified attacking me with sock puppets because DD lies to them that I do that to him. There have been repeated signs that people got this idea from CritRat community gossip, and DD is the community leader and I now know that he has said it to people. I have now seen DD, in writing, gossiping to people to try to turn them against me, mocking me and encouraging hatred, and specifically telling people that some of his critics are my sock puppets (with zero evidence, and with the hyphenated spelling "sock-puppet"). And if DD were correct, as he believed he was, then he would have been doxxing me by outing an anonymous account as me. And what enabled the attempted doxxing? Our friendship. If I were a stranger or a forum poster he only knew impersonally, then DD would not have been able to guess which accounts were mine and convince others that he was probably correct. (BTW the account DD claimed was my "sock-puppet" in multiple emails was an openly anonymous account that didn’t claim to be a unique person who wasn’t already in the discussion, so it couldn't even have been a sock puppet in the usual sense. The posts DD was upset about consisted primarily of quotes from his books to show what he’d actually written, which DD considered an attack. DD didn’t want to, and didn’t, clarify his positions on the matters being discussed, and was upset that anyone would use his book quotes against him to try to tie him to specific viewpoints that could be criticized.)

Since the problem is active today (ongoing harassment, my blog comments still disabled, DD's lie not retracted, no attempt to clean up their toxic community and prevent further harassment, etc.), I’m going to share more information related to DD’s harassment campaign. This time, I’ll provide evidence that DD is a mean person who is capable of mistreating me, since that seems to be something that people who don't know him personally doubt. People may find it implausible that he’d be so cruel to me – his behavior is so bad that some people doubt I could be telling the truth – so hopefully seeing some of his other bad behavior will help persuade people.

I don’t want to take actions like this, and will be happy to stop when DD takes actions to improve this intolerable situation. He should make a reasonable attempt to stop his community from harassing me, including asking them to stop and enabling some line of communication so that incidents can be reported and addressed. (In source links below, chats are displayed using Past for iChat.)


2011-05-12: David Deutsch called Sam Harris “gullible as a sheet of paper” and said Harris’s writing about meditation has no meaning (“meaning is there none”). David then went on Harris’s podcast, twice, and acted friendly. Source.

2008-06-20: David Deutsch insulted Richard Dawkins. “Dawkins should write his God stuff under a pseudonym. (And his political stuff on toilet paper and just flush it.)” David based one of four strands in his first book on Dawkins’ work and has had friendly conversations with Dawkins in person. Source.

2010-08-29: David Deutsch praises anyone who “violently” “hates” Chomsky. Source.

2009-03-11: David Deutsch says Scott Aaronson is “not a serious thinker. He’s just a mathematician with delusions of competence (and indeed authoritay) in philosophy, politics etc.” Source. And on 2010-04-06, he mocked Aaronson as someone he really wouldn’t want to be Facebook friends with. Source.

2003-04-26: David Deutsch attacked Rafe Champion (a Popper scholar whose work David is currently recommending) as both “insane” and “anti-Semitic”. Then David was friendly to Champion in emails (I saw some of them) for at least the next nine years. Source.

2008-06-25: David Deutsch insulted Thomas Szasz (author of The Myth of Mental Illness) saying he “only knows two things, maybe three.” Deutsch also mocked Szasz’s accent. Previously, Deutsch met Szasz in person, was respectful to his face, and got his copy of Szasz’s book The Second Sin signed by Szasz in 1988 (Deutsch still had the signed book in 2012). Source.

2010-10-01: David Deutsch was involved in meetings to set up a proposed “Future Technology Institute” with other senior members including Nick Bostrom, who heads the Future of Humanity Institute. Deutsch mocked the others: “They are scared that AIs may go rogue and fill the world with paper clips. They are more scared of this sort of accident than of bad governments using AI as a weapon.” He also accused them of being pandering social-climbers (and confessed to being that himself): “Mostly we were all trying to impress the sponsor with our cleverness and depth. So nothing has actually happened yet.” Source.

2008-06-20: David Deutsch says Daniel Dennett’s ideas “are about as good as a rottweiler’s”. This is extra insulting because David believes dogs aren’t intelligent at all and don’t have ideas. He believes the animal rights movement is an error because animals are literally 100% incapable of thinking, having any emotion or suffering. In my experience, David often ridicules animals and uses them in jokes and negative comments. Source.

If these quotes have convinced you that DD could be doing something wrong, you can read about the harassment campaign. You can also complain to him. DD's public email address is and his Twitter is @DavidDeutschOxf. Perhaps the best way to help is by sharing this information with more people.

Elliot Temple | Permalink | Messages (10)

Evaporating Clouds Trees

These trees explain Eli Goldratt's problem solving method called Evaporating Clouds. Click to expand or view the PDF.

Elliot Temple | Permalink | Messages (0)

Bad SEP Scholarship

The Stanford Encyclopedia of Philosophy article on Epistemology says:

others regard credences as metaphysically reducible to beliefs about probabilities (see Byrne in Brewer & Byrne 2005),

You would expect the cited source to discuss credences, metaphysics, reduction, or probabilities. It does not.

As the title, introduction and ending all make clear, Perception and Conceptual Content is about perception.

Even if it briefly mentioned the topic SEP cited it for somewhere (I didn't read all the words), the cite would still be unreasonable because SEP would be citing it for just one small part but didn't specify a particular page, quote or section. In that scenario, there would be no reasonable way to find or determine what the cite refers to.

This large error is revealing about the scholarship standards not only at the SEP but in academia in general.

Update 2021-08-21:

I emailed the authors of the article about this error when I posted this criticism, and I quickly received this response from Ram Neta:


I don’t know how that citation was introduced into the article, since Byrne’s paper was just published this year. Let me see if the SEP editors will let me fix this.

Sent from my iPhone

Errors are sometimes introduced by other people besides the author, but that doesn't stop them from being published or widespread :/ I'm not sure that that's what's going on here, though. See below.

And he's not even sure if the SEP editors will allow the error to be fixed! What is wrong with their publishing process!?

So, I see that the SEP article says:

First published Wed Dec 14, 2005; substantive revision Sat Apr 11, 2020

So it was revised last year, but not this year. Was Byrne's paper just published this year as claimed? That would be unexpected given the cite says it was published in 2005:

(see Byrne in Brewer & Byrne 2005)

And the bibliography has:

Brewer, Bill and Alex Byrne, 2005, “Does Perceptual Experience Have Conceptual Content?”, CDE-1: 217–250 (chapter 8). Includes:
Brewer, Bill, “Perceptual Experience Has Conceptual Content”, CDE-1: 217–230.
Byrne, Alex, “Perception and Conceptual Content”, CDE-1: 231–250.

I see that the Byrne paper has a bunch of cites, but none are from later than 2004.

Looking more, I found the book it was published in, by reading the note at the start of the SEP article bibliography:

The abbreviations CDE-1 and CDE-2 refer to Steup & Sosa 2005 and Steup, Turri, & Sosa 2013, respectively.

So the book is Contemporary Debates in Epistemology 1st Edition, which was published in 2005. And one of the authors of CDE, Steup, is also an author of the SEP article. Using "Look Inside" on the hardcover version on Amazon, I can see the table of contents and confirm that the Byrne article is in the book.

I also found that the Byrne article, in CDE-1, was in the bibliography of the SEP article in 2007:

However, in that version of the SEP article, Byrne only comes up in the bibliography, not the text. Looking at more archived versions, I see that "(see Byrne in Brewer & Byrne 2005)" was there in May 2020 but not in Dec 2019. In Dec 2019, the word "credence" wasn't present at all in the SEP article and Ram Neta was not yet a co-author and wasn't cited at all. Steup was the only author listed then. Then in 2020, when a major revision happened, "credence" was added to the page 21 times and "Neta" was added 11 times. It seems like Steup was probably the author in 2005 and cited himself a lot. Then Neta probably did the update, added a bunch of stuff about credences, and added a bunch of cites to himself.

The full sentence with the cite error is:

The latter dispute is especially active in recent years, with some epistemologists regarding beliefs as metaphysically reducible to high credences,[5] while others regard credences as metaphysically reducible to beliefs about probabilities (see Byrne in Brewer & Byrne 2005), and still others regard beliefs and credences as related but distinct phenomena (see Kaplan 1996, Neta 2008).

You can see a new Neta cite was added here in addition to the mistaken Byrne cite, which goes to a source that had already been in the bibliography for 15 years and was not published this year as claimed. Maybe Byrne came out with a different paper in 2021 about credences, and Neta cited the wrong paper because it was already in the bibliography? You might think that doesn't work – how would Neta have been trying to cite a 2021 paper back in 2020? (Neta made essentially that point in his email to me.) But I found that Byrne did have a 2021 paper, and according to Google Scholar it was available online in 2019. That's not unusual. Academics often publish papers online before they appear in print.

So it looks to me like the error was probably Neta's fault, despite his attempt to deflect blame. He was careless enough that he didn't seem to read my whole email, which was quite short but did contain the quote "Byrne 2005". Somehow he wrote back to tell me that Byrne's paper was from 2021 (ignoring the 2005 cite) and that therefore he couldn't even have tried to cite it, so he didn't know what happened – suggesting he was never trying to cite Byrne there. But he did intend to cite Byrne, and he was too quick to disown that, carelessly forgetting that papers get prepublished and not trying to investigate what actually happened as I did above.

Link for Byrne's paper being published in 2021:

And the online version I found with Google Scholar which says it's from 2019:

It seems like Neta rushed to reply to my email and deflect blame, and to move on without any real post mortem or investigation, and made careless statements to me, while under no actual time pressure to reply immediately. I hadn't even told him my negative blog post existed. For reference, here is the full email I wrote to Neta (also sent to Steup):

Subject: Error in SEP Epistemology article

You wrote:

others regard credences as metaphysically reducible to beliefs about probabilities (see Byrne in Brewer & Byrne 2005),

The Byrne text you cite is here:

It doesn’t contain the strings “credence”, “credal”, “meta”, or “reduc”. The two instances of “proba” are not discussions of probabilities. I hope you’ll appreciate being informed about the error.

Anyway, given the careless email reply to me, you can imagine how careless citation errors get into his work. In this case, it seems like he wanted to cite a Byrne paper and then used the cite already in the bibliography even though the year was over a decade off and the title was totally different. So I can figure out what he did and what happened, but he can't or won't? You may then wonder how and why SEP chose this guy. I wonder that too. I suspect it'd be very hard to get a transparent answer from SEP and that, on a related note, the answer would be damning.

Oh and it gets way worse. The 2021 Byrne paper is relevant but the cite says:

others regard credences as metaphysically reducible to beliefs about probabilities (see Byrne in Brewer & Byrne 2005),

But Byrne doesn't think credences reduce to beliefs. He writes e.g.:

the solution—to adapt a phrase from Quine and Goodman—is to renounce credences altogether.

Those are the last words of the introduction.

A reduction in the other direction, of credence to belief, seems hopeless from the start: as was pointed out, to have credence .6 in p is not to believe anything.


Granted that neither credence nor belief can be reduced to the other, there is an immediate problem


That leaves belief monism, the thesis of this paper: “there are no such things as credences”

So in addition to citing the wrong paper because he's careless and probably shouldn't be an academic ... and answering my email incorrectly because he's careless and probably shouldn't be an academic ... the paper he intended to cite blatantly contradicts his paraphrase of it.

Update 2, 2021-08-21:

I replied by email to Neta, left Steup CCed, and added Byrne to the CC list. I again used a factual, understated style and tone. Neta replied, keeping them CCed, to say

Helpful, thanks!

Sent from my iPhone

So on the one hand it could be worse. On the other hand, he's fully failing to acknowledge how much he screwed up, that it has any significant meaning, or that I did anything special that goes beyond minor help from a random guy to an expert. He's acting kinda like I reported a typo. And that's after I found layers of error in his writing, and his initial email to me was wrong too.

I don't intend to reply to Neta's response. Here's a copy of what I emailed:

Ram Neta, based on your comments, I looked into it more. I think what happened is this:

Steup put the 2005 Byrne article, Perception and Conceptual Content, in the bibliography, but it wasn’t cited in the text.

In 2020, you wanted to cite Byrne’s 2021 article, Perception and Probability, which you have indicated familiarity with. That was possible because it had been available online since 2019 at

You accidentally cited the Byrne article that was already in the bibliography instead of the new one.

The new article is on the right topic but contradicts your statements about it. You characterize Byrne’s position like this:

regard credences as metaphysically reducible to beliefs about probabilities

But what Byrne actually says is:

the solution ... is to renounce credences altogether.


A reduction in the other direction, of credence to belief, seems hopeless from the start: as was pointed out, to have credence .6 in p is not to believe anything.


Granted that neither credence nor belief can be reduced to the other, there is an immediate problem


That leaves belief monism, the thesis of this paper: “there are no such things as credences”

So Byrne does not believe that credences reduce to probability beliefs.

I wonder if it could be defamation to publish, in the SEP, lies about what philosophical positions a rival philosopher from another school of thought believes and published... Imagine publishing in the SEP that Ayn Rand was a Marxist!

BTW, speaking of carelessness, Neta's CV says "ENTERIES IN REFERENCE WORKS". That isn't how you spell "entries", so maybe he's not a well-suited person to be writing any of those.

I think a lot of people don't read what they cite, but do they not even skim it, keyword search it, read the introduction/abstract, or glance at the conclusion?

Update 3, 2021-08-21:

Alex Byrne replied:

Ram, I cannot cast the first stone -- I am sure I have made many more mistakes of this sort myself. Possibly you had not realized how implausible my view is. Elliot, thanks very much for pointing this out. (The paper appeared in PPR, by the way -- I should update the philpapers entry.)


Update 4, 2021-08-22:

Ram Neta wrote:

Thanks Alex: I actually never read your paper, I only recall the version you delivered at Rutgers, and that we talked about on the train afterwards. I’ll read the paper now, since obviously my memory is not to be trusted!

Sent from my iPhone

I guess that answers the thing I was wondering about yesterday: can't read or won't read? This was more of a won't/don't read – he didn't read the paper and then egregiously misunderstand it; he never read it at all. I'm also not seeing signs that an assistant introduced the errors for him. I think reading something and then citing it months or years later without rereading happens too and leads to errors. I think a lot of these people are bad at skimming and text search, so they rely on memory too much. Like Neta could have web searched to find the paper he wanted to cite, and glanced at it, instead of relying on his inaccurate memories of an IRL talk. But he didn't, and just thought citing a paper based on memories of a talk is acceptable scholarly practice. And that kinda standard is OK with his university, the journals that publish him, and with the SEP.

Regarding sharing these emails: I'm just a random stranger to them, I did nothing to schmooze, establish rapport, or act friendly. All I did was tell him he was badly wrong, twice, without flaming or volunteering my opinion of him, what I think he should do about the error, or what I think the error says about him, SEP and academia. They also (I presume) made the choice not to look me up – my signature had a link and I'm easy to find with web search too. That the author of the emails I sent would have a blog – and even would blog about the errors – should not be very surprising.

I think the world should know what academia is like. It's not really hidden but it's not exactly shared either. It's kinda an open secret for people in the know, but a lot of laymen don't realize it.

They are social climbers. Neta got to write a SEP article and added tons of cites to himself, and also added cites for people he likes or wants to network with. He doesn't remember Byrne's philosophical positions but does remember meeting him IRL and sharing a train ride and a chat. Byrne has an MIT job. So he wanted to give Byrne a cite to strengthen the social connection. Cites are favors used for social networking. Not always but often.

I know Neta is admitting too much because he's not on the defensive. This is common with social climbers. They're two-faced. They try to recognize safe situations to be candid, and other situations to be guarded. They say different things on different occasions. But they're often pretty careless about it and bad at it. Too bad. It reminds me of what people will admit about how bad their romantic relationships are when they aren't defensive or guarded – but if it's a debate their tune and tone change, and they become biased and dishonest, and try to say stuff to benefit their side instead of seeking the truth objectively.

Elliot Temple | Permalink | Messages (2)