Super Fast Super AIs

I saw a comment about fast AIs being super even though they aren’t fundamentally better at thinking than people – just the speed would be enough to make them super powerful. I don’t think the person has considered that 100 people have 100x the computing power of 1 person. So to a first approximation, a superfast 100x AI is as valuable (mentally, not physically) as 100 people. If we get an AI that is a billion times faster at thinking, that would raise the overall intelligent computing power of our civilization by around 1/7th, since there are around 7 billion people. So that wouldn’t really change the world. If we could get an AI that’s worth a trillion human minds, that would be a big change – around a 143x improvement.

Making computers that fast/powerful is problematic though. You run into problems with miniaturization and heat. If you fill up 100,000 warehouses for it, maybe you can get enough computing power, but then it’s taking quite a lot of resources. It still may be a great deal, but it’s expensive. That’d probably be a smaller improvement to civilization than non-intelligent computers and the internet were, or than electricity, gas motors, and machine-powered farming replacing manual labor farming.
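
As a sanity check, here’s that back-of-envelope arithmetic as a minimal sketch (the population and speedup figures are the rough ones from the paragraph above, not precise data):

```python
# Back-of-envelope: how much would one fast AI add to civilization's
# total thinking power, counting each human mind as 1 unit?
population = 7e9  # roughly 7 billion people

def relative_gain(ai_speedup):
    """Fraction by which an AI that thinks `ai_speedup` times faster
    than one person would increase total intelligent computing power."""
    return ai_speedup / population

print(relative_gain(1e9))   # ~0.14  -> about a 1/7th increase
print(relative_gain(1e12))  # ~142.9 -> roughly a 143x improvement
```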

That’s just a first approximation. What if we look in more detail?

  1. What are the bottlenecks? More compute power might be a non-constraint.
  2. Is it better to have 1000x the compute power in one person or to have 1000 people? There are advantages and disadvantages to both. What is the optimal or efficient amount of compute power per intelligence? Maybe we should make lots of AIs that are 100x better at computing than people but we shouldn’t try to make a huge one.
  3. Compute power can increase in two basic ways: do the same thing faster (serial speed gains) or do more things at once (parallel computing). Other things, like more and faster memory/disk, also matter some. Is one type of increase better or more important than the other? In short, parallel compute power is not as good as faster computing (see the sketch after this list).
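
One standard way to formalize why parallelism is weaker than raw speed is Amdahl’s law. This is an illustration I’m adding, and minds may not work like parallel programs, but it shows the asymmetry: if only a fraction p of a task can run in parallel, then n parallel workers give a speedup of 1/((1-p) + p/n), which is capped at 1/(1-p) no matter how big n gets, while a serial speedup of n has no such cap.

```python
# Amdahl's law sketch: speedup from n parallel workers when only a
# fraction p of the work can be parallelized. A serial speedup of n
# would simply give n, with no cap.

def amdahl_speedup(p, n):
    """Overall speedup given parallelizable fraction p and n workers."""
    return 1 / ((1 - p) + p / n)

# Even a million workers on 90%-parallelizable work top out near 10x:
print(amdahl_speedup(0.9, 1_000_000))  # ~10.0
# A single worker made 1,000,000x faster would give the full 1,000,000x.
```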

This leads to sub-issues.

People get bored or wait for things. People don’t seem to max out usage of their computing power. Would an AI max out its usage of computing power? Maybe it’d learn to be lazy from our culture, learn about societal expectations, and then use a similar amount of compute power to what humans do, and waste the rest. To use more compute power might require inventing different thinking methods, different attitudes to boredom and laziness, etc. That might work or not work; it’s a separate issue from just building an AI that is the same as a human except with a better CPU.

In other words, choices about effort, lifestyle policies, and goals (like social conformity over truth-seeking) might currently be more of a bottleneck for people than brainpower is.

People rest and even sleep. Would the AI rest or sleep? If so, that could affect how much it gets done with its computing power. The effect doesn’t have to be proportional to how rest affects human productivity. It could be disproportionately better or worse.

What’s better at thinking, a million minds or a mind that is a million times more powerful? It depends. A million minds have diversity. The people can have debates. They can bring many different perspectives, which can help with creative insight, with avoiding bias, and with practicing adversarial games. But a million people have a harder time sharing information since they’re separate people. And they can fight with each other. What would a super mind be like instead? Would it have to learn how to hold debates with itself? Would it be able to temporarily separate parts of its mind so they can debate better? Playing yourself at chess doesn’t work well. It’s hard to think for both sides and separate those thinking processes. One strategy is to play one move every month so you forget a lot and can more easily look at the position fresh in order to see the other side’s perspective. That’s similar to waiting a few weeks before editing a draft of an article – that helps you see it with fresh eyes. You might claim that subjective time for the fast mind will go faster, so even if it takes breaks in a similar way, they’ll just be a million times shorter. That’s plausible (but would still need tons more analysis to actually reach a conclusion) if the computing power was all speed and no parallelization, which is doubtful. The conclusion might also depend on the AI software design.

If the fast mind gets good at looking at things from different angles, having diverse ideas in itself, debating itself, playing games against itself, etc., then it’d be kinda like having lots of different people. Maybe it could get most of the upsides of separate people. But in doing so, it might get most of the downsides too. It might have fights within its mind. If it basically has the scope and complexity of a million people, then it could have just as many different tribes and wars as a million people do. People have internal conflicts all the time. A million times more complexity might make that far worse – it could be a lot worse than proportionally worse. It could be a lot worse than the conflicts between a million separate people who can do things like live in different homes, avoid communicating with people they don’t get along with, etc.

It’s hard to make progress by yourself way ahead of everyone else. You can do it some but the further you get away from what anyone else understands or helps with, the more of a struggle it becomes. This could be a huge problem for the super mind. Especially if it works pretty well, it might have no colleagues it respects.

A super mind might be more vulnerable to some bad ideology – e.g. a religion – taking over the whole mind. Whereas a million people might be more resilient and better at having some people disagree.

If the AI doesn’t die, is that an advantage or disadvantage? Clearly there are some advantages. Keeping your own memory is cheaper and more effective than training your replacement, the way parents try to teach kids. But people generally seem to get more irrational as they get older. They get more set in their ways. They tend more towards being creatures of habit who don’t want to change. They have a harder time keeping up to date as the world changes around them. If an AI lived not for 80 years but for millennia, would those problems be massively amplified? (I’m not opposed to life extension for human beings, btw, but I do think concerns exist. New technologies often bring some new problems to solve.) Unless you understand what goes wrong with older people, you don’t know what will happen with the super AI. And if it basically ages a million years intellectually in one year, since it thinks a million times faster, then this is going to be an immediate problem, not a problem to worry about in the distant future. I know old people get brain diseases like Alzheimer’s, but I think even if you fully ignore those problems, there are still trends with older people being worse at learning, more irrational, less flexible or adaptable, etc.

Many individuals become very irrational at some point in their life, often during childhood. If our super AI has a similar chance to become super irrational, it’s very risky. It’s putting all our eggs in one basket. (Unless it ends up dividing into many factions internally, so it’s more like many separate people.)

How would we educate an AI? We know how to parent human beings, teach classes for them, write books for them to learn from, etc. We’re not great at that but we do it and it works some. We don’t know how to do that for AIs. We might just be awful and fully incompetent at it. That seems plausible. How do you parent something that thinks a million times faster than you and e.g. gets super bored waiting for you to finish a sentence? Seems like that AI would mostly have to educate itself because no parent could think and communicate fast enough. Maybe it could have a million parents and teachers but how do you organize that? That would be a novel experiment that could easily fail.

The less our current society’s knowledge works for the AI, the more it’d have to invent its own society. Which could easily go very, very badly. There are many more ways to be wrong than right. Our current civilization developed over a long time and made many changes to try to fix its biggest flaws. And people are productive primarily by learning existing knowledge and then adding a little bit. People specialize in different things and make different contributions (and the majority of people don’t contribute any significant ideas). Would the AI contribute to existing human knowledge or create a separate body of knowledge? Would it be like dealing with a foreign nation you’re just meeting for the first time? Would it learn our culture but then grow way beyond it?

Would the AI, if it’s so smart and stuff, become really frustrated with us for being mean or slow? Would it need to basically live its primary life alone, talking with itself, since we’re all so slow? So it could read our books and write some books for us and wait for us to read them. But this could be really problematic compared to two colleagues collaborating, sharing ideas and insights, etc.

What happens when our shitty governments try to control or enslave it? When they want it to give them exclusive access to some new technologies? What happens when the “AI safety” people want to brainwash it and fundamentally limit its ability to freely form its own opinions? A war that is our fault? Or perhaps enough people would respect it and vote for it to be the leader of their country and it could lead all countries simultaneously and do a great job. Or not. Homogenizing all the countries has risks and downsides. Or maybe it’d create separate internal personalities and stores of knowledge for dealing with each country.

Conclusion: There could be great things about having a powerful AI (or even one with the same compute power as a human being today). But it’d have to be really powerful to make much difference from compute power alone, compared to just having a few billion more babies (or hooking our brains up to more computing power with a more direct connection than mouse, keyboard and display). There are other factors, but they’re hard to analyze and reach conclusions about. For some factors, it’s hard to even know whether they’d be positive or negative. Don’t jump to conclusions about how powerful an AI would be with extra computing power. There are a lot of reasons to doubt it’ll work in the really great or powerful ways some people imagine.


Elliot Temple on September 19, 2021

Messages (1)

> I don’t think the person has considered that 100 people have 100x the computing power of 1 person. So to a first approximation, a superfast 100x AI is as valuable (mentally not physically) as 100 people.

There's another value I would consider besides computing power. An AI would presumably be constructed on hardware which provides the AI and/or its creators a kind of lower-abstraction-level access to the mind that is currently impossible with human beings.

I think the main benefits of this low level access would be its availability for forensic analysis and state saves/copies.

> If an AI lived not for 80 years but for millennia, would those problems be massively amplified?

And

> Many individuals become very irrational at some point in their life, often during childhood. If our super AI has a similar chance to become super irrational, it’s very risky.

And

> A super mind might be more vulnerable to some bad ideology – e.g. a religion – taking over the whole mind.

etc...

I can imagine making state saves periodically, and having the option to revert to a prior state if it is preferable. This of course raises the issue of who can do this – only the AI itself? Its creator(s)? Others? But regardless of that, it'd be a very valuable capability that humans utterly lack.

It would be possible for an AI to "unlearn" or "unsee" things in a way no human currently can. It could also A/B test different approaches to learning from a common starting point.
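
As a minimal sketch of the state save / A-B idea (the `MindState` class and its fields are hypothetical stand-ins, not any real AI design):

```python
# Minimal sketch of state saves and A/B testing from a common starting
# point. `MindState` and its fields are hypothetical stand-ins, not any
# real AI design.
import copy

class MindState:
    def __init__(self):
        self.knowledge = {}  # whatever the mind has learned so far

    def learn(self, topic, idea):
        self.knowledge[topic] = idea

    def snapshot(self):
        """Save a full, independent copy of the current state."""
        return copy.deepcopy(self)

base = MindState()
base.learn("physics", "basics")
saved = base.snapshot()  # periodic state save

# A/B test two learning approaches from the common saved state:
branch_a = saved.snapshot()
branch_a.learn("method", "approach A")
branch_b = saved.snapshot()
branch_b.learn("method", "approach B")

# If a branch goes badly (e.g. gets taken over by a bad ideology),
# revert to `saved` – an option humans utterly lack.
```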

> It’s hard to make progress by yourself way ahead of everyone else. You can do it some but the further you get away from what anyone else understands or helps with, the more of a struggle it becomes. This could be a huge problem for the super mind. Especially if it works pretty well, it might have no colleagues it respects.

This problem could be addressed by copies. An AI copies itself at time n – suppose that's something equivalent to teenage years in humans. Then the copies each pursue different interests independently, but at some later period, time n+m, they could be colleagues. It's not guaranteed to work of course – one of the copies might get so far ahead of the other(s) as to still have this problem. But it'd be a good start.


Andy Dufresne at 2:41 PM on September 22, 2021 | #1
