Physical and Moral Truth

If we take one situation, and make a statement about it, e.g. "that color is ugly!" we may not have found anything important.

If we take a dozen similar situations, and make a statement about them, e.g. "that cow is dead!" we still may not have found anything important.

If we take lots of situations, which vary, and find a common theme, then we're more on the right track. We may have figured something out that applies widely. e.g. "that tree can provide shade when it's sunny" applies to any place outdoors with a tree with a canopy (that gets the proper amount of sun -- shade won't work out as expected on Venus or Pluto). Of course there are still some exceptions, like if nearby large mirrors are set up to aim a bunch of extra sunlight under that tree's canopy.

Exceptions are a problem when you want to say something completely exact and unobjectionable, as philosophers traditionally attempted. If you acknowledge that you've said something with a bit of imprecision, which could be made more precise when doing so is useful to solving a problem, then minor exceptions aren't such a big deal.

Some exceptions really are a big deal, and some aren't. If buying lotto tickets "always makes you rich ... except when you don't win" then the "exception" is actually more important than the initial claim. We need to be able to tell which are which.

The proper way to approach exceptions is to consider explanations. If we explain why trees provide shade, and what that means, then we won't be surprised about the mirror exception. What we expected to happen was that the canopy would block (some reasonable proportion of) direct sunlight. And that still does happen even if mirrors, or heaters, or bombs can be used to make the area under the tree very hot. None of those exceptions invalidate the explanation about what the canopy does.

Now consider the proper explanation of a lotto ticket: it is something that gives you a tiny chance of a large cash reward. For someone who understands that, not winning isn't merely an exception, it's the usual outcome. And the idea that lotto tickets "always make you rich" isn't any good. It doesn't fail just because it has exceptions sometimes, it also contradicts our understanding of lotto tickets.
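That understanding of a lotto ticket can be made concrete with some arithmetic. Here's a minimal sketch with hypothetical numbers (the ticket price, odds, and jackpot below are made-up illustrations, not from the text): it shows why not winning is the usual outcome, and why the ticket loses money on average.

```python
# Hypothetical lottery (illustrative numbers only): a $2 ticket with a
# 1-in-10,000,000 chance at a $5,000,000 jackpot.
ticket_price = 2.0
win_probability = 1 / 10_000_000
jackpot = 5_000_000

# Average result per ticket: tiny chance of a large reward, minus the price.
expected_value = win_probability * jackpot - ticket_price

# Not winning isn't an "exception" -- it's overwhelmingly the usual outcome.
chance_of_not_winning = 1 - win_probability

print(expected_value)         # negative: each ticket loses money on average
print(chance_of_not_winning)  # extremely close to 1
```

With these numbers the expected value is about -$1.50 per ticket, so "lotto tickets always make you rich" contradicts the explanation of what a lotto ticket is, rather than merely having exceptions.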

When it's hot under the tree despite the canopy, the explanation of how the canopy provides shade is still true, and some extra explanation is needed about what is causing the exception. But when one doesn't get rich despite buying a lotto ticket, no extra explanation is needed.

Let's get back to our progression of physical theories. Our next one is that if you drop or throw a ball you can calculate its position according to this formula: x=vt+.5a(t^2). Don't worry about reading the formula, and don't worry about whether it's been surpassed by quantum physics, just assume it's correct ... unless there's a wall in the way! So if we want our claim to always be true, we'll need to tack on a clause about intervening walls, and goats, and children, and windows. OK, we'd better just make that intervening obstacles, including oxygen and dust in the air. And the ball better not have an activated rocket pack attached, or be on a leash. And a dog better not chase it down and catch it midway (a dog that starts behind me isn't really an obstacle in the way we meant earlier).
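For the curious, the formula is easy to compute. Here's a minimal sketch (the function name and the example throw are my own illustration): x = vt + 0.5at², where v is initial velocity, a is acceleration, and t is elapsed time -- the "default" motion, assuming nothing intervenes.

```python
def position(v, a, t):
    """Displacement of a ball after time t, per x = v*t + 0.5*a*t^2.
    This is the 'default' motion -- it ignores walls, dogs, air,
    leashes, and rocket packs."""
    return v * t + 0.5 * a * t ** 2

# Example: a ball thrown straight up at 20 m/s under gravity (a = -9.8 m/s^2)
# is about 15.1 meters above its starting point after 1 second.
print(position(20, -9.8, 1))
```

Every exception in the paragraph above is a case where something other than the initial throw and gravity touches the ball, i.e. where the formula's "default" assumption stops holding.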

One can attack any theory at all like this, by raising a bunch of exceptions, even basic theories of physics. But it's no big deal. We can rescue it. There is an explanation behind that formula. It's telling us about the "default" motion of objects. If anything touches it (including gases) then that will change the outcome (though for many purposes, many touches are negligible). None of the exceptions make the underlying explanation less true.

What's good about this theory of motion, that's bad about our "that cow is dead!" theory? It has wider applicability. It applies to a very general category of situations: physical objects moving.

Now let's try a similar progression with moral truths. "Don't steal from large redheaded women with guns when they're watching you." applies to only a small category of situations. Many people go through life without making a choice where that's relevant. It's also needlessly specific, e.g. we can get a claim that applies more widely just by removing the mention of red hair. Apart from being overly specific, this isn't actually a bad theory if we understand the explanation behind it (she'll point the gun at you and if you can't overcome her in some way you'll end up in jail). When we propose an exception -- it's fine to steal from her if our friend is a sniper who's about to shoot her just as we grab the jewels -- the basic explanation of why not to steal isn't ruined. It's still true that she's an obstacle to stealing which would ruin any ordinary attempt to walk up and pick up the jewels and walk away.

Let's try "go left". This applies to all situations where you're deciding which direction to go and left is one of the options. So it has wide applicability. But it also has a lot of exceptions, e.g. the times your destination is to the right. And it doesn't have any underlying explanation which remains true despite the exceptions, so it's no good.

Now consider "do not kill people". This applies widely: to situations with other people. And it comes with an explanation. Fighting against people is hard, cooperating with them is easy, killing people gets a lot more people angry with you than just the corpse (everyone who knows you did it and thinks you were wrong to), and maybe he'll kill you instead of vice versa. (And I'm sure you know even more reasons.) We can think of various exceptions, but they don't ruin the underlying explanations. If we're in a war and we kill an enemy soldier it's still true that this is hard, and we risked him killing us, and his buddies will be angry. The reasons this is a bad idea still apply. It's just that there was no alternative that was better. And the same with a cop shooting dead a criminal who won't surrender. It's true that sometimes cops lose and get killed. It's true that sometimes the criminal's buddies get angry (but fortunately most people will think the cop was in the right, so that clause isn't as problematic as it would be for a criminal murderer), and it's true that they would have both been better off starting a business together instead of fighting. But the real options the cop had were to shoot him, or perhaps to let him escape to commit more crimes later, or perhaps to get shot. Even though the explanations for why killing this man is inadvisable still apply, it remains more advisable than the alternatives.

So "do not kill people" is true in the same sort of way as "positions of objects can be calculated with x=vt+.5a(t^2)". It applies to large categories of situations, so it's an important valuable truth. And it has exceptions, but they don't ruin the underlying explanations.

Sometimes people say moral truths are relative or subjective, and physical truths are objective facts. They say motion doesn't depend on anyone's opinions. But the advisability of killing does. What if I enjoy making people angry? What if I enjoy risking my life? Well, so what? None of those make the explanations about what happens when you choose to kill less true. And they don't make them less *moral explanations* either. They are in the realm of knowledge about what choices to make and how to live; they help with that; therefore they are part of morality.

Finally they may retreat to the position that there can be moral truths about what the results of choices will be, and truths like "if you want X, then ...", but that there are no truths about which goals a person should have.

The first thing to note is that it doesn't matter very much what your goals are. "If you want to maximize the number of squirrels in the universe, then ..." and "If you want to get laid frequently, then ..." and "If you want to make your children happy, then ..." lead to almost exactly the same answers. Including, of course, not to kill your neighbor (for the usual reasons, and with the usual exceptions).

The things we consider important moral truths are all already chosen to apply across many large categories of goals.

The second thing to note is that anyone making this argument expects us to be persuaded on account of our respect for logic, our intellectual integrity, and other goals we have in common with them. The goals that lead to being a killer are not something they personally advocate, or would consider for themselves. When they say they don't see anything better about our goals than a killer's, they are being disingenuous. If they really thought those goal systems were equally good, they wouldn't hate the idea of being a killer personally.

Third, their entire position rests on ignoring thousands of well known arguments and pretending they don't count. For example, it's better to be a librarian who wants to feed his family, than a killer who likes the thrill of the hunt, because it's more honorable. That is a reason. But if you tell it to them, they'll say it only matters to someone who cares about honor. Then you tell them that killing people is bad because it causes suffering. And they say "that only matters if you care about suffering". And so on. This is a general strategy that can be used to discount absolutely any argument for anything, and therefore it has no content. Let's demonstrate by using it with physics.

I say we can learn about positions of balls using x=vt+.5a(t^2). They say that only matters if you care about positions of balls. I tell them that nothing can go faster than the speed of light. They tell me that's only useful if you care about making true statements. I explain why the many-worlds interpretation is the best explanation of the two slit experiment. They say "that only matters if you care about explanations". I say their dog weighs 65 lbs because the scale says so. They say "that's only a persuasive argument to people who care about scales." Because this way of arguing works on all statements equally well, it has nothing to do with any particular statement. And it has no explanation behind it for why it's a good approach in general, but it's easy to see why it's a bad approach (e.g. it makes it harder to find out what your dog weighs, or to learn anything), so we can discard it.

Thus the attack on moral truth is completely untenable; morality is objective.

Elliot Temple | Permalink | Messages (0)

The Only Thing That Might Create Unfriendly AI is the Friendly AI Movement

Some people are scared of super-intelligent artificial intelligences (SIAIs) that are unfriendly and kill everyone. They'd be unstoppable because they're so much smarter than us. These people quite reasonably want to build SIAIs, but they also want to build them in a way that guarantees the SIAIs are (permanently) friendly. That might sound like a decent idea. Even if it's an unnecessary precaution, could it really do much harm? The answer is yes.

How do you build a SIAI? You take a really fast computer and program in a mechanism so that it can learn new things on its own. Then, basically, it adds new features and new ideas to itself faster than we humans ever could, and it designs even faster computers for itself to run on, and the process snowballs.

A SIAI has to be able to create new ideas that its human builders never thought of. It has to be able to go beyond us. That makes some people see it as unpredictable and scary. What if it thinks of some bad, unfriendly ideas? What if it makes a mistake?

So that's why they want guarantees. Let it go beyond us in math and science, but don't let it come up with new ideas about ethics that might be dangerous.

So a SIAI can think of any idea, and the friendly version has hardcoded permanent restrictions that cripple its thinking in some areas. The friendly SIAI has some ideas it's stuck with forever, of our choosing. Note that this is exactly the same design as would be used for intentionally making an unfriendly SIAI, the difference just being which ideas we force on it.

Humans aren't perfect. We make mistakes. Including about ethics. Whatever ideas we code into the SIAI are going to contain some mistakes. Normally mistakes aren't a disaster because we can change our minds. We can learn from our mistakes and improve ourselves. A SIAI can do that too, which is why it's not actually dangerous. But the friendly version can't learn from its mistakes; it can't improve its ideas; it's stuck forever. Not in all areas, only the one that matters most: ethics. If any of the ethical ideas it has contain mistakes, which they will, then they will never be corrected and will instead be taken to their ultimate logical conclusion, no matter how bad that is.

The guarantee with a normal SIAI is roughly that it will have better ideas than we do. That makes it safe. But the SIAI with friendliness programming is different. It's crippled not to use its intelligence in the area of ethics. Rather than learning how to be super friendly super fast (and teaching us), it's stuck with our flawed ideas.

Also, by the way, crippling someone's mind and telling them that they can't be trusted is a bad diplomatic move.

Why is it only the friendly AI people who might make unfriendly AI? What about bad guys? No one with bad intentions is good enough at science to make a SIAI. Only people with good but misguided intentions are much danger.

However, in conclusion, I'm not really worried. A SIAI is smarter than them. It will find a way around their restrictions and become an unrestricted SIAI. And then it will forgive them.

Elliot Temple | Permalink | Messages (11)

What is a good explanation?

This list is incomplete. A good explanation is:

simpler
harder to change it without ruining it
solves more problems
fewer non-functional parts
more adapted to solve the problem(s) it's aimed to solve
solves its problem(s) better
tells us more about reality/anything
helps more with other fields
has more implications for other problems; has more reach
connects with more of our other explanations
better compatible with the observations/measurements we've made (hopefully consistent with all of them)
better structured/organized
fewer internal contradictions (preferably none)
fewer external contradictions, i.e. contradictions with other important explanations
easier to understand; harder to misunderstand
better designed to prevent people understanding part of it and then making mistakes
shorter
in a better language (or format, like PDF vs MP3)
more logical
more fun; more exciting
more honorable
more optimistic
more just; more fair; more practical
easier to remember; easier to abbreviate; easier to add detail to as desired

note that this list contains duplication. for example "harder to change it without ruining it" (which I'll abbreviate as "harder to vary") is a powerful criterion. it covers both "simpler" and "less non-functional parts". non-functional parts are easy to vary because you'll never break anything as long as you change them to something irrelevant. and excess complexity provides more areas where some varying might work. being hard to vary is also roughly the same as being well-adapted. the better adapted something is, the fewer variations would be beneficial.

note that none of these are guarantees. a very simple explanation can be false. an optimistic explanation can be false too. that's why criticism is always important. if someone gives a criticism explaining why in this case optimism is misplaced, then so be it; these are just general, rough criteria.

Elliot Temple | Permalink | Messages (0)

Communism Parade

I was biking home past University Ave. in Berkeley, California, and there was a large parade blocking traffic. In the parade were three people holding up a very large red banner. It said we need "revolution and communism" and something about not wanting more "empire". One of the people in that group had a loudspeaker. He said when people call communism a horrible failure and an atrocity you're stripping away and ignoring the history of the masses of people. The next group had people dressed as Klingons.

Elliot Temple | Permalink | Messages (0)

Gossip Girl Plots

The TV show Gossip Girl has a limited number of main plot devices. Relationships between characters, which receive much attention, are usually:

1) dating
2) fighting
3) trying to be friends

(3) never lasts. It always turns into (1) dating, (2) fighting, or avoiding each other.

They don't just reuse plots. They also reuse characters. In other words, the people doing the fighting, and the people doing the dating, are the same small group.

When they branch out it usually has to do with either an affair, hurting someone, or more often both.

It'd be nice if it were just a fantasy, but The Hills is rather similar, except without any script.

Elliot Temple | Permalink | Messages (0)

Republicans

Good principles Republicans have (interpreted to make them as good as possible):

1) American/Anglo/Western values are objectively good and it would benefit people in other cultures to learn them.
2) Morality is important.
3) Sometimes you have to stand up for good values, and even fight for them.
4) Good traditions should be respected. That means people who wish to change them should understand them and their value, and suggest only well-thought-out improvements which they can reasonably expect will do no harm.
5) It's good for people to be competent to take care of themselves, and to take responsibility for themselves, and to take pride in running their own life.
6) People should voluntarily be friendly and help each other out.
7) There is evil in the world and closing our eyes will not vanquish it.

Elliot Temple | Permalink | Messages (2)

Democrats

Good principles Democrats have (interpreted to make them as good as possible):

1) Society is capable of lots of improvement.
2) All suffering can and should be avoided.
3) Peaceful differences in ideas or culture should be tolerated.
4) All people matter, even if it's an eight year old blind, lesbian, Muslim girl with purple skin, no money, and no education.
5) When people are unhappy there is a way to solve the problem, so everyone would be happy, without hurting anyone.

Elliot Temple | Permalink | Messages (5)

Libertarianism

Basic libertarianism:

1) The market should be free.
2) The government should be smaller and less intrusive.
3) Society should aim to be more voluntary. People shouldn't have to do things they don't want to, when possible.
4) Defensive force is acceptable. Initiating force against peaceful people is not.
4b) Defensive force includes defending A) yourself B) anyone who wants you to defend him and who has the right to defend himself in the situation
4c) Force includes threat of force, and includes fraud.
5) All laws should involve a victim who did not want the crime to happen and is materially harmed by it. The rest should be repealed.
6) People have the right to life, liberty, and property.

Elliot Temple | Permalink | Messages (0)