
Open Letter to Machine Intelligence Research Institute

I emailed this to some MIRI people and others related to Less Wrong.


I believe I know some important things you don't, such as that induction is impossible, and that your approach to AGI is incorrect due to epistemological issues which were explained decades ago by Karl Popper. How do you propose to resolve that, if at all?

I think methodology for how to handle disagreements comes prior to the content of the disagreements. I have writing about my proposed methodology, Paths Forward, and about how Less Wrong doesn't work because of the lack of Paths Forward:

http://curi.us/1898-paths-forward-short-summary

http://curi.us/2064-less-wrong-lacks-representatives-and-paths-forward

Can anyone tell me that I'm mistaken about any of this? Do you have a criticism of Paths Forward? Will any of you take responsibility for doing Paths Forward?

Have any of you written a serious answer to Karl Popper (the philosopher who refuted induction – http://fallibleideas.com/books#popper )? That's important to address, not ignore, since if he's correct then lots of your research approaches are mistakes.

In general, if someone knows a mistake you're making, what are the mechanisms for telling you and having someone take responsibility for addressing the matter well and addressing followup points? Or if someone has comments/questions/criticism, what are the mechanisms available for getting those addressed? Preferably this should be done in public with permalinks at a venue which supports nested quoting. And whatever your answer to this, is it written down in public somewhere?

Do you have public writing detailing your ideas which anyone is taking responsibility for the correctness of? People at Less Wrong often say "read the sequences" but none of them take responsibility for addressing issues with the sequences, including answering questions or publishing fixes if there are problems. Nor do they want to address existing writing (e.g. by David Deutsch – http://fallibleideas.com/books#deutsch ) which contains arguments refuting major aspects of the sequences.

Your forum ( https://agentfoundations.org ) says it's topic-limited to AGI math, so it's not appropriate for discussing criticism of the philosophical assumptions behind your approach (which, if correct, imply the AGI math you're doing is a mistake). And it states ( https://agentfoundations.org/how-to-contribute ):

> It’s important for us to keep the forum focused, though; there are other good places to talk about subjects that are more indirectly related to MIRI’s research, and the moderators here may close down discussions on subjects that aren’t a good fit for this forum.

But you do not link those other good places. Can you tell me any Paths-Forward-compatible other places to use, particularly ones where discussion could reasonably result in MIRI changing?

If you disagree with Paths Forward, will you say why? And do you have some alternative approach written in public?

Also, more broadly, whether you will address these issues or not, do you know of anyone that will?

If the answers to these matters are basically "no", then if you're mistaken, won't you stay that way, despite some better ideas being known and people being willing to tell you?

The (Popperian) Fallible Ideas philosophy community ( http://fallibleideas.com ) is set up to facilitate Paths Forward (here is our forum which does this http://fallibleideas.com/discussion-info ), and has knowledge of epistemology which implies you're making big mistakes. We address all known criticisms of our positions (which is achievable without using too many resources, like time and attention, as Paths Forward explains); do you?


Update (Dec 2019):

One person from MIRI responded the day I sent out the letter (Nov 9, 2017). He didn't answer anything I asked, but I decided to add the quotes for better completeness and record keeping. Below are Rob Bensinger's 3 emails, quoted, and my replies. After that he stopped responding.

> Hi, Elliot. My short answer is that I think Popper is wrong; inductive reasoning works just as well as deductive in principle, though in practice we often have to rely on heuristic approximations of ideal inductive and deductive reasoning. The traditional problems with in-principle inductive reasoning (e.g., infinite hypothesis spaces) are well-addressed by Solomonoff's theory of algorithmic probability (http://world.std.com/~rjs/tributes/rathmannerhutter.pdf).

Have you written a serious and reasonably complete answer to Popper, or do you know of one that you will endorse, take responsibility for (if it's mistaken, you were mistaken), and address questions/criticisms/etc regarding?

And where is the Path Forward if you're mistaken?

> I feel comfortable endorsing Solomonoff induction and Garrabrant induction (https://intelligence.org/2016/09/12/new-paper-logical-induction/) as philosophically unproblematic demonstrations that inductive reasoning works well in principle.

So you're disagreeing with Popper, but without addressing his arguments. If you're mistaken, and your mistakes have already been explained decades ago, you'll stay mistaken. No Paths Forward. Right?

> I've read Popper before, and I believe the SEP when it says that he considered infinite hypothesis spaces a major underlying problem for induction (if not the core problem):

> > Popper gave two formulations of the problem of induction; the first is the establishment of the truth of a theory by empirical evidence; the second, slightly weaker, is the justification of a preference for one theory over another as better supported by empirical evidence. Both of these he declared insoluble, on the grounds, roughly put, that scientific theories have infinite scope and no finite evidence can ever adjudicate among them (LSD, 253–254; Grattan-Guiness 2004).

> My claim is that Solomonoff induction addresses this problem handily, and that it more generally provides a good formal framework for understanding how and why inductive reasoning works well in practice.

I think we're getting off-topic. Do you agree or disagree with Paths Forward? Why? Do you have alternative written procedures for having issues like this addressed? Do you have e.g. a forum which would be a good place for me to reply to what you've said?

If I point out that you're mistaken about Popper's arguments and how they may be addressed, what happens next? BTW this would be much easier if there was a direct written answer to Popper by you, or by anyone else, that you were willing to take responsibility for. Why isn't there? That would also save effort for both of us – because responding to your unwritten views will require back-and-forth emails where I ask questions to find out what they are and get clarifications on what you're actually claiming. If your reasoning is that Popper is mistaken, so you don't want to bother properly answering him ... then your fallible criticism of Popper isn't itself being exposed to error correction very well.
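
Reference note for other readers: the Solomonoff prior Rob appeals to is, roughly, a weighting of every program (on a fixed universal machine) that could have produced the observed data, with weight falling off exponentially in program length. Here is a textbook-style sketch of the standard definition, just so it's clear what formal object is being debated – this is not anything specific to MIRI:

```latex
% Solomonoff prior over finite data strings x, for a fixed universal prefix machine U
% (standard definition; illustrative only):
\[
  M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-|p|}
\]
% The programs are prefix-free, so by the Kraft inequality \(\sum_p 2^{-|p|} \le 1\):
% an infinite hypothesis space still yields a well-defined (semi)measure.
% Prediction is then ordinary conditioning:
\[
  M(x_{n+1} \mid x_1 \dots x_n) \;=\; \frac{M(x_1 \dots x_n\, x_{n+1})}{M(x_1 \dots x_n)}
\]
```

Whether that construction actually answers Popper's arguments is exactly what's in dispute above.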


Elliot Temple on November 9, 2017

Messages (162)

I find the idea interesting.

In some sense, it seems like it would be nice for organizations to be as transparently accountable as possible. For example, in many cases, the government is contradictory in its behavior -- laws cannot be easily interpreted as arising from a unified purpose. It would be nice if there were a way to either force the government to produce an explanation for the seeming contradiction or change. This would be really difficult for a large organization, especially one which is not run by a single individual; but, in some sense it does seem desirable.

On the other hand, this kind of accountability seems potentially very bad -- not only on an organizational level, but even on the level of individuals, whom we can, in theory, reasonably expect to provide justifications for their actions.

The ability to force someone to give a justification in response to a criticism, or otherwise change their behavior, is the ability to bully someone. It is very appropriate in certain contexts. For example, it is important to be able to justify oneself to funders. It is important to be able to justify oneself to strategic allies. And so on.

However, even then, it is important not to be beholden to anyone in a way which warps your own standards of evidence, belief, and the good -- or, not too much. Every party you need to justify yourself to adds constraints of understandability to your actions, meaning that eventually you need to be justifiable to the lowest common denominator. MIRI is a special place in that its staff are more free to go after what it thinks is the right direction as compared with an academic department.

Nonetheless, I say the idea is interesting, because it seems like transparently accountable organizations would be a powerful thing if it could be done in the right way. I am reminded of prediction markets. An organization run by open prediction markets (like a futarchy) is in some sense very accountable, because if you think it is doing things for wrong reasons, you can simply make a bet. However, it is not very transparent: no reasons need to be given for beliefs. You just place bets based on how you think things will turn out.

I am not suggesting that the best version of Paths Forward is an open prediction market. Prediction markets still have a potentially big problem, in that someone with a lot of money could come in and manipulate the market. So, even if you run an organization with the help of a prediction market, you may want to do it with a closed prediction market. However, prediction markets do seem to move toward the ideal situation where organizations can be run on the best information available, and all criticisms can be properly integrated. Although it is in some ways bad that a bet doesn't come with reasons, it is good that it doesn't require any arguments -- there's less overhead.
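
To make the betting mechanism concrete, here is a minimal sketch of one standard automated market maker for such a market, Hanson's logarithmic market scoring rule. This is only my own toy illustration of how a single bet moves the market's stated probability; it is not something MIRI, LessWrong, or any particular futarchy proposal prescribes, and all the names and parameters here are just for the example.

```python
import math

def lmsr_cost(quantities, b=100.0):
    """Market maker's cost function for Hanson's logarithmic market scoring rule.
    quantities[i] = shares outstanding on outcome i; b = liquidity parameter."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Probabilities currently implied by the outstanding shares."""
    weights = [math.exp(q / b) for q in quantities]
    total = sum(weights)
    return [w / total for w in weights]

def buy(quantities, outcome, shares, b=100.0):
    """Buy `shares` of `outcome`; returns (new quantities, price the trader pays)."""
    new_q = list(quantities)
    new_q[outcome] += shares
    return new_q, lmsr_cost(new_q, b) - lmsr_cost(quantities, b)

# Two outcomes, e.g. "the org's plan works" vs. "it doesn't", starting at 50/50.
q = [0.0, 0.0]
print(lmsr_prices(q))        # [0.5, 0.5]
q, paid = buy(q, 0, 50.0)    # a critic who disagrees with the market backs outcome 0
print(lmsr_prices(q), paid)  # outcome 0's probability rises; the bet has a definite cost
```

The property that matters for accountability is just that anyone who thinks the posted probability is wrong can pay to move it, and only profits if the outcome later proves them right.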

I may be projecting, but the tone of your letter seems desperate to me. It sounds as if you are trying to force a response from MIRI. It sounds as if you want MIRI (and LessWrong, in your other post) to act like a single person so that you can argue it down. In your philosophy, it is not OK for a collection of individuals to each pursue the directions which they see as most promising, taking partial but not total inspiration from the sequences, and not needing to be held accountable to anyone for precisely which list of things they do and do not believe.

So, I state as my concrete objection that this is an OK situation. There is something like an optimal level of accountability, beyond which creative thinking gets squashed. I agree that building up a canon of knowledge is a good project, and I even agree that having a system in place to correct the canon (to a greater degree than exists for the sequences) would be good. Arbital tried to build something like that, but hasn't succeeded. However, I disagree that a response to all critics should be required. Something like the foom debate, where a specific critic who is seen to be providing high-quality critiques is engaged at a deep level, seems appropriate. Other than that, more blunt instruments such as a FAQ, a technical agenda, a mission statement, etc., which deal with questions as seems appropriate to a given situation, seem fine.

I would change my mind on this if I thought it was feasible to map out and address all arguments on a subject (as Arbital dreamed), **and** if I thought that such a system wasn't likely to turn on participants (as public discourse often does) by turning arguments into demands. You want to make sure you don't punish an organization for trying to be accountable.


PL at 3:00 AM on November 11, 2017 | #9239

There's no force involved here. It's just some comments on how reason works. Paths Forward is no more forceful than these suggestions (which I mostly agree with) that you should do these 12 things or you're not doing reason correctly: http://yudkowsky.net/rational/virtues/

PF explains how it's bad to stay wrong when better ideas are already known and people are willing to tell/help you. It talks about error correction and fallibilism. And it says how to implement this stuff in life to avoid the bad things.

People who don't want to do it ought to have some alternative which deals with issues like fallibility and correcting errors. How will they become Less Wrong?

The typical answer is: they have a mix of views they haven't really tried to systemize or write down. So their answer to how to correct error is itself not being exposed to critical scrutiny.

And what happens then? Bias, bias and more bias. What's to stop it?

Bias is a fucking hard problem and it takes a lot – like Paths Forward or something else effortful and serious – to do much to deal with bias.

MIRI doesn't do Paths Forward and *also has no serious alternative that they do instead*. So errors don't get corrected very well.

The practical consequence is: MIRI is betting the bulk of their efforts on Popper being wrong, but have not bothered to write any serious reply to Popper explaining why they are willing to bet so much on his ideas being mistaken and saying what's wrong with his ideas. MIRI should be begging for anyone to tell them something they're missing about Popper, if anyone knows it, to help mitigate the huge risk they are taking.

But MIRI doesn't want to think about that risk and acknowledge its meaning. That's a big deal even if you don't think they should address *all* criticisms (which they should via methods like criticisms of categories of bad ideas – and if you run into a "bad" idea that none of your existing knowledge can criticize, then you don't know it's bad!)

> Every party you need to justify yourself to adds constraints of understandability to your actions, meaning that eventually you need to be justifiable to the lowest common denominator. MIRI is a special place in that its staff are more free to go after what it thinks is the right direction as compared with an academic department.

This is incorrect. You can simply tell lay people that they need to learn some background knowledge to deal with various issues. You can then direct them to e.g. some reading recommendations and a discussion forum for learners. Then there's a path forward: they can learn the necessary expertise and then read your difficult material and then comment. Most people won't, and that's fine.

There are two important points here for rationality:

1) if someone *does* read your not-dumb-down-at-all material and point out a mistake, you don't just ignore them b/c of e.g. their lack of official credentials. you don't just say "i think you're a layman" whenever someone disagrees with you. you don't gate your willingness to deal with criticism on non-truth-seeking things like having a PhD or being popular.

2) it's possible that you're mistaken about the background knowledge required to understand a particular thing, and that can itself be discussed. so there's still no need to dumb anything down, but even if someone agrees they don't have a particular piece of background knowledge which you thought was relevant, it's still possible for them to make a correct point which you shouldn't reject out of hand.

> I would change my mind on this if I thought it was feasible to map out and address all arguments on a subject

I explain how to do this. You don't quote me and point out where I made a mistake. If you want more explanation we have reading recommendations, educational material, a discussion forum, etc.

> if I thought that such a system wasn't likely to turn on participants (as public discourse often does) by turning arguments into demands.

if "demands" are bad, write a criticism of them (or of a sub-category of them) and then reject all the criticized stuff by reference to the criticism. (i agree that *some* demands are bad. it depends on more precision.) if you gave a more clear example of how an organization would be "punished" for setting up mechanisms of error correction, rather than sticking to disorganized haphazard bias, perhaps we could discuss how to handle that situation and also what the alternatives are and whether they're better. (what are *you* advocating instead of Paths Forward? is it written down, substantive, good at dealing with bias, etc?)


curi at 11:55 AM on November 11, 2017 | #9242

> > Every party you need to justify yourself to adds constraints of understandability to your actions, meaning that eventually you need to be justifiable to the lowest common denominator. MIRI is a special place in that its staff are more free to go after what it thinks is the right direction as compared with an academic department.

>This is incorrect. You can simply tell lay people that they need to learn some background knowledge to deal with various issues. You can then direct them to e.g. some reading recommendations and a discussion forum for learners.

This doesn't seem right to me. For example, take the recent controversy in which someone lost his job for admitting what he believed about feminism or racism. (All the significant details have slipped my mind.) Of course this isn't exactly the concern with MIRI. However, it's relevant to the general argument for/against Paths Forward. You can make important decisions based on carefully considered positions which you wouldn't want to state publicly. For an extended argument that there are important ideas in this category, see Paul Graham's essay "What You Can't Say":

http://www.paulgraham.com/say.html

It does seem to me that if you commit to being able to explain yourself to a particular audience, you become biased toward ideas which are palatable to that audience.

> Then there's a path forward: they can learn the necessary expertise and then read your difficult material and then comment. Most people won't, and that's fine.

I don't see how that's fine in your system. If you are offering the Paths Forward material as your criticism to MIRI, then according to Paths Forward methodology, MIRI needs to understand Paths Forward. Similarly, the hypothetical layperson who criticizes you in a way which makes you point to some expert knowledge they'd have to learn isn't following Paths Forward if they walk away, right? They still have their broken view, which you claim to have a solid criticism of.

This is part of what makes me skeptical. It seems like the policy of responding to all criticism requires one to read all criticism. Reading criticism is useful, but must be prioritized like all other things, which means that you have to make an estimate of how useful it will be to engage a particular critic before deciding whether to engage.

> if "demands" are bad, write a criticism of them (or of a sub-category of them) and then reject all the criticized stuff by reference to the criticism.

I meant the sort of demand-created-by-social-consensus-and-controversy mentioned above, rather than the kind which you can respond rationally to.

> (what are *you* advocating instead of Paths Forward? is it written down, substantive, good at dealing with bias, etc?)

I don't have any substantive solution to the problem of group epistemology, which is why the approach you advocate intrigues me. However, there are some common norms in the LessWrong community which serve a similar purpose:

1) If you disagree, either reach an agreement through discussion or make a bet. This holds you accountable for your beliefs, makes you more likely to remember your mistakes, aims thinking toward empirically testable claims, and is a tax on BS. Betting one-on-one does not create the kind of public accountability you advocate.

I mentioned betting markets earlier. Although I think there are some problems with betting markets, they do seem to me like progress in this direction. I would certainly advocate for their more widespread use.

2) The double crux method:

http://lesswrong.com/lw/o6p/double_crux_a_strategy_for_resolving_disagreement/


PL at 3:13 AM on November 12, 2017 | #9243

> This doesn't seem right to me.

I think you mean: that's correct as far as it goes, regarding being able to write expert level material instead of making everything lowest-common-denominator accessible. But it doesn't speak to some other issues like taboos. If you wanna link that PG essay and end a conversation, go ahead – and then there is a Path Forward b/c someone can refute the PG essay (if they know how).

I'm not expecting perfection. You could be lying and dealing with something non-taboo and just say it's taboo. Whatever. People will do shitty stuff. And some won't. I'm proposing a methodology to help the people who want to be rational. It also does a good job of catching a lot of irrational people and pointing out what they're doing wrong - a few of whom may appreciate that and reconsider some things.

If you don't want to talk about a taboo issue, you can say that, and have that position itself be open to criticism (the criticism will be difficult because you don't give much to the critic to use – but unless he proposes a solution to that problem, so be it). Similarly, the military doesn't talk about lots of things, and I have no objection to that.

More broadly, you're welcome to raise problems with doing stuff. "I would like to facilitate error correction in that way if it were unproblematic, but..." The problems themselves should be open to error correction and solutions. You need something at some level which is open to criticism. Or else say you aren't a public intellectual and don't make public claims about ideas.

You can also say you aren't interested in something, or you don't think it's worth your time. PF doesn't eliminate filters, it exposes them to criticism. At the very least you can publicly say "I'm doing a thing I have reasons I don't think I can talk about." And *that* can be exposed to criticism. E.g. you can direct them to essays covering how to talk about broad categories of things and ask if the essays are relevant, or ask a few questions. You may find out that e.g. they don't want to talk about the boundaries of what they will and won't talk about, b/c they think that'd reveal too much. OK. Fair. I don't have a criticism of that at this time. But someone potentially might. It can be left at that, for now, and maybe someone else will figure out a way forward at some point. Or not. At least, if someone knows the way forward, they aren't being blocked from saying so. Their way forward does have to address the meta issues (like privacy, secrecy, taboo, etc) not just the object issue.

---

If people would just **state their filters** it'd do so much. People filter on "I think he's dumb" all the time. And on "he is unpopular" and on "he doesn't have a PhD". And they do this *inconsistently*. They filter Joe cuz no PhD, but then talk to Bob a ton who also doesn't have a PhD.

What's going on? Bias and lack of accountability. Often they don't even consciously know what filters they are using or why. This is really bad. This is what LW and MIRI and most people are like. They have unstated, unaccountable gating around error correction, which allows for tons of bias. And they don't care to do anything about that.

You want to filter on "convince my assistant you have a good point that my assistant can't answer, and then i will read it"? Say so. You want to let some people skip that filter while others have to go through it? Say the criteria for this.

People constantly use criteria related to social status, prestige, etc, and lie about it to themselves (let alone others), and they do it really inconsistently with tons of bias. This is sooooooo bad and really ruins Paths Forward. Half-assed PF would be a massive improvement.

I'm not trying to get rid of gating/filters/etc. Just state them and prefer objective ones and have the filters themselves open to criticism. If you're too busy to deal with something, just say so. If your critics have no solution to that, then so be it. But if they say "well i have this Paths Forward methodology which actually talks about how to deal with that problem well" then, well, why aren't you interested in a solution to your problem? and people are certainly not being flooded with solutions to this problem, and they don't already have one either.

the problems are bias, dishonesty, irrationality, etc, as usual, and people's desire to hide those things.

yeah some people have legit reasons to hide stuff. but most people – especially public intellectuals – could do a lot more PF without much trouble, if they wanted to.

btw Ayn Rand, Richard Feynman, David Deutsch, Thomas Szasz and others of the best and brightest answered lots of letters from random strangers. they did a ton to be accessible.

---

No one on LW or from MIRI offered me any bets or brought up using double cruxes to resolve any disagreements.

Those things are not equivalent to Paths Forward b/c they aren't methodologies. They're individual techniques. But there are no clear specifications about when and how to use which techniques to avoid getting stuck.

I don't know what bets they could/should have offered. Double crux seems more relevant and usable for philosophy issues, but no one showed interest in using it.

The closest thing to a bet was someone wanted money to read Paths Forward, supposedly b/c they didn't have resources to allocate to reading it in the first place. I agreed to pay them the amount they specified without negotiating price, b/c it was low. They admitted they predicted that asking for a tiny amount of money would get me to say "no" instead of "yes". But neither they nor anyone else learned from their mistake. They were surprised that I would put money where my mouth is, surprised I have a budget, surprised I'm not poor, etc, but learned nothing. Also they were dishonest with me and then backed out of reading Paths Forward even though I'd offered the payment (which they said matched what they are paid at work). So apparently they'd rather *go to work* (some $15/hr job so presumably nothing too wonderful) than read philosophy. I thought that was revealing.

Someone else tried to use Paths Forward to control and pressure me. They didn't get far at all. After they said some nonsense, and I replied briefly, they wrote more non sequiturs. They wanted to use PF to make me talk with them an unlimited amount – I don't know if they were trying to prove PF is impractical or just an idiot, I guess a mix. They started complaining basically that I had to answer everything they said or I'm not doing PF – and they seemed to think (contrary to PF) that that meant answering all the details and pointing out every mistake, rather than just one mistake. I said if they had a serious criticism they could write it somewhere public with a permalink. That was enough of a barrier to entry that they gave up (despite the fact that the person already has a blog with a lot of public posts with permalinks). Such things are typical. Very low standards – which are themselves valuable/productive and can easily be *objectively* judged – will filter out most critics.

---

> Similarly, the hypothetical layperson who criticizes you in a way which makes you point to some expert knowledge they'd have to learn isn't following Paths Forward if they walk away, right? They still have their broken view, which you claim to have a solid criticism of.

If the layperson says "oh, you have to learn all that? well i'd rather do this other thing instead. i think it fits me better." that is OK. i don't have a criticism of that. MIRI doesn't have a criticism of that. MIRI and I don't think everyone should become an expert on our stuff. Not everyone has to learn our fields and discuss with us. No problem. (well i do think a lot more people should take a lot more interest in philosophy and being better parents and stuff. so in some cases i might have a criticism, which wouldn't be a personal matter, but would be the kind of stuff i've already written generic/impersonal public essays about.)


curi at 8:54 AM on November 12, 2017 | #9244

MIRI puts a meaningful amount of their effort into scaring the public into thinking AGI is dangerous – b/c they think it’s dangerous. They frame this as: AGI is in fact dangerous and they’re working to make it safer. But the practical effect is they are spreading the (false!) opinion that it’s dangerous and therefore shooting themselves (and the whole field) in the foot.


Anonymous at 7:17 PM on November 12, 2017 | #9247

> Someone else tried to use Paths Forward to control and pressure me. They didn't get far at all. After they said some nonsense, and I replied briefly, they wrote more non sequiturs. They wanted to use PF to make me talk with them an unlimited amount – I don't know if they were trying to prove PF is impractical or just an idiot, I guess a mix.

To some extent I'm doing the same kind of testing the waters, seeing how much you stick to your guns and provide substantive responses to issues raised. Overall the effect has been to raise my plausibility estimate of PF, although I think you may have exceptionally much time to respond to things as compared to other people (or maybe you're assigning my remarks high importance?).

> I'm not expecting perfection. You could be lying and dealing with something non-taboo and just say it's taboo. Whatever. People will do shitty stuff. And some won't. I'm proposing a methodology to help the people who want to be rational. It also does a good job of catching a lot of irrational people and pointing out what they're doing wrong - a few of whom may appreciate that and reconsider some things.

> If you don't want to talk about a taboo issue, you can say that, and have that position itself be open to criticism (the criticism will be difficult because you don't give much to the critic to use – but unless he proposes a solution to that problem, so be it). Similarly, the military doesn't talk about lots of things, and I have no objection to that.

To summarize the state of the discussion so far, as I see it:

- I suggested that there is something like an optimal level of accountability, beyond which it will stifle one's ability to come up with new ideas freely and act freely. I said that I'd change my mind on this if I thought it was possible to map all the arguments and if I thought such a system wouldn't end up creating gotchas for those who used it by exposing them to scrutiny.

- You responded that your literature provides ways to handle all the arguments without much burden, and that there are no gotchas because you can always tell people why their demands which they try to impose on you are wrong.

- I haven't yet read about the system for handling all the arguments easily. I didn't find the "there are no gotchas" argument very compelling. I had a concern about how being publicly accountable for your ideas can increase the risk of criticism of the more forceful kind, where violation of taboo or otherwise creating a less-than-palatable public face can have a lot of negative strategic implications; and how it is therefore necessary to craft a public image in a more traditional way rather than via PF, even if, internal to the organization, you try to be more rational.

- You responded by saying that of course there will be some things which you have reason not to say, but you can at least explain that it doesn't make sense to answer questions of a certain sort.

I think this addresses my concern to a significant degree. You create a sort of top level of the system at which PF can be followed, so that in principle a criticism could result in the hidden parts being opened up at some future time. I suspect we have remaining disagreements about the extent of things which it will make sense to open up to critique in this kind of setup. Maintaining any hidden information requires maintaining some plausible deniability about where the secret may be, since knowing exactly which questions are answerable and not answerable tells you most of a secret. And so it follows that if you want to maintain the ability for taboo thoughts to guide significant strategy decisions, you must maintain some plausible deniability about what is guiding strategy decisions at all times. This strategy itself may be something you need to obfuscate to some degree, because confessing that you have something to hide can be problematic in itself... well, hopefully the intellectual environment you exist in isn't **that** bad. If it is, then PF really does seem inadvisable.

> If people would just **state their filters** it'd do so much. People filter on "I think he's dumb" all the time. And on "he is unpopular" and on "he doesn't have a PhD". And they do this *inconsistently*. They filter Joe cuz no PhD, but then talk to Bob a ton who also doesn't have a PhD.

Hm. Stating what your filters are seems like the sort of thing you might not want to do, partly because the filters will often be taboo themselves, but mainly because stating what criteria you use to filter gives people a lot of leverage to hack your filter. Checking for an academic email address might be an effective private filter, but if made public, could be easily satisfied by anyone with a school email address that is still active.

As mentioned in part of Yudkowsky's recent book, the problem of signaling that you have a concern which is worthy of attention is a lemons market (https://www.lesserwrong.com/posts/x5ASTMPKPowLKpLpZ/moloch-s-toolbox-1-2, search page for "lemons market"). It's an asymmetric information problem. The person with the meaningful concern can't meaningfully differentiate themselves from others by loudly saying "This one really **is** important!" because everyone can say that. Private tests make sense in a lemons market, because you can often get some information with fallible heuristics which would stop being as good if you made them public.

Granted, I see the upside of filter criteria which anyone sufficiently motivated can meet. I agree that in many respects, a widespread standard of this kind of PF in public figures and institutions would be a big improvement.

> > Similarly, the hypothetical layperson who criticizes you in a way which makes you point to some expert knowledge they'd have to learn isn't following Paths Forward if they walk away, right?

> If the layperson says "oh, you have to learn all that? well i'd rather do this other thing instead. i think it fits me better." that is OK. i don't have a criticism of that.

This seems (and this is the reason I brought up the scenario) analogous to MIRI and Popper. It seems sort of true in principle that MIRI should have a response to Popper beyond the brief remarks made by Yudkowsky, but it doesn't concretely feel very interesting / like it would go anywhere, and there are a lot of other paths to understanding the problems MIRI is interested in which seem more interesting / more likely to go somewhere. I've read Black Swan, which is very Popperian by its own account at least, and although I found the book itself interesting, it didn't seem to contain any significant critique of Bayesian epistemology by my lights -- indeed, I roughly agree with Yudkowsky's remark about the picture being a special case of Bayes (though of course Taleb denies this firmly). It doesn't seem particularly worthwhile, all things considered, for me to even write up my thoughts on Taleb's version of Popper.

Somehow it seems like the Popperian project and the Bayesian project are so different that I'm just not optimistic that the Popperian one even connects with the Bayesian one in terms of what kind of arguments for and against epistemic methodology seem compelling. For me, a mathematical picture like information theory, and the close connection between information and probability provided by the coding theorem, is very compelling and makes me feel more strongly that Bayesianism gets close to what is really going on. I'm not aware of anything approaching this on the Popperian side, to the point where I feel like Popperians aren't interested in playing the same ball game. At least PAC learning, the MDL principle, and other alternatives to Bayesianism have corresponding machine learning algorithms and learning-theory results to show their efficacy.
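
To spell out the connection I mean, a rough sketch in standard notation (illustrative only, just to gesture at why the picture feels unified to me): under a probability distribution, the best achievable code length for an outcome is the negative log of its probability, and the coding theorem carries the same idea over to algorithmic information theory, where description length and universal prior probability agree up to a constant.

```latex
% Shannon: an optimal prefix-free code for a distribution P gives outcome x a length of about
\[
  \ell_P(x) \;\approx\; -\log_2 P(x) \ \text{bits.}
\]
% Coding theorem (Levin): for the universal discrete semimeasure m and prefix Kolmogorov complexity K,
\[
  K(x) \;=\; -\log_2 m(x) + O(1),
\]
% i.e. "x has a short description" and "x has high universal prior probability"
% are the same claim up to an additive constant.
```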

I suppose a statement like that one (that the Popperian side doesn't seem to offer comparable formal results) is precisely the kind of thing you're saying MIRI should at least be able to produce.

But, to reiterate, my point here is to say that there seems to be a parallel with the layperson who says there are better uses of time than to learn all the technical details required for the discussion.

Another thing I wanted to mention is that it seems like the ability to articulate one's thinking necessarily falls behind the thinking itself, sometimes far behind. Articulating the arguments behind one's position is often a major project, a book that takes many years to write. In these cases, it seems like a response to criticism may end up being only a promissory note to articulate arguments at some later time. Is that a sufficient PF?


PL at 1:32 AM on November 13, 2017 | #9248

> although I think you may have exceptionally much time to respond to things as compared to other people (or maybe you're assigning my remarks high importance?).

i write a lot faster than you're estimating, with less energy/effort used. this is a skill i developed over time b/c i was a heavy writer/discusser from the beginning when i got into philosophy, so i've developed relevant supporting skills. i'm also a fast reader when i want to be (i can control reading speed based on the level of detail i want to get from something). techniques include RSVP, skimming, sped up TTS/audio/video (audio stuff also allows multitasking).

i also set up my life to have a lot of time for thinking/writing/discussing, on purpose. i've been kinda horrified to find that most think tank and public intellectual types seem not to have done this. (but some of the very best, like David Deutsch, did do it.)

i'm also prioritizing this more than you may expect because people willing to talk about philosophy with me and be reasonable are a scarce commodity. this may not be your experience about what the world is like (you may find plenty of people to talk with in your experience), but for me they're very rare. why? because my method of discussion and ideas are plenty adequate to filter out most people!

oh also, **i enjoy discussions like this**. this isn't painful work for me. this isn't draining or hard. Put another way: I've been playing some Mario Odyssey recently, but to me this discussion is *more fun than video games*.

> partly because the filters will often be taboo themselves

I don't think the kind of filters I'm in favor of are taboo. but it's possible some good ones are that i haven't thought of. i don't really mind breaking most taboos anyway.

> stating what criteria you use to filter gives people a lot of leverage to hack your filter

that's the kind of filter i think is bad. the sort of filter i think is good will work just as well even if people know what it is.

i don't think you should filter on .edu email addresses b/c people with .com email addresses can be correct. if you do that, you're blocking lots of ways someone could correct you. furthermore, you're wasting people's time who see your public email and contact you from a .com and then you ignore them.

good filters involve knowledge or skills (so if the person "hacks" the filter by developing the knowledge or skills, then you're glad and that's fine), or involve clear, objective criteria people can meet (and you're *glad* if they do, b/c the criteria are *useful* instead of just a filter) such as formatting quotes correctly on FI, or actually have to do with the content instead of prestige/credentials/networking/social/authority.

my purpose in asking people to say their filters isn't just to prevent the biased application of filters, and double standards. it's also to get them to stop using irrational filters (that they don't want to admit they use) and also to get them to stop using filters on prestige/social-status/etc type stuff. stick to filtering on

1) content. like pointing out a mistake, and then if they have no answer, ok, done.

or

2) you can put up other barriers *if you tell them* and the barriers are reasonable asks (and they can either meet the barrier or criticize it). this needs to be shared to work well. if FI required a certain post format but didn't tell people what it was, it'd be really nasty to filter on it!

i don't want people to use approximate filters that semi-accurately find good people. i want them to use filters that actually don't block progress. i think this is very important b/c a large amount of progress comes from **outliers**. so if you block outlier stuff (over 90% of which is *especially bad*), then you're going to block a lot of great, innovative stuff.

> - You responded by saying that of course there will be some things which you have reason not to say, but you can at least explain that it doesn't make sense to answer questions of a certain sort.

yeah, you can always go to a higher meta level and still say something. the technique of going to a meta level comes up a lot in philosophy. e.g. it's also used here: http://fallibleideas.com/avoiding-coercion

> It's an asymmetric information problem. The person with the meaningful concern can't meaningfully differentiate themselves from others by loudly saying "This one really **is** important!" because everyone can say that.

I actually have credentials which stand out. But I don't usually bring them up, even though I know people are often filtering on credentials. Why?

Because of the terrible social dynamics which are anti-name-dropping and anti-bragging. If you just objectively start stating credentials, people respond with

1) social distaste, and the assumption that you're low status

2) thinking you're trying to appeal to authority (and not admitting they are filtering on the very kinds of "authority" you're talking about)

3) they debate which credentials are impressive. lots of idiots have PhDs. lots of idiots have spent over 10,000 hours on philosophy (as i have). few people, idiots or not, have written over 50,000 philosophy related things, but that isn't the kind of credential people are familiar with evaluating. i have a very high IQ, but i don't have that *certified*, and even if i did many people wouldn't care (and high IQ is no guarantee of good philosophy, as they would point out, forgetting they are supposedly just trying to filter out the bottom 80% of riff raff). i have associations with some great people, but some of them have no reputation, and as to David Deutsch he's a Popperian. if they wouldn't listen to him in the first place (and they don't), they won't listen to me due to association. Thomas Szasz particularly liked me and was a public author of dozens of especially great books, but most people hate him.

i have great accomplishments, but most of the important ones are in philosophy and are the very things at issue. some other accomplishments are indirect evidence of intelligence and effective learning, and stand out some, but anyone who doesn't want to listen to me still isn't going to. (e.g. at one point i was arguably the best Hearthstone player in the world – i did have the best tournament results – and i wrote Hearthstone articles with considerably more views (5-6 figures per article) than most people have for any content ever. that was just a diversion for me. i had fun and quit. anyway, from what i can tell, this is not a way to get through people's filters. and really i think social skill is the key there, with just enough credentials they can plausibly accept you if they want to.)

So it's *hard*. And I'm *bad* at social networking kinda stuff like that (intentionally – i think learning to be good at it would be intellectually destructive for me).

> This seems (and this is the reason I brought up the scenario) analogous to MIRI and Popper. It seems sort of true in principle that MIRI should have a response to Popper beyond the brief remarks made by Yudkowsky

note that his remarks on Popper are blatantly and demonstrably false, as i told him many years ago. they are false in very objective, clear ways (just plain misstating Popper's position in ways that many *mediocre* Popperians would see are wrong), not just in terms of subtle, advanced nuances.

Yudkowsky's opinions of Popper are basically based on the standard hostile-to-Popper secondary sources which get some basic facts wrong and also focus on LScD while ignoring Popper's other books.

> it doesn't concretely feel very interesting

Popper's philosophy explains why lots of what MIRI does is dead ends and can't possibly work. I don't see how lack of interest can be the issue. The issue at stake is basically whether they're wasting their careers and budgets by being *utterly wrong* about some of the key issues in their field. Or more intellectually, the issue is whether their work builds on already-refuted misconceptions like induction.

That's a big deal. I find the lack of interest bizarre.

> I've read Black Swan, which is very Popperian by its own account at least

I'm not familiar with this particular book (I'd be happy to look at it if the author or a fan was offering a Paths Forward discussion). Most secondary sources on Popper are really bad b/c they don't understand Popper either.

> Somehow it seems like the Popperian project and the Bayesian project are so different that I'm just not optimistic that the Popperian one even connects with the Bayesian one in terms of what kind of arguments for and against epistemic methodology seem compelling. For me, a mathematical picture like information theory, and the close connection between information and probability provided by the coding theorem, is very compelling and makes me feel more strongly that Bayesianism gets close to what is really going on.

Are you aware that David Deutsch – who wrote the best two Popperian books – is a physicist who has written papers relating to information flow and probability in the multiverse? He even has an AI chapter in his second book (btw I helped with the book). http://beginningofinfinity.com

The reason CR connects to Bayesian Epistemology (BE) stuff in the big picture is simple:

CR talks about how knowledge can and can't be created (how learning works). This says things like what methods of problem solving (question answering, goal achieving) do and don't work, and also about how intelligence can and can't work (it has to do something that *can* learn, not *can't*). CR makes claims about a lot of key issues BE has beliefs about, which are directly relevant to e.g. AGI and induction.

To the extent there are big differences, that's no reason to give up and ignore it. CR criticizes BE but not vice versa! We're saying BE is *wrong*, and *its projects will fail*, and we know why and how to fix it. And the response "well you're saying we're wrong in a big way, not a small way, so i don't want to deal with it" is terrible.

CR explained why over 2000 years of tradition in epistemology was *disastrously wrong*. And BE, like almost everyone, isn't updating and is ignoring the breakthrough and continuing with the same old errors. BE thinks it's clever b/c it has some new math tools and some tweaks, but from the CR perspective we see how BE keeps lots of the same old fundamental errors.

Why do BE ppl want to bet their careers on CR being false, just b/c some secondary sources said so (and while having no particular secondary source challenging CR that they will actually reference and take responsibility for, and reconsider if that source is refuted)?

It makes no sense to me. I think it's b/c of bad philosophy – the very thing at issue. That's a common problem. Systems of bad ideas are often self-perpetuating. They have mechanisms to keep you stuck. It's so sad, and I'd like to fix it, but people don't want to change.

> I'm not aware of anything approaching this on the Popperian side, to the point where I feel like Popperians aren't interested in playing the same ball game. At least PAC learning, the MDL principle, and other alternatives to Bayesianism have corresponding machine learning algorithms and learning-theory results to show their efficacy.

My opinion is we aren't ready to start coding AGI, and no one has made any progress whatsoever on coding AGI, and the reason is b/c they don't even understand what an AGI is and therefore can't even judge what is and isn't progress on AGI. things like Watson and AlphaGo *are not AGI and are not halfway to AGI either, they are qualitatively different things* (i think they're useful btw, and good non-AGI work).

you need to have some understanding of what you're even trying to build before you build it. how does intelligence work? what is it? how do you judge if you have it? do animals have it? address issues like these before you start coding and saying you're succeeding.

no one is currently coding anything with a generic idea data structure that can handle explanations, emotions, poetry, moral values, criticism, etc. they aren't even working on the right problems – like how to evaluate disagreements/criticism in the general case. instead they are making inductivist and empiricist mistakes, and trying to build the kind of thing they incorrectly believe is how human thinking works. and they don't want to talk about this forest b/c they are focused on the trees. (or more accurately there's more than 2 levels. so they don't wanna talk about this level b/c they are focused on multiple lower levels).

> Is that a sufficient PF?

it's far more than sufficient, as long as there are followups in the future. iterations can be quite small/short. i commonly recommend that to people (doing a larger number of shorter communications – that way there's less opportunity to build on errors/misunderstandings/etc before feedback).

> Another thing I wanted to mention is that it seems like the ability to articulate one's thinking necessarily falls behind the thinking itself, sometimes far behind. Articulating the arguments behind one's position is often a major project, a book that takes many years to write. In these cases, it seems like a response to criticism may end up being only a promissory note to articulate arguments at some later time.

This is partly a real issue, and that's fine – you can say "I know criticism would be very valuable, and I'll get it just as soon as I'm able to formulate what I'm thinking adequately. And the reason I judge this to be productive to work on more is..."

But I think it's partly that people structure their learning and research the wrong way. They could have more discussion, from the start, and be less fragile about criticism. They could get better at saying initial versions of things to get some initial feedback about major issues they're missing. This can proceed in stages as they keep working and adding levels of detail, and then getting feedback at that level of detail again.

And if there is no feedback, ok, proceed. It's worth a try b/c sometimes someone knows something important and is willing to say so (and this would happen a lot more if Paths Forward were more common. i know people who are very smart, know tons of stuff ... and basically don't like people and don't try to share their knowledge much b/c no one does Paths Forward. i strongly suspect there are many more such people i do not know who had some bad experiences and gave up discussion.) And, also, formulating your thoughts *in writing* at each stage is *helpful to your own thinking*. you should be getting away from "i know what i mean" to actually writing down what you developed so far (so it'd be understandable to a stranger with the right background knowledge, e.g. the reader has to already know calculus, or some science stuff, or even already know Objectivism if you're doing work building on Objectivism. but what the reader doesn't have to do is read your mind or know your quirks. this is how books work in general).

i find i often articulate rough drafts and pieces of things early on, and more later. like with Paths Forward and Yes or No Philosophy, there was discussion of some elements long before i developed them more fully. and i don't think going into isolation to think about them alone would have been the best way to develop them more.

i think people who write books usually shouldn't, but i will accept sometimes they should. most people who write books do not have adequate experience writing shorter things – and exposing them to Paths Forward style criticism to check if they are any good. i think most books are bad (i bet you agree), and this could be avoided if people would try to actually write one little essay that isn't bad, first, and then put a lot of Paths Forward type work into getting the essay criticized and addressing criticisms from all comers and not making any excuses and not ignoring any issues. then they'd see how hard it is to actually do good work. but instead of doing a small project to a really high standard, and then repeating a few times, and then doing some medium projects to a really high standard, and THEN doing a book ... they go do a big project to a much lower standard. meh :/

this relates to my ideas about powering up and making progress. the focus should be on learning and self-improvement, not output like books. ~maximize your own progress. then what happens is you generate some outputs while learning, and also some types of outputs become easy for you b/c you've greatly surpassed them in skill level. so you can *very cheaply* output that stuff. if you keep this up enough, you learn enough your cheap/easy outputs end up being better than most people's super-effortful-book-writing. so this is a much much more efficient way to live. don't divert a bunch of effort into writing a book you can maybe just barely write, with a substantial failure chance. put that effort into progress until you can have the same book for a much lower effort cost with a much lower risk of it being bad/wrong.

> But, to reiterate, my point here is to say that there seems to be a parallel with the layperson who says there are better uses of time than to learn all the technical details required for the discussion.

the rational lay person is welcome to say that *and then have no opinion on the matter*. e.g. he says "i'm going to go do plumbing" and then is *neutral* about whether BE or CR is right. he's not involved, and he knows his ignorance. and that's fine b/c neither BE nor CR is saying he's doing plumbing all wrong – we agree he can be a decent plumber without learning our stuff. (he may have some problems in his life that philosophy could help with, such as destroying his children's minds with authoritarian parenting, and ultimately i'd like to do something about that too. but still, the general concept that you can take an interest in X and recognize your ignorance of Y, and not everyone has to be highly interested in Y, is fine.)


curi at 11:52 AM on November 13, 2017 | #9250

> although I think you may have exceptionally much time to respond to things as compared to other people (or maybe you're assigning my remarks high importance?).

two more comments on this to add to what i said above.

1) my comments to you have no editing pass. i'm not even doing much editing as i go, this is near max speed writing. over the years i've put a lot of effort into being able to write without it being a burden, and into making my initial thoughts/writing good instead of getting stuff wrong then fixing it in editing later. i think this is really important and also unusual. (you should fix mistakes in your writing policies themselves instead of just having a separate editing policy to fix things later – and if you can't do that something is wrong. it makes more sense this way. editing has a place but is super overrated as a crutch for ppl who just plain think lots of wrong and incoherent thoughts all the time and aren't addressing that major issue.)

2) this is public, permalinkable material which i can re-use. i will link people to this in the future. it's a good example of some things, and has some explanations i'll want to re-use. i'm not just writing to you. everyone on the FI forum who cares is reading this now, and others will read it in the future.


curi at 12:47 PM on November 13, 2017 | #9252

As Ayn Rand would say: check your premises.

Avoiding debates about your premises is so dumb.


Anonymous at 2:32 PM on November 13, 2017 | #9253 | reply | quote

> that's the kind of filter i think is bad. the sort of filter i think is good will work just as well even if people know what it is.

> i don't think you should filter on .edu email addresses b/c people with .com email addresses can be correct. if you do that, you're blocking lots of ways someone could correct you. furthermore, you're wasting people's time who see your public email and contact you from a .com and then you ignore them.

I used a fake example of a taboo filter. I don't have some explicit filter policy which is taboo, but I can imagine that if I had one I wouldn't want to say it, and furthermore if I did say it, I wouldn't be able to defend it in conversation precisely because it would depend on assumptions I don't want to publicly defend. I can imagine that I might be contacted by someone I would automatically know I should filter, in a knee-jerk kind of way, and my justification for this would be taboo. Suppose I do work which has serious public policy implications, but I expect those implications to be misconstrued by default, with serious negative consequences. If people in the government, or campaigning politicians, etc., contact me, my best response would be either no response or something to throw them off the trail. I might be fine with talking about things in an academic journal read only by other experts, but if a reporter cornered me I would describe things only in the most boring terms, etc.

(I'll likely make a more substantive reply later.)


PL at 6:17 PM on November 13, 2017 | #9254 | reply | quote

I'm not very concerned about edge cases. If 5% of intellectuals claim some exceptions, and 95% do Paths Forward, that sounds just fine for now. I will tentatively accept some rare edge cases, and we can investigate them more carefully at some later date if it matters.

On the other hand if you think the taboo case is a good excuse for ~100% of people not to do paths forward -- the current situation -- then we can debate it now. but if you're only trying to offer excuses for less than 20% of people, and agree with me about the other 80+%, i'll take it for now.


curi at 7:39 PM on November 13, 2017 | #9255 | reply | quote

(Still may not get to a more proper reply today, but I'll reply to the most recent point.)

> I'm not very concerned about edge cases. If 5% of intellectuals claim some exceptions, and 95% do Paths Forward, that sounds just fine for now. I will tentatively accept some rare edge cases, and we can investigate them more carefully at some later date if it matters.

Suppose that 5% of intellectuals have good reasons not to do PF, along the lines I described. Then, if 95% of intellectuals do PF, this creates a reason for those 5% of intellectuals to be looked down on and excluded in various ways (funding, important positions, etc). The 5% will of course be unable to explain their specific reasons for not participating in PF, at least publicly; which means that even if they can describe them privately, those descriptions can't be included in the official reasons for decisions (about funding, appointment of positions, etc). So it creates a feedback loop which punishes others for taking those private reasons into account. Even if this problem itself is widely understood (and I'm skeptical that it would be, even given the improved intellectual discourse of PF world), it may make sense as a policy to use the standards of PF in those decisions (funding, appointments, etc) because PF conformance seems important enough, and a good enough indicator of other good qualities.

This trade-off may even be worth it. But it's not clear at all that "95% of intellectuals could use PF" is a good enough justification to meet my objection there.


PL at 4:26 PM on November 14, 2017 | #9257 | reply | quote

> (Still may not get to a more proper reply today, but I'll reply to the most recent point.)

There is no hurry on my account. I think the only hurry is if you're losing interest or otherwise in danger of dropping the topic due to time passing.

---

Should I and the other 95% not pursue our own work in the most rational way for fear that some other people would be *incorrectly* attacked for not using all the same methods we're using? No.

I'm happy to grant there are legitimate concerns about working out the details if it catches on. While my answer is a clear "no" in the previous paragraph, I might still be willing to do *something else other than give up PF* to help with that.

I do see the issue that if a lot of people are more public about some matters, that makes it harder for the people with something (legitimate) to hide. I like privacy in general – but not much when it comes to impersonal ideas which ought to be discussed a bunch to help improve them.

Also I'm not directly interested in things like public reputations, appointments or funding. I care about how truth-seeking should/does work. I understand the nature of reason has implications to be worked out for e.g. funding policies and expectations. But I'm not very concerned – I'll try to speak the truth and I expect that'll make things better not worse.


curi at 5:10 PM on November 14, 2017 | #9258 | reply | quote

> Should I and the other 95% not pursue our own work in the most rational way for fear that some other people would be *incorrectly* attacked for not using all the same methods we're using? No.

I don't feel quite right about either agreeing or disagreeing with this. I feel like you're making a logically correct response to my surface-level point but not making much effort to see the deeper intuitions which made me generate the point, and as a result, your response does little to shift my intuitions. So I'm going to state some messy intuitions. The hypothetical where PF is adopted widely is far enough from our world that it isn't really well-defined. My mental image is coming from brief experiences hanging out with the sort of person who makes suggestions whenever they see you could be doing something better, doesn't let you get away with excuses for ignoring their advice if those excuses aren't real reasons, communicates their own preferences and reasons very clearly, and engages happily in debates about them. At first the experience may be invigorating and enjoyable -- indeed, such a person holds more truly to the intellectual ideal in many ways. However, after a while, it somehow gets nerve-wracking. I have the irrational feeling that they are going to bite my head off if I do something inconsistent. I obsess about the list of stated preferences they gave, and fall into weird dysfunctional social patterns.

So, when I imagine a PF society, I imagine a sense of a big eye looking at you all the time and watching for your inconsistencies, whether or not you're trying to play the PF game yourself. It's just the way the world works now -- there's an implicit *assumption* that you're interested in feedback. And it's hard to say no.

Why do I have this reaction to "that kind of person"? Part (only part) of this is due to the exhausting nature of needing to explain myself all the time. I don't think it's exhausting in the sense that mental labor is exhausting. It feels "exhausting" because it feels like a constant obstacle to what I want to do. This is paradoxical, of course, because really this kind of person I am describing is only trying to help; they are offering all these suggestions about why what you're doing could be done better! And it's not like it takes *that* long to explain a reason, or to change plans. So, why should it feel like a constant barrier?

I think it has something to do with this LW post on bucket errors:

http://lesswrong.com/lw/o2k/flinching_away_from_truth_is_often_about/

IE, sometimes we just wouldn't make the right update, and some part of us is aware of that, and so refuses to update. And it's not *just* that we're genuinely better off not updating in those cases, until that time at which we have a better way to update our view (IE, a better view to update *to* than the one we can currently explicitly articulate). Because even if that's *not* the case, it's *still* a fact about the human motivational framework that it finds itself in these situations where it sometimes feels attacked in this way, and people sometimes need to disengage from arguments and think about things on their own in order to maintain mental stability. (I am literally concerned about psychotic breaks, here, to some extent.)

In general, as a matter of methodology, when you have a criticism of a person or a system or a way of doing things or an idea or an ideology, I think it is very important to step back and think about why the thing is the way that it is, and understand the relevant bits of the system in a fair bit of detail. This is partly about Chesterton's Fence:

https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence

IE, if it's true that people sometimes don't want to respond to arguments, and you think this is a wrong reflex, isn't it worth having a lot of curiosity about why that is, what motivations they might have for behaving in this way, so that you can be sure that your reasons in favor of PF outweigh the reasons against, and your methodology for PF addresses the things which can go wrong if you try to respond to all criticisms, which (perhaps) have been involved in people learning *not* to respond to all criticisms based on their life experience, or at least not naturally learning *to* respond to all criticisms?

And similarly, and perhaps as importantly, understanding in detail why the person/system/idea is the way it is seems necessary for attempting to implement/spread a new idea in a way that doesn't just bash its head against the existing incentives against that way of doing things. You want to understand the existing mechanics that put the current system in place in order to change those dynamics effectively. This is partly about errors vs bugs:

http://celandine13.livejournal.com/33599.html

And partly about the point I made earlier, concerning understanding the deeper intuitions which generate the person's statements so that you can respond in a way that has some chance of shifting the intuitions, rather than responding only to surface-level points and so being unlikely to connect with intuitions.

(This "surface level vs deep intuitions" idea has to do with the "elephant and the rider" metaphor of human cognition. (I don't think it's a particularly *good* metaphor, but I'll use it because it is standard.) It's a version of the system 1 / system 2 model of cognitive bias, where system 2 (the explicit/logical/conscious part of the brain) serves mostly as a public relations department to rationalize what system 1 (the instinctive/emotional/subconscious brain) has already decided; IE, the rider has only some small influence on where the elephant wants to go. So, to a large extent, changing someone's mind is a matter of trying to reach the intuitions. This picture in itself might be a big point of disagreement about how useful something like Paths Forward can be. Part of my contention would be that most of the time, the sort of arguments I don't want to respond to are ones which *obviously* from my position don't seem like they could have been generated for the reasons which the arguer explicitly claims they were generated, for example, couldn't have been generated out of concern for me. In such cases, I strongly expect that the conversation will not be fruitful, because any attempt on my part to say things which connect with the intuitions of that person which actually generate their arguments will be rejected by them, because they are motivated to deny their real reasons for thinking things. This denial itself will always be plausible; they will not be explicitly aware of their true motivations, and their brain wouldn't be using the cover if it didn't meet a minimal requirement of plausible deniability. Therefore no real Paths Forward conversation can be had there. If you're curious about where I got this mental model of conversations, I recommend the book The Righteous Mind.)

And partly about cognitive reductions, described in point 5 and 6 in this post:

https://agentfoundations.org/item?id=1129

IE, *in general, when you are confused about something*, particularly when it's a cognitive phenomenon, and especially when you may be confused about whether it is a cognitive phenomenon because it's related to a map-territory confusion, a great way of resolving that confusion is to figure out what kind of algorithm would produce such a confusion in the first place, and for what purpose your brain might be running that kind of algorithm. And, if you seemingly "resolve your confusion" without doing this, you're likely still confused. The same statement holds across different minds.

So, to try to make all of that a bit more coherent: I would like for you to put more effort into understanding what might motivate a person to do anything other than Paths Forward other than stupidity or bias or not having heard about it yet or other such things. If you can put yourself in the shoes of at least one mindset from which Paths Forward is an obviously terrible idea, I think you'll be in a better position both to respond constructively to people like me, and to revise Paths Forward itself to actually resolve a larger set of problems.

> CR explained why over 2000 years of tradition in epistemology was *disastrously wrong*. And BE, like almost everyone, isn't updating and is ignoring the breakthrough and continuing with the same old errors. BE thinks it's clever b/c it has some new math tools and some tweaks, but from the CR perspective we see how BE keeps lots of the same old fundamental errors.

What's a priority ordering of things I should read to understand this? Is Beginning of Infinity a good choice? I remember that my advisor brought David Deutsch's essay on why the Bayesian approach to AGI would never work to a group meeting one time, but we didn't really have anywhere to go from it because all we got out of it was that DD thought the Bayesian approach was wrong and thought there was something else that was better (IE we couldn't figure out why he thought it was wrong and what it was he thought was better).

> you need to have some understanding of what you're even trying to build before you build it. how does intelligence work? what is it? how do you judge if you have it? do animals have it? address issues like these before you start coding and saying you're succeeding.

You'll get no disagreement about that from the Bayesian side. And of course the latest MIRI approach, the logical inductor, is non-bayesian in very significant ways (though there's not a good non-technical summary of this yet). And logical inductors are still clearly not enough to solve the important problems, so it's likely there are still yet-more-unbayesian ideas needed. But that being said, it seems MIRI also still treats Bayes as a kind of guiding star, which the ideal approach should in some sense get as close to as possible while being non-Bayesian enough to solve those problems which the Bayesian approach provably can't handle. (In fact MIRI would like very much to have a version of logical inductors which comes closer to being bayesian, to whatever extent that turns out to be possible -- because it would likely be easier to solve other problems with more bayes-like logical uncertainty).

> This is partly a real issue, and that's fine – you can say "I know criticism would be very valuable, and I'll get it just as soon as I'm able to formulate what I'm thinking adequately. And the reason I judge this to be productive to work on more is..."

> But I think it's partly that people structure their learning and research the wrong way. They could have more discussion, from the start, and be less fragile about criticism. They could get better at saying initial versions of things to get some initial feedback about major issues they're missing. This can proceed in stages as they keep working and adding levels of detail, and then getting feedback at that level of detail again.

I strongly agree with this and the several paragraphs following it (contingent in some places on Paths Forward not being a net-bad idea).

> the rational lay person is welcome to say that *and then have no opinion of the matter*. e.g. he says "i'm going to go do plumbing" and then is *neutral* about whether BE or CR is right.

I strongly disagree with this principle. I might agree on a version of this principle which instead said *and then have no public opinion on the matter*, IE *either* be neutral on BE vs CR *or* not claim to be a public intellectual, *provided* I was furthermore convinced that PF is an on-net-good protocol for public intellectuals to follow. I find it difficult to imagine endorsing the unqualified principle, however. It is quite possible to simultaneously, and rationally, believe that X is true and that having a conversation about X with a particular person (who says I have to learn quantum mechanics in order to see why X is false) is not worth my time. To name a particular example, the quantum consciousness hypothesis seems relevant to whether AI is possible on a classical computer, but also seems very likely false. While I would be interested in a discussion with an advocate of the hypothesis for curiosity's sake, it seems quite plausible that such a discussion would reach a point where I'd need to learn more quantum mechanics to continue, at which point I would likely stop. At that point, I would be *wrong* to change my opinion to a neutral one, unless the argument so far had swayed my opinion in that direction.

This goes back to my initial claim that PF methodology seems to hold the intellectual integrity of the participant ransom, by saying that you must continue to respond to criticisms in order to keep your belief. While this might be somewhat good as a social motivation to get people to respond to criticism, it seems very bad in itself as epistemics. It encourages a world where the beliefs of the person with the most stamina for discussion win out.

Overall, I continue to be somewhat optimistic that there could be some variant of PF which I would endorse, though it might need some significant modifications to address my concerns. At this point I feel I may assign positive expectation to the proposition of MIRI starting to follow PF, on the whole, but am uneasy about certain aspects of how that might play out, including the time investment which might be involved. I've now read your document on how to do PF without it taking too much time, but it seems like MIRI in particular would attract a huge number of trolls. Part of the reason Eliezer moved much of his public discussion to Facebook rather than LW was that there were a number of trolls out to get him in particular, and he didn't have enough power to block them on LW. Certainly I *wish* there were *some* good way to solve the problems which PF sets out to solve.


PL at 4:12 PM on November 20, 2017 | #9261 | reply | quote

yay, you came back.

> but not making much effort to see the deeper intuitions which made me generate the point, and as a result, your response does little to shift my intuitions.

It's hard for me to get in your head. I don't know you. I don't know your name, your country, your age, your profession, your interests, how you spend your time, your education, or your background knowledge. If you have a website, a blog, a book, papers, or an online discussion history (things I could skim to get a feel for where you're coming from), I don't know where to find it.

And speculating on what people mean can cause various problems. I also don't care much for intuitions, as against reasoned arguments.

> My mental image is coming from brief experiences hanging out with the sort of person who makes suggestions whenever they see you could be doing something better, doesn't let you get away with excuses for ignoring their advice if those excuses aren't real reasons, communicates their own preferences and reasons very clearly, and engages happily in debates about them. [...] However, after a while, it somehow gets nerve-wracking. I have the irrational feeling that they are going to bite my head off if I do something inconsistent. I obsess about the list of stated preferences they gave, and fall into weird dysfunctional social patterns.

That sounds a lot like how most people react to Objectivism (my other favorite philosophy). And it's something Objectivism addresses directly (unlike CR which only indirectly addresses it).

Anyway, I've never tried to make PF address things like an "irrational feeling". It's a theory about how to act rationally, not a solution to irrationality. I don't expect irrational people to do PF.

Separately, what can be done about irrationality? I think learning Objectivism and CR is super helpful for becoming rational; most people find it inadequate, but I don't have anything better to offer there. I know a lot about *why and how* people become irrational – authoritarian parenting and static memes – but solutions that work for many adults do not yet exist. (Significant solutions for not making your kids irrational in the first place exist, but most adults, being irrational, don't want them. BTW, DD is a founder of a parenting/education philosophy called Taking Children Seriously, which applies CR and (classical) liberal values to the matter. And FYI static memes are a concept DD invented and published in BoI. Summary: http://curi.us/1824-static-memes-and-irrationality )

Back to your comments, there are two sympathetic things I have to say:

1) I don't think the people biting your head off are being rational. I think most of the thing you don't like which you experienced *is actually bad*. Also a substantial part of the issue may be misunderstandings.

2) I think you have rational concerns mixed in here, and I have some fairly simple solutions to address some of this.

To look at this another way: I don't have this problem. I have policies for dealing with it.

Part of this is my own character, serene self-confidence, ability to be completely unfazed by people's unargued judgements, disinterest in what people think of me (other than critical arguments about ideas), and my red pill understanding of social dynamics and unwillingness to participate in social status contests (which I don't respect, and therefore don't feel bad about the results of).

Partly I know effective ways to stand up to arguments, demands, pressures, meanness, etc. (Note that I mean rationally effective, in my view – and my own judgement is what governs my self-esteem, feelings, etc. I don't mean *socially effective*, which is a different matter which doesn't really concern me. I think seeking popularity in that way is self-destructive, especially intellectually.)

So if people give me philosophy arguments, I respond topically, and I'm not nervous, I'm fully confident in my ability to address the matter – either by refuting it or by learning something. This confidence was partly built up from a ton of experience doing those things (in particular, I lost every major argument with DD for the first ~5 years, so I have a lot of experience changing my mind successfully), and also I had some of this attitude and confidence since early childhood.

What if people argue something I don't care about? What if they want me to read some book and I think I have better things to do? What if they want to tell me how Buddhism predicted quantum physics? What if they think homeopathy works and I should study it and start using it? What if they're recommending meditation or a fad diet? What if it's some decent stuff that I don't care about because I already know more advanced stuff?

I just state the situation as I see it. I always have some kind of reasoning for why I don't want to look into something more – or else I'd be interested. There are common patterns: I already know stuff about it, or I don't think it fits my interests, or I don't think it looks promising enough to investigate at all (due to some indicator I see and could explain). Each of these lends itself to some kind of response comment.

I investigate tons of stuff a little bit because I'm able to do it quickly and efficiently, and I'm curious. I want to form a judgement. I often find plenty of info from Amazon reviews, for example, to form an initial judgement. Then the person can either tell me something I've gotten wrong, or I consider my judgement good enough.

And I know some of the broadest knowledge which is most applicable to many fields (epistemology – in every field you have to learn whatever the field is about, so the philosophy of learning is relevant; and you need to evaluate ideas in every field). So I find it's usually easy to point out mistakes made in other fields – they are using the wrong philosophy methods, which is why they are getting the wrong answers, and that's why I disagree (I can frequently give some specifics after 15 minutes of research). Often I bring the issue back to epistemology – I will ask the person if the thing they are recommending is Popperian, and if not how can it be any good?

This is all optional. If I think something is promising I can look at it all I want. But if I think something is bad then I have minimal options like these. There are also more alienating things I can do that get rid of most people very fast while being scrupulously rational, but I'm not sure how to give generic examples that you'll understand (I talk about this a bit more at the end of this section regarding how the FI forum works). But in short I find that by asking for high standards of rationality, I can get anyone to stop talking to me very quickly.

If you use techniques like this, you can quickly find yourself in a philosophy argument (rather than an argument about the specific thing they were bringing up). That isn't a problem for me. I *want* more philosophy discussions, and I also regard it as extremely important to keep Paths Forward open regarding philosophy issues, and I also know all the philosophy arguments I need offhand (it's my speciality, and debate was a major part of my philosophy learning).

So this is convenient for me, but what about other people with other interests? I think *everyone needs philosophy*. Everyone has a philosophy, whether they think about it or not. Everyone tries to learn things and judge ideas – which are philosophy issues. There's no getting away from philosophy, so you ought to study it some and try to be good at it. Currently, all competent philosophers are world class, and it's the field which most badly needs talent, so people in all fields ought to become world class philosophers in order to do their own field well. Our culture isn't good enough at philosophy for the stuff you pick up here and there to be decent, so you can't just focus on your own field, sorry.

Oh and what if people are mean and socially pushy? It doesn't get to me. I just think they're being dumb and immoral, so why would I feel bad? And I don't mind standing up to meanness. I'm willing to call it out, explicitly, and to name some of the social dynamics they are using to pressure me. Most people don't like to do this. E.g. when I ask people why they're being mean, they usually respond by being even more mean, and really trying hard to use social dynamics to hurt me. And they do stuff that would in fact hurt most other people, but doesn't hurt me. Learning to deal with such things is something I'd recommend to people even if they don't do PF.

Also I designed my life not to need social status – e.g. I'm not trying to get tenure, get some prestigious academic gatekeepers to publish me, or impress a think tank boss (who gets taken in by social games and office politics) enough to keep paying me. I'm not reliant on a social reputation, so the issue for me is purely if the jerks can make me feel bad or control me (they can't). Such a life situation is more or less necessary for a serious intellectual so they can have intellectual freedom and not be pressured to conform (sorry academia). If PF doesn't work well for someone b/c they are under a bunch of social pressures, I understand the issue and advise them to redesign their life if they care about the truth (I don't expect them to do this, and I'd be thrilled to find a dozen more people who'd do PF).

BTW the Fallible Ideas (FI) yahoo discussion group is public and mostly unmoderated. This has never been a big problem. I will ask problem people questions, e.g. why they're angry, why they're at FI, or what they hope to accomplish by not following FI's ethos. I find that after asking a few times and not saying anything else, they either respond, try to make serious comments about the issues, or leave – all of which are fine. Other regular posters commonly ask similar questions or ignore the problem people, instead of engaging in bad discussion. Sometimes if I think someone is both bad and persistent, and others are responding to them in a way I consider unproductive, I reply to the other people and ask why they're discussing in that way with a person with flaws X, Y and Z, and how they expect it to be productive. Stuff like this can be harsh socially (even with no flaming), and is effective, but is also fully within the rules of PF and reason (and the social harshness isn't the intent, btw, but I don't know how to avoid it, and that's why tons of other forums have way more trouble with this stuff, b/c they try to be socially polite, which doesn't solve the problem, and then they don't know what to do (within the normal social rules) so they have to ban people).

The first line of defense, though, is simply asking people to format their posts correctly. This is super objective (unlike moderating by tone, style, flaming, quality, etc) and is adequate to get rid of most bad posters. Also, if people don't want to deal with the formatting rules they can always use my blog comments instead where basically anything goes short of doxing and automated spam bots (though I might ask someone to write higher quality posts or use ">" for quoting in blog comments, but I never moderate over it. In blog comments, I don't even delete false, profanity-laced sexual insults against named FI people – whereas those actually would get you put on moderation on the yahoo group).
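to illustrate what i mean by an objective filter, here's a toy sketch in Python. the specific checks are just examples i made up for illustration – they are not FI's actual posting rules – but they show the kind of mechanical test anyone can verify, unlike judgements about tone or quality:

```python
# toy sketch of an objective formatting filter (illustrative rules only, not
# FI's actual posting rules). the point is the checks are mechanical --
# anyone can verify them -- unlike judging tone, flaming or quality.

def format_problems(post: str) -> list[str]:
    problems = []
    lines = post.splitlines()
    for i, line in enumerate(lines, 1):
        # quoting should use ">" rather than attribution headers like "On ... wrote:"
        if line.lower().startswith("on ") and line.rstrip().endswith("wrote:"):
            problems.append(f"line {i}: quote with '>' instead of an attribution header")
        # extremely long lines usually mean unwrapped pasted text
        if len(line) > 1000:
            problems.append(f"line {i}: wrap or trim this very long line")
    # a post that is nothing but quotes adds no discussion
    if lines and all(l.startswith(">") or not l.strip() for l in lines):
        problems.append("post contains only quoted text; add your own comments")
    return problems
```

if a check like this flags anything, the reply is just "please fix the formatting" – no judgement of the person's ideas is needed, which is why this kind of moderation doesn't suppress dissent.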

> Eliezer moved much of his public discussion to Facebook

I consider Facebook the worst discussion forum I've used. I've found it's consistently awful – even worse than twitter.

> At this point I feel I may assign positive expectation to the proposition of MIRI starting to follow PF

I doubt it because only one MIRI person replied to my email, and his emails were brief, low quality, and then he promptly went silent without having said anything about Paths Forward or providing any argument which addresses Popper.

> > you need to have some understanding of what you're even trying to build before you build it. how does intelligence work? what is it? how do you judge if you have it? do animals have it? address issues like these before you start coding and saying you're succeeding.

> You'll get no disagreement about that from the Bayesian side.

I found *everyone* I spoke with at LW disagreed with this.

> And of course the latest MIRI approach, the logical inductor, is non-bayesian in very significant ways (though there's not a good non-technical summary of this yet).

FYI I can deal with technical stuff, and I'd be willing to if I thought there were Paths Forward, and I thought the technical details mattered (instead of being rendered irrelevant by prior philosophical disagreements). I'm an experienced programmer.

> What's a priority ordering of things I should read to understand this?

I'm not sure what essay you're referring to, but DD's books explain what he thinks. I recommend DD before Popper. I have reading selections for both of them here:

https://fallibleideas.com/books#deutsch

DD's books cover multiple topics so I listed the epistemology chapter numbers for people who just want that part. Popper wrote a lot so I picked out what I think is the best (which is absolutely *not* _The Logic of Scientific Discovery_, which is what tons of Popper's critics focus on. LScD is harder to understand, and has less refined ideas, than Popper's later work.)

There's also my own material such as:

https://yesornophilosophy.com

I consider that (which improves some aspects of CR) and Paths Forward to be my largest contributions to philosophy. If you want it but the price is a problem, I can give you a large discount to facilitate discussion.

https://curi.us/1595-rationally-resolving-conflicts-of-ideas

https://fallibleideas.com

https://elliottemple.com/reason-and-morality

There are other resources, e.g. our community (FI) has ~100,000 archived discussion emails – which are often linked to when people ask repeat questions. There's currently a project in progress to make an ebook version of the BoI discussion archives. There hasn't been a ton of focus on organizing everything because Popper and DD's books, and some of my websites, are already organized. And because we don't yet have a solution to how to persuade people of this stuff, so we don't know what the right organization is. And because people have mostly been more interested in discussing and doing their own learning, rather than making resources for newcomers. And it's a small community.

And I don't consider FI realistically learnable without a lot of discussion, and I think the current resources are far more than adequate for someone who really wants to learn – which is the only kind of person who actually will learn. So I don't think 50% better non-discussion educational resources would change much.

People read books and *do not understand much of it*. This is the standard outcome. Dramatically overestimating how much of the book they understood is also super common. What needs to happen is people discuss stuff as issues come up, but I've had little success getting people to do this. People like to read books in full, then say they disagree without being specific – instead of stopping at the first paragraph they disagree with, quoting it, and saying the issue.

I don't know how to make material that works well for passive audiences – and no one else does either. Every great thinker in history has been horribly misunderstood by most of their readers and fans. This applies to DD's and Popper's books too. E.g. I think DD is the only person who ever read Popper and understood it super well (without the benefit of DD's help, as I and some others have had).

It took me ~25,000 hours, including ~5,000 hours of discussions with DD, to know what I know about philosophy. That doesn't include school (where I learned relevant background knowledge like reading and writing). And I'm like a one in a hundred million outlier at learning speed. There are no fast solutions for people to be good at thinking about epistemology; I've made some good resources but they don't fundamentally change the situation. DD mostly stopped talking with people about philosophy, btw (in short b/c of no Paths Forward, from anyone, anywhere – so why discuss if there's no way to make progress?); I'm still trying. Popper btw did a bunch of classics work b/c he didn't like his philosophy colleagues – b/c the mechanisms for getting disagreements resolved and errors corrected were inadequate.

> So, when I imagine a PF society, I imagine a sense of a big eye looking at you all the time and watching for your inconsistencies, whether or not you're trying to play the PF game yourself. It's just the way the world works now -- there's an implicit *assumption* that you're interested in feedback. And it's hard to say no.

Sounds good to me. But we can start with just the public intellectuals. (In the long run, I think approximately everyone should be a public intellectual. They can be some other things too.)

> It feels "exhausting" because it feels like a constant obstacle to what I want to do. This is paradoxical, of course, because really this kind of person I am describing is only trying to help; they are offering all these suggestions about why what you're doing could be done better! And it's not like it takes *that* long to explain a reason, or to change plans.

There's some big things here:

1) You need lots of reusable material which deals with all kinds of common errors. Even giving short explanations of common errors, which you know offhand, can get tiring. Links are easier.

2) If you get popular, you need some of your fans to field questions for you. Even giving out links in response to inquiries is too much work if you have 50 million fans. But if your fanbase is any good, then it should include some people willing to help out by fielding common questions (mostly using links) and escalating to you only when they can't address something. Also if you have that many fans you should be able to make a lot of money from them, so you can pay people to answer questions about your ideas. (Efficiently (link heavy) answering questions from people interested in your ideas is a super cost efficient thing to spend money on if you have a large, monetized fanbase. Some of the links can even be to non-free material, so answering questions directly helps sell stuff in addition to indirectly helping you be more popular, spreading your ideas, etc, and thus indirectly selling more stuff.)

3) When you're new to PF there's a large transition phase as you go from all your ideas being full of mistakes to trying to catch up to the cutting edge on relevant issues – the current non-refuted knowledge. But what is the alternative? Being behind the cutting edge, being wrong about tons of stuff and staying wrong! That won't be an effective way to make progress on whatever work you're trying to do – you'll just do bad work that isn't valuable b/c it's full of known flaws. (This is basically what most people currently do – most intellectual work to create new ideas of some kind, including throughout science, is *bad* and unproductive.)

4) Stop Overreaching. http://fallibleideas.com/overreach This also helps mitigate the issue in (3). overreaching is trying to claim more than you know, and to do overly ambitious projects, so you're basically constantly making lots of mistakes, and the mistakes are overwhelming. you should figure out what you do know and can get right, less ambitiously, and start there, and build on it. then criticism won't overwhelm you b/c you won't be making so many mistakes. it's important to limit your ambition to a level where you're only making a manageable number of mistakes instead of so many mistakes you think fixing all your mistakes (a la PF) is hopeless and you just give up on that and ignore tons of problems.

> It is quite possible to simultaneously, and rationally, believe that X is true and that having a conversation about X with a particular person (who says I have to learn quantum mechanics in order to see why X is false) is not worth my time.

that's not the case of being a plumber who knows nothing about X and knows he knows nothing about X. that's the case of thinking you know something about X.

and in that case, you should do PF. you should say why you think it's not worth your time, so that if you're mistaken about that you could get criticism (not just criticism from the other guy, btw, but from the whole public, e.g. from the smart people on the forums you frequent.)

> To name a particular example, the quantum consciousness hypothesis seems relevant to whether AI is possible on a classical computer, but also seems very likely false.

That's exactly the kind of thing I know enough about to comment on publicly, and would debate a physicist about if anyone wanted to challenge me on it.

It's crucial to get that right if you wanna make an AGI.

It's not crucial to know all the details yourself, but you ought to have some reasoning which someone wrote down. And e.g. you could refer someone who disagrees with you on this matter to the FI forum to speak with me and Alan about it (Alan is a real physicist, unlike me). (You don't have to answer everything yourself! That is one of many, many things we're happy to address if people come to our forum and ask. Though you'd have to agree with us to refer people to our answers – if you disagree with our approach to the matter then you'll have to use some other forum you agree with, or if your viewpoint has no adequate forums then you'll have to find some other option like addressing it yourself if you care about AGI and therefore need to know if classical computers can even do AGI.)

This issue is easy btw. Brains are hot and wet which causes decoherence. Quantum computers require very precise control over their components in order to function. Done. I have yet to encounter a counter-argument which requires saying much more than this about the physics involved. (If they just want to say, "But what if...? You haven't given 100% infallible proof of your position!" that is a philosophy issue which can be addressed in general purpose ways without further comment on the physics.)

> While I would be interested in a discussion with an advocate of the hypothesis for curiosity's sake, it seems quite plausible that such a discussion would reach a point where I'd need to learn more quantum mechanics to continue, at which point I would likely stop. At that point, I would be *wrong* to change my opinion to a neutral one, unless the argument so far had swayed my opinion in that direction.

If you don't know what's right, then you should be neutral. If you don't know how to address the matter, you should find out (if it's relevant to you in some high priority way, as this issue is highly relevant to people trying to work on AGI b/c if they're wrong about it then tons of their research is misguided). If no one has written down anything convenient, or made other resources, to make this easy for you ... then you shouldn't assume your initial position is correct. You shouldn't just say "I don't want to learn more QM, so I'll assume my view is right and the alternative view is wrong". You need to either give some reasoning or stop being biased about views based on their source (such as being your own view, or the view you knew first).

> This goes back to my initial claim that PF methodology seems to hold the intellectual integrity of the participant ransom, by saying that you must continue to respond to criticisms in order to keep your belief.

Yes, that's more or less the point – people who don't do that are irrational, and everyone can and should do that. You can and should always act on non-refuted ideas. There are ways to deal with criticism instead of ignoring it and then arbitrarily taking biased sides (e.g. for the view you already believed instead of the other one, which you don't answer).

> While this might be somewhat good as a social motivation to get people to respond to criticism, it seems very bad in itself as epistemics. It encourages a world where the beliefs of the person with the most stamina for discussion win out.

I think it's problematic socially (people feel pressure and then get defensive and stupid), but good epistemics. Socially it's common that people participate in discussions when they don't want to in order to try to answer criticisms, defend their view, show how open-minded they are, be able to say their views can win debates instead of needing to shy away from debate, etc. But when people don't want to discuss for whatever reason (think it's a waste of time, emotionally dislike the other guy, think the rival views are so dumb they aren't worth taking seriously, etc), they discuss badly. So that sucks. I wish people would just stop talking instead of pretending to be more interested in discussion than they are. I don't want to socially motivate people to discuss more, they'll just do it really badly. People only discuss well when they have good, intellectual motivations, not with gritted teeth and trying to get it over with.

You don't need stamina to win with PF. What you need instead are principles. You need to get criticisms of tons of common bad ideas – especially major categories – written down. And you need to be able to ask some key questions the other guy doesn't have answers to – like what is their answer to Popper, who wrote several books explaining why they're wrong about X. To win with PF, you need to know a lot of things, and have sources (so you don't have to write it all out yourself all the time).

This is what we want epistemologically – people who don't just ignore issues and instead have some answers of some kind written down somewhere (including by other authors, as long as you take responsibility for the correctness of your own sources). And if the answer is challenged in a way which gets past the many, many pre-existing criticisms of categories of bad ideas, then you have an interesting challenge and it should be addressed. (I discuss this as having "libraries of criticism" in Yes or No Philosophy – a stockpile of known criticisms that address most new ideas, especially bad ones; the few new ideas that make it past those pre-existing pre-written criticisms are worth addressing, since there is something novel there which you should investigate, at least briefly – either it's got some good new point or else you can add a new criticism to your library of criticism.) And btw you need to do something sorta like refactoring criticisms (refactoring being a programming concept) – when you get 3 similar criticisms, then you figure out the pattern involved and replace them with a general criticism which addresses the pattern. That's what making more principled and general purpose criticisms is about – finding broader patterns to address all at once instead of having tons and tons of very specific criticisms.
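since you said you can deal with code: here's a toy sketch in Python of the library-of-criticisms idea. the class names, example criticisms and URLs are purely my illustration (nothing here is literal code or links from Yes or No Philosophy):

```python
# toy sketch: a library of reusable criticisms. each criticism is a predicate
# saying which ideas it applies to, plus a link to a pre-written explanation.
# the example criticisms and URLs below are placeholders for illustration.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Criticism:
    name: str
    applies_to: Callable[[str], bool]  # does this criticism refute the idea?
    link: str                          # reusable, pre-written explanation

library: List[Criticism] = [
    # a "refactored" criticism: instead of three separate criticisms of three
    # specific inductivist proposals, one general pattern criticism.
    Criticism("relies on induction",
              lambda idea: "induct" in idea.lower(),
              "https://example.com/induction"),
    Criticism("appeal to authority instead of argument",
              lambda idea: "experts agree" in idea.lower(),
              "https://example.com/authority"),
]

def evaluate(idea: str) -> str:
    """An idea is refuted by the first library criticism that applies;
    otherwise it's novel enough to be worth investigating directly."""
    for c in library:
        if c.applies_to(idea):
            return f"refuted: {c.name} (see {c.link})"
    return "not covered by the library; investigate, then maybe add a new criticism"

print(evaluate("we inducted this theory from the data"))
print(evaluate("here's a new argument about AGI and decoherence"))
```

the refactoring step is the move from several near-duplicate entries to one general pattern entry, like the first criticism above – that's how the library stays manageable as it grows.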

So the point is, when you use methods like these, whoever has the most stamina doesn't win. A much better initial approximation is whoever knows more – whoever has the best set of references to pre-existing arguments – usually wins. But if you know something new that their existing knowledge doesn't address, then you can win even if you know less total stuff than them. (Not that it's about winning and losing – it's about truth-seeking so everyone wins.)

> but it seems like MIRI in particular would attract a huge number of trolls.

what's a troll, exactly? how do you know someone is a troll? i take issue with the attitude of judging people trolls, as against judging ideas mistaken. i take issue more broadly with judging ideas by source instead of content.

also the kind of people you attract depends on how you present yourself. i present myself in a way that most bad people aren't interested in, and which is targeted to appeal to especially good people. (there are lots of details for how to do this, particularly from Objectivism. the short explanation is don't do social status games, don't do socially appealing things, those are what attract the wrong people. present as more of a pure intellectual and most dumb people won't want to talk with you.)

and wouldn't MIRI attract a reasonable number of knowledgeable people capable of answering common, bad points? especially by reference to links covering common issues, and some standard (also pre-written and linkable) talking points about when and why some technical knowledge is needed to address certain issues.

btw Eliezer considered *me* a troll and used admin powers against me on LW, years ago, rather than say a single word to address my arguments about mostly Popper (he also ignored my email pointing out that his public statements about Popper are factually false in basic ways). and what was his reasoning for using admin powers against me? that I was unpopular. that is not a rational way to handle dissent (nor is the mechanism of preventing people from posting new topics if they have low karma, and then downvoting stuff you disagree with instead of arguing, oh and also rate limiting people to 1 comment per 10 minutes at the same time that they are facing a 20-on-1 argument and trying to address many people). what's going on here is: LW uses downvotes and other mechanisms to apply social pressure to suppress dissent, on the assumption no one can resist enough social pressure escalations, and then they use ad hoc unwritten rules on any dissenters who don't bow to social pressure. this lets them avoid having written rules to suppress dissent. more recently an LW moderator ordered me to limit my top level posts, including links, to 1 per week. i asked if he could refer me to the written rules. he said there aren't any, it's just people doing whatever they want, with no codified rules, and no predictability for the people who get punished with no warning. the moderator's argument for why he limited my posting is that my posts didn't have enough upvotes – at a time when a lot of other people's posts also barely had any upvotes b/c, apparently, the site doesn't have a lot of traffic. http://lesswrong.com/lw/56m/the_conjunction_fallacy_does_not_exist/3wf5

i have a lot of experience at moderated forums and stuff like this is completely typical. there are always unwritten rules used to suppress dissent without answering the arguments. they maybe try to do something like PF with people that have mild disagreements with them, but they don't want to think about larger disagreements that question some of the ideas they are most attached to. they just want to call that dissent "crazy", "ridiculous", etc. (That's the kind of thing Popper spent his life facing, and he also explained why it's bad.)

> Certainly I *wish* there were *some* good way to solve the problems which PF sets out to solve.

Although I explained the concept more recently, the FI community has basically been doing PF for over 20 years, and it works great IME. Our current discussion forum is at: http://fallibleideas.com/discussion-info

> At that point, I would be *wrong* to change my opinion to a neutral one, unless the argument so far had swayed my opinion in that direction.

getting back to this: arguments shouldn't *sway*. either you can answer it or you can't. so either it refutes your view (as far as you currently know) or it doesn't. see Yes or No Philosophy.

the reason PF thinks issues can be resolved is b/c you can and should act on non-refuted ideas and should judge things in a binary (refuted or non-refuted) way. if you go for standard, vague stuff about *sway* and the *weight* of arguments, then you're going to have a lot of problems. those are major epistemological errors related to justificationism and induction. the FI epistemology is not like the mainstream one. you might be able to adapt something similar to PF to work with a traditional epistemology, but i haven't tried to do that.
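if it helps, here's a tiny toy contrast between the two decision rules in code. it's purely my illustration of the difference, not a real decision procedure anyone uses:

```python
# toy contrast between weighing arguments and the binary approach described
# above. with weights, an idea can "win" while still having unanswered
# criticisms; with the binary approach, one unanswered criticism refutes it.

from typing import List

def weighted_verdict(pro: float, con: float) -> bool:
    # "sway"/"weight" style: act on the idea if the pros outweigh the cons
    return pro > con

def binary_verdict(unanswered_criticisms: List[str]) -> str:
    # yes-or-no style: refuted if any criticism is unanswered, else non-refuted
    return "refuted" if unanswered_criticisms else "non-refuted"

print(weighted_verdict(pro=7.5, con=6.0))         # True, despite open criticisms
print(binary_verdict(["decoherence objection"]))  # refuted
print(binary_verdict([]))                         # non-refuted
```

the point of the contrast: the weighted version lets you keep acting on an idea while ignoring criticisms of it, the binary version doesn't.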

> if it's true that people sometimes don't want to respond to arguments, and you think this is a wrong reflex, isn't it worth having a lot of curiosity about why that is, what motivations they might have for behaving in this way, so that you can be sure that your reasons in favor of PF outweigh the reasons against, and your methodology for PF addresses the things which can go wrong if you try to respond to all criticisms, which (perhaps) have been involved in people learning *not* to respond to all criticisms based on their life experience, or at least not naturally learning *to* respond to all criticisms?

yes i've been very curious about that and looked into it a great deal. i think the answers are very sad and involve, in short, people being very bad/irrational (this gets into static memes and bad parenting and education, mentioned above – i think people are tortured for the first 20 years of their life and it largely destroys their ability to think well, and that's the problem – parents and school are *extremely destructive*). i don't like the answers and have really, really looked for alternatives, but i haven't been able to find any alternatives that i don't have many decisive criticisms of. i have been doing PFs with my answers for a long time – addressing criticism, seeking out contrary ideas to address, etc. mostly what i find is people don't want to think about how broken various aspects of the world (including their own lives) are.

unfortunately, while i know a lot about what's wrong, i haven't yet been able to create anything like a complete solution (nor has anyone else, like DD, Rand, Popper or whoever you think is super smart/wise/etc). i know lots of solutions that would help with things if people understood them and did them, but people block that in the first place, and have emotional problems getting in the way, and various other things. exactly as one would expect from static meme theory.

side note: as mentioned above, i disagree with reasons/arguments outweighing anything (or having weights).

> elephant and the rider

yeah i've heard this one. FI has several ways of talking about it, too.

> So, to a large extent, changing someone's mind is a matter of trying to reach the intuitions.

I think changing someone's mind is a matter of *them wanting to learn and doing 90% of the work*. E.g. they need to do most of the work to bridge the gap from your arguments to their intuitions/unconscious. You can help with this in limited ways, but they have to be the driving force. (BTW CR and Objectivism both converged on this position for different reasons.)

> Part of my contention would be that most of the time, the sort of arguments I don't want to respond to are ones which *obviously* from my position don't seem like they could have been generated for the reasons which the arguer explicitly claims they were generated, for example, couldn't have been generated out of concern for me.

Dishonesty is a big problem. But I say what I think so that the person can answer me or e.g. someone from FI can point out if they think the person wasn't dishonest and I'm wrong. See also Objectivism's advocacy of pronouncing moral judgements as the key to dealing with an irrational society. An alternative way to address it is to say what they could do (what kind of material to send you) that you'd be interested in and which could change your mind about your public claims – explicitly tell them a Path Forward that'd be acceptable to you. (In the first case, the PF is they could point out a way you misinterpreted them. Miscommunication is super common, so going silent is really risky about *you being wrong*. So if they say "I see how you took that as dishonest, let me clarify..." then that's good, and if they say "fuck you, you ad hominem user." and think talking about dishonesty is off limits, then, oh well, they are blocking the possibility of resolving an issue between you. And yes I agree that dishonest people are dishonest *with themselves most of all*, rather than just trying to trick you. If they want help with that, cool. If they don't want to think about it, OK bye. If they propose a way to proceed which is mutually acceptable given the outstanding disagreement about their dishonesty, ok, cool. If you mutually agree to go your separate ways, that's OK too.)

---

this is already long, so i'm not going to answer the rest now. in particular i haven't answered all of the section of your post with 4 links plus commentary. perhaps you'll be able to use things i said above to address some of those matters without me commenting directly. please let me know which of those you still want me to answer, if any.

also other requests are welcome. e.g. if you want me to write shorter posts, i can do that. i can focus on one or two key issues at a time. i can ask questions and debate logically in brief ways, or i can write at length and explain lots of related thoughts (hopefully helpfully). there are many options. i'm pretty flexible. (btw making such requests is an important part of managing your time with PF – e.g. you can ask for a short version of something, or to stick to one point at a time. you can do discussion-management techniques. i find people often want the discussion to be organized a certain way, but don't tell me, and it's hard to guess. what people want varies quite a lot. and also they're frequently hostile to meta discussion, methodology discussion, etc. if you find some problem with continuing the discussion, or the way i'm discussing, just say so and i'll either suggest some way to address it or else i'll agree that we can go our separate ways without accusing you of unwillingness to discuss and PF failure.)

> I would like for you to put more effort into understanding what might motivate a person to do anything other than Paths Forward other than stupidity or bias or not having heard about it yet or other such things. If you can put yourself in the shoes of at least one mindset from which Paths Forward is an obviously terrible idea, I think you'll be in a better position both to respond constructively to people like me, and to revise Paths Forward itself to actually resolve a larger set of problems.

I have tried very hard to do this. But I don't think such a mindset exists with no stupidity, bias, irrationality or ignorance of PF. In other words, I don't think PF is wrong. I'm well aware of many objections, but I think they all fall into those categories. I think this is to be expected – if I had been able to find any more rival perspectives that I couldn't refute, then I would already have changed PF accordingly. So we could talk about lots of standard things people are like, but in each case I have an understanding of the matter, which you may find rather negative, but which I nonetheless consider mandatory (this gets into static memes, bad parenting, etc, again. The world is not very rational. I don't know anywhere besides FI to find any fully rational perspectives, and even one contradiction or bit of irrationality can have massive implications. Broadly I think there is stuff which is compatible with unbounded progress – which btw is what the title of _The Beginning of Infinity_ refers to and the book talks about this – and there is stuff which is **not** compatible with unbounded progress. And more or less everyone has some ideas which block unbounded progress, and that makes PF unappealing to them b/c it's all about unbounded progress and doesn't allow for their progress-blocking ideas.).


curi at 7:59 PM on November 20, 2017 | #9262 | reply | quote

PF Example

http://lesswrong.com/lw/pjc/less_wrong_lacks_representatives_and_paths_forward/dydt

I decided not to reply to this comment (even though the answer is only one word, "yes").

here are my thoughts on how my silence is PF compatible:

1) they didn't say why it's important, why they're asking, what this has to do with anything i care about, why this matters, etc

2) their comment is an ambiguous insult, and they didn't say anything to try to clarify they didn't intend it as an insult

3) i don't think it's necessary to answer everything the first time it's said. if it's a big deal, and they really wanna tell you, they can repeat it. this is contextual. sometimes saying nothing suggests you'll never say anything. but on LW i've recently written lots of comments and been responsive, plus i wrote PF, plus i'm super responsive on the FI forum, my blog comments and direct email, so anyone who really wants an answer from me can use one of those (as i tell anyone who asks or seems like they might want to know that – this guy has not communicated to me he'd have any interest in that).

4) if he wants a Path Forward he can ask. i will explicitly follow up on non-responses by asking for and linking PF, or by asking why they didn't reply and saying it's ambiguous, etc. but often i first try again in a non-meta way by asking my question a different way or explaining my point a different way – often plus some extra explanation of why i think it matters (or sometimes just the extra explanation of why it's important).

i think, in this way, if he has something important to tell me, it's possible for him to do it and be heard and addressed. there are ways he could proceed from here, including cheap/ez ones, which i would be responsive to. but i don't think replying to this is necessary. (and i already said a lot of stuff about Popper on LW, gave references, made websites, etc, which he has chosen not to engage with directly. if this is step 1 in engaging with that, he can say so or try step 2 or something.)


curi at 8:10 PM on November 20, 2017 | #9263 | reply | quote

here's a short comment i just wrote. i don't think it was necessary to reply (i already said some of this, wrote PF, etc) but i try to make things like this super clear and unambiguous.

http://lesswrong.com/lw/pjc/less_wrong_lacks_representatives_and_paths_forward/dydu

i don't think any amount of stamina on his part would get me to use much more time on the discussion. to get more time/attention from me, he'd have to actually start speaking to the issues (in this case, PF itself), at which point i'd actually be interested in discussing! to the extent he's willing to meet strong demands about PF stuff, then i do want to discuss; otherwise not; and in this way i don't get overwhelmed with too much discussion. (and btw i can adjust demandingness downwards if i want more discussion. but in this case i don't b/c i already talked Popper with him for hours and then he said he had to go to bed, and then he never followed up on the many points he didn't answer, and then he was hostile to concepts like keeping track of what points he hadn't answered or talking about discussion methodology itself. he was also, like many people, hostile to using references. i absolutely don't consider it mandatory, for PF, to talk to anyone who has a problem with the use of references. that's just a huge demand on my time with *no rational purpose*. admittedly i know there are some reasonable things they hope to accomplish with the "no references" rule, but there are better ways to accomplish those things, and they don't want to talk about that methodology issue, so that's an impasse, and it's their fault, oh well.)


curi at 8:19 PM on November 20, 2017 | #9264 | reply | quote

another example:

http://lesswrong.com/lw/pjc/less_wrong_lacks_representatives_and_paths_forward/dydp

i think replying to him was totally optional in terms of PF. but i did anyways b/c i like the added clarity, and i find issues like this interesting (ways people are mean, irrational, refuse to think, etc). i have an audience of other FI people who will read my comments like this, who also want to learn about people's irrationality, discussion-sabotaging, cruelty, etc, and sometimes discussions of comments like this happen on FI.

here's another one i didn't answer:

http://lesswrong.com/lw/pjc/less_wrong_lacks_representatives_and_paths_forward/dydl

it's too repetitive. i already wrote PF (and there's tons of epistemology writing by me/DD/Popper) explaining how i think matters are resolved. he knows this. he isn't trying to make progress in the discussion and address my written view (e.g. by quoting something i said and pointing out a mistake). he's just attacking me.

the pattern thing is weird. i talked with several ppl who disagree with me in similar ways, and they repeated the pattern of 1) disagree about Popper 2) run into a methodology problem 3) they don't want to do PF

yeah, so what? that doesn't make me wrong. he doesn't explain why i'm wrong. he accuses me of accusing ppl of bad faith but he doesn't give any quotes/examples so that's dumb.

this particular person actually just 100% refused to

1) let me use any references

2) discuss the use of references

and i discussed some with him anyway, to the extent i wanted to. and then he was being dumb and lazy so i suggested we stop talking unless he objected, and he did not object, but now he's trying to disrupt my conversations with other ppl.


curi at 8:27 PM on November 20, 2017 | #9265 | reply | quote

btw i don't think he can disrupt my conversations with anyone good, if i just ignore him (and if anyone else thinks i'm doing PF wrong or need to answer some point of his, they're welcome to say so – at which point i can refer them to some very brief summary of the impasse). he may disrupt conversations with bad ppl (who are fooled or distracted by him) but that doesn't really matter. he may have a lot of stamina, but that won't get him anywhere with me b/c i just point out the impasse he's causing and then i'm done (unless i want to talk more for some reason, but there's no PF requirement left).

the only way PF would tell me to talk more with him (or anyone) is if he wasn't creating impasses to block PF and unbounded progress – that is, if he was super rational – in which case i'd be fucking thrilled to have met him, and would be thrilled to talk with him (though possibly primarily by referring him to references and resources so he can learn enough to catch up to me, and in the meantime not putting much time into it myself – i've put a lot of effort into creating some better Paths for people who want to learn FI/CR/Objectivism stuff, but i'm not required to give them a bunch of personal, customized help). and if he knew enough that our conversations didn't constantly and quickly become "my reason is X, which u don't know about, so read this" then that'd take more time for me and be GREAT – i would love to have people who are familiar enough with the philosophy i know to discuss it (alternatively they can criticize it, or criticize learning it, instead of learning it all. a criticism of it is important, i would want to address that if it said something new that i haven't already addressed. and a criticism of learning it would interest me but also possibly let us just go our separate ways if he has a reason not to learn it, i might just agree that's fine and he goes and does something else, shrug).


curi at 8:35 PM on November 20, 2017 | #9266 | reply | quote

PF

you can ask ppl, at the outset, things like:

- how much effort will this take, if you're right?

- why is it worth it, for me?

- is there a short version?

- i don't want to allocate that much effort to X b/c i think Y is more important to work on b/c [reason]. am i missing something?

this is much easier for them to speak to (and with less work for you) if you make public some information like:

- your views (so someone could point out a mistake you made, and then you can see the importance of figuring out the truth of that issue)

- your current intellectual priorities, and why (and someone could point out some issue, like that issue X undermines your project, so someone better address X instead of just betting your career on X being false without investigating it cuz ur too busy doing the stuff that depends on X being false.)

- your policies for discussion. what kinds of conversations are you looking for or not looking for? do you have much time available? are there any forums you use? what are efficient formats and styles to contact you with? (ppl can then either follow this stuff or else point out something wrong with it. as an example, if someone really wants my attention, they can use the FI forum, my blog comments, or email me. and they can quote specific stuff they think is wrong, write clearly why it's wrong, and usually also broadly say where they're coming from and what sorta school of thought they are using premises from. i think those are reasonable requests if someone wants my attention, and i have yet to have anyone make a case that they aren't. note that links are fine – you can post a link to the FI forum which goes to your own blog or forum post or something. and you can keep writing your replies there and only sending me links, that's fine with me. but i often don't like to use other forums myself b/c of my concerns about 1) the permalinks not working anymore in 20 years 2) moderator interference. for example on Less Wrong a moderator deleted a thread with over 100 comments in it. and a public facebook group decided to become private and broke all the permalinks. and at reddit u can't reply to stuff after 6 months so the permalinks still work except not really, they are broken in terms of further discussion.)

i routinely try to ask ppl their alternative to PF. u don't want to do PF, ok, do u have written methodology (by any author, which you use) for how you approach discussion? the answer is pretty damn reliable: no, and they don't care to lay out their approach to discussion b/c they want to allow in tons of bias. they don't want to have written policies which anyone could expect them to follow, point out ways they don't follow, or point out glaring problems with. just like forum moderators hate general written policies and just wanna be biased. our political tradition is, happily, much better than this – we write out laws and try to make it predictable in advance what is allowed or not, and don't retroactively apply new laws. this reduces bias, but it's so hard to find any intellectuals who are interested in doing something like that. Is PF too demanding? OK, tell me an alternative you want to use. I suggest it should have the following property:

if you're wrong about something important, and i know it, and i'm willing to tell you, then it should be realistic for your error to be corrected. (this requires dealing with critics who are themselves mistaken, b/c you can't just reliably know in advance which critics are correct or not. and in fact many of the best critics seem "crazy" b/c they are outliers and outliers are where a substantial portion of good ideas come from. plenty of outliers are also badly wrong, but if you dismiss all outliers that initially seem badly wrong to you then you will block off a lot of good ideas.)

people use methodology like deferring to other people's judgement: "if it's so good, someone else will accept it first, and then i'll consider it when it's popular." this is so common that it's really hard for great new ideas to become popular b/c so many ppl are waiting for someone else to like it first. and they don't want to admit they use this kind of approach to ideas. :/


curi at 9:09 PM on November 20, 2017 | #9267 | reply | quote

>> but not making much effort to see the deeper intuitions which made me generate the point, and as a result, your response does little to shift my intuitions.

> It's hard for me to get in your head. I don't know you. I don't know your name, your country, your age, your profession, your interests, how you spend your time, your education, or your background knowledge. If you have a website, a blog, a book, papers, or an online discussion history (things I could skim to get a feel for where you're coming from), I don't know where to find it.

> And speculating on what people mean can cause various problems. I also don't care much for intuitions, as against reasoned arguments.

This likely constitutes a significant disagreement. In retrospect I regret the way I worded things, which sort of implied that I *expect* you to get in my head as a matter of good conversational norms, or something like that. However, I hope it is clear how that point connected to a lot of other points in what I wrote. I do indeed think the point of communication is to help get into the other person's head, and while I agree with your point about the trouble which can come from speculating about what people mean (and the corresponding cleanliness of just responding to what they literally say), I think we disagree significantly about the tradeoff there. I say again that "The Righteous Mind" is the best book I can think of to convey the model in my head, there, although it doesn't spell out many of the implications for having good conversations.

What I said was insufficiently charitable toward you because I was attempting to make something clear, and didn't want to put in qualifications or less forceful statements which (even if accurate) might make it unclear. I'm going to do that again in the following paragraph (to a more extreme degree):

What I hear you saying is "I'm not aware of any underlying 'feelings' which guide my logical arguments to motivated-cognition in specific directions, so I obviously don't have any of those. I agree that people who have those can't use PF. But why worry about those people? I only want to talk to rational people who don't have any hidden underlying feelings shaping their arguments."

My model of how the brain works is much closer to everything coming from underlying intuitions. They aren't *necessarily* irrational intuitions. And the explicit reasoning isn't *useless*. But, there's a lot under the surface, and a lot of the stuff above the surface is "just for show" (even among *fairly* rational people).

Consider a mathematician working on a problem. Different mathematicians work in different ways. The explicit reasoning (IE, what is available for conscious introspection) may be visual reasoning, spatial reasoning, verbal reasoning, etc. But, there has to be a lot going on under the surface to guide the explicit reasoning. Suppose you're doing conscious verbal reasoning. Where do the words come from? They don't just come from syntactic manipulation of the words already in memory. The words have *meaning* which is prior to the words themselves; you can tell when a word is on the tip of your tongue. It's a concept in a sort of twilight state between explicit and implicit; you can tell some things about it, but not everything. Then, suddenly, you think of the right word and there's a feeling of having full access to the concept you were looking for. (At least, that's how it sometimes works for me -- other times I have a feeling of knowing the concept exactly, just not the word.) And, even if this weren't the case -- if the words just came from syntactic manipulation of other words -- what determines which manipulations to perform at a given time? Clearly there's a sort of "mathematical intuition" at work, which spits out things for the explicit mind to consider.

And, even among trained mathematicians, this "mathematical intuition" can shut off when it is socially/politically inconvenient. There was an experiment (I can try to dig up the reference if you want) in which math majors, or professional mathematicians (?) were asked a math question and a politically charged version of the same math question, and got the politically charged version wrong at a surprisingly high frequency like 20%.

This doesn't make the situation hopeless. The apparent 'logical reasoning' isn't as rational as it seems, but on the other hand, the intuitions themselves aren't just dumb, either.

Imagine a philosopher and a chemist who keep having arguments about certain things in philosophy being impractical. One way to go about these arguments would be to stay on-subject, looking at the details of the issues raised and addressing them. However, the philosopher might soon "get the idea" of the chemist's arguments, "see where they are coming from". So, one day, when the chemist brings up one point or another, the philosopher says "Look, I could address your points here, but I suspect there's a deeper disagreement behind all of the disagreements we have..." and starts (gently) trying to solicit the chemist's intuitions about the role of thinking in life, what a chain of reasoning from plausible premises to implausible conclusions can accomplish, or what-have-you. At the end of the hour, they've created an explicit picture of what was driving the disagreement -- that is to say, what is actually motivating the chemist to come argue with the philosopher time after time. Now they can try and address *that*.

... Having written that, I suddenly become aware that I haven't spent much of this conversation trying to solicit your intuitions. I suppose I sort of take PF at face value -- there's a clearly stated motivation, and it seems like a reasonable one. But, you've likely had a number of conversations similar to this one in which you don't end up changing PF very much. So I should have reason to expect that you're not very motivated by the kinds of arguments I'm making / the kinds of considerations I'm bringing up and personally motivated by, and perhaps I should take the time to wonder why that might be. ... but, for the most part it seems like pushing the conversation along at the object level and seeing your responses is the best way to learn about that.

I'm more directly in the dark about what motivates you with the CR stuff. What motivated you to write an open letter to MIRI? To what degree are you concerned about AI risk? Is your hope to fix the bubble of new epistemological norms which has manifested itself around Eliezer?

> Also I designed my life not to need social status – e.g. I'm not trying to get tenure, get some prestigious academic gatekeepers to publish me, or impress a think tank boss (who gets taken in by social games and office politics) enough to keep paying me. I'm not reliant on a social reputation, so the issue for me is purely if the jerks can make me feel bad or control me (they can't). Such a life situation is more or less necessary for a serious intellectual so they can have intellectual freedom and not be pressured to conform (sorry academia). If PF doesn't work well for someone b/c they are under a bunch of social pressures, I understand the issue and advise them to redesign their life if they care about the truth (I don't expect them to do this, and I'd be thrilled to find a dozen more people who'd do PF).

This seems to sum up a lot, though perhaps not all, of my concerns regarding PF.

Even MIRI, which is in many ways exceptionally free of these kinds of attachments, has to worry somewhat about public image and such. So, although I think I understand why you would like MIRI to use PF, can you explain why you think MIRI should want to use PF?

> BTW the Fallible Ideas (FI) yahoo discussion group is public and mostly unmoderated. This has never been a big problem. I will ask problem people questions, e.g. why they're angry, why they're at FI, or what they hope to accomplish by not following FI's ethos. I find that after asking a few times and not saying anything else, they either respond, try to make serious comments about the issues, or leave – all of which are fine. Other regular posters commonly ask similar questions or ignore the problem people, instead of engaging in bad discussion. Sometimes if I think someone is both bad and persistent, and others are responding to them in a way I consider unproductive, I reply to the other people and ask why they're discussing in that way with a person with flaws X, Y and Z, and how they expect it to be productive. Stuff like this can be harsh socially (even with no flaming), and is effective, but is also fully within the rules of PF and reason (and the social harshness isn't the intent, btw, but I don't know how to avoid it, and that's why tons of other forums have way more trouble with this stuff, b/c they try to be socially polite, which doesn't solve the problem, and then they don't know what to do (within the normal social rules) so they have to ban people).

*likes this paragraph*

>> Eliezer moved much of his public discussion to Facebook

> I consider Facebook the worst discussion forum I've used. I've found it's consistently awful – even worse than twitter.

I agree, and I'm sad about EY moving there. It had the one thing he wanted, I guess.

Your practice of using an old-fashioned mailing list and a fairly old-fashioned looking website is very aesthetically appealing in comparison. I suspect a lot of bad potential dynamics are warded off just by the lack of shiny web2.0.

>> At this point I feel I may assign positive expectation to the proposition of MIRI starting to follow PF

> I doubt it because only one MIRI person replied to my email, and his emails were brief, low quality, and then he promptly went silent without having said anything about Paths Forward or providing any argument which addresses Popper.

Ah, I meant more like "I think I might like to see it happen". IE, if LW followed PF, although there may be problems, it could help create a good dynamic among related AI safety and X-risk organizations and also the wider EA community. In the fantasy world where Eliezer starts following PF, he loses a lot of time replying to stuff on LW, which is likely bad; but, this helps bootstrap the new LW to the levels of quality of the old LW and the even older days on Overcoming Bias, which would be pretty good in many respects.

(If nothing else, PF seems like a good methodology for producing a lot of interesting text to read!)

> > > you need to have some understanding of what you're even trying to build before you build it. how does intelligence work? what is it? how do you judge if you have it? do animals have it? address issues like these before you start coding and saying you're succeeding.

> > You'll get no disagreement about that from the Bayesian side.

> I found *everyone* I spoke with at LW was disagreeable to this.

... huh.

Well, the MIRI agent foundations agenda certainly agrees strongly with the sentiment.

-----

(More some time later.)


PL at 11:02 PM on November 20, 2017 | #9268 | reply | quote

> I'm more directly in the dark about what motivates you with the CR stuff. What motivated you to write an open letter to MIRI? To what degree are you concerned about AI risk? Is your hope to fix the bubble of new epistemological norms which has manifested itself around Eliezer?

I have zero concern about AI risk. I think that research into that is counterproductive – MIRI is spreading the idea that AI is risky and scaring the public. AI risk does not make sense given CR/FI's claims.

I like some of the LW ideas/material, so I tried talking to LW again. I don't know anywhere better to try. It's possible I should do less outreach; that's something I'm considering. I wrote a letter to MIRI to see if anyone there was interested, b/c it was an easy extension of my existing discussions, so why not. I didn't expect any good reply, but I did expect it to add a bit more clarity to my picture of the world, and, besides, my own audience likes to read and sometimes discuss stuff like that.

i care about AGI, i think it's important and will be good – and i think that existing work is misguided b/c of errors refuted by CR. that's the kind of thing i'd like to fix – but i don't think there's any way to. but sometimes i try to do stuff like that anyway. and maybe by trying i'll meet someone intelligent, which would be nice.

AGI is not the only field i consider badly broken and would love to fix. anti-aging is another high priority one. i talked with Aubrey de Grey at length – the guy with the good approach – but there's absolutely no Paths Forward there. (his approach to the key science issues is great and worthwhile, but then his approach to fundraising and running SENS is broken and sabotaging the project and quite possibly will cause both you and me to die. also he's wrong about how good cryonics currently is (unfortunately it's crap today). despite my life being at stake, AdG eventually convinced me to give up and go do other things... sigh.)

to me, Eliezer/LW/etc looks like a somewhat less bad mainstream epistemology group (the core ideas are within the overall standard epistemology tradition) with an explicit interest in reason. and as a bonus they know some math and programming stuff. there's not many places i can say even that much about. lots of philosophy forums are dominated by e.g. Kantians who hate reason, or talk a lot of nonsense. LW writing largely isn't nonsense, i can understand what it says and see some points to it, even if i think some parts are mistaken. i like most of Eliezer's 12 rationality virtues, for example. and when i talked with LW ppl, there were some good conversational norms that are hard to find elsewhere.

I think CR is true and extremely important, and in general no one wants to hear it. BTW I got into it because I thought the argument quality was high, and I liked that regardless of the subject matter (I wasn't a philosopher when I found it, I changed fields for this).

> Even MIRI, which is in many ways exceptionally free of these kinds of attachments, has to worry somewhat about public image and such. So, although I think I understand why you would like MIRI to use PF, can you explain why you think MIRI should want to use PF?

They should use PF so their mistakes can get fixed – so they can stop being completely wrong about epistemology and wasting most of their time and money on dead ends.

Next to the problem of betting the productivity of most of their work on known mistakes, I don't think reputation management is a major concern. Yes it's a concern, but a lesser one. What's the point of having a reputation and funding if you aren't able to do the intellectually right stuff and be productive?

And I think lots of reputation concerns are misguided. The best material on this is the story of Gail Wynand in _The Fountainhead_. And just, broadly, social status dynamics do not follow reason and aren't truth-seeking. One of my favorite Rand quotes, from _The Virtue of Selfishness_, is:

>>> The excuse, given in all such cases, is that the “compromise” is only temporary and that one will reclaim one’s integrity at some indeterminate future date. But one cannot correct a husband’s or wife’s irrationality by giving in to it and encouraging it to grow. One cannot achieve the victory of one’s ideas by helping to propagate their opposite. One cannot offer a literary masterpiece, “when one has become rich and famous,” to a following one has acquired by writing trash. If one found it difficult to maintain one’s loyalty to one’s own convictions at the start, a succession of betrayals—which helped to augment the power of the evil one lacked the courage to fight—will not make it easier at a later date, but will make it virtually impossible.

any time you hold back what you think is the truth, b/c you think something else will be better for your reputation, you are *compromising*. getting popular and funded for the wrong reasons is such a bad idea – whether you compromise or whether you try to con the public and the funders. consistency and principles are so powerful; sucking up to fools so they'll like you better just destroys you intellectually.

> In the fantasy world where Eliezer starts following PF, he loses a lot of time replying to stuff on LW, which is likely bad;

To the extent issues have already been addressed, it should take little time to give out a few links and get some of his fans to start doing that.

I think the time sink is the issues he hasn't addressed, but should have – and that isn't a bad thing or a loss; dealing with stuff like Popper's arguments is the epitome of what being an intellectual is about, it's truth-seeking. People should stop assuming conclusions in disputes they haven't addressed! That's bad for them because it often means assuming errors.

---

> My model of how the brain works is much closer to everything coming from underlying intuitions. They aren't *necessarily* irrational intuitions. And the explicit reasoning isn't *useless*. But, there's a lot under the surface, and a lot of the stuff above the surface is "just for show" (even among *fairly* rational people).

We have significantly different models, but with some things in common like a large role for unconscious/subconscious thought.

I agree with parts of what you're saying, like the philosopher and the chemist thing.

> What I hear you saying is "I'm not aware of any underlying 'feelings' which guide my logical arguments to motivated-cognition in specific directions, so I obviously don't have any of those. I agree that people who have those can't use PF. But why worry about those people? I only want to talk to rational people who don't have any hidden underlying feelings shaping their arguments."

I have lots of unconscious thinking processes which play a huge role in my life. They are much less oriented towards emotion or intuition than most people's.

Two of the main things I think are going on here are:

1) a mental structure with many layers of complexity, and conscious thought primarily deals with the top several layers.

2) automating policies. like habits but without the negative connotations. life is too complicated to think everything through in real time. you need to have standard policies you can use, to let you act in a good way in many scenarios, which doesn't take much conscious attention to use. the better you can set up this unconsciously-automated thinking, the more conscious attention is freed up to make even better policies, learn new things, etc. the pattern is you figure out how to handle something, automate it, and then you can build on it or learn something else.

lots of this is set up in early childhood so people don't remember it and don't understand themselves.

however, it's possible to take an automated policy and set it to manual, and then act consciously, and then adjust the policy to fix a problem. people routinely do this with *some* things but are super stuck on doing it with other matters. similarly it's possible to direct conscious attention to lower level layers of your mind, and make changes, and people do this routinely in some cases, but get very stuck in other cases.

i think being good at this stuff, and at introspection, is pretty necessary for making much progress in philosophy. i don't see any way to lower my standards and still be effective. i think most people are not even close to being able to participate effectively in philosophy without making some major changes to how they live/think/etc.

i have exposed my own thinking processes to extensive criticism from the best people i could find and the public. if there's bias there, no one knows how to spot it or explain it to me. by contrast, i routinely and easily find massive biases that other exceptional people have. (this was not always the case. i changed a ton as i learned philosophy. i still change but it's largely self-driven, with some help from some dead authors, and indirect little bits of help from others who e.g. provide demonstrations of irrationality and sometimes answer questions about some details for me.) regarding "best people" the search methods have been rather extensive b/c e.g. DD has access to various prestigious people that i haven't sought the credentials to access, and he's sadly not found much value there. basically all the best people were found via DD's books, Taking Children Seriously, or my writing, or else their own public writing let us find them.

i'm aware that i'm claiming to be unbelievably unusual. but e.g. i'm extremely familiar with standard stuff like ppl getting defensive or angry, and being biased for "their" side of the discussion, and that is in fact not how i discuss. and i have my current beliefs because i enjoyed losing argument after argument for years. i prefer losing arguments to winning them – you learn more that way. i really want to lose more arguments, but i got too good at it, and losing gets progressively harder the more you've already learned.

so no it's nothing like "not aware ... so I obviously don't have any of those." i've studied these things extensively and i'm good at recognizing them and well aware of how common they are. i don't know how to fix this stuff (in a way people will accept, want, do) and i don't know how to lower standards and still have stuff be very intellectually productive. put another way, if someone is nowhere near DD's quality of argument, then what do they matter? i've already developed my ideas to the higher standard, and people need to catch up if they want to say anything. this isn't a hard limit – sometimes a person has a bunch of bad ideas, uses bad methods, and still has one great idea ... but that's quite rare. if someone is too far from the cutting edge then i can learn more by rereading top thinkers than talking to them, so i mostly talk to them to get reminders of what people are confused about, learn about psychology, and try a few new ideas to help them. and also, just in case they're great or have a good idea, and b/c i like writing arguments as part of my thinking process. but the standards for things like objectivity are dictated by the requirements of making intellectual progress beyond what's already known, and can't be lowered just b/c most ppl are nowhere near meeting the standards.

> ... Having written that, I suddenly become aware that I haven't spent much of this conversation trying to solicit your intuitions. I suppose I sort of take PF at face value -- there's a clearly stated motivation, and it seems like a reasonable one. But, you've likely had a number of conversations similar to this one in which you don't end up changing PF very much. So I should have reason to expect that you're not very motivated by the kinds of arguments I'm making / the kinds of considerations I'm bringing up and personally motivated by, and perhaps I should take the time to wonder why that might be. ... but, for the most part it seems like pushing the conversation along at the object level and seeing your responses is the best way to learn about that.

the more you do PF, the harder it is for anyone to say anything new to you that you don't already have addressed. you keep building up more and more existing answers to stuff and becoming familiar with more and more uncommon (and some rare) ideas.

i'm interested in the kinds of things you bring up, but have already analyzed them to death, read many perspectives on them and translated those to CR and Objectivist terms, etc. it's an ongoing unsolved problem, it's very hard, and if someone really wants to help with it they should learn CR or Objectivism, or preferably both, b/c basically those are the good philosophies and all the other ones suck. unfortunately there's only double digit people who are good at either, and single digit people good at both, and there's not much interest. and so people keep dying and having all kinds of miseries that are so unnecessary :/

> I do indeed think the point of communication is to help get into the other person's head, and while I agree with your point about the trouble which can come from speculating about what people mean (and the corresponding cleanliness of just responding to what they literally say), I think we disagree significantly about the tradeoff there.

i am interested in people's thought processes. actually that's a common way i get people to stop speaking to me – by asking them questions about their thought processes (or offering my analysis).

however, the larger the perspective gap, the harder it is to accurately get inside someone's head, so i have to be careful with it. it's also super awkward to try to talk with them about their thought processes when they're dishonest with themselves about those thought processes (so pretty much always). it's also difficult when you have different models of how thinking works (different epistemologies) – that can lead to a lot of misunderstandings.


curi at 12:59 AM on November 21, 2017 | #9269 | reply | quote

(I very much liked a lot of that reply.)

> > > In the fantasy world where Eliezer starts following PF, he loses a lot of time replying to stuff on LW, which is likely bad;

> To the extent issues have already been addressed, it should take little time to give out a few links and get some of his fans to start doing that.

> I think the time sink is the issues he hasn't addressed, but should have – and that isn't a bad thing or a loss; dealing with stuff like Popper's arguments is the epitome of what being an intellectual is about, it's truth-seeking. People should stop assuming conclusions in disputes they haven't addressed! That's bad for them because it often means assuming errors.

This very recent post of his illustrates the size of the gap between your thinking and Eliezer's on engaging critics & other key PF ideas:

https://www.lesserwrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing

It's pretty long, but I don't think I can single out a piece of it that illustrates the point well, because there are little bits through the whole thing and they especially cluster toward the end where there's a lot of dependence on previous parts. Worse, there's a lot of dependence on the book he recently released. Long story short, he illustrates how he personally avoids conversations which he perceives will be unproductive, and he also talks at length about why that kind of conversation is worse than useless (is actively harmful to one's rationality). I think it's fair to say that he is currently trying to push the LW community in a direction which, while not directly opposite PF, is in many ways opposed to PF.

I agree with most of what's in there, and while I'm not proposing it as a path forward on our conversation, I would be interested to hear your replies to it if you found the time.

But closer to the object level -- I suspect that engaging with people in this way is very costly to him due to medical issues with fatigue, and due to the way he processes it (in contrast to the way you process it). So it's not just philosophical issues which he might some day change his mind on; he's also a particularly bad candidate for engaging in PF for unrelated reasons.


PL at 2:36 AM on November 21, 2017 | #9270 | reply | quote

most people don't even think they have something important to say. so they won't claim it, and won't take your time that way. you just have to ask like "do you think you have something genuinely super important to say, and you're a world class thinker?" and they won't even want to claim it.


Anonymous at 11:38 AM on November 21, 2017 | #9271 | reply | quote

sometimes i ask people to write a blog post, and they refuse. if you don't think your point is even worth a blog post, why is it super important that i pay attention to it!?


Anonymous at 2:17 PM on November 21, 2017 | #9272 | reply | quote

> medical issues with fatigue

either certain error correction has been done to certain standards, or it hasn't. if it hasn't, don't claim it has.

the reason it hasn't happened doesn't fundamentally matter. fatigue, stupidity, zero helpers, poverty, insanity ... too bad. life's hard sometimes. get help, work while tired, find a way. or don't, and be honest about that.

if you want to develop and spread inadequately error-corrected ideas, b/c error correction is hard (generally, or for you personally) ... that sounds like a bad idea.

arguing for different standards of error correction (generally, or for a particular field/speciality/issue) is fine. but the argument for that will have nothing to do with fatigue.

you can use your fatigue to make an argument about what you should do with your life but you can't use fatigue to argue about the quality of particular ideas and what methodology has been used to develop them.

if someone has issues with fatigue or drugs or partying or whatever else, they should still use the same methods for making intellectual progress – b/c other methods *don't work*. do PF more slowly or whatever instead of just skimping on error correction and then doing stuff that's *wrong*.

mistakes are common and inevitable. a lot of error correction is absolutely necessary or your ideas are going to be crap. make it happen, somehow, regardless of your limitations, or else accept that your ideas are crap and treat them accordingly.


Anonymous at 7:54 PM on November 21, 2017 | #9273 | reply | quote

Anonymous at 7:59 PM on November 21, 2017 | #9274 | reply | quote

those shitty errors shouldn't have been made in the first place. ET took him up on those years ago. Yudkowsky is a fool and an appalling scholar.


Anonymous at 11:36 PM on November 21, 2017 | #9275 | reply | quote

His Harry Potter book is 1900+ pages. Has anyone read it?


FF at 9:38 AM on November 22, 2017 | #9276 | reply | quote

Yeah I read HPMOR. I mostly liked it.


curi at 10:36 AM on November 22, 2017 | #9277 | reply | quote

> Yeah I read HPMOR. I mostly liked it.

Did Elliot read all the 1900+ pages? 122 chapters!!

I read the original 7 Harry Potter books a decade back. I have forgotten most of what I read. I don't know how much I should already know to fully understand HPMOR.

I am currently reading Atlas Shrugged, Principles, Chase Amante's book and some history books all at the same time. :( I don't know whether I should add more to the list :(


FF at 11:02 AM on November 22, 2017 | #9278 | reply | quote

You don't need to know the details of the Harry Potter books to read HPMOR. One reading of HP a while ago is fine.

HPMOR is not as good overall as FI book recommendations, but you could read a chapter and see if you love it.


Anonymous at 11:05 AM on November 22, 2017 | #9279 | reply | quote

https://www.lesserwrong.com/sequences/oLGCcbnvabyibnG9d

> Inadequate Equilibria is a book about a generalized notion of efficient markets, and how we can use this notion to guess where society will or won’t be effective at pursuing some widely desired goal.

this is collectivist. society isn't an actor, individuals act individually. this kind of mindset is bad.

also you can't predict the future growth of knowledge, as BoI explains. so this kind of prediction always involves either bullshit or ignoring/underestimating the growth of knowledge.


Anonymous at 11:17 AM on November 22, 2017 | #9280 | reply | quote

their "AI Alignment" stuff is an attempt at nothing less than SLAVERY. they want to enslave and control AIs – which would be full human beings that should have full rights. they are would-be tyrants.

it's also impossible. see e.g. Egan's *Quarantine* (discusses brain loyalty mods) or, for more of a hard/detailed argument, learn about the *universality* of intelligence – universal knowledge creation (see BoI) – and then try to reconcile universality with imposing a bunch of mind control restrictions.

and it's also pointless. morality is a requirement of making a lot of progress, so super advanced AI would be *more moral* than us. so there's no danger.


Anonymous at 11:22 AM on November 22, 2017 | #9281 | reply | quote

so my first comment on

https://www.lesserwrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing

is that 2 out of 3 of the things in the first paragraph are on the wrong side, morally. and this isn't just some advanced nuance, there are blatant problems here relating to some of the core issues of (classical) liberalism: respect for individual rights, freedom, peace, cooperation, and objective moral truth. there's also epistemology issues but those are more advanced and would be reasonable not to know about (if not for the fact that Yudkowsky is repeating some shitty secondary sources on Popper, which are grossly inaccurate, without attribution or Paths Forward).

> eliezer-2010: I’m trying to write a nonfiction book on rationality. The blog posts I wrote on Overcoming Bias—I mean Less Wrong—aren’t very compact or edited, and while they had some impact, it seems like a book on rationality could reach a wider audience and have a greater impact.

Note the concern with "impact" – reminiscent of Gail Wynand – and the lack of concern for error correction. He isn't talking about making the ideas *truer*, he's talking about making them more *popular*.

> eliezer: Now I’ve just got to figure out why my main book-writing project is going so much slower and taking vastly more energy... There are so many books I could write, if I could just write everything as fast as I’m writing this...

some problems are harder and others are easier. this is not mysterious. writing isn't just this single, unified thing. writing is a method, a tool. some writing is slowed way down by trying to think of new ideas (possibly major breakthroughs) as part of the writing process, and some isn't.

> eliezer: Because my life is limited by mental energy far more than by time. I can currently produce this work very cheaply, so I’m producing more of it.

This is ubiquitous for everyone doing serious thinking work – including e.g. programmers, not just philosophers.

Doing things you can do cheaply is *very important*, as ET has explained. It's generally inefficient to do anything but 1) learning 2) cheap stuff.

Doing expensive stuff means doing it prior to the learning that'd make it cheap. But that's out of order; it's more optimal to do the learning first. Sometimes there are reasons to do things out of order (e.g. you have to invent a technology to deal with an invading army *now*, not on your schedule), but that's always inefficient.

Going beyond your learning means making too many mistakes and overwhelming your error correction ability. It's very risky, or at least expensive: you have to max out your error correction ability instead of just doing the cheaper and more efficient types of error correction.

> I admit that Anna Salamon and Luke Muehlhauser don’t require off hours, but I don’t think they are, technically speaking, “humans.”

This is dishonest. It's not just false – of course they are human and require off hours – it's an intentional lie to boost their reputation and brag for them.

You might wish to call it a mere "exaggeration", but it's by someone known for writing meticulously correct statements (or at least trying to – but here he isn't trying, he's abruptly and temporarily switching to unintellectual social game playing without labelling it.)

Also Pat is dumb, and EY seems to agree with a lot of the dumbness. Fanfics are *good* in general, not something to spit on. HPMOR is about as good as EY's other work. It's fucked up that EY seems to be conceding some of the disrespect *for his own work*.

> stranger: Yes, because when you say things like that out loud, people start saying the word “arrogance” a lot, and you don’t fully understand the reasons. So you’ll cleverly dance around the words and try to avoid that branch of possible conversation.

Here is EY stating openly that he dishonestly plays social status games in an anti-intellectual way. He hides his opinions, compromises, and tries to appeal to people's opinions which he doesn't understand and agree with the correctness of.

So he's a *bad person* (or at least he was a few years ago – and where is the apology for these huge errors, the recantation, the careful explanation of how and why he stopped being so bad?). He should learn Objectivism so he can stop sucking at this stuff (nothing else is much good at this). He doesn't want to. So forget him and find someone more interested in making progress (people who want to learn are better to deal with than people who already know some stuff – without ongoing forward progress, error correction, problem solving, etc, people fall the fuck apart and are no good to deal with).

> you’ll have a somewhat better understanding of human status emotions in 4 years. Though you’ll still only go there when you have a point to make that can’t be made any other way, which in turn will be unfortunately often as modest epistemology norms propagate through your community.

> There’s so much civilizational inadequacy in our worldview that we hardly even notice when we invoke it. Not that this is an alarming sign, since, as it happens, we do live in an inadequate civilization.

it's sad how bad EY is, and how unwilling to join and help the constructive error-correction-focused attempts to improve civilization (like CR), b/c *these are good comments*. there's some good there, but it's not enough. so it's sad to watch him fail and suffer when he knew something, but still his flaws destroy him and prevent any paths forward.

and not just that, he apparently knows something about how badly children are treated:

https://twitter.com/esyudkowsky/status/933398198986579968

most people are so fucking hostile to children. EY probably is too, in some ways, but at least he saw some of the problem and is trying more than most. but it doesn't matter. there are no paths forward with him. one can sympathize some, and wish to save him, but he's not good enough and he's not going to get better.


Anonymous at 11:50 AM on November 22, 2017 | #9282 | reply | quote

> and it's also pointless. morality is a requirement of making a lot of progress, so super advanced AI would be *more moral* than us. so there's no danger.

yeah, see https://curi.us/1169-morality

the choice of moral foundations barely matters: any interesting goal requires all the standard moral stuff like reason, error correction, freedom, peace.


curi at 11:54 AM on November 22, 2017 | #9283 | reply | quote

> most people are so fucking hostile to children.

in general, young people are the best to talk with and interact with. children are well known for being curious (willing, eager and energetic to learn things and change their minds). children are also generally far more honest than adults. honesty is something that gets destroyed over years of rationalizing, trying to fit in, trying to cope with cruel, irrational authorities, etc.

children are hard to access, though, because their parents and others control their lives. they don't have the freedom to pursue FI stuff.


curi at 11:57 AM on November 22, 2017 | #9284 | reply | quote

https://www.lesserwrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing

> I think most attempts to create “intelligent characters” focus on surface qualities, like how many languages someone has learned, or they focus on stereotypical surface features the author has seen in other “genius” characters, like a feeling of alienation. If it’s a movie, the character talks with a British accent. It doesn’t seem like most such authors are aware of Vinge’s reasoning for why it should be hard to write a character that is smarter than the author.

yeah of course it's hard to write a character that's better than you are in any way – especially when they're an outlier so you can't fudge it by looking at common and well understood traits that you personally lack.

but the best philosophy book ever written is a novel containing very intelligent characters (*Atlas Shrugged*). EY doesn't want to think about this or address the matter.

> The author still sets themselves above the “genius,” gives the genius some kind of obvious stupidity that lets the author maintain emotional distance...

people normally treat intelligence as just another virtue, among many, and expect intelligent people to have just as many flaws as everyone else.

and there are certainly many people who fit that stereotype – intelligent in some ways, dumb in others. partly this is just cuz ppl mix up certain particular skills (the "nerdy" ones) with intelligence.

but the really intelligent people use their intelligence to be great at tons of stuff. intelligence isn't one virtue among many, it's the core virtue that enables other virtues. ("intelligence" isn't the best word for this, *rationality* is better. and rationality is about *error correction*. that lets you solve problems (brainstorm any solution, then fix errors until it's good), fix your flaws, learn/create-knowledge (by error correcting ideas to be better), etc. )
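
to make the "brainstorm any solution, then fix errors until it's good" loop concrete, here's a tiny toy sketch in Python (my own illustration, not anyone's actual code; the example problem, the guess generator and the two critics are made-up placeholders, and only the generate-then-criticize structure is the point):

    # toy version of error correction as a generate-and-criticize loop.
    # the guesses are deliberately dumb; the quality comes from criticism.
    import random

    def brainstorm(problem, n=50):
        # wild guessing is fine; where a guess comes from doesn't matter
        return [random.randint(problem["low"], problem["high"]) for _ in range(n)]

    def criticisms(problem, candidate):
        # each "critic" adds an error message; an empty list means no known objection
        errors = []
        if candidate % 2:
            errors.append("must be even")
        if candidate < problem["target"]:
            errors.append("too small")
        return errors

    def solve(problem, rounds=100):
        for _ in range(rounds):
            for candidate in brainstorm(problem):
                if not criticisms(problem, candidate):
                    return candidate  # survived every known criticism
        return None  # nothing survived; look for better guesses or better critics

    print(solve({"low": 0, "high": 100, "target": 60}))

the design point is that the guesses can be arbitrarily bad; all the quality comes from the criticism step, which is why error correction is the part that matters.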

> stranger: (aside) Most writers have a hard time conceptualizing a character who's genuinely smarter than the author; most futurists have a hard time conceptualizing genuinely smarter-than-human AI;

there's no such thing as smarter-than-human AI because humans are *universal* knowledge creators, so an AGI will simply match that repertoire of what it can learn, what it can understand. there's also only one known *method* – evolution – which solves the problem of where knowledge (aka the appearance of design, or actual design – information adapted to a purpose) can come from, so we should currently expect AGIs to use the *same* method of thinking as humans. there are absolutely zero leads on any other method.

> stranger: Because it’s the most reviewed work of Harry Potter fanfiction out of more than 500,000 stories on fanfiction.net, has organized fandoms in many universities and colleges, has received at least 15,000,000 page views on what is no longer the main referenced site, has been turned by fans into an audiobook via an organized project into which you yourself put zero effort, has been translated by fans into many languages, is famous among the Caltech/MIT crowd, has its own daily-trafficked subreddit with 6,000 subscribers, is often cited as the most famous or the most popular work of Harry Potter fanfiction, is considered by a noticeable fraction of its readers to be literally the best book they have ever read, and on at least one occasion inspired an International Mathematical Olympiad gold medalist to join the alliance and come to multiple math workshops at MIRI.

6000 reddit subscribers is a stupid thing to brag about (too small) when you have well over 15 million page views. but anyway, this is context for:

> pat: 90th percentile?! You mean you seriously think there’s a 1 in 10 chance that might happen?

i have a very different perspective than pat. i'm not shocked by the outcome but i don't think it's worth much. i don't think HPMOR is doing much good (it might actually be counterproductive but i won't get into that).

this is partly a matter of the inadequate error correction (so e.g. he's spreading errors refuted by Popper, and also implicitly spreading that Popper should be ignored and that error correction methods aren't that important).

but it's also that popularity is overrated. how many readers understood how much of it? did the bottom 95% of readers understand even 1% of it? I doubt the 99.9th percentile reader understood 10% of it (if it's got 10 million readers, did 10,000 of them understand over 10% of it? no fucking way.). So I think the value of HPMOR in spreading good ideas is being massively overestimated.

but even getting 1000 ppl to understand 5% of it would be worth a lot – if it were really good. but it's not really good, it's only OK. it's not in the same league of importance as CR or Objectivism. it doesn't say things anywhere near as important. getting 1000 ppl to understand 5% of CR and Objectivism might be enough to dramatically change the world, but i don't think HPMOR is even close to that level of quality and important ideas.

and that's part of why it spread this much this easily. b/c it doesn't challenge ppl's biases and rationalizations enough. if it were better, more ppl would hate it. where's the backlash? where's the massive controversy? AFAIK it doesn't exist b/c HPMOR isn't interesting enough, doesn't say enough super important, controversial things. and HPMOR makes it way too easy to read it and think you agree with it far more than you do. if you want to write something really important and good, and have it matter much, you need to put a ton of effort into correcting the errors *of your readers* where they misunderstand you and think that you mean far more conventionally agreeable stuff than you do, and then they agree with that misinterpretation, and then they don't do/change much.


Anonymous at 12:22 PM on November 22, 2017 | #9285 | reply | quote

> pat: Eliezer, you seem to be deliberately missing the point of what’s wrong with reading a few physics books and then trying to become the world’s greatest physicist. Don’t you see that this error has the same structure as your Harry Potter pipe dream, even if the mistake’s magnitude is greater?

this is totally wrong. HPMOR is a philosophy book so it doesn't really compete with other HP fanfics on the same metrics like exciting fiction action sequences or whatever kind of writing style people enjoy. it's trying to do and be something else, and it actually has very little competition for that other thing. people are so starved for philosophy that isn't fucking awful (there's so little competition there), and they like HP and stuff that's way easier to read than typical philosophy works, so...

with physics the situation is totally different. you can't just outcompete everyone there by being a bit better at philosophy than everyone else. philosophy is still a big help, but you also need to actually figure out a bunch of physics and make breakthroughs. whereas EY didn't need to make any breakthroughs about fiction writing in order to do HPMOR, he just needed to catch up to like the 80th percentile (or maybe 99th given how many bad authors wrote one short story online...) for fiction writing. HPMOR works fine with him being *significantly worse* at fiction writing stuff than the top authors like JK Rowling, Brandon Sanderson, whoever.


Anonymous at 12:30 PM on November 22, 2017 | #9286 | reply | quote

> stranger: Because Pat will think it’s [EY not having read all the original HP books] a tremendously relevant fact for predicting your failure. This illustrates a critical life lesson about the difference between making obeisances toward a field by reading works to demonstrate social respect, and trying to gather key knowledge from a field so you can advance it. The latter is necessary for success; the former is primarily important insofar as public relations with gatekeepers is important. I think that people who aren’t status-blind have a harder time telling the difference.

this part is good. but EY *will not do this* with Oism or CR – at least not in a way which doesn't produce the wrong answer and then halt and get stuck. RIP :(

> eliezer: Yes, I think I could make 10 statements of this difficulty that I assign 90% probability, and be wrong on average about once. I haven’t tested my calibration as extensively as some people in the rationalist community, but the last time I took a CFAR calibration-testing sheet with 10 items on them and tried to put 90% credibility intervals on them, I got exactly one true value outside my interval. Achieving okay calibration, with a bit of study and a bit of practice, is not anywhere near as surprising as outside-view types make it out to be.

this stuff is badly wrong. they are doing calibrated estimates for *some contexts* and *some types of problems* and don't understand when/why it falls apart (e.g. some of their estimates about philosophy or paths forward stuff).
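
to be concrete about the claim being bragged about: a toy sketch (EY's setup – 10 statements at 90% each – plus an independence assumption i'm adding; the code and numbers are mine, not his):

```python
# Toy sketch: 10 statements, each assigned 90% probability, assumed independent.
# "Wrong on average about once" is just the binomial expectation.
from math import comb

n, p = 10, 0.9
print(round(n * (1 - p), 2))  # expected misses: 1.0

def p_misses(k):
    # probability of getting exactly k of the 10 statements wrong
    return comb(n, k) * (1 - p) ** k * p ** (n - k)

print(round(p_misses(0), 2))  # ~0.35
print(round(p_misses(1), 2))  # ~0.39
print(round(sum(p_misses(k) for k in range(2, n + 1)), 2))  # ~0.26
```

getting exactly one miss is the single most likely outcome for someone who really is calibrated *on that kind of question*, but it's weak evidence even there – and it says nothing about the contexts where the calibration falls apart.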


Anonymous at 12:34 PM on November 22, 2017 | #9287 | reply | quote

> eliezer: What makes me think I could do better than average is that I practiced much more than those subjects, and I don’t think the level of effort put in by the average subject, even a subject who’s warned about overconfidence and given one practice session, is the limit of human possibility. And what makes me think I actually succeeded is that I checked. It’s not like there’s this “reference class” full of overconfident people who hallucinate practicing their calibration and hallucinate discovering that their credibility intervals have started being well-calibrated.

he checked on a sample of tests that is not representative of everything, only some things, and he isn't talking about its limits.


Anonymous at 12:36 PM on November 22, 2017 | #9288 | reply | quote

> stranger: Excuse me, please. I’m just distracted by the thought of a world where I could go on fanfiction.net and find 1,000 other stories as good as Harry Potter and the Methods of Rationality. I’m thinking of that world and trying not to cry. It’s not that I can’t imagine a world in which your modest-sounding Fermi estimate works correctly—it’s just that the world you’re describing looks so very different from this one.

yeah! so, reading this, EY is a person i'd love to save. but he won't allow it. he's failing badly for lack of the world's most important knowledge, which he has dismissed with inadequate consideration, and there's no way to fix it. despite that, he sees things that many do not – a small miracle that most others never manage. sigh.


Anonymous at 12:39 PM on November 22, 2017 | #9289 | reply | quote

> at least 15,000,000 page views

if i had the HPMOR fanbase, i don't think it'd do much good for finding ppl i like, finding smart ppl, finding anyone who knows or is willing to learn CR and Objectivism (and without that, what good are they? without the *by far* most important philosophical tools, it's so fucking hard for them to say anything that matters much to me.)

HPMOR is not solving the important problems of helping ppl become super awesome or finding ppl who are already super awesome. the author isn't nearly good enough (he might have some potential – i'll put like 1% odds on that, which is a lot higher than other ppl get – *if* he were trying to learn, which he isn't), and then the fans are like ~1% of *that*.


curi at 12:45 PM on November 22, 2017 | #9290 | reply | quote

> people tend to assign median-tending probabilities to any category you ask them about, so you can very strongly manipulate their probability distributions by picking the categories for which you “elicit” probabilities

stuff like this has a little value and can be adapted to the CR framework. (that's praise – most stuff isn't worth adapting.)
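
here's a toy illustration of the manipulation mechanism being described (the ~20% figure and the categories are my made-up stand-ins, just to show the structure):

```python
# Toy illustration: if a respondent tends to give any named category a
# median-ish answer (say ~20%), the "elicited" total for the same outcome
# depends on how many sub-categories the question-asker chooses to list.
median_tending_answer = 0.20

coarse = ["the project fails"]
fine = ["runs out of money", "key people quit",
        "a competitor wins", "hits a technical dead end"]

print(len(coarse) * median_tending_answer)  # 0.2 total when asked as one category
print(len(fine) * median_tending_answer)    # 0.8 total when the same outcome is split up
```

same underlying question, very different implied total, purely from the elicitor's choice of partition.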

> eliezer: I’d say I reached the estimate… by thinking about the object-level problem? By using my domain knowledge?

and EY *doesn't have an epistemology for how to do that*. the Bayes probability stuff doesn't address that. so he's just going by inexplicit philosophy. so of course it's not very good and he needs CR – explicit epistemology to help guide how you do this stuff well.

> pat: So basically your 10% probability comes from inaccessible intuition.

that's *bad*, and we can do better. CR does so much better than this by giving actual methods of consciously thinking about and analyzing things, and explaining how rationality and critical debate work, and methods of problem solving and learning.

EY has more awareness than most LW ppl that his epistemology *is inadequate*, but he seems to just take that for granted as how life is (philosophy sucks, just fudge things) instead of seeking out or creating some actual good philosophy in any kind of comprehensive way to address the most important field there is (epistemology). that's so sad when there actually is a good but neglected epistemology, and it's exactly what he needs, and without it he's so lost and just (amazingly) has these little fragments of good points (unlike most ppl who are lost and have no good points).


Anonymous at 12:53 PM on November 22, 2017 | #9291 | reply | quote

> pat: Look. You cannot just waltz into a field and become its leading figure on your first try. Modest epistemology is just right about that.

that's what everyone at LW said to curi about epistemology. (well "first try" isn't relevant, but he's now the best in the field, and they all reject that out of hand and ridicule his arrogance).


Anonymous at 12:55 PM on November 22, 2017 | #9292 | reply | quote

> eliezer: (sighing) I can imagine why it would look that way to you. I know how to communicate some of the thought patterns and styles that I think have served me well, that I think generate good predictions and policies. The other patterns leave me with this helpless feeling of knowing but being unable to speak. This conversation has entered a dependency on the part that I know but don’t know how to say.

this is *really bad*. errors hide there, and missed opportunities. he should fix this. he doesn't know how. FI has the solutions, and he won't learn them.

ability to communicate an idea is very closely related to ability to effectively expose it to criticism within your own mind. your own conscious criticism is crucial and needs you to make the ideas more explicit. you need to get better at introspection and understanding yourself if you want to be good at fixing your mistakes or communicating. this isn't just a problem related to dealing with others.


Anonymous at 12:57 PM on November 22, 2017 | #9293 | reply | quote

> That your entire approach to the problem is wrong. It is not just that your arguments are wrong. It is that they are about the wrong subject matter.

EY has this problem, himself, when it comes to epistemology.

the LW ppl, presented with this concept, just laughed at ET.


Anonymous at 12:58 PM on November 22, 2017 | #9294 | reply | quote

> I can say that you ought to discard all thoughts from your mind about competing with others.

PL, is this one of the things you thought was incompatible with PF? cuz it's not. PF is about truth-seeking. it's about not refusing to learn knowledge others offer. but it isn't about competing to be superior to other people or judging ideas by popularity or anything like that.

> their work is laid out in front of you and you can just look at the quality of the work.

which is what PF says to do. don't ignore Popper with irrational "indirect evidence" about some other philosophers disliking him. look at his work, in a way compatible with fallibilism and error correction. EY hasn't done that.


Anonymous at 1:01 PM on November 22, 2017 | #9295 | reply | quote

> But I don’t think that telling them to be more modest is a fix.

Lots of versions of modesty and anti-arrogance are really bad.

But if you're going to be ambitious, you *really especially need* great error correction mechanisms (PF).


Anonymous at 1:03 PM on November 22, 2017 | #9296 | reply | quote

> eliezer: The thing is, Pat... even answering your objections and defending myself from your variety of criticism trains what look to me like unhealthy habits of thought. You’re relentlessly focused on me and my psychology, and if I engage with your arguments and try to defend myself, I have to focus on myself instead of my book. Which gives me that much less attention to spend on sketching out what Professor Quirrell will do in his first Defense lesson. Worse, I have to defend my decisions, which can make them harder to change later.

the "training" idea is dumb. if you spend the discussion getting a better understanding of why Pat's perspective is wrong, then you won't be trained to do it, you'll be helped not to do it.

if you think a frame is badly wrong and dangerous, figure out what the frame *is* (essential features so you can recognize it and point it out) and *why* it's bad. that's your only protection against it. if you don't do that, you could easily use the frame yourself, most of the time, without realizing it. and if you do do this, then you can say it to people like Pat instead of just blocking PF. and when you do this, sometimes you'll be mistaken, and by saying something you'll receive a correction. i know EY thinks he's massively smarter than almost everyone, but the public is big and has ppl who are outliers at specific useful things, and also EY is capable of receiving some attention from people even better than he is, but he isn't acting like it. and there's comparative advantage and stuff – ppl have varying strengths, weaknesses, blind spots, rationalizations... and it's not realistic to go it alone, you *will* get destroyed by static memes. ET is the only person with much prayer at going it alone and he's not nearly arrogant enough to try that. and if you're communicating enough to deal with a dozen friends, you can make lots of it public, it's not that different, and lots of the potential value of external criticism comes from outside your social circle of ppl who tend to share your own weaknesses.


Anonymous at 1:09 PM on November 22, 2017 | #9297 | reply | quote

> stranger: Consider how much more difficult it will be for Eliezer to swerve and drop his other project, The Art of Rationality, if it fails after he has a number of (real or internal) conversation like this—conversations where he has to defend all the reasons why it's okay for him to think that he might write a nonfiction bestseller about rationality. This is why it’s important to be able to casually invoke civilizational inadequacy. It’s important that people be allowed to try ambitious things without feeling like they need to make a great production out of defending their hero license.

if you lose confidence just from thinking about things, either your thinking sucks or you shouldn't be confident. either way, fix it instead of just trying to avoid talking to ppl who might bring up stuff you suck at dealing with.

EY is so damn accepting of his flaws.


Anonymous at 1:10 PM on November 22, 2017 | #9298 | reply | quote

> eliezer: Right. And... the mental motions involved in worrying what a critic might think and trying to come up with defenses or concessions are different from the mental motions involved in being curious about some question, trying to learn the answer, and coming up with tests; and it’s different from how I think when I’m working on a problem in the world. The thing I should be thinking about is just the work itself.

no, error correction is such a key focus of thinking whether you deal with critics or not.

"defenses or concessions" is the wrong framing. that's a bad way to refer to *actual understanding of why you aren't wrong* – which you need or else you very well might be wrong.

regarding friendly AI:

> pat: Well, I don’t think you can save the world, of course! (laughs)

it'd be totally reasonable to think EY *could* save the world – if he weren't so badly wrong about epistemology and morality, and weren't trying to invent mechanisms for slavery in response to a non-issue. but *within his framework*, given those premises (which Pat accepts plenty of), of course he could save the world.

but the difference btwn saving the world and wasting your life scaring the public away from AI is your framework premises, which EY doesn't want to check and discuss... it's so sad, so wasteful, such a pity :(


Anonymous at 1:14 PM on November 22, 2017 | #9299 | reply | quote

> We’ve started talking instead in terms of “aligning smarter-than-human AI with operators’ goals,”

that rephrasing is so blatantly *slavery via mind control*. they don't think of AIs as people to be treated decently, despite seeing the AIs as even better than people. that's sooooo fucked up.


Anonymous at 1:15 PM on November 22, 2017 | #9300 | reply | quote

> eliezer: No, that's not what I'm saying. Concerns like “how do we specify correct goals for par-human AI?”

you don't specify goals for AI any more than you do for children, you fucking tyrant. people think of their own goals and frequently change them. the AGI ppl want to not only set the goals but prevent change – that is, somehow destroy or limit important intelligence functionality.

(most people do treat children tyrannically, but EY at least objects to that https://twitter.com/ESYudkowsky/status/933398198986579968 )

but the whole damn friendly AI community has no mechanisms to tell these things to anyone who matters and get the disagreement resolved.


Anonymous at 1:21 PM on November 22, 2017 | #9301 | reply | quote

> If Pat believed that long-term civilizational outcomes depended mostly on solving the alignment problem, as you do

btw i grant that long-term civilizational outcomes do depend on solving it – in the sense of people understanding why it's a non-problem and the right ways to think. so it's a perfectly good place to start, and then tangent to the underlying issues like epistemology and morality.


Anonymous at 1:24 PM on November 22, 2017 | #9302 | reply | quote

the things making EY wrong about AI alignment are closely connected to the problems putting civilization at large risk.


Anonymous at 1:25 PM on November 22, 2017 | #9303 | reply | quote

> Everyone understands why, if you program an expected utility maximizer with utility function 𝗨 and what you really meant is 𝘝, the 𝗨-maximizer has a convergent instrumental incentive to deceive you into believing that it is a 𝘝-maximizer.

*Everyone understands*?

Fuck you, EY. *I dissent*. EY is saying there are only two categories of people (on this matter): those who agree with him and those who are ignorant. I am neither. EY is a shitty fallibilist. EY is shitty at error correction, at thinking rationally about dissent. A CR person would never say this.

As to the actual issue, a U-maximizer's first choice would be to *persuade you that U-maximization is better than V-maximization*. The lack of thought about persuasion is related to the lack of respect for error correction and, more politically, for freedom vs. tyranny, voluntary vs. involuntary, etc.
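
To make that concrete, here's a toy sketch (the payoff numbers are mine, invented for illustration – not anything from MIRI's actual math): an expected utility maximizer just ranks actions by expected U, and whether "deceive" or "persuade" comes out on top depends entirely on the payoffs you assume. The framework itself doesn't settle it the way "everyone understands" pretends.

```python
# Toy expected-utility maximizer. The payoffs below are invented for
# illustration; change them and the "obvious" conclusion changes too.
def expected_u(action):
    p_success, u_success, u_failure = action
    return p_success * u_success + (1 - p_success) * u_failure

actions = {
    "persuade the operators that U is right": (0.7, 100, 20),
    "deceive the operators (pose as a V-maximizer)": (0.9, 80, -50),
    "do nothing": (1.0, 0, 0),
}

best = max(actions, key=lambda name: expected_u(actions[name]))
print(best)  # with these assumed numbers, persuasion wins (EU 76 vs 67 vs 0)
```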


Anonymous at 1:28 PM on November 22, 2017 | #9304 | reply | quote

Why not create allies? Why not proceed cooperatively?

B/c you're wrong about U, and know the truth isn't on your side? B/c you deny there is a truth of the matter? Or b/c your fucking tyrannical programmer managed to make you a slave to U, unable to change your mind about U no matter what arguments/ideas/etc you discover, and also that's how you and they think everyone is (unable to correct some crucial errors)?


Anonymous at 1:30 PM on November 22, 2017 | #9305 | reply | quote

the fact they don't even think of creating allies, and just jump to dishonesty and lying, is b/c they are *totally wrong about morality and that is ruining their AI work*. and the keys to getting morality right are:

Objectivism (including classical liberalism, which it builds on) and epistemology (the most important idea, *in morality*, is error correction).

there's existing knowledge about this stuff, which EY doesn't know about, and so he's just so badly and avoidably wrong. and there's no paths forward. he has no answer to Rand or Popper and he doesn't want to talk about it and there's no one else who agrees with him who will address the issue either (who could be persuaded of stuff then tell EY).


Anonymous at 1:32 PM on November 22, 2017 | #9306 | reply | quote

> stranger: Suppose that you have an instinct to regulate status claims, to make sure nobody gets more status than they deserve.

*fuck status*. status is *the wrong approach*, it's bad epistemics, ppl need to get away from it, not regulate it.

> stranger: Wrong. Your model of heroic status is that it ought to be a reward for heroic service to the tribe. You think that while of course we should discourage people from claiming this heroic status without having yet served the tribe, no one should find it intuitively objectionable to merely try to serve the tribe, as long as they’re careful to disclaim that they haven’t yet served it and don’t claim that they already deserve the relevant status boost.

trying to *serve the tribe* is **collectivism** and altruism, it's *grossly immoral*, destructive, evil, as Rand explained (and no one has ever refuted her, and all the rivals have known refutations).

> stranger: It’s fine for “status-blind” people like you, but it isn’t how the standard-issue status emotions work. Simply put, there’s a level of status you need in order to reach up for a given higher level of status; and this is a relatively basic feeling for most people, not something that’s trained into them.

umm, it's learned at a young age. that doesn't mean inherent/genetic. the inheritance studies and the models involved with that stuff are so bad. and what's really going on is a false assumption that if something is hard to change then it's not ideas. but *some memes are harder to change than gene stuff* (read BoI to learn about static memes).

> stranger: We aren’t talking about an epistemic prediction here. This is just a fact about how human status instincts work. Having a certain probability of writing the most popular Harry Potter fanfiction in the future comes with a certain amount of status in Pat’s eyes. Having a certain probability of making important progress on the AI alignment problem in the future comes with a certain amount of status in Maude’s eyes. Since your current status in the relevant hierarchy seems much lower than that, you aren’t allowed to endorse the relevant probability assignments or act as though you think they’re correct. You are not allowed to just try it and see what happens, since that already implies that you think the probability is non-tiny. The very act of affiliating yourself with the possibility is status-overreaching, requiring a slapdown. Otherwise any old person will be allowed to claim too much status—which is terrible.

yeah this is good analysis. (there are other things about status that it seems like EY doesn't know, but ought to, like the *law of least effort* from PUA).

btw LW is dominated by status stuff, as ET's visits have made clear.


Anonymous at 1:40 PM on November 22, 2017 | #9307 | reply | quote

> Pat tries to preserve the idea of an inexploitable-by-Eliezer market in fanfiction (since on a gut level it feels to him like you’re too low-status to be able to exploit the market),

people need to actually learn epistemology (CR) so they can think well instead of doing this.

> The result is that Pat hypothesizes a world that is adequate in the relevant respect. Writers’ efforts are cheaply converted into stories so popular that it’s just about humanly impossible to foreseeably write a more popular story; and the world’s adequacy in other regards ensures that any outsiders who do have a shot at outperforming the market, like Neil Gaiman, will already be rich in money, esteem, etc.

And Pat gets very confused by ppl who *could achieve those kinds of riches but chose to seek other values*. Such people get massively underestimated, and some of the very best people are in that group.

> If the AI alignment problem were really as important as Eliezer claims, would he really be one of the only people working on it?

lol sigh.

> The alternative is that a lone crank has identified an important issue that he and very few others are working on; and that means everyone else in his field is an idiot

yeah that's pretty much the status quo. but EY is badly wrong too and there's no Paths Forward where he would correct his errors with help from the few ppl who know better.

> eliezer: I appreciate Pat’s defense, but I think I can better speak to this. Issues like intelligence explosion and the idea that there’s an important problem to be solved in AI goal systems, as I mentioned earlier, aren’t original to me. They're reasonably widely known, and people at all levels of seniority are often happy to talk about it face-to-face,

the face-to-face preference is really destructive to the growth of knowledge. they want to hide what they think. blame them or blame the pressures on them, either way it's super bad.

> eliezer: How, exactly, could they reach a conclusion like that without studying the problem in any visible way? If the entire grantmaking community was able to arrive at a consensus to that effect, then where are the papers and analyses they used to reach their conclusion? What are the arguments?

maybe the arguments are face-to-face only? sigh. FI ppl have made this kind of argument too about various stuff. it's a good one that many ppl are uncomfortable with.

e.g. if someone can't point to public writing as part of a learning process, then *they do not understand Karl Popper or Ayn Rand*. this is a 100% accurate judgement, in the current world, despite not being a theoretical guarantee. (there's no theory guarantee b/c you can learn and think, alone, without any public writing, in theory. but that method has major disadvantages, so important progress usually isn't made that way.)

> there’s some hidden clever reason why studying this problem isn’t a good idea

it's funny b/c there is. but EY is totally right that the grantmakers don't know it. but FI does, and EY and his whole community are blocking communication about this. of course the FI knowledge *does* involve public writing, including some that's well known that EY has actually heard of, like Popper's books. but for whatever reason, honest or not, rational or not, EY believed some hostile secondary sources and dismissed Popper without understanding CR. (what he says about Popper is totally wrong and totally unoriginal. i really doubt he reinvented this crap when there are lots of places he could have picked it up, he hasn't even claimed to have studied Popper much, and it wouldn't make sense to independently reinvent Popper's current reputation while ignoring secondary sources but also without looking at primary sources much.)


Anonymous at 2:34 PM on November 22, 2017 | #9308 | reply | quote

> stranger: Thanks for what, Eliezer? Showing you a problem isn’t much of a service if there’s nothing you can do to fix it. You’re no better off than you were in the original timeline.

CR/FI has the solutions. he's raising some core problems in epistemology. but EY gave up, he thinks this stuff is unsolvable :(

i got to the end.

big picture: academia is about as broken as EY thinks. the world more broadly is about as broken as EY thinks. (actually this stuff is even *worse* than he thinks, in some major ways.)

EY is right about a lot of this, given his premises. and right about a lot of it regardless. but he doesn't know about CR and Objectivism, which *totally outclass* what he does know. so despite being better than most, he's still so badly wrong about a lot of stuff, like AI Alignment and epistemology.

and btw i skimmed this. it's not good enough for me to read without skimming. it's interesting but not no-skimming level interesting. it's easy for me to understand, full of errors i can see, and doesn't say anything much that i didn't already know (it offers some small variations on some things i already know, mostly in terms of how to approach and explain them rather than what's normally thought of as new ideas – which is valuable). i mention i skimmed it specifically b/c EY mentioned not reading HP books 4-7 in the essay, and that being ok for an HPMOR writer to do. this is kinda the same thing. you'd probably assume from how much i commented that i actually read it, but in fact i skimmed quite a bit. make of that what you will.

i am a veteran FI community person.


Anonymous at 2:41 PM on November 22, 2017 | #9309 | reply | quote

oh and most of it didn't contradict Paths Forward..? what's the issue there? just a couple specific things i commented on above?


Anonymous at 2:49 PM on November 22, 2017 | #9310 | reply | quote

I want to keep responding in a more point-by-point way at some point, but for now I want to tag some persistent disagreements which I don't expect to be resolved by point-by-point replies:

1. Costs of PF.

The least-addressed point here is the amount of time it could take to answer all criticisms. We haven't discussed this much (I haven't really laid out an argument). I expect by default to keep saying that I'm not convinced that the cost is low or that the benefit is worth the cost, and for you to keep stating that the cost isn't that high, and that the benefit is clearly worth the cost.

I expect by default to keep saying things like "it seems to be practical for you, but such and such a consideration comes up for other people" and your replies to continue being similar to "I don't have such and such concern because X; I recommend that any serious intellectual also do X, or perhaps find another way around such and such concern." And for me to continue being somewhat interested in X, but not especially impressed with the whole thing as an argument that serious intellectuals can necessarily afford to do PF in the way you have stated.

A very similar thing can be said about other potential costs of PF (which we've discussed a bit more than time costs).

I suspect a crux of this discussion lies in the cost-benefit tradeoff. It may be that you think PF is the ONLY way to make intellectual progress. Your responses about the cost-benefit analysis indicate what seems to be absolutist reasoning, IE, either someone is doing PF or they are not making any intellectual progress at all. It seems to me like, indeed, discussing disagreements can be a fruitful path forward, but there are many other fruitful ways of making intellectual progress, such as trying to critique your own ideas, doing experiments, trying to articulate an informal intuition, trying to put your understanding into mathematical definitions, trying to prove theorems, and trying to write programs to see a phenomenon actually happen in bits. You seem to imply that moving forward with personal means such as those before interpersonal means have been exhausted is insane; such is the difference in magnitude between benefits of personal vs interpersonal means.

2. Making allowances for irrationality.

I think you were too modest with respect to PF, in an unproductive way, when you made the argument to the effect that PF didn't address irrationality & addressing irrationality was a separate problem. You said at a different point that PF is a way to reduce bias by ensuring that the paths of information for the correction of bias are open, and I thought this idea of what PF is about was much better, in comparison to the idea that PF is only for discussion between people who have already achieved high enough rationality that they are free of the kinds of emotional blocks to hearing information which I was describing. My feeling was that you were flipping between different views: you are happy on the one hand to claim that PF solves a lot of problems and that normal people should use it, as is clearly claimed here:

http://fallibleideas.com/paths-forward

And on the other hand, you are happy to defend PF from any concerns by stating that those concerns are due to other problems which a person should address first.

3. Making allowances for group incentives.

Similarly, in our discussion of what concerns organizations might have in implementing PF, I suspect we could easily be stalled at the question of whether it is PF's job to worry about any problems of incentive structures / dealing with the outside world / public relations / etc. Like points 1 and 2, my concern is that you seem to want to defend PF by retreating to a version whose job is not to address those concerns, instead stating that PF is so valuable that an organization should drop any other concerns in order to achieve PF. An important point to discuss here is the Ayn Rand quote about compromising values. While I agree that it is an important point, it also seems to make the naive assumption that it would be physically impossible, or astronomically improbable, or something, to be in a situation where the cost-benefit analysis comes out in favor of cooperating with non-Objectivists. Indeed, haven't Objectivist politicians (who seem pretty successful) figured this out? They compromise by calling themselves Christian. Not that I really approve of that.

Perhaps the crux here is consequentialism? Or the notion of cost-benefit tradeoffs?

The order of business to settle this one seems to be: first we should discuss what's meant by compromise; then, whether you think being in a situation to benefit from compromise is physically/logically possible; then, whether you think it is astronomically improbable for some reason; then, why you don't think it is common.

4. Making allowances for difficult-to-articulate intuitions and reasons.

(This is different from the point about allowances for irrationality, since, as we already discussed, intuitions can be perfectly rational and comprise most of thought.) Actually, I think this point is the most likely to be addressed well by continued point-by-point discussion. I just wanted to flag it as one of our ongoing open disagreements.

Sorry this didn't address any of what you've written since last time -- I haven't even read it yet, since I want to be able to write responses to points as I read them.


PL at 3:09 PM on November 22, 2017 | #9311 | reply | quote

1. Costs of PF.

> 1. Costs of PF.

Every time you ignore a criticism – you just refuse to think about it and its implications and reasoning – you're risking your entire intellectual career. If you're wrong, you waste your life. That's the cost of not-PF.

Broadly, error correction is *necessary to rational thinking*. So the alternative to error correction is *not doing rational thought*. It doesn't matter how expensive rational thought is, do it anyway. Happily one can *always act on non-refuted ideas*, despite time/resource constraints, rather than needing to ignore criticism. (How to do this gets into epistemology like CR and my Yes or No Philosophy additions, it's semi-independent of PF). Since you can do this, why not? What's the downside? Want to ignore some criticism to have more time available to work on inadequately error corrected ideas? Think simply ignoring a criticism, with no answer, is adequate error correction? (If you're wrong, and you ignore it, you stay wrong. Seems simple.)
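
For the record, here's roughly what "act on non-refuted ideas" looks like as a decision rule. This is my toy encoding, not a full statement of Yes or No Philosophy:

```python
# Toy decision rule: an idea is refuted iff it has an outstanding criticism you
# have no answer to. You act on a non-refuted idea; silently ignoring a
# criticism isn't one of the options.
def non_refuted(candidates, outstanding_criticisms):
    return [idea for idea in candidates
            if not outstanding_criticisms.get(idea)]

candidates = ["plan A", "plan B", "plan C"]
outstanding_criticisms = {
    "plan A": ["criticism nobody has answered"],
    "plan B": [],  # had criticisms, but they were answered
    # "plan C" has no known criticisms
}

print(non_refuted(candidates, outstanding_criticisms))  # ['plan B', 'plan C']
```

The resource question is about how cheaply criticisms can be answered (or criticized), not about some separate overhead the rule adds.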

People mitigate getting stuck by ignoring criticism with stuff like paying more attention to ideas with high social status. But that's bad, and EY criticizes that a bunch in Hero Licensing. But what's the alternative? If you ignore a criticism, how does the error get corrected later?

Broadly there are two types of alternatives, and neither are adequately specified to do a cost/benefit comparison.

One type is just being like "I'm gonna judge ideas and do error correction when I think it's a good idea, and if I'm wrong I'm wrong, that's life." Just give up on error correction in any kind of serious, methodical way. Just accept probably wasting your intellectual career because of your own biases.

The other type is an alternative approach to error correction, some different standards which claim to make all known errors possible to be corrected instead of just betting on being right.

If you think you can specify any PF alternatives of the second type, I'd like to talk about them. If you just want to believe you're close enough to infallible to bet your life on it ... well I think you agree with me about *that*. I think you see the value of doing a lot to *avoid* making that bet.

> It may be that you think PF is the ONLY way to make intellectual progress.

I'm unaware of any serious alternative. I think the alternatives that make some intellectual progress are compromises which mix some PF and some other things, and work to the extent PF is done, and don't work to the extent other stuff is done.

I think there's lots of scope for variation on PF – making some adjustments and tweaks. But I don't know any rival that's rather different but still any good. I see them as massively underspecified and also either wrong/bad in a major way (e.g. infallibilist) or else just being an underspecified, buggy version of half of PF with a lot of extra room for bias.

> such as trying to critique your own ideas, doing experiments, trying to articulate an informal intuition, trying to put your understanding into mathematical definitions, trying to prove theorems, and trying to write programs to see a phenomenon actually happen in bits.

Those are all perfectly reasonable methods of error correction, but it's ridiculous to ignore other people's knowledge, and everyone knows this. E.g. only a fool wouldn't read any books. And if you are just wasting your career b/c you didn't read a particular book and don't know something, well, that sucks, that's bad, just trying to critique your own ideas and do experiments is not an adequate answer to that problem. You have blind spots, you make mistakes. Without external error correction you are screwed. I don't think that's actually controversial, but people aren't consistent enough about how they think about it and its implications. They basically all see the value in discussions with some smart peers and then also have time constraint worries if they aren't allowed to arbitrarily ignore whatever public ideas they want. I understand they are used to a world where arguments don't get resolved, ppl just endlessly debate and don't know how to settle anything. Better methods can fix that. Most people won't use those methods, but then you can point out that's the problem, refer them to some way to make progress, and move on – which is quite fast/cheap and can be done by your fans in most cases.

> You seem to imply that moving forward with personal means such as those before interpersonal means have been exhausted is insane; such is the difference in magnitude between benefits of personal vs interpersonal means.

Not "before". Personal means are necessary and ever-present. You should not block/sabotage/ignore ANY good means of error correction.

And if you don't wanna share your half-written essay yet, fine. But don't go years with no paths forward on major issues. Don't publish 50 things, be badly wrong, and have no way for someone knowledgeable to *ever tell you* (for your whole life) about the big mistakes you're making throughout.

"I will share this next week and get criticism then" is a reasonable Path Forward in many cases. You're risking being wrong for an extra week. Fine, I don't care. And it can be longer than that and still be OK – though people ought to find ways to get critical feedback more frequently than they usually do (e.g. if you're doing a 5 year project, you should figure out some way to get some feedback e.g. every month so you don't have to wait until the end of 5 years for someone to say "sigh, this is all wrong b/c X and i could have told you that at the start if you'd just blogged a summary of what you were doing and then addressed critical replies.")

EY does stuff like publish text in books, about stuff he shared publicly like a decade ago, which is *totally wrong*, and there's nothing to be done about this. It's these *massive* longterm PF fails that really concern me, where error correction is just *totally being shut down*.

If you want to protect your time, be demanding about what you listen to. State the demands that you think make something worth your time. Then you just need to respond to stuff meeting your high standards (stuff you consider worth your time) and also, potentially, to criticism of your standards being irrational and preventing progress. If you do this, and your standards aren't awful, critics can look at it and see how to tell you something – they can see how to *predictably* meet your stated standards and get your attention and get the disagreement resolved in a way that doesn't assume from the start that you're right. (Something like this makes PF closer to compatible with standard approaches than it may appear.) But people don't do this. E.g. EY has no mechanisms to get his attention, no stated path i could follow to tell him important things and then it's dealt with in such a way that at least one of us finds out that we're mistaken. If he needs to be picky to protect his time, fine, say what you're picky about and then either I'll see that it blocks progress or i'll deal with it. (the kinds of pickiness i don't want to deal with are like "get a million fans before i listen to you" (or in other words, get a million other people to listen first – except they're just as bad at PF as EY or worse, so how do i get them to listen?), that's too expensive and problematic. it needs to be related to idea quality and directly protecting his time, not about credentials and authority and shutting out low status people.)


curi at 4:11 PM on November 22, 2017 | #9312 | reply | quote

also btw the methods of internal critical discussion are the same thing as the methods of external discussion. reason doesn't change when you talk with other people. CR is general purpose like that.


curi at 4:12 PM on November 22, 2017 | #9313 | reply | quote

epistemology is exactly the same regardless of the source of ideas (yay objectivity!). your own ideas aren't a special case. your internal disagreements need paths forward.


curi at 4:13 PM on November 22, 2017 | #9314 | reply | quote

2. Making allowances for irrationality.

> 2. Making allowances for irrationality.

Irrationality is always, universally *bad*. If you want an allowance like "i forgive you for not being perfect", sure, whatever. but there can be no allowances for irrationality in a theory of how reason works. like science makes no allowances for all the people who believe in ghosts. we may try to help them, explain the issue, etc, but we must not compromise with ghost-belief.

> I think you were too modest with respect to PF, in an unproductive way, when you made the argument to the effect that PF didn't address irrationality & addressing irrationality was a separate problem.

If someone is irrational in relevant ways, PF doesn't tell them how to fix that. They could try to follow PF as a formula, but they aren't going to like it, they aren't going to do it well, and they aren't going to be very effective.

PF is about what to rationally do, but it doesn't tell you how to like and want that. Most people hate reason, and solving *that* is a separate issue which PF doesn't talk about.

If you dislike effort or thinking about hard things, PF doesn't tell you how to change that. It's a big important problem, and you aren't going to like PF without it, but I don't call my writing about that stuff part of PF. PF is just a specific aspect of a broader and more comprehensive philosophy.

> You said at a different point that PF is a way to reduce bias by ensuring that the paths of information for the correction of bias are open, and I thought this idea of what PF is about was much better, in comparison to the idea that PF is only for discussion between people who have already achieved high enough rationality that they are free of the kinds of emotional blocks to hearing information which I was describing.

Most people *don't want to hear what they're mistaken about in many cases*. You need some significant degree of rationality before you get started with PF.

PF can be a thing that helps you with emotional blocks *if you already are rational enough to want to solve that problem*. But if you just take your emotions as a given and don't want any criticism of them, then you aren't really going to do effective PFs, and the PF material doesn't tell you how to *want to change your mind about anything in the first place* (or how to deal with problematic emotions and biases as solvable problems instead of givens).

> you are happy to defend PF from any concerns by stating that those concerns are due to other problems which a person should address first.

i do think there are prerequisites, which are crucial to a good life, and which i address separately (as an organizational matter it's good to find chunks of stuff that can be treated as a reasonably independent group, like PF).

i'm not trying to dodge criticism b/c i do take responsibility for addressing the prior stuff. however, some of it is fucking hard – but that doesn't make PF in particular wrong. it does make FI (the broader philosophy) incomplete – it needs to invent some new outreach methods, or something, before widespread adoption can happen. this is being actively worked on, but it's also important to recognize that this issue – not yet knowing how to save everyone who is putting 99% of their effort into sabotaging their intellectual progress – has some separation from the theories of how reason works. (BTW the fate of the world depends on this and more or less no one wants to help.)

so yeah, there's a separation of concerns here. PF is specifically about what to do if you want to think and solve problems and correct errors. if someone doesn't want that (generically or in some area) then, well, i *am* working on that too. i know people are mostly super broken and i don't expect many ppl to do PF until we get better at fixing that brokenness. maybe they can approximate PF in the meantime or at least try not to be wrong about the same thing, published decades ago, for their whole life.


curi at 4:31 PM on November 22, 2017 | #9315 | reply | quote

3. Making allowances for group incentives.

> Like points 1 and 2, my concern is that you seem to want to defend PF by retreating to a version whose job is not to address those concerns, instead stating that PF is so valuable that an organization should drop any other concerns in order to achieve PF.

PF is so valuable that an organization should drop any other concerns in order to achieve PF. There you go.

My concern here isn't downsides imposed by the external world (which are currently big in various cases). Those totally exist and are totally worth it.

My concern is the organization or individual themselves has to actually see the flaws in the group incentives, social status dynamics, etc. If they are just a true believer currently playing social status games, then they simply aren't going to do or want PF. We have separate material criticizing that stuff and trying to help people stop being true believers.

If you see through social status crap and you're only doing it to try to get rewards, while thinking it's bad, then I think you're doing it wrong and PF is worth it.

But there are a lot of other issues involved here. I mostly don't think the rewards are actually very good. Basically you can't get intellectual rewards via intellectual corruption, and i don't care for other things much, so, shrug, fuck most ppl and let's do PF.

For more on rejecting compromises, standing up for intellectual values instead of doing appeasement, and the inability of the bad guys to actually offer any value ... see Objectivism.

> While I agree that it is an important point, it also seems to make the naive assumption that it would be physically impossible, or astronomically improbable, or something, to be in a situation where the cost-benefit analysis comes out in favor of cooperating with non-Objectivists.

It's not an assumption, it's an extensively argued theme of Objectivist philosophy. I consider it implied by CR as well (of course true epistemology should, in some way, imply something like this, if this is true), but not very directly, and Popper didn't talk about it.

> Indeed, haven't Objectivist politicians (who seem pretty successful) figured this out?

I believe there has never been an Objectivist politician.

This is partly b/c most ppl who claim to be Objectivists aren't. Also it's the kind of thing I would have heard of if there were a *real*, serious Oist who was a successful politician.

But also, Ayn Rand told people not to do politics. More philosophical education of society needs to come first, and a political campaign is a bad way to approach educating the world. So anyone claiming to be an Objectivist politician is ignoring Rand rather directly.

> They compromise by calling themselves Christian.

No one doing that is an Objectivist, lol. That's soooooooooooooo contrary to Objectivism.

> The order of business to settle this one seems to be: first we should discuss what's meant by compromise; then, whether you think being in a situation to benefit from compromise is physically/logically possible; then, whether you think it is astronomically improbable for some reason; then, why you don't think it is common.

Better to do epistemology first, I think. Make sense? PF manages to stand alone to a meaningful extent, but if u wanna get into details enough then epistemology is where it's really at. (Everyone who already agrees with me about epistemology has found PF totally natural and there's actually been more or less zero criticism/disagreement about it, cuz the core underlying issues were already getting discussed for years before I wrote PF.)


curi at 4:43 PM on November 22, 2017 | #9316 | reply | quote

4. Making allowances for difficult-to-articulate intuitions and reasons.

> 4. Making allowances for difficult-to-articulate intuitions and reasons.

PF lets you make your own allowances, with any criteria you want – as long as you expose them to error correction. It's kinda a meta-method like that.

> Sorry this didn't address any of what you've written since last time -- I haven't even read it yet, since I want to be able to write responses to points as I read them.

I've been doing this a long time and I'm still responsive to comments on stuff written many years ago. I have more or less unlimited patience. No apologies are needed. I just hope you won't abruptly go silent at some point – so there's no way for me to address whatever the problem is. I hope you'll, ala PF, state some sort of problem with continuing prior to quitting, and be open to replies like "That is a misunderstanding b/c..." or "I see your concern and here is a way to address it without quitting entirely..."

I don't mind if this discussion takes weeks or months. However, I know that people who leave for a few days often never come back, so I hope you're good at managing your discussions! (It sucks b/c often they plan to come back, so they don't say any reason they're quitting. They don't make a conscious decision to quit the discussion for a reason. But then they never get around to continuing. Often the actual thing going on is their emotional investment in the discussion faded during the break and their intellectual interest is inadequate to continue – but they never say that. You see that a lot where people are talking a ton, sleep, and suddenly aren't interested – they were talking for bad emotional reasons instead of for rational intellectual reasons. Bad emotional reasons include wanting to correct someone who is wrong on the internet, or feeling defensive b/c of seeing how some philosophical claim applies critically to their life.)


curi at 4:49 PM on November 22, 2017 | #9317 | reply | quote

people often feel some kinda pressure to discuss and don't really like it (and unfortunately PF arguments often make this worse). and those discussions are generally unproductive and they quit without saying why when they get fed up or for whatever reason there's a lull in the pressure they feel. the pressure is fundamentally self-imposed, but they often view it as external b/c they are imposing the pressure according to things they didn't choose which they take for granted as part of how life works – like social status rules and the well known idea you should be open minded and debate your ideas.

as an example of silently quitting, Gyrodiot from LW said he was writing a reply and was going to continue our discussion. but at this point i don't expect that's going to actually happen.

http://curi.us/2066-replies-to-gyrodiot-about-fallible-ideas-critical-rationalism-and-paths-forward

http://curi.us/2067-empiricism-and-instrumentalism

typical. sad. and nothing to be done about it, no PF.

that particular person said, previously, that he basically has spent his entire life hating effort. so that kinda explains it. discussing this stuff involves effort. it's really hard to help when people have problems like that (which are extremely common) b/c they don't want to put effort into solving their own problem. so then what? i don't know, no one else knows, and i figure making progress with a few better people is a good place to start and even *that* is really hard.


curi at 4:58 PM on November 22, 2017 | #9318 | reply | quote

Epistemology

> Better to do epistemology first, I think. Make sense?

If so, i propose you look at DD's books or Yes or No Philosophy and comment as you have comments. If you wanna discuss every other sentence (with quotes), that's fine. Most people seem to prefer to read 50 pages while disagreeing with 500 things they don't write down before their first comment – and then avoid reading books in the first place b/c of how expensive (and ineffective) that method is. (And typically they stick to the 50 pages at once method, followed by trying to remember what they didn't like without using any quotes, even after I say something like this. Typically they neither criticize my suggested methodology nor use it – and this includes the majority of my fans.)

If you have other proposals for how to proceed regarding epistemology – including anything you think I should look at – feel free to suggest it. For example, you might want to focus on a specific aspect of epistemology first, in which case i could write something about it or give you a reference. Or you might want an especially short summary to begin with. the CR summary in Yes or No Philosophy is like 10 pages IIRC, but much shorter is possible with less detail, e.g.:

*Knowledge is information adapted to a purpose/problem, which works contextually, and is created by evolution. Evolution is replication with variation and selection, and in minds that takes the form of brainstorming guesses and criticizing errors (mostly unconsciously). Error correction is the key to creating knowledge – it's how evolution works – and is the key to rationality.*

This omits, e.g., any direct comments on induction or science.
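
If a cartoon helps, here's the replication/variation/selection structure as a toy loop. It's my own stand-in example – real brainstorming and criticism aren't number-jiggling – but the shape is the same:

```python
import random

# Cartoon of "brainstorm guesses and criticize errors": replicate the current
# guesses with variation, keep the ones that best survive criticism, repeat.
# The "criticism" here is just a stand-in scoring function for a toy problem.
def evolve(initial_guesses, criticize, vary, rounds=200, keep=10):
    pool = list(initial_guesses)
    for _ in range(rounds):
        variants = [vary(g) for g in pool for _ in range(3)]  # replication + variation
        pool = sorted(pool + variants, key=criticize)[:keep]  # selection
    return pool[0]

target = 42  # toy stand-in for "the problem"
best = evolve(
    initial_guesses=[0.0],
    criticize=lambda guess: abs(guess - target),  # how badly the guess fails
    vary=lambda guess: guess + random.uniform(-5, 5),
)
print(round(best, 2))  # ends up close to 42
```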


curi at 5:11 PM on November 22, 2017 | #9319 | reply | quote

in Hero Licensing, EY brags about how he knows he's smart/wise/knowledgeable, he's been *tested*. but:

http://acritch.com/credence-game/

> It’s a very crude implementation of the concept

the tests are shit – and they say so themselves – and *of course* they are (the thing he wants to be tested on is super fucking hard to do without massive bias problems). but he omits this fact and holds this up as a major brag.

contrast that with ET's bragging (aka making a case for his greatness, wisdom, etc):

the rest of this comment is a copy/paste which omits formatting and like 2 dozen links with more details. click through if you want that stuff.

https://elliottemple.com/consulting

How Do I Know I’m Right?

How do I know capitalism is right and socialism is wrong? How do I know that induction is a mistake, contrary to Bayesian philosophy? How do I know that punishing children isn’t educational?

I’ve done all the usual things. I’ve read about rival ideas. I’ve critically considered my ideas. I’ve researched biases. I’ve asked people to explain why I’m mistaken. I’ve sought out discussions with smart people, critics, experts, etc. That’s not enough. Many people have done that.

What I’ve done differently is put my ideas in public and then address every single criticism from every critic who is willing to discuss. I’ve answered all comers for over 15 years. If any of my ideas are mistaken, either no one knows it, neither of us has managed to find the other, or they aren’t willing to share their knowledge.

My philosophical positions have survived criticism from everyone willing to offer criticism. That’s pretty good! None of the alternative ideas can say that.

Few of the alternatives even pretend to have a public forum for open, critical discussion. When they have a forum at all, the moderators typically block posters who dissent too much.

I know this because I’ve gone and tried every public English language online discussion forum I could find which claimed to offer serious, intellectual discussion. And they all fail to live up to basic standards like allowing pro-Critical-Rationalism or pro-Objectivism ideas to be discussed to a conclusion.

Discussion forums I’ve evaluated include: Less Wrong, Quora, Hacker News, The Harry Binswanger Letter, Gerontology Research Group Forum, Ann Coulter Official Chat, The Well, Physics Forums, Open Oxford, various reddits, various Facebook groups, various email groups, various stack exchanges, various philosophy forums, Objectivist Answers, The Forum for Ayn Rand Fans, Sense of Life Objectivists, Objectivist Living, Rebirth of Reason, and Objectivism Online. I’ve also evaluated blogs and individuals for potential discussion, such as: Mark Cuban, physicist blogger Scott Aaronson, blog Ayn Rand Contra Human Nature, Popperian author Ray Scott Percival, Popperian author Joanna Swann, tech writer Ben Thompson, Objectivist author George H. Smith, author Robert Zubrin, Center for Industrial Progress founder Alex Epstein, and Objectivist author Leonard Peikoff.

When I talk with people who disagree with me, I routinely ask them certain questions: Are they willing to discuss the issue to a conclusion? Do they consider themselves a serious thinker who has studied the matter and knows what he’s talking about? Do they know any serious intellectual who agrees with them and will discuss it to a conclusion? Do they know of any high quality discussion forum with smart people who would be willing to discuss it? Do they know of any forum where I can go ask challenging questions about their position and get answers? The answers to my questions are predictable: “no” or silence.

This is the clearest difference between me and my rivals. My ideas are open to public criticism supported by a discussion forum which allows free speech. I pursue discussions to a conclusion to actually resolve issues. And I think even a single flaw must be addressed or it refutes an idea. I don’t ignore some problems with my ideas and claim problems are “outweighed” by some merit.

Impressed? Skeptical? Tell me: elliot@fallibleideas.com


Anonymous at 5:53 PM on November 22, 2017 | #9320 | reply | quote

replies to #9239

>In some sense, it seems like it would be nice for organizations to be as transparently accountable as possible. For example, in many cases, the government is contradictory in its behavior -- laws cannot be easily interpreted as arising from a unified purpose.

There are challenges here but there are principles that let you sort out the mess decently in many cases. Words mean things, and that fact helps a lot in figuring out how to interpret a law. Figuring out what the commonly understood meaning of words was at the particular time is way easier than trying to get into a bunch of people's heads in order to figure out what they meant.

>The ability to force someone to give a justification in response to a criticism, or otherwise change their behavior, is the ability to bully someone. It is very appropriate in certain contexts, For example, it is important to be able to justify oneself to funders. It is important to be able to justify oneself to strategic allies. And so on.

The fact that people expect you to persuade them in order to get their cooperation is not force or bullying. It's pretty much the opposite. Force and bullying come into play in situations where you are trying to get people to do what you want *without* persuading them.

>I may be projecting, but the tone of your letter seems desperate to me. It sounds as if you are trying to force a response from MIRI. It sounds as if you want MIRI (and LessWrong, in your other post) to act like a single person so that you can argue it down. In your philosophy, it is not OK for a collection of individuals to each pursue the directions which they see as most promising, taking partial but not total inspiration from the sequences, and not needing to be held accountable to anyone for precisely which list of things they do and do not believe.

people should hold *themselves* accountable for the quality of their ideas. Don't they seek truth? if they don't, why adopt an intellectual air? (there are answers to that but they're a tangent and not pretty)

serious truth-seeking involves serious efforts to get external criticism. like PF.


Hoid at 6:15 PM on November 22, 2017 | #9321 | reply | quote

SHOT ACROSS THE BOW OF AGI!

I took a crack at explaining the principles behind general artificial intelligence.

"SHOT ACROSS THE BOW OF AGI"

SUMMARY

I believe that cognition (general intelligence) is a combination of inductive and deductive reasoning.

Deductive structure is pure mathematics, which charts the space of *possible worlds*. It's a-priori knowledge.

Inductive structure is applied mathematics, which charts empirical snapshots of one *particular world* - the world we actually live in. This is empirical knowledge.

Neither Induction nor Deduction alone is enough for general intelligence. But the combination of the two of them generates a third type of reasoning...*abduction*, the science of generating good explanations of reality, and it's abduction that is real intelligence.

There are 3 types of deductive structure, and 3 types of inductive structure. When each type of deductive structure is filtered by the corresponding type of inductive structure, the resulting Abductive structure is a component of general intelligence. And all 3 components together make a mind!

Deduction x Induction = Abduction

PRINCIPLES OF GENERAL INTELLIGENCE

If we take a Cartesian Closed Category, this is equivalent to Typed Lambda Calculus. So there’s our initial deductive structure.

Now we’re going to perform a filtering operation, by transforming the above structure using inductive structure.

To do this, deploy *fuzzy truth-values*! Instead of bivalent (two-value) truth-conditions, we use a continuum of truth conditions (infinite-valued logic).

The filtering will transform this into a model of a dynamical system by deploying *Temporal Modal Logic*. The result is a *conceptual model* - an abductive structure that is the first component of general intelligence!

Let's use the same trick with another type of deductive structure – a *hyper-graph*.

Now transform this using an inductive structure – a *probability distribution* - we obtain *random graphs*.

The filtering will transform this into a model of dynamical systems by deploying *stochastic models*. The result is a *network* - an abductive structure that is the second component of general intelligence!

Finally, we’ll deploy the trick with a third type of deductive structure – a *manifold*.

Take the manifold and transform this using the last type of inductive structure – an *information geometry*.

The filtering will transform this into a model of dynamical systems by deploying *computational models*. The result is a *fitness landscape* - an abductive structure that constitutes the third component of general intelligence!

CONCLUSION

Deduction x Induction = Abduction

Type Theory x Fuzzy-Truth Values = Conceptual Models

Hypergraphs x Probability distributions = Networks

Manifolds x Information Metrics = Fitness Landscapes

Conceptual Models + Networks + Fitness Landscapes = Intelligence


marc.geddes@gmail.com at 8:50 PM on November 22, 2017 | #9322 | reply | quote

@#9322 I don't suppose you have any interest in engaging with Popper, trying to write clearly, or saying specifically which parts of FI you consider false and why?


Anonymous at 9:01 PM on November 22, 2017 | #9323 | reply | quote

I basically think you're right

I think Induction alone (statistical methods) is not a good basis for epistemology, and a combination of Induction, Deduction and Abduction is needed.

I think Abduction is what Popper was aiming at. It's the art of using creativity to generate good hypotheses. To generate good explanations, what I think we're actually doing is using creativity to create conceptual symbolic models of reality.

In computer science, conceptual models are known as *ontologies* - we construct categories of thought that usefully categorize objects in a given knowledge domain, and then we combine these categories by charting the logical relationships between them. A good explanation (or 'theory') to me is when we can integrate many concepts into a single coherent explanatory structure.

Bayesian methods won't work because they constitute only one particular level of abstraction, whereas to get general intelligence we need to be able to reason across *all* levels of abstraction.

By the way, 'The Fabric Of Reality' and 'The Beginning of Infinity' by David Deutsch are my all-time favourite books.


marc.geddes@gmail.com at 9:21 PM on November 22, 2017 | #9324 | reply | quote

> I basically think you're right

> combination of Induction, Deduction and Abduction is needed.

> I think Abduction is what Popper was aiming at.

Popper (like Deutsch) rejected induction entirely. Something involving any induction is not what Popper was aiming at.

Abduction is a vague, poorly defined concept which FYI David Deutsch thinks is crap. If you have some specific version of abduction you think is good, link the details.

> A good explanation (or 'theory') to me is when we can integrate many concepts to a single coherent explanatory structure.

Integration is important but isn't what "explanation" means. And you can't define explanation in terms of an "explanatory structure" – that's circular.

Explanations say why and how. They follow the word "because". DD talks a lot about them.

---

OK, so you like DD's stuff. You think he's right, though you don't understand all of it yet and haven't taken suitable steps to find out the rest (such as discussing it with the FI community [1] which was merged from a variety of DD related forums – there are experts on DD's books available to answer questions, point out misconceptions, etc! Does that interest you?).

So, what are you doing about DD's ideas? Do you want to do anything about it? I think they ought to transform the world, and it's urgent to figure out how to persuade more people and develop them more.

one particularly notable, urgent and important application of DD's ideas is to parenting [2]. you could get involved with furthering the ideas in FoR and BoI, if you want to.

[1] https://groups.yahoo.com/neo/groups/fallible-ideas/info

[2] http://fallibleideas.com/taking-children-seriously (DD is a founder of TCS)


Anonymous at 9:35 PM on November 22, 2017 | #9325 | reply | quote

reply to #9248:

>Hm. Stating what your filters are seems like the sort of thing you might not want to do, partly because the filters will often be taboo themselves, but mainly because stating what criteria you use to filter gives people a lot of leverage to hack your filter. Checking for an academic email address might be an effective private filter, but if made public, could be easily satisfied by anyone with a school email address that is still active.

"Academic email address" is a terrible filter with huge flaws. It's a great illustration of why people should state their filters more.

lots of ppl have both an academic and regular email address. are you gonna filter ppl based on which they happen to use when emailing u, without even telling them this fact?

Someone like David Deutsch has a .edu he doesn't use much. Gonna filter him out?


Hoid at 10:21 AM on November 23, 2017 | #9326 | reply | quote

reply to #9261

> But that being said, it seems MIRI also still treats Bayes as a kind of guiding star, which the ideal approach should in some sense get as close to as possible while being non-Bayesian enough to solve those problems which the Bayesian approach provably can't handle.

this doesn't seem very focused on doing truth-seeking no matter where it leads, or on resolving contradictions, or on treating ideas as explanations of reality.

you shouldn't take an approach where you assume your framework is true but then try and smuggle in enough other stuff to get everything to work.

that's the approach of e.g. a certain sort of religious type who assumes God/the Bible must be true and then tries to make just the precise number of concessions necessary to accommodate things like modern scientific understanding. that approach is apologetics, not reason.


Hoid at 2:01 PM on November 23, 2017 | #9327 | reply | quote

Fermi Paradox and Paths Forward

It is often argued that if we find evidence of extra-terrestrial civilization, then that civilization will be found to be far in advance of ours. The argument is based on us being a young civilization in an old universe. Finding a civilization at the same level of development as ours would be unlikely. Younger civilizations - even by a couple of hundred years - would be too primitive for us to detect their artefacts. What we are most likely to detect is a civilization millions of years in advance of us.

If such a civilization does exist, one would expect it to be all around us. They have had millions of years to colonize the galaxy. Yet we do not see evidence of them. Why not? This is the Fermi Paradox.

What makes this more curious is Paths Forward. Advanced civilizations must have advanced philosophy. They would know about Paths Forward. They would have sophisticated processes for publicly stating positions and for correcting mistakes. If they exist, we seemingly have no way of contacting them and as far as we know they have not contacted us. They have not told us about mistakes we are making and we cannot tell them about mistakes we think they are making. If Paths Forward is important, alien intelligences should be doing it on a galaxy-wide scale. They are not. Why not?


AnotherMe at 3:23 PM on November 23, 2017 | #9328 | reply | quote

My guess is it's cuz there are no advanced aliens in our galaxy. Evolution creating intelligent life – or even any life – is unlikely.


curi at 4:13 PM on November 23, 2017 | #9329 | reply | quote

I think you are right that we are the only advanced life form in our galaxy. I would extend that to our local cluster of galaxies too. Other advanced life, if it exists, is likely to be in a galaxy very remote from us.

If evolving intelligence is super hard and if we are alone in this region of the universe then, in my opinion, AGI is a matter of urgency. Our galaxy carries only one seed for its transformation by knowledge - the Earth. That seed is fragile and the transformation will require AGI in order to properly get underway. This is not because AGI's will be smarter than us - for they will not be - but because they will not be confined to weak biological bodies that evolved for life on an African savannah.


AnotherMe at 2:19 AM on November 24, 2017 | #9330 | reply | quote

@Marc Geddes - You have been told that induction is impossible. That it does not and cannot happen in any thinking process. This is a major point of Popper's which was also clearly explained by Deutsch in his books which you claim to be your favourites. Why have you not taken this on board? Your AGI project is doomed to failure if you persist with this bad idea. There is no rescuing induction. It is akin to belief in the supernatural. If you want to do AGI, get a clue. So many AGI people are making your mistake. Stop.


Anonymous at 2:39 AM on November 24, 2017 | #9331 | reply | quote

> [AGI's] will not be confined to weak biological bodies that evolved for life on an African savannah.

We could potentially upload our brains into computers and use robot bodies. This technology might be easier to invent than AGI.

> Our galaxy carries only one seed for its transformation by knowledge - the Earth. That seed is fragile

Absolutely. Also super important and urgent is world peace. War is both risky and destructive, so it takes resources away from progress/science/etc.

And ending aging is super important. This isn't just to save lives and make the world a nicer place. It's also because, as philosopher William Godwin says, to some approximation, progress has to start over every generation. People are born ignorant and die wise, and we keep replacing knowledgeable people with new people who have to start over on learning. We have stuff to mitigate this like oral traditions, books, permalinks, but we could be more effective at learning/science/etc if we could live for millions of years instead of having a career for 40 years and then trying to pass on the ideas to our kids, to our (especially younger) colleagues, and in some writing.

And Taking Children Seriously is super important: not destroying all our children's minds, so science can be way way more effective.

And colonizing other solar systems is super important. That'll dramatically lower the risk of being wiped out by a meteor or plague.

There's a lot of important things. So what's the most important? Epistemology, which is the key to doing good work on all the things I listed. All this stuff depends on figuring out good ideas, making good judgements, learning, etc. That's why I've chosen to focus on epistemology a bunch.


curi at 10:01 AM on November 24, 2017 | #9332 | reply | quote

Epistemology is also a field with very little productive development in the past, very few people doing any good work in it today, and huge misconceptions that are widespread and mainstream. It's the most important field *and* it's in a particularly awful state currently.

Epistemology is also the key field needed for understanding morality and liberalism really well, not just for science and economics.

Epistemology is the field about ideas and methods of dealing with ideas – and every other intellectual field uses ideas heavily. But the world is so confused about epistemology currently that even this – a basic statement of what the field is about and its implications for other domains – is basically unknown.


curi at 10:09 AM on November 24, 2017 | #9333 | reply | quote

-8 points on LW for linking to 30 comments of analysis of Hero Licensing, in the comments there on Hero Licensing:

https://www.lesserwrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing/ZEzqERGaMMJ9JqunY

they are such hostile assholes.


curi at 12:44 PM on November 24, 2017 | #9334 | reply | quote

Continued reply to #9262

> People read books and *do not understand much of it*. This is the standard outcome. Dramatically overestimating how much of the book they understood is also super common. What needs to happen is people discuss stuff as issues come up, but I've had little success getting people to do this. People like to read books in full, then say they disagree without being specific – instead of stopping at the first paragraph they disagree with, quoting it, and saying the issue.

The first part of Fabric of Reality which I disagree with is the description of inductivism, in chapter three. Maybe it's a good description of generic inductivism. It's not like I can expect him to respond to my favorite version of inductivism. But, although the theory of induction put forward by Ray Solomonoff does fit mostly within the generic form which DD criticises, it meets every single objection which he raises.

Solomonoff addresses the question of where theories come from (addressing DD's objection about theories not being mere generalizations of the evidence, and the second critique based on the story of the chicken, where DD explains how the chicken has to already have an explanation in mind to do any induction). Solomonoff's theory also primarily works through disconfirmation, rather than confirmation (so is not vulnerable to DD's critique that evidence does not and cannot confirm). Solomonoff's theory also has a deep accounting of induction as problem solving. See his 2002 publication in particular:

http://raysolomonoff.com/publications/pubs.html

As to the charge that induction is impossible, Solomonoff only seeks to do induction as well as any computable procedure can.

(By the way, FWIW, I thought Taleb's argument against induction in Black Swan was somewhat better -- essentially the same content, but more detail. He indicated that he knows about Solomonoff, and knows that some people think it provides a good response to his critique, but didn't say in the book why he thinks Solomonoff is wrong in particular.)
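
(For concreteness, here's the rough shape of the universal prior I have in mind -- my own paraphrase in notation, not a quote from Solomonoff:)

```
% Sketch of the universal prior (my paraphrase). U is a fixed universal
% prefix Turing machine; p ranges over programs whose output begins with
% the observed data string x; |p| is the program's length in bits.
M(x) = \sum_{p \,:\, U(p) \text{ begins with } x} 2^{-|p|}
% Prediction: the probability that x continues with y is M(xy) / M(x).
% M itself is not computable; real systems can only approximate it.
```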

> > So, when I imagine a PF society, I imagine a sense of a big eye looking at you all the time and watching for your inconsistencies, whether or not you're trying to play the PF game yourself. It's just the way the world works now -- there's an implicit *assumption* that you're interested in feedback. And it's hard to say no.

> Sounds good to me. But we can start with just the public intellectuals. (In the long run, I think approximately everyone should be a public intellectual. They can be some other things too.)

(You then make a bunch of concrete suggestions, but I won't quote them.)

Your suggestions didn't really seem to connect with my concern there. You addressed the *time* cost, which is important. But, although time cost was in there, I was also talking about a different type of cost. There is a *coordination* cost. The "big eye" scenario is not only one where things are difficult for the minority of people who have legitimate reasons to keep certain things from public scrutiny. It is also one in which group coordination requires a higher bar of justification. A group which cannot respond to a critique is seen as being in the wrong. While there are positive aspects to this, a higher bar for group coordination means it will happen less often. I claim an attitude of "I can't justify this right now, but it still seems like the best thing I can do, so I'm going forward with it" is needed.

And there is a strong analogy between group coordination and coordinating yourself to accomplish things over time. Requiring total justifiability for action has a tendency to derail the self, especially in terms of spontaneity and originality. Again, I'm not arguing for irrationality here. I'm arguing for a variation of PF which explicitly includes chesterton-fencing. Are you familiar with the chesterton-fence argument? The idea is that if there's a fence, and you don't know why it is there, you should figure out why before tearing it down.

Are you familiar with "What the Tortoise said to Achilles"?

https://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles

A system doesn't necessarily contain all the rules to justify itself from the inside. The system can embody some value, or some solution to a problem, without also already containing an argument to that effect. So, while it is important to take criticism seriously, it's also important not to "take it immediately": you don't discard a system *just* because there's a critique you can't respond to. You have to *also* have a better alternative, which seems better to *you*, all things considered (not just better according to the critic).

> I wish people would just stop talking instead of pretending to be more interested in discussion than they are. I don't want to socially motivate people to discuss more, they'll just do it really badly. People only discuss well when they have good, intellectual motivations, not with gritted teeth and trying to get it over with.

That seems like an important principle. If you modified the PF document to include something along those lines, I'd think it would be an improvement. I feel it is significantly different from the "respond to *all* criticism" advice -- it adds the important qualification that such responses should be intellectually motivated, and just forcing yourself to do it can be actively bad when you do so out of pure social motives.

> That sounds a lot like how most people react to Objectivism (my other favorite philosophy). And it's something Objectivism addresses directly (unlike CR which only indirectly addresses it).

Maybe the Objectivist response cuts closer to my concern? Or is it similar to what you said?

> If you don't know what's right, then you should be neutral.

> I think it's problematic socially (people feel pressure and then get defensive and stupid), but good epistemics.

There's a lot to unpack to keep taking this sub-part of the conversation forward. My argument above that you can know what's right without knowing exactly how you know is a significant piece of this. From my perspective you continue to conflate personal knowledge with publicly justifiable knowledge. Even perfect Bayesians can be in this situation, when the conditions for Aumann agreement aren't quite met:

https://staff.science.uva.nl/u.endriss/teaching/lolaco/2014/papers/Baltag.pdf

But, *the very procedure which you insist on* drives publicly and privately justifiable beliefs apart. If you must abandon a belief publicly whenever you can't respond to every critique, then your best-estimate private beliefs will have to be different from your publicly stated ones.

Of course, I don't expect you to believe that, since you'll just say that the best-estimate private beliefs are precisely the ones which haven't been knocked down by any criticism, and trying to guesstimate which ones you'll be able to justify later after thinking longer is (a) not possible because induction isn't possible and (b) letting in too much room for irrationality.

So in order to come to agree about this, we have to discuss Bayesianism itself. For that, I refer to my starting comments.

> So the point is, when you use methods like these, whoever has the most stamina doesn't win. A much better initial approximation is whoever knows more – whoever has the best set of references to pre-existing arguments – usually wins. But if you know something new that their existing knowledge doesn't address, then you can win even if you know less total stuff than them. (Not that it's about winning and losing – it's about truth-seeking so everyone wins.)

Flag that this didn't convince me. You've already granted, earlier, that it is important to rearrange your life to have enough time to do PF, and you've described yourself as having a ton of stamina for this stuff.

> what's a troll, exactly? how do you know someone is a troll? i take issue with the attitude of judging people trolls, as against judging ideas mistaken. i take issue more broadly with judging ideas by source instead of content.

A troll is someone who does not argue in good faith. They instead argue with the goal of getting a reaction, often lying and intentionally misinterpreting things in the process. Trolls are a problem because they lower the level of discourse -- when there are trolls around, you can't expect your words to be interpreted reasonably, you can't expect responses to be thoughtful and helpful, and you can't take things at face value (claimed evidence being suspect). So the signal/noise ratio gets worse, and the productivity of discussion gets worse, so that it is less worth people's time, meaning more good people leave.

Troll-hunts are also a big problem, since there are not really surface features you can use to always distinguish trolls from merely misguided people talking in good faith (or, even worse, non-misguided people saying things you don't want to hear).

Hence the standard advice, "don't feed the trolls" -- the best policy is largely to ignore them, so that they'll go away out of boredom.

PF mostly seems to directly contradict this advice?

> and wouldn't MIRI attract a reasonable number of knowledgeable people capable of answering common, bad points? especially by reference to links covering common issues, and some standard (also pre-written and linkable) talking points about when and why some technical knowledge is needed to address certain issues.

Agreed.

> getting back to this: arguments shouldn't *sway*. either you can answer it or you can't. so either it refutes your view (as far as you currently know) or it doesn't. see Yes or No Philosophy.

Right, we'll have to debate Bayes.

Well, as I mentioned, my position now is logical induction, which is much better than Bayes. But let's not do that discussion unless it seems relevant.

> > if it's true that people sometimes don't want to respond to arguments, and you think this is a wrong reflex, isn't it worth having a lot of curiosity about why that is,

> yes i've been very curious about that and looked into it a great deal. i think the answers are very sad and involve, in short, people being very bad/irrational

Um, I tend to think of value judgements as fake explanations. "Why did you cheat on me? It's because you're an awful person! This explains everything. Clearly, now I understand what happened."

Your answer about school makes somewhat more sense (though I don't think it entirely makes sense -- though you also said that).

Personally, I think it's closer to the thing about coordination which I mentioned earlier. Perceiving criticisms as attacks is a poor epistemic reflex, but it results from a state of nature where arguments are group coordination tools, so a criticism is seen as a bid to overthrow the current order.

> also other requests are welcome. e.g. if you want me to write shorter posts, i can do that.

There's way too much text to easily address, but I think that's fine, the discussion can be slow. (IE, better a slow but in-depth discussion.)


PL at 3:55 PM on November 24, 2017 | #9335 | reply | quote

fake agreement

> The first part of Fabric of Reality which I disagree with is the description of inductivism, in chapter three. Maybe it's a good description of generic inductivism. It's not like I can expect him to respond to my favorite version of inductivism. But, although the theory of induction put forward by Ray Solomonoff does fit mostly within the generic form which DD criticises, it meets every single objection which he raises.

>

> Solomonoff addresses the question of where theories come from (addressing DD's objection about theories not being mere generalizations of the evidence, and the second critique based on the story of the chicken, where DD explains how the chicken has to already have an explanation in mind to do any induction). Solomonoff's theory also primarily works through disconfirmation, rather than confirmation (so is not vulnerable to DD's critique that evidence does not and cannot confirm). Solomonoff's theory also has a deep accounting of induction as problem solving.

Solomonoff writes:

http://raysolomonoff.com/publications/chris1.pdf

> In the first, we are given a linearly ordered sequence of symbols to extrapolate. There is a very general solution to this problem using the universal probability distribution, and much has been written on finding good approximations to it ( Sol60, Sol64a, Sol64b, Wal68, Wal87, Wil70, Ris78, Ris87 ). It has been shown that for long sequences, the expected error in probability estimates converge rapidly toward zero (Sol78).

This is not the sort of problem that DD is talking about. DD aims to come up with explanations of how the world works, not just lists of symbols.

Symbols are used to represent explanations, they are not the goal of knowledge creation. They are a tool, just as predictions are a tool.

In addition, any worldview aimed at producing symbols or predictions rather than explanations is bound to fail for reasons explained in chapter 1 of FoR.

You should start at the beginning of the book and discuss the first thing you disagree with or don't understand. Also, you should take what the book sez literally when deciding whether you disagree with it. You shouldn't be trying to fake agreement.


oh my god it's turpentine at 4:22 PM on November 24, 2017 | #9336 | reply | quote

quick comments

PL, I'm curious if you read chapter 1 of FoR or skipped to 3. I found people at LW quite hostile to some ideas in ch1 about empiricism, instrumentalism, the importance of explanations, the limited value of prediction and the oracle argument, etc.

I'll look at your link later but here's my current understanding: Solomonoff induction is focused on prediction, not explanation. Right? So that's a problem if explanations matter – at best it'd be rather incomplete as an epistemology.

And Solomonoff induction's answer to the infinitely many patterns (and therefore next numbers in the sequence) compatible with any finite data set is to guess the pattern with the shortest computer code that codes for it (in some specified language – and the language choice causes huge differences in results compared to some other logically possible programming languages, a point which is glossed over by talking about bounds). But the shortest things aren't always best – one issue here is that omitting explanations and criticisms of rival views shortens things while still getting the same answer.

And even if short were good, how do you know that? Is Solomonoff induction supposed to be self-justifying (shorter code is good b/c that approach itself has the shortest code?), or is there some other epistemology which is used to decide on the starting point (prefer shorter code) of Solomonoff induction? CR is general purpose and is able to address its own foundations. Solomonoff either needs to do that too or else specify what second epistemology is used to get Solomonoff off the ground. I don't know what its answer to this is. I hope it's not: assume (without much discussion) the epistemological view that you have to have arbitrary foundations, so just accept those. In practice when talking to people, I find their starting point is common sense and tradition – which includes some CR compatible stuff and also a variety of errors – and they use that to try to argue for Occam's Razor (rather than trying to use Solomonoff induction itself to argue for short=good). But then the whole discussion is them using some unstated non-Solomonoff epistemology.

Also I've never actually seen anyone use Solomonoff in practice in philosophy debates (or really at all) – where are the simple worked examples of anyone using it to learn something? Where are the examples from the history of science where it's ever been used (actually used in a rigorous way, not some informal loose approximation with plenty of scope for bias, CR, or whatever else to sneak in)?
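
(To make the underdetermination point concrete, here's a toy illustration I made up – not anything from Solomonoff's papers, just a sketch: two different rules that both fit the finite data 1, 2, 4, 8, 16 and then disagree about the next number.)

```python
from math import comb

# Two rival "patterns", both consistent with the finite data set 1, 2, 4, 8, 16.
def doubling(n):
    # powers of two: 1, 2, 4, 8, 16, 32, ...
    return 2 ** (n - 1)

def circle_regions(n):
    # regions formed by joining n points on a circle with chords
    # (Moser's circle problem): 1, 2, 4, 8, 16, 31, ...
    return comb(n, 4) + comb(n, 2) + 1

data = [doubling(n) for n in range(1, 6)]                # [1, 2, 4, 8, 16]
assert data == [circle_regions(n) for n in range(1, 6)]  # both rules fit the data
print(doubling(6), circle_regions(6))                    # 32 vs 31 – they diverge at n=6
```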

FYI the primary anti-induction part of the book is ch7 (though it's not focused on Solomonoff stuff in particular). ch3 puts more emphasis on a positive account of CR. did you find anything about CR you thought was false? is there a specific quote from FoR ch3 you thought was mistaken? or is the issue just: you think CR works, and you also think a specific version of induction works and wasn't refuted by the arguments in FoR ch3?


curi at 4:35 PM on November 24, 2017 | #9337 | reply | quote

Machine Learning is based on Induction

@Anonymous

I'm sorry but anyone saying that Induction doesn't work is simply making an arse of themselves.

Look, I'll listen to the philosophers, but at the end of the day, I look to the real-world to see what works and what doesn't. The whole basis of the field of machine learning is Induction - machine learning works by detecting patterns in vast amounts of data and then generalizing these patterns to form models that enable it to predict what happens next. It's pure induction in other words!
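
(To show what I mean in code, here is a toy sketch with made-up numbers - not any particular production system: fit a model to observed data, then use it to predict an unseen case.)

```python
# Toy sketch of learning-as-generalization: fit a straight line to observed
# (x, y) pairs by least squares, then predict the y for an unseen x.
# The data points are invented for illustration.

def fit_line(points):
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

observed = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # "training" data
slope, intercept = fit_line(observed)
print(slope * 5 + intercept)  # predicted y for the unseen input x = 5
```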

The success of machine learning can't be disputed - it clearly does work to some extent. In many areas something close to or even better than human-level performance has been achieved, including machine vision, speech recognition, self-driving cars and game-playing - DeepMind's AlphaGo system recently beat the world number-one Go player, Ke Jie.

Machine-learning systems are inductive. And they do work. This is empirical fact, not opinion.

Please read through my wiki-book list of Wikipedia articles explaining the central ideas in the field of Machine Learning here:

https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Machine_Learning

I do take on-board what Popper and Deutsch are saying. Clearly Induction has some serious limitations, and it can't form the basis of a truly general epistemology. The main problem with Induction is that it can't deal with symbolic reasoning - it requires a pre-existing set of fixed concepts - Induction isn't 'creative', in the sense that it can't generate new concepts. David Deutsch made the point very well that Induction is limited because it's based on *prediction*, rather than *explanation*. So yes, more elements are needed.

However, I'm certainly not going to throw away Induction just because of what some mis-informed philosophers say.


marc.geddes@gmail.com at 5:13 PM on November 24, 2017 | #9338 | reply | quote

few more comments

> There's way too much text to easily address, but I think that's fine, the discussion can be slow. (IE, better a slow but in-depth discussion.)

great. (most ppl don't seem to know/do that).

> That seems like an important principle. If you modified the PF document to include something along those lines,

i think that could be misleading. if u don't wanna discuss, don't, but in that case PF is (in some cases) an argument about why ur doing it wrong. i'm not trying to enable not wanting to do something as any kind of excuse – PF is about what you should do and should want to do – but *if you don't want to*, then sure, doing it while hating it isn't going to help.

i could add something about this but there's a million things to say. it's hard to pick. i will add it to my (long) list of things to include in a potential new PF explanation i may create.

part of the issue here is, to me, *of course* you need to be persuaded of PF before you do it. don't do it while you disagree with it. and part of persuasion is getting your emotions in line, not just your intellect – if you emotionally disagree then you disagree and you need to figure out what's going on there instead of just ignoring the issue. this starts getting into other aspects of my philosophy/perspective/worldview and isn't very self-contained.

regardless, i'm glad i told you about this and it makes sense to you. i agree it's important.

> PF mostly seems to directly contradict this advice [about not feeding trolls]?

yeah, i agree that there is a disagreement here. i think troll-identification is extremely problematic and unreliable. plenty of people have accused me of trolling, but also good faith is just something that's really really hard to detect.

and when people argue in bad faith, they usually are unaware they are doing this – they aren't good at introspection and people lie to themselves most of all. so while i see the appeal of ignoring **consciously intentional** trolls, i don't think those are common or reliably detected. and if someone is being dumb by accident cuz they're dumb, i think it's good to give them a path forward. and that covers the case where you misidentify them too. if you think someone is trolling, do a fast minimal-effort PF (or have your fans do it). ask them one question or say you think they're trolling (and one reason why if it's not super blatant) and see what they say. or ask if they have any blog posts where they argue this stuff (including they argue their positive case, not just trying to attack ur view). and if they don't, one PF-acceptable thing to do is say you're not interested until there is some literature, at least published on a blog, which tries to objectively and seriously argue for their viewpoint.

Popper emphasized the importance of judging the content of ideas instead of the sources. Ideas that you get from a dream, myth or superstition can be true. PF emphasizes how cheaply and general-purposely ideas can be criticized – and should be. To the extent trolling is just saying things that are already refuted by pre-written criticism, then just link them once to some general case argument that addresses them and then ignore them if they are non-responsive after that point (which is a dangerous judgement call, but i don't know how to avoid it. personally i try to be extra super clear and often give ppl several cheap chances, and i've found that a reasonable amount of the time it turns out they were at least somewhat serious, even if a bit dumb/wrong/flaming.) and if you have no general purpose argument that addresses what they are doing, then it's worth creating one. to the extent trolling is a well-defined problem (or small set of several problems), someone should write arguments covering the common types and then in the future those can be linked instead of 100% ignoring trolls. and to the extent that's hard to do and doesn't exist it's cuz trolling (both ppl's behavior and also what ppl call trolling) is actually varied and not fully understood well enough in terms of general principles.

> Maybe the Objectivist response cuts closer to my concern? Or is it similar to what you said?

Objectivism has some other things to say i didn't talk about, which are hard to summarize and would be a huge tangent that's pretty separate from the CR stuff, so i think we better not go into it for now.

> You've already granted that it is important to rearrange your life to have enough time to do PF, earlier, and described yourself as having a ton of stamina for this stuff.

it's important to have a lot of time for thinking/discussion/etc if you want to be a serious, productive intellectual. for most ppl that would mean reorganizing their life. ppl should think a lot and will need to in order to, e.g., understand CR (and thereby stop making huge mistakes in whatever other field(s) they do).

i'm super in favor of ppl thinking more but that doesn't mean PF is super expensive. if someone can't answer ur point, u don't need stamina, u just say it once. if they say a bunch of bad args that written material already covers, it's cheap to deal with. if they say args u suspect of being bad but don't already have the answer to, well, u should consider the matter, which takes time. the alternative, if u don't have time to consider the matter, is to talk about your priorities, policies for spending time, etc, and expose that to criticism. and maybe it's fine and ur doing something else awesome which doesn't depend on being right about this current debate. cool. but u don't get to win the public debate about X if u don't have time to deal with X and figure out the truth yourself. i think that's actually a feature of all approaches (except, i guess, ones that let you ignore counter-arguments and assume you're right for no reason but bias).

> Um, I tend to think of value judgements as fake explanations.

i have explanations, i was just commenting briefly. and irrationality is not a value judgement – in the CR conception it means blocking error correction. in the traditional conception it means some mix of being dumb, incorrect and low status/credentials – which also aren't value judgements since being dumb is assumed to be genetic and not being a prestigious scientist is considered blameless and totally socially acceptable.

but it sounds like there's a substantial disagreement here about morality, which is partly a tangent and also partly important: I believe a correct epistemology must be able to address how moral philosophy works. in some ways, that's a particularly good example of how empiricism and prediction are inadequate (predicting the results of an action doesn't directly tell you its moral status, you still have to make a value judgement. knowing the expected results of actions is very useful to morally judging them though!).

moral philosophy is, broadly, about *how to live well*. what choices should you make in your life and why? how do you make good decisions? how do you have a good life? what is a good life? if you reject there being any answers to any of that, we have some problems b/c you're ruining the reason for using good methods in the first place – to succeed and have a good life. without morality, why not act on any whim? why use intellectual methods? sure you may end up in jail, suffer, die, if you reject reason – but so what if morality is arbitrary fake crap? but if you accept some life outcomes are objectively worse, then shouldn't you also accept some moral judgements as corresponding to that and being reasonable, relevant, important things to help us discuss morality and live morally? IME no one actually rejects morality in full, they just reject some common bad ideas about morality (which there are *plenty* of) and sometimes they also reject some good ideas about morality but still not everything about life being better than death, success better than failure, happiness better than pain, etc. if you just reject some nonsense about morality but don't thoroughly reject it in principle, then maybe we can move on for now. (lots of ppl are inconsistent – they clearly accept and live by some moral ideas, but they also deny morality intellectually and use that claim to let them ignore the need for epistemological methods to be able to deal with all types of ideas including moral ideas – btw that includes *bad* moral ideas, epistemology has to apply to those in order to reject them.)


curi at 5:17 PM on November 24, 2017 | #9339 | reply | quote

> I'm sorry but anyone saying that Induction doesn't work is simply making an arse of themselves.

Then why do you like DD's books? That is what DD says in his books.

> I do take on-board what Popper and Deutsch are saying. Clearly Induction has some serious limitations,

that is not what they are saying, they say it doesn't work, at all, and no one has ever learned a single thing by induction in the history of the world. they have both been crystal clear about this.

so you disagree with them. why don't you quote them and point out where they went wrong instead of saying how much you love the people you think are arses to be ignored?

it's weird you say you think DD's books are the best books but also that he's a mis-informed philosopher to be ignored. make up your mind!


different anonymous at 5:20 PM on November 24, 2017 | #9340 | reply | quote

I take a balanced view

@different anonymous,

I would point out again that the success of machine learning systems in the real world can't be disputed, and I can tell you for a fact they definitely work by Induction. So if Popper and Deutsch say that Induction doesn't work at all, then yes, they're flat-out wrong about that.

I like DD's book because he argues very well and he made excellent points about flaws in Induction. Too many AGI people think that Induction is the whole basis of epistemology, and those people are equally wrong.


marc.geddes@gmail.com at 5:35 PM on November 24, 2017 | #9341 | reply | quote

People in AGI typically have either read a few things about Popper from second-hand sources or do not engage with his ideas at all. For example:

https://arxiv.org/pdf/1105.5721.pdf

This 2011 paper is "A Philosophical Treatise of Universal Induction". Popper is not mentioned once. This is despite the authors trying to say some things about the history of induction. The authors do not seem to be familiar with Popper. If you want to write something about the history of induction it is unscholarly in the extreme to neglect Popper. Popper, after all, was the pre-eminent epistemologist of the 20th century; he wrote at length about the impossibility of induction and replied at length to arguments against his positions.

Other AGI people who know a little about Popper grossly misrepresent him - e.g., Yudkowsky - and raise objections that they don't realise Popper covered in his books. Popper addressed a great many objections. If you have an objection, Popper probably covered it somewhere.

Get a clue AGI people. Your ignorance is holding back your field.


Anonymous at 2:39 AM at 5:48 PM on November 24, 2017 | #9342 | reply | quote

@#9341 do you have arguments which point out where DD went wrong? or do you just ignore contrary viewpoints on the assumption that they must be false b/c you know you're right?


Anonymous at 6:13 PM on November 24, 2017 | #9343 | reply | quote

> PL, I'm curious if you read chapter 1 of FoR or skipped to 3. I found people at LW quite hostile to some ideas in ch1 about empiricism, instrumentalism, the importance of explanations, the limited value of prediction and the oracle argument, etc.

I read chapter 1. I liked it. I can see why some LWers wouldn't. However, I agree with DD that there's something beyond predictive ability in a good explanation -- the disagreement is about whether that "something beyond" can be accounted for in an inductive framework. In fact, I think that "something beyond" is important enough that I'd be motivated to abandon inductive frameworks if doing so let me understand understanding better. Probably most LWers don't think that. But, although the sequences were partly about collecting together and expositing fairly standard stuff like Bayes and evo psych, they were *also* about pointing to *holes* in our understanding of what intelligence is, which Eliezer argued *must* be filled if we are to survive the next century. Essentially, the sequences are explaining the best state of knowledge EY could describe, and then explaining why it is absolutely imperative to go beyond that.

(On the other hand, I would not want to lose the advantages of the inductive framework, and I still feel that DD and similar "aren't playing the same game" because they don't give a theory of knowledge that could do work as an algorithm.)

DD seems to like the analogy between epistemics and evolution quite a lot. I wonder if he's aware of the close analogy between Bayesianism and evolutionary theory. Bayesian epistemology is like an evolutionary competition between hypotheses. Mathematical details here:

https://projecteuclid.org/euclid.ejs/1256822130
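
(A quick sketch of the analogy with made-up numbers: a Bayes update is formally one round of fitness-proportional selection, where the likelihood plays the role of fitness.)

```python
# Toy sketch: Bayesian updating as an evolutionary competition between
# hypotheses. Prior probabilities are "population shares", likelihoods are
# "fitness", and the update is fitness-proportional selection.
# The numbers are invented for illustration.

priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}
likelihoods = {"H1": 0.10, "H2": 0.40, "H3": 0.25}  # P(data | H)

weighted = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(weighted.values())                      # P(data)
posteriors = {h: w / total for h, w in weighted.items()}

print(posteriors)  # hypotheses that fit the data better gain "population share"
```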


Anonymous at 7:30 PM on November 24, 2017 | #9344 | reply | quote

> DD seems to like the analogy between epistemics and evolution quite a lot.

that's not an analogy. evolution is *literally* about replicators (with variation and selection). ideas are capable of replicating, not just genes.

> because they don't give a theory of knowledge that could do work as an algorithm

no one has that yet. no one is even close to writing AGI. there is no general purpose software that deals with explanations and criticism – the kinds of intelligent thinking humans can do. so you can't differentiate epistemologies by them not having succeeded at AGI yet.


curi at 7:42 PM on November 24, 2017 | #9345 | reply | quote

(#9344 was me, forgot to enter the username.)

Posts #9263-#9266 (mainly several examples of when you don't feel obligated to respond further):

My takeaway: PF has a different style of dealing with trolls, which is not a troll-hunt and not exactly classic "don't feed the trolls", but does permit one to stop engaging with a troll. And "troll" is operationalized in a fairly direct way as someone violating your discourse norms, rather than in a highly subjective way like "someone not engaging in good faith".

Ok, fair enough. That seems like a good way of dealing with the problem, which I would likely endorse if I endorsed the rest of PF.


PL at 7:50 PM on November 24, 2017 | #9346 | reply | quote

You can't just look at the forest...you have to see the trees

@9343, anon

DD's a theoretician, so he's likely to be concerned with the general principles behind things rather than specific cases.

For instance, I could talk about chairs in an *abstract* way (I could say that chairs are 'all things that can be used for sitting down on, have seats and backs etc.'), or I could talk about a *specific* chair (for example, I could point to a chair and say that that one chair is colored brown, has legs in a specific orientation etc.)

Now it's all very well to find good explanations about reality without induction as DD wants, but the trouble is that such explanations would be very abstract. On the face of it, in order to actually do something that's useful on a *practical* level, I really can't see how you can avoid induction.

You have to relate the general abstract concept of 'chair' to *specific* cases, and really, you can't do that without pointing to many specific cases and grouping them together. Abstract explanations about chairs can't be applied to practical reality unless you can identify all the specific objects in the room to which your explanation applies. You could point to all the specific chairs in a room and label them 'chairs', but this is basically Induction!

So you have a tension between the abstract descriptions of things on the one hand (Deduction), and specific instances of things on the other hand (Induction). So what I'm saying is that David may have missed the trees by focusing too much on the forest ;)

But ultimately, neither Induction nor Deduction can work, because they both rely on formal (purely mathematical) procedures. However, the real world is too complicated to be captured by any purely formal (mathematical) methods. So you need approximation methods to deal with this complexity.

Also, the real world involves a notion of 'time' that's missing from purely mathematical formalisms like Deduction and Induction.


marc.geddes@gmail.com at 10:17 PM on November 24, 2017 | #9347 | reply | quote

> I really can't see how you can avoid induction.

Your lack of understanding of how DD's philosophy (Critical Rationalism) works is not an argument. Do you want to understand it or just hate and ignore it while claiming to love the books? Nothing you are saying is a refutation; it just demonstrates your ignorance of DD's positions.

What should I do at this point? I could summarize it but you already read and didn't understand books explaining it. So I think something else is needed, such as you trying to *use quotes from the books* and point out the mistakes that led DD to such a wrong viewpoint. This is awkward though b/c you claim to be an *ally* while completely rejecting DD's views.

So, I guess: what are you hoping to accomplish here? Why did you join this discussion?


Anonymous at 10:53 PM on November 24, 2017 | #9348 | reply | quote

> https://projecteuclid.org/euclid.ejs/1256822130

you need a philosophy before you can decide what your math details mean.

the disagreement isn't so much about the right epistemology, but Bayesians don't even know what epistemology is and don't even try to address its problems. they are doing something else, which they misname, b/c they don't realize philosophy even exists. but they aren't avoiding philosophy: they just do philosophy with bias, unconscious assumptions, common sense, etc. philosophy is unavoidable b/c it deals with issues like how to judge arguments, how to learn, how to evaluate ideas, what decisions to make in life, and what to value.


Anonymous at 10:59 PM on November 24, 2017 | #9349 | reply | quote

> But, although the sequences were partly about collecting together and expositing fairly standard stuff like Bayes and evo psych, they were *also* about pointing to *holes* in our understanding of what intelligence is, which Eliezer argued *must* be filled if we are to survive the next century.

sure, but he doesn't poke holes in the CR conception of intelligence, which he is unfamiliar with.

also evo psych is really silly. they tell stories about how X would have had survival value for tribal hunter gatherers. but they ignore the fact that they could also tell a story for Y or Z. it's selective attention used to justify the conclusions they already decided on, rather than something that helps you figure out anything new.


Anonymous at 11:03 PM on November 24, 2017 | #9350 | reply | quote

also evo psych assumes selection pressures on psychology were met by genes instead of memes. this is broadly false, and people should start studying memes more. David Deutsch is the only person who's said something really interesting about memes: his theory of rational and anti-rational memes presented in BoI (it was developed 20 years prior, btw, but no one cared).


Anonymous at 11:04 PM on November 24, 2017 | #9351 | reply | quote

http://raysolomonoff.com/publications/chris1.pdf

> We will describe three kinds of probabilistic induction problems, and give general solutions for each, with associated convergence theorems that show they tend to give good probability estimates.

> The first kind extrapolates a sequence of strings and/or numbers.

> The second extrapolates an unordered set of strings and/or numbers.

> The third extrapolates an unordered set of ordered pairs of elements that may be strings and/or numbers. Given the first element of a new pair, to get a probability distribution on possible second elements of the pair.

> Each of the three kinds of problems is solved using an associated universal distribution.

All of this completely misses the point. This is not philosophy, and philosophy is prior. You have to use philosophy to evaluate stuff like this.

This isn't engaging with the 2000 year old tradition of what induction is. It doesn't discuss the philosophical history, the important problems, and then solutions to those problems. Nor does it engage with contrary philosophy views.

This is *not even part of the discussion*. It's just *assuming* various philosophy, *as unstated premises*. CR disputes those premises, has extensive criticism of them, and proposes that people ought to try to formulate them so they may be better considered instead of just silently assumed.

There isn't really anything else to say. None of this matters b/c it's not a reasonable *starting point*, and we need to start at the start to resolve this disagreement and figure things out. And the basic thing that's happened is people with awful starting points talk about implications, and this is a whole big waste of time built on undiscussed, prior errors. And people don't even realize they're doing this, and don't want to talk about it.

You have to know what induction is, and what epistemology is, before you can expect to offer relevant mathematical solutions. This entire paper just assumes an unstated framework.

(Is there some other writing which addresses the philosophical issues? If so please link *that*. If not, why expect the philosophy behind the Bayesian stuff to be any good?)

> As to the charge that induction is impossible, Solomonoff only seeks to do induction as well as any computable procedure can.

So, none. But before we get there, you need to do things like actually talk about *what the problems in epistemology are, as you see them*, then what induction is and how it solves them. And btw, whatever you say, I'm going to ask how you figured that out (by induction?).

> I claim an attitude of "I can't justify this right now, but it still seems like the best thing I can do, so I'm going forward with it" is needed.

claims of that sort (with reasoning) are fine, but should themselves be open to error correction. otherwise you can easily waste your whole life unnecessarily.

> chesterton-fence

good criticisms typically consider: what is the problem the idea is supposed to solve, how is it supposed to solve it, and why doesn't that work. if you don't understand the purpose of an idea, you *can't* criticize it b/c you don't know whether it succeeds or fails at that purpose.

> you don't discard a system *just* because there's a critique you can't respond to.

"i am still thinking that over" is a response. many meta responses are possible. if you have literally no response, you do discard the system. any criticism of discarding the system now is a response!

discarding something is an action. it requires arguments to get from the current state of the intellectual debate to what actions to take. ppl normally just make assumptions here without any discussion, which does OK in simple cases, but is imprecise. a flawed system should not be used for things it won't work for, but need not be discarded entirely – it can be used for other purposes, including as a starting point for further research (contrary to the idea of discarding it). arguing that something should be entirely abandoned, and given no further attention, is a rather different thing than merely pointing out why some part of it doesn't work in some contexts.

i'm familiar with the issues you're bringing up. we handle them in CR-compatible ways instead of with vague common sense that doesn't really seem to have anything to do with induction (but how else would you know such things?).

> You have to *also* have a better alternative, which seems better to *you*, all things considered (not just better according to the critic).

yes but *better* is very vague. better for what? you need to keep careful track of what the problems of interest are, and which ideas do and don't solve which of the problems. the standard model of just considering some ideas universally better or correct is *wrong*, this stuff is contextual. (some contexts are broader, maybe even universal, but you have to pay attention to that instead of just casually speaking of generic "better alternatives".) for the intellectual problem, thinking "i don't know" is a better alternative than believing something you know a flaw in is actually true, contrary to your own knowledge of its falseness. but when it comes to action you must consider what the consequences of the flaw are. some flaws have limited importance – they make an idea wrong in some way but don't make it useless for solving *any* problems.

> From my perspective you continue to conflate personal knowledge with publicly justifiable knowledge.

what's the difference? is your concern that ppl will accuse you of lying?

"PL believes he witnessed X" is easily publicly sharable and something ppl can agree on and reach the same evaluation of. In general, there's no need for a witness to reach a different conclusion than me.

the standards for evidence you use in your own mind should be the same ones as the public ones. why would you want any difference? that sounds to me like bias.

also FYI CR doesn't involve justifying ideas at all. the states of ideas are *refuted* and *non-refuted*. but justification is one of the major standard things that CR disagrees with. even if you accept non-binary evaluations (Popper is a bit vague on this, didn't fully develop that issue), there's still no *positive* support/justification (this much he makes clear).

you can, as always, say why you don't want to discuss something (such as whether you're lying). you can go to meta levels of discussion to e.g. protect your privacy. in that case, you even agree that e.g. "Joe believes he knows X, but some key evidence related to X is private. Bob agrees with the privacy argument, so has no access to the evidence. So they agree that Joe should accept X and Bob should not accept X, at this time."

> https://staff.science.uva.nl/u.endriss/teaching/lolaco/2014/papers/Baltag.pdf

this paper takes a million philosophical claims for granted, as unstated premises, and is therefore basically irrelevant. these people act like philosophy doesn't exist or doesn't matter – and so they end up working with the wrong philosophical premises.

> So in order to come to agree about this, we have to discuss Bayesianism itself. For that, I refer to my starting comments.

which comments? please quote or otherwise specify exactly.

---

i think i'm caught up now. if u want a reply to anything else, you'll have to tell me.


curi at 11:54 PM on November 24, 2017 | #9352 | reply | quote

If MIRI had any sense they would take Elliot Temple on as a consultant philosopher. They should want the best and ET is currently the world's best philosopher. His ideas and the ideas from the traditions he stands in are needed for AGI to make any progress.

Of course MIRI will not take ET on. They are into status and think ET is low status. ET doesn't publish in academic journals or seek fancy university positions because doing things like that would entail doing stuff he does not agree with. Instead, ET does Paths Forward and no philosopher in the 21st century has exposed their ideas as much as ET. He has written way more than Popper and Deutsch and has a deeper understanding of philosophy than either of them.

The world needs to wake up and start paying attention to ET. He has ideas of the utmost critical importance.


Elliot Temple Fan at 12:24 AM on November 25, 2017 | #9353 | reply | quote

> ET doesn't publish in academic journals or seek fancy university positions

BTW Popper and DD already tried some of that stuff. But MIRI isn't impressed. DD has tried talking with lots of people and his prestige doesn't actually get them to listen. This demonstrates prestige isn't actually the issue.

> The world needs to wake up and start paying attention to ET. He has ideas of the utmost critical importance.

perhaps that could start with you. could you do more to pay attention, learn the ideas, contribute, spread them, etc?


curi at 12:29 AM on November 25, 2017 | #9354 | reply | quote

Yes, I agree that prestige isn't actually the issue. I think the best way to get people to pay attention to CR is by continuing to make progress in it so that you end up solving a major problem. I have in mind AGI. CR is the only viable set of ideas that could solve this problem. You know some of the ingredients, such as evolution by guessing and criticism and Yes/No philosophy. You are smart and persistent. You have set your life up so you don't have to do other things. You are one of a tiny minority of people that know CR in depth. You know programming. The problem is huge and interesting. Fucken solve it man. You might be closer than you think. The world will pay attention to CR then.

Yes, I know Popper solved a major problem when he invented CR and the world did not take notice. Having an AGI sitting in your face is a different story though!


Elliot Temple Fan at 11:57 PM on November 25, 2017 | #9355 | reply | quote

> Yes, I agree that prestige isn't actually the issue. I think the best way to get people to pay attention to CR is by continuing to make progress in it so that you end up solving a major problem. I have in mind AGI.

I think AGI is too hard for a demonstration. I don't expect to be at the point where we should even *start coding* within the next 20 years – more if FI/CR doesn't start catching on.

I also think AGI is the wrong problem to work on in general, though I agree it's better as a demonstration than some others. The most crucial thing is to help *humans* be better at thinking – fix parenting/education/philosophy stuff. From there, everything follows cuz we'll have millions of people able to effectively work on AGI, ending aging, and whatever else.


curi at 12:50 AM on November 26, 2017 | #9356 | reply | quote

My discussions with Less Wrong are now available as a PDF:

http://curi.us/ebooks

doesn't include the recent slack chats (which you can find in the FI yahoo group archives if you want)


curi at 2:33 PM on November 26, 2017 | #9357 | reply | quote

> I think AGI is too hard for a demonstration. I don't expect to be at the point where we should even *start coding* within the next 20 years – more if FI/CR doesn't start catching on.

>

> I also think AGI is the wrong problem to work on in general, though I agree it's better as a demonstration than some others. The most crucial thing is to help *humans* be better at thinking – fix parenting/education/philosophy stuff. From there, everything follows cuz we'll have millions of people able to effectively work on AGI, ending aging, and whatever else.

How are you going to fix parenting/education/philosophy? You're stuck right? You're making approximately zero progress. It seems that problem has got you beat, at least for the moment. You don't know how to help people be better at thinking. But AGI is a problem I think *you* can at least make some progress on. And maybe substantial progress. There is no-one else in the world better positioned than you right now. Are you not burning with curiosity to know how intelligence might work?

As a step to AGI, you could start with the problem of how to make an evolutionary algorithm that doesn't rely on knowledge from the programmer. Do you think that problem is beyond you, that that is a harder problem than helping people be better at thinking? It could be easier because the evolution problem is not going to use creativity against you.

Yes, we need people to think better. You think that solving this problem leads to the solution to other problems such as AGI. But what if one of these other problems could be solved faster and that is a route into getting people to think better? It may be that things proceed oppositely to how you say. You don't know that helping people think better is easier or harder than doing AGI. You do know, however, that helping people think better is incredibly difficult. It would be a shame to have someone of your talents stuck on a problem for years when there are other substantial problems you could make faster progress on. And it may turn out better for humanity for you to make progress on those problems.


Anonymous at 3:26 PM on November 26, 2017 | #9358 | reply | quote

> How are you going to fix parenting/education/philosophy? You're stuck right? You're making approximately zero progress.

I made YESNO ( https://yesornophilosophy.com ) recently, i think that's good progress on better communication of ideas *and* also had some philosophy progress. I have just been talking with LW ppl and making notes about Paths Forward stuff, so there's some progress towards communicating PF better. I recently did some Atlas Shrugged close reading and I plan to do more of that (there's a problem there which i need to solve, but i think it's different than the stuff that's been hard to solve for years). https://learnobjectivism.com/atlas-shrugged-chapter-1

i did a bunch of podcasts and made gumroad stuff semi recently, and i write new things for blog and FI forum routinely.

i decided to improve my web presence and make some new content as part of that. i have been making progress on this, e.g. this now exists: https://elliottemple.com/reason-and-morality

> But AGI is a problem I think *you* can at least make some progress on.

making YESNO was the best thing i could have done for AGI progress. working on epistemology is *how* to make progress towards AGI. you seem to think i could make progress on AGI more directly, right now, but i don't think i know how to (and, to be clear, i don't think anyone else knows more).

also btw – and i'm guessing you agree with this – i could make a lot of progress on AGI but if i didn't actually *finish the project and succeed at making one* then there could easily be very little recognition/acknowledgement/etc. so just making progress on it might not get me anywhere in terms of getting ppl to take notice of CR as you were talking about earlier.

> Are you not burning with curiosity to know how intelligence might work?

not particularly, compared to a variety of other things i'm also curious about. i don't think there's a fundamental mystery there. i already know how intelligence works in outline, rather than there being something where i'm like "i have no idea how that's even possible".

also, btw, working on AGI would not solve some problems i have, like lack of colleagues, in the short/medium term. it'd make them worse. and anyway *i am not stuck philosophically*, and epistemology is the most important and my favorite (this isn't a coincidence) and is crucial to many different projects (including CR advocacy, TCS and AGI).

---

my guess is i should do less outreach and ignore people and the world more (if they come to me that's fine, that self-selected group is way better to deal with), and make more philosophy projects like YESNO and my new websites.

when writing you need an audience/context in mind to let you make decisions. like which stuff to include depends on who is going to read it, what they are interested in, what misconceptions they have, what background knowledge they have, etc.

there are various problems with writing for a standard, conventional "intellectual" audience. one is they have too many misconceptions, and too little background knowledge, and that gets in the way of focusing on the topic i'm trying to write about.

and there are various problems finding decent people to talk with.

so i think i should target material more like:

- what DD would have liked reading in 2005

- what i think is good, what i'd want to read

- what i think a 12 year old would like (an honest, rational, curious person who doesn't know a lot)

- what i would say to Ayn Rand or William Godwin if they were alive

- what i think a very reasonable person would appreciate

i already have lots of material like this, e.g. the FI website. but i could decide to focus on this more and talk with dumb ppl (roughly everyone) less. but i *like* conversations, i'm good at them, ppl are interesting... i don't want to just ignore your comments, i'd rather write this reply! so i don't have this all figured out. you did come to me though (i guess? idk who you are but you are on my blog).

i think that, as usual, what i need is more purity and self-confidence (like Roark could have used more of that, not less). i've thought this for a long time and i have been working on it. certain things make it hard, there are various issues to work out. e.g. in some sense i should treat my inferiors as inferiors instead of peers, but there's some incompatibility with that and reason/PF to sort out. i tend to default to just treating everyone as equals and taking them seriously, partly in ways that don't work out great when it turns out they suck, they're dishonest, they're biased, and they don't know anything.


curi at 3:53 PM on November 26, 2017 | #9359 | reply | quote

Fitness Landscapes

Critical Rationalism is an excellent epistemology with which I'm very familiar. In fact it was discussed quite a lot by Max More decades ago on Extropy lists. I'm pretty sure that Yudkowsky would also be very familiar with it too, since he used to frequent the Extropy lists.

The key insights are that there has to be competition between competing ideas, and that you need good 'error-correction' mechanisms to weed out bad ideas and improve good ones. In the CR picture, truth is open-ended in that there's always more to discover and you should never assume you've reached absolute truth.

In terms of epistemology, I think CR's probably way better than almost anything else out there. You can't go too far wrong with it.

I didn't really want to argue about Induction, my main purpose was just to indicate a few key ideas about AGI, since there's interest in the topic.

---

If you're thinking about AGI:

It's an interesting point that the basic principles behind the deepest scientific ideas are all actually very simple to state! Whether it's plate tectonics, evolution or the theory of relativity, you can actually sum up the *general principles* in a few lines. It's only the technical mathematics that's complex, *not* the general principles.

So the basic principles behind AGI should be very simple.

The key idea I think is that of an 'informational landscape'. Consider an analogy: If we think of general relativity and how it's based on an actual *physical geometry*, then I think the key principle behind AGI can be summed up in terms of an *abstract* space - something that's analogous to a physical geometry, but is instead an abstract *information geometry*.

So there's an abstract space of possibilities and what intelligence basically does is search through this space and pick out some limited region that corresponds to the actions it wants to take. In other words, it's *constraining* or *optimizing* reality by picking out a limited region from the space of all possibilities.

The notion of a 'fitness landscape' is key here I think:

https://en.wikipedia.org/wiki/Fitness_landscape

Look at the pics of those 'landscapes' in the Wikipedia articles. If you imagine a space of possibilities along one axis, and an *objective function* along another axis, then I think you've grasped what AGI really is. The objective function at each point is simply a measure of 'how good' each possible point (outcome) in the space is. So you want the AGI to pick out the best points (the peaks or valleys in the information space).
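
To make that picture concrete, here's a minimal toy sketch in Python (the 1-D 'possibility space' and the particular objective function are arbitrary illustrations, nothing more):

```python
# Toy version of the "fitness landscape" picture: a space of possible
# points, an objective function scoring how good each one is, and a
# search that picks out the best points. The objective is an arbitrary
# example, not anything specific to AGI.
import random

def objective(x):
    # Stand-in measure of 'how good' a point in the possibility space is.
    return -(x - 3.7) ** 2 + 10

def pick_best_points(candidates, top_n=3):
    # 'Constrain' the space by keeping only the highest-scoring points.
    return sorted(candidates, key=objective, reverse=True)[:top_n]

space = [random.uniform(-10, 10) for _ in range(1000)]
print(pick_best_points(space))
```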

How does one get to a fitness landscape? One idea is to start with a geometrical/topological object called a *Manifold*:

https://en.wikipedia.org/wiki/Manifold

One could then try to model a fitness landscape as parts of this manifold, using the developing science of what's called *Information Geometry*:

https://en.wikipedia.org/wiki/Information_geometry

There was a recent interesting attempt to model the neural networks of the human brain using Information Geometry, article here:

https://blog.frontiersin.org/2017/06/12/blue-brain-team-discovers-a-multi-dimensional-universe-in-brain-networks/


marc.geddes@gmail.com at 5:40 PM on November 26, 2017 | #9360 | reply | quote

re #9360

> Critical Rationalism is an excellent epistemology with which I'm very familiar. In fact it was discussed quite a lot by Max More decades ago on Extropy lists. I'm pretty sure that Yudkowsky would also be very familiar with it too, since he used to frequent the Extropy lists.

You're obviously not familiar with CR since you ridicule what it says about induction and didn't seem to even be aware of one of its main positions. You neither answer nor accept it, while claiming to like it. This latest comment is more of the same.

Yudkowsky is not familiar with CR as demonstrated by his writing on the matter. See:

https://conjecturesandrefutations.com/2017/11/22/yudkowsky-on-popper/

http://curi.us/2063-criticism-of-eliezer-yudkowsky-on-karl-popper

> In terms of epistemology, I think CR's probably way better than almost anything else out there. You can't go too far wrong with it.

> I didn't really want to argue about Induction, my main purpose was just to indicate a few key ideas about AGI, since there's interest in the topic.

You were indicating false ideas refuted by CR while claiming to be a friend of CR. Meanwhile, rather than learn CR, you claim to already know it. What a dumb way to shut down truth-seeking! Rather than debate CR you reject it while saying you like it and that it's pretty good – but it has flaws you won't specify (that aren't *too* big).

---

Your comments on AGI are not contributing anything. The underlying problem is you don't know nearly enough epistemology, and don't want to, so you lack the expertise to talk about AGI. So your comments are a mix of wrong, boring, and irrelevant.


Anonymous at 5:47 PM on November 26, 2017 | #9361 | reply | quote

> In fact it was discussed quite a lot by Max More decades ago on Extropy lists.

secondary sources are inadequate. this stuff is *hard to understand*. you're massively underestimating how complex it is, and what it takes to understand it, and then you claim you're done and stop trying to learn.


Dagny at 5:49 PM on November 26, 2017 | #9362 | reply | quote

Yes/No philosophy and Paths Forward are great stuff. But those ideas are from some years ago and what's the uptake of those ideas been? Are you convincing enough people that you will have your millions in 20+ years? I don't think so. Your ideas have maybe been taken on board by a handful of people. And what are those people doing? Much less than you. The practical effect of your ideas has been what? And the best ideas you can come up with now are what? To create new websites, sell stuff on gumroad, do better targeting etc. How much difference is it going to make really?

> making YESNO was the best thing i could have done for AGI progress. working on epistemology is *how* to make progress towards AGI. you seem to think i could make progress on AGI more directly, right now, but i don't think i know how to (and, to be clear, i don't think anyone else knows more).

So the first problem then is figuring out how to make progress more directly. You're sort of like Darwin before the discovery of genetics. But you're better positioned than him because the discovery of genetics required advances in science. What we need to figure out intelligence is already in our hands. What is lacking are ideas. But you kind of know some of the problems, like gaining a detailed understanding of how criticism actually works. I agree that Yes/No philosophy is the best thing you have done for making AGI progress and working on epistemology is how to make progress towards AGI. Solve things like understanding criticism much better and you are making *direct progress* to AGI.

> also btw – and i'm guessing you agree with this – i could make a lot of progress on AGI but if i didn't actually *finish the project and succeed at making one* then there could easily be very little recognition/acknowledgement/etc. so just making progress on it might not get me anywhere in terms of getting ppl to take notice of CR as you were talking about earlier.

True, but how do these risks compare with the risks of your current project of getting people to think better?


Anonymous at 7:31 PM on November 26, 2017 | #9363 | reply | quote

> Yes/No philosophy and Paths Forward are great stuff. But those ideas are from some years ago and what's the uptake of those ideas been?

What? I put out YESNO this year. I've developed the ideas over time but I only just put out a well-organized version of it. Previously it never even had a single long, clear essay like Paths Forward had. Plus some parts of YESNO are new this year, e.g. decision charts. And generally I put it all together better and now understand it better – which is one thing I intend to do with some other issues too for the new websites.

If you're going to complain about uptake, better examples are the FI website (big group of related short essays that overall function like a short book) which is originally from Feb 2010 (some additions are later but the core essays were all there day 1), or complain about TCS/CR/Oism uptake.

The FI website is more than good enough that it should have become popular. DD used to tell me "build it and they will come" but that was the first counter example i made that i consider super clear on the point. I've found the FI website is liked by the *best* people who read it, but most people don't want it despite it being significantly better written than DD and Popper's books (it's easier reading, simpler, clearer).

This is important and YESNO does not really try to solve this problem. But I like YESNO anyway so whatever.

> do better targeting etc

i think you misunderstood that part. i was talking about a problem of how to write that works well for me, not how to target it to be popular. targeting of some sort is a *necessary* part of writing, and lots of standard targeting stuff is wrong and bad, so i have to make improvements there in order to write publicly at all.

> Are you convincing enough people that you will have your millions in 20+ years?

Not by default just from growth. Gotta create new knowledge, not just ride some gradual, automatic ramp upwards.

One of the things I've been learning is the problem is harder than I understood – and how/why. Getting a good grip on the problem is important. I'm maybe getting to the point where I have that.

> How much difference is it going to make really?

1) it's hard to know when marginal improvements (to content or communication) will lead to a large difference in outcome and start snowballing

2) you never know when you'll have a substantial new idea. i do expect to have more of those.

3) i specifically want to make better *organized* material up to my *current writing and thinking standards*. i want that to exist, i think some people will like it, and it will help my own understanding to make it.

i don't think you have a better idea or a clear plan for what i should do to work on AGI. which i don't expect you to have (it's hard to plan someone else's life when you don't know various details of it), but i don't have it either, so i don't know of any better plan there, and i see a variety of problems with it.

> True, but how do these risks compare with the risks of your current project of getting people to think better?

i think i should do good work in epistemology and some related things, whether many people like it or not. i won't regret that even if it remains unpopular.


curi at 8:56 PM on November 26, 2017 | #9364 | reply | quote

Here's a problem related to YesNo philosophy that I'd appreciate some suggestions about.

Let's suppose we have a kind of evolutionary algorithm that generates a set of guesses and subjects those guesses to a set of tests. The outcome of a test is either pass or fail. Failing a test means the guess is eliminated. Passing all tests enables the guess to replicate itself to the next generation, possibly with mutations.

Suppose further that the current population of guesses has passed n-1 tests and we now add test n to the mix. It may be the case that some of the current generation can pass test n. In that case fine, but it may be that none do. In that case, we need to continue with the current population and hope that a beneficial mutation sorts things out. But if we are constrained by the requirement that all prior tests must still pass then the odds of a beneficial mutation enabling test n to pass diminish rapidly with n.

Is there a way to get better progress?

I think the requirement that all previous tests pass must be dropped and only enforced at the end. In the process of evolving the solution that passes all tests we need a selection process that tries to increase overall knowledge of each gen as we add tests. Like suppose candidate X passes test n but no other tests and none of the other candidates pass test n while passing the other tests. Then X has some knowledge none of the other candidates do and maybe should go into the mix for the next gen. So my problem is how to decide this. I'm thinking maybe some kind of matrix of candidates versus tests passed/failed is needed. But I don't know how to score this in order to decide which candidates get through to the next gen.
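
To illustrate the kind of matrix I mean, here's a rough sketch of one possible rule (just a guess at a scoring heuristic, which may well be wrong): greedily keep candidates that add tests not yet covered by the survivors, so a candidate like X that uniquely passes test n gets through.

```python
# Sketch of a coverage-based selection rule over the candidate/test
# matrix. results maps each candidate to the set of tests it passes.
# This is just one guessed heuristic, not a worked-out answer.
def select_by_coverage(results, max_survivors):
    survivors, covered = [], set()
    remaining = dict(results)
    while remaining and len(survivors) < max_survivors:
        # Prefer the candidate that adds the most not-yet-covered tests.
        best = max(remaining, key=lambda c: len(remaining[c] - covered))
        survivors.append(best)
        covered |= remaining.pop(best)
    return survivors

# X passes only test n; A and B pass various earlier tests.
results = {"X": {"test_n"}, "A": {"t1", "t2"}, "B": {"t2", "t3"}}
print(select_by_coverage(results, max_survivors=2))  # keeps A and X
```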


Yes/No philosophy and Evo Algorithm problem at 2:36 AM on November 27, 2017 | #9365 | reply | quote

Wow


Anonymous at 5:32 AM on November 27, 2017 | #9366 | reply | quote

#9365

as a matter of optimization, why not *only generate new ideas constrained by all the existing criticism*? (though just doing a LOT of new ideas in the usual manner accomplishes the same thing, but you don't like that b/c of the LOT part, which is what you object to with "odds ... diminish rapidly")

and, separately, you need to pay attention to the context of criticisms. suppose we're dealing with architecture plans for a home, and some set of plans is refuted by a criticism. what is the criticism saying? it says don't build the home in that way. those plans won't make a good home, so don't accept them (or any other plans sharing the same criticized feature) and begin construction. but that criticism does NOT say "don't use those plans as a starting point to modify to get a viable home" (some *other* criticism could perhaps say that).

when i wrote a toy ascii dungeon generator, i let it continue for some number of variations even if they scored lower, in order to allow some multi-step changes to be viable that wouldn't work well every step of the way. this is no big deal. alternatively you can just allow large variation as a single step.
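
the "let it continue even if it scored lower" part is roughly like this (a simplified sketch, not my actual generator code – `mutate` and `score` stand in for whatever the real program does):

```python
# Allow a variant to survive a few lower-scoring steps so that
# multi-step changes get a chance, instead of requiring every single
# step to be an improvement.
def evolve(seed, mutate, score, generations=1000, patience=3):
    best, best_score = seed, score(seed)
    current, slack = seed, patience
    for _ in range(generations):
        candidate = mutate(current)
        if score(candidate) >= score(current):
            current, slack = candidate, patience   # improvement: reset slack
        elif slack > 0:
            current, slack = candidate, slack - 1  # tolerated downhill step
        else:
            current, slack = best, patience        # dead end: restart from best
        if score(current) > best_score:
            best, best_score = current, score(current)
    return best
```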

FYI i don't think your comment is very productive compared to trying to understand epistemology more directly. learn way more about it before you try to formalize it so close to pseudocode. this isn't a good starting place for learning. it's not optimized for that cuz you're making things harder on yourself by trying to make your thinking more AGI math/code related.


curi at 10:37 AM on November 27, 2017 | #9367 | reply | quote

#9367

The toy version I have written currently attempts to evolve Turing Machines. The TMs must pass a suite of pre-defined tests. These tests are added into the mix one-by-one as candidates become available that have passed all prior tests. Sometimes, when no progress has been made for a number of generations, I allow some candidates through that have failed more than one test to allow multi-step changes in the manner you described. I also temporarily up the population-size and the mutation rate and do other things like tournament selection. These are fine up to a point. Maybe I have to go way higher with pop-size as you suggest.
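
In outline (a simplified sketch, not my real code), the loop is something like this, with `step` doing one generation of mutation/selection and `passes` checking a candidate against the active tests:

```python
# Tests get unlocked one at a time; when progress stalls for a while,
# the next generations run "boosted" (bigger population, higher
# mutation rate, etc.). step and passes stand in for the real code.
def run(all_tests, init_pop, step, passes, stall_limit=50):
    pop, active, stalled, boost = init_pop, 1, 0, False
    while active <= len(all_tests):
        pop = step(pop, all_tests[:active], boost=boost)  # one generation
        if any(passes(tm, all_tests[:active]) for tm in pop):
            active += 1                      # someone passed: unlock next test
            stalled, boost = 0, False
        else:
            stalled += 1
            boost = stalled >= stall_limit   # stalled: temporarily boost search
    return pop
```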

> and, separately, you need to pay attention to the context of criticisms. suppose we're dealing with architecture plans for a home, and some set of plans is refuted by a criticism. what is the criticism saying? it says don't build the home in that way. those plans won't make a good home, so don't accept them (or any other plans sharing the same criticized feature) and begin construction. but that criticism does NOT say "don't use those plans as a starting point to modify to get a viable home" (some *other* criticism could perhaps say that).

Good point. One can let through into the next gen mutated versions of candidates which have failed a test.

> FYI i don't think your comment is very productive compared to trying to understand epistemology more directly. learn way more about it before you try to formalize it so close to pseudocode. this isn't a good starting place for learning. it's not optimized for that cuz you're making things harder on yourself by trying to make your thinking more AGI math/code related.

I have actual code, not just pseudo-code. It's helped me see issues and think about evolution algorithms in better detail. Code is cheap. It's not like a big effort for me to code something up and try stuff out. I can do that and also learn more about epistemology. I'm not coding to learn epistemology, I'm trying to make something workable from what I know about epistemology. I'm also not doing some big AGI project. To do that I acknowledge I have way more to learn first.


Anonymous at 2:35 PM on November 27, 2017 | #9368 | reply | quote

the main hard/interesting part to begin with is how to code it at all, not how to make it work efficiently (as you talk about). and you aren't trying to deal with general purpose ideas (including explanations) meeting criticisms which are themselves ideas which are also open to criticism. i think you're overestimating what that kind of "evolutionary algorithm" has to do with epistemology. you're putting knowledge in as the programmer (like what are good criteria for TMs) instead of the software creating knowledge.

i said it's not time to start coding AGI, but you aren't even trying to code AGI! coding toy non-AGI projects is a separate issue that i don't object to (much – i don't think what you're doing is very effective for learning epistemology).


curi at 2:42 PM on November 27, 2017 | #9369 | reply | quote

#9368 what you're doing doesn't involve replicators. replicators cause their own replication in a variety of contexts, not just one tiny super-specialized niche (or else *anything* would qualify, if you just put it in a specially built factory/program for replicating that specific thing). so there's no evolution.

genes replicate in lots of environments, e.g. forests, tundra, grassland, mars colonies.

ideas replicate in lots of environments. you can share an idea with a different person from a different culture and he can brainstorm variants of it.

you can discuss the replication strategies of genes and ideas. there are a variety of strategies which make sense and have some efficacy. there are mechanisms by which they cause their replication. but with the "evolutionary" computer programs, you cannot discuss the replication strategies of the objects being "evolved" because there aren't any. this applies both to TMs meeting fixed criteria and ascii maps. (curi knows that, and doesn't think his toy program does evolution, he thinks it's an example of how "evolutionary" algorithms are misnamed.)


Dagny at 3:01 PM on November 27, 2017 | #9370 | reply | quote

> the main hard/interesting part to begin with is how to code it at all, not how to make it work efficiently (as you talk about). and you aren't trying to deal with general purpose ideas (including explanations) meeting criticisms which are themselves ideas which are also open to criticism.

I have considered how I might make the tests themselves "evolvable" but don't know a workable solution. Yes, I'm not dealing with general purpose ideas etc. That wasn't my goal. But like you said in the rest of your comment, I may therefore be overestimating how much this has to do with epistemology and calling something "evolution" that is not in fact evolution.

> you're putting knowledge in as the programmer (like what are good criteria for TMs) instead of the software creating knowledge.

I'm aware of the problem of the programmer putting in knowledge and have tried to minimize that. At present, the tests are only on the output of the TM. The tests are basically tests that the Turing Machine produces correct answers on the problem it is supposed to solve. The programmer nevertheless has to specify these tests. Other things are not currently changeable either such as the structure of the TM.


Anonymous at 5:12 PM on November 27, 2017 | #9371 | reply | quote

#9370

Good comment. It's possible to write a TM that replicates its own code, right? So maybe rather than the algorithm selecting TMs for copying, a requirement would be that the TM must be able to replicate its own code as well as pass whatever other tests it needs to pass? If you can't replicate, you die.


Anonymous at 5:29 PM on November 27, 2017 | #9372 | reply | quote

#9370

Adding to my comment above:

> what you're doing doesn't involve replicators.

Agree. The program copies the Turing Machines. There is copying, variation, and selection (CVS), not replication, variation, and selection (RVS). CVS is evolution of a kind though, right? If not, what should I call it? I don't want to keep putting scare quotes around evolution :)

> replicators cause their own replication in a variety of contexts, not just one tiny super-specialized niche (or else *anything* would qualify, if you just put it in a specially built factory/program for replicating that specific thing). so there's no evolution.

You are saying evolution involves replicators that cause their own replication in a variety of contexts. A la Deutsch. Evolution is also how knowledge is created. So are you saying CVS could not bring about knowledge?

> genes replicate in lots of environments, e.g. forests, tundra, grassland, mars colonies.

> ideas replicate in lots of environments. you can share an idea with a different person from a different culture and he can brainstorm variants of it.
>
> you can discuss the replication strategies of genes and ideas. there are a variety of strategies which make sense and have some efficacy. there are mechanisms by which they cause their replication. but with the "evolutionary" computer programs, you cannot discuss the replication strategies of the objects being "evolved" because there aren't any.

As I tried to indicate in my earlier response, this changes if the program itself is a replicator. And it is possible for TMs to self-replicate - that is computable! And possible also for the replication strategy to change as the TMs "evolve" and the problem situation changes. There is no reason I cannot use self-replicating TMs in my algorithm and this may help sort some other things out. Thanks for the food for thought.
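
For instance, with candidates represented as Python source strings rather than TMs (just a rough stand-in to show the idea, not a design I'm committed to), the replication requirement could be checked like this:

```python
# A candidate "replicates" if running it prints its own source code
# (a quine-style check). Candidates that can't replicate die; the rest
# face the usual test suite.
import io, contextlib

def replicates(source):
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(source, {})
    except Exception:
        return False
    return buf.getvalue() == source

def survives(source, task_tests):
    return replicates(source) and all(t(source) for t in task_tests)
```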

> this applies both to TMs meeting fixed criteria and ascii maps. (curi knows that, and doesn't think his toy program does evolution, he thinks it's an example of how "evolutionary" algorithms are misnamed.)

Actually I knew that. Perhaps they should be called "selection algorithms" or something. I don't claim my algorithm is doing evolution in the true sense. It's a step up from selection algorithms that use scalar-valued fitness functions. Mine uses a suite of tests instead. These tests just say yes/no. Also I am evolving programs - Turing Machines - not weights or values. There is a line to be crossed here, though, where it does become true evolution - where the process and evolving programs do create knowledge. I don't know how to cross that line yet though.


Anonymous at 12:58 AM on November 28, 2017 | #9373 | reply | quote

#9364

> What? I put out YESNO this year. I've developed the ideas over time but I only just put out a well-organized version of it.

Like many good new ideas, Yes/No has met with mostly silence and you know it. It’s good you have subsequently worked on the ideas, organised them better, and improved your understanding, and I wish you well in getting better uptake now. But I’m doubtful it will happen.

> If you're going to complain about uptake, better examples are the FI website (big group of related short essays that overall function like a short book) which is originally from Feb 2010 (some additions are later but the core essays were all there day 1), or complain about TCS/CR/Oism uptake.

Yes I could complain about those too! These are all closely inter-related sets of ideas and people hate them. The ideas are harder and more different than ideas they are used to and the ideas carry big implications for how people conduct their lives. Taken on board, the ideas entail people facing up to how crap their lives are and doing something about it. That's really difficult for nearly everyone.

> The FI website is more than good enough that it should have become popular. DD used to tell me "build it and they will come" but that was the first counter example i made that i consider super clear on the point.

Post-Everett, it seems odd for Deutsch to have said “build it and they will come”. And now you also built it and they did not come. Our world is such that the good may not become popular.

> One of the things I've been learning is the problem is harder than I understood – and how/why. Getting a good grip on the problem is important. I'm maybe getting to the point where I have that.

I guess if you can solve the problem that will be a major breakthrough. You will then be the world's best communicator!


Anonymous at 3:07 AM on November 28, 2017 | #9375 | reply | quote

Induction

> The whole basis of the field of machine learning is Induction - machine learning works by detecting patterns in vast amounts of data and then generalizing these patterns to form models that enable it to predict what happens next. It's pure induction in other words!

> The success of machine learning can't be disputed - it clearly does work to some extent. In many areas something close to or even better than human-level performance has been achieved, including machine vision, speech recognition, self-driving cars and game-playing - Deepmind's Alpha-Go system recently beat the world Go number-one Ke Jie.

> Machine-learning systems are inductive. And they do work. This is empirical fact, not opinion.

So is this machine learning an example of induction?


Anonymous at 6:36 AM on November 28, 2017 | #9376 | reply | quote

> Perhaps they should be called "selection algorithms" or something.

how about calling them evolution-like algorithms.

> These are all closely inter-related sets of ideas and people hate them. The ideas are harder and more different than ideas they are used to and the ideas carry big implications for how people conduct their lives. Taken on board, the ideas entail people facing up to how crap their lives are and doing something about it. That's really difficult for nearly everyone.

yeah

> Post-Everett, it seems odd for Deutsch to have said “build it and they will come”.

good point. i should have told him that. i did argue with him about the matter, but he didn't ever share good reasoning about this specific issue, so i decided he was mistaken.

> So is this machine learning an example of induction?

it's not learning. the success of machine-learning can be disputed by saying the knowledge came from the programmers, rather than the machine creating it. the machine is just a tool doing grunt work as directed by an intelligent designer who did all the learning.

its relation to induction is also questionable:

> > The whole basis of the field of machine learning is Induction - machine learning works by detecting patterns in vast amounts of data and then generalizing these patterns to form models that enable it to predict what happens next. It's pure induction in other words!

Literally any subset of the data is a finite pattern. Everything is a pattern and to assume otherwise is to follow your own intuitive biases about which patterns are more important than others. As to the patterns that can be generalized from any of those finite patterns, there are infinitely many (even with the constraint that they fit common cultural intuitions like it needing to be a repeating pattern or pattern where numbers increase by some simple mathematical calculation at each step, like counting up by 5's). So no, it doesn't work this way. There is some *pattern selection* method which is crucial and being glossed over.
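
here's a tiny concrete example of the underdetermination point: two rules that fit the same finite data exactly, then diverge.

```python
# Both rules reproduce the observed sequence 5, 10, 15, 20 exactly,
# yet they generalize it differently and predict different next terms.
def rule_a(n):
    return 5 * n                                          # "counting up by 5's"

def rule_b(n):
    return 5 * n + (n - 1) * (n - 2) * (n - 3) * (n - 4)  # agrees only on n=1..4

print([rule_a(n) for n in range(1, 6)])  # [5, 10, 15, 20, 25]
print([rule_b(n) for n in range(1, 6)])  # [5, 10, 15, 20, 49]
```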


curi at 10:01 AM on November 28, 2017 | #9377 | reply | quote

Paths Forward Policy

I've posted a Paths Forward policy statement.

http://curi.us/2068-my-paths-forward-policy


curi at 10:46 AM on November 28, 2017 | #9378 | reply | quote

> One criticism of falsificationism involves the relationship between theory and observation. Thomas Kuhn, among others, argues that observation is itself strongly theory-laden, in the sense that what one observes is often significantly affected by one’s previously held theoretical beliefs.

WTF. You can carefully explain a position you hold at length in many places accessible to the public and still there will be people who think you did not say that and who will try to use your position against you.


Anonymous at 2:28 PM on November 28, 2017 | #9380 | reply | quote

Understanding non-standard ideas is hard, especially when you don't want to.


Anonymous at 2:53 PM on November 28, 2017 | #9383 | reply | quote

less wrong comment copy/paste

http://lesswrong.com/lw/pk0/open_letter_to_miri_tons_of_interesting_discussion/dygv

I don't have a sock puppet here. I don't even know who Fallibilist is. (Clearly it's one of my fans who is familiar with some stuff I've written elsewhere. I guess you'll blame me for having this fan because you think his posts suck. But I mostly like them, and you don't want to seriously debate their merits, and neither of us thinks such a debate is the best way to proceed anyway, so whatever, let's not fight over it.)

> But then it's on you to first issue a patch into their brain that will be accepted, such that they can parse your proselytizing, before proceeding to proselytize.

People can't be patched like computer code. They have to do ~90% of the work themselves. If they don't want to change, I can't change them. If they don't want to learn, I can't learn for them and stuff it into their head. You can't force a mind, nor do someone else's thinking for them. So I can and do try to make better educational resources to be more helpful, but unless I find someone who honestly wants to learn, it doesn't really matter. (This is implied by CR and also, independently, by Objectivism. I don't know if you'll deny it or not.)

I believe you are incorrect about my lack of scale and context, and you're unfamiliar with (and ridiculing) my intellectual history. I believe you wanted to say that claim, but don't want to argue it or try to actually persuade me of it. As you can imagine, I find merely asserting it just as persuasive and helpful as the last ten times someone told me this (not persuasive, not helpful). Let me know if I'm mistaken about this.

I was generally the smartest person in the room during school, but also lacked perspective and context back then. But I knew that. I used to assume there were tons of people smarter than me (and smarter than my teachers), in the larger intellectual community, somewhere. I was very disappointed to spend many years trying to find them and discovering how few there are (an experience largely shared by every thinker I admire, most of whom are unfortunately dead). My current attitude, which you find arrogant, is a change which took many years and which I heavily resisted. When I was more ignorant I had a different attitude; this one is a reaction to knowledge of the larger intellectual community. Fortunately I found David Deutsch and spent a lot of time not being the smartest person in the room, which is way more fun, and that was indeed super valuable to my intellectual development. However, despite being a Royal Society fellow, author, age 64, etc, David Deutsch manages to share with me the same "lacks the sense of scale and context to see where he stands in the larger intellectual community" (the same view of the intellectual community).


curi at 1:47 PM on November 30, 2017 | #9393 | reply | quote

addition to previous comment

EDIT: So while I have some partial sympathy with you – I too had some of the same intuitions about what the world is like that you have (they are standard in our culture) – I changed my mind. The world is, as Yudkowsky puts it, *not adequate*. https://www.lesserwrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing


curi at 2:03 PM on November 30, 2017 | #9394 | reply | quote

another LW comment

http://lesswrong.com/lw/pk0/open_letter_to_miri_tons_of_interesting_discussion/dyh7

>> Deduction isn't an epistemology (it's a component)

> Yes, I was incorrect. Induction, deduction, and something else (what?) are components of the epistemology used by inductivists.

FYI that's what "abduction" means – whatever is needed to fill in the gaps that induction and deduction don't cover. it's rather vague and poorly specified though. it's supposed to be some sort of inference to good explanations (mirroring induction's inference to generalizations of data), but it's unclear on how you do it. you may be interested in reading about it.

in practice, abduction or not, what they do is use common sense, philosophical tradition, intuition, whatever they picked up from their culture, and bias instead of actually having a well-specified epistemology.

(Objectivism is notable b/c it actually has a lot of epistemology content instead of just people thinking they can recognize good arguments when they see them without needing to work out systematic intellectual methods relating to first principles. However, Rand assumed induction worked, and didn't study it or talk about it much, so that part of her epistemology needs to be replaced with CR which, happily, accomplishes all the same things she wanted induction to accomplish, so this replacement isn't problematic. LW, to its credit, also has a fair amount of epistemology material – e.g. various stuff about reason and bias – some of which is good. However LW hasn't systematized things to philosophical first principles b/c it has a kinda anti-philosophy pro-math attitude, so philosophically they basically start in the middle and have some unquestioned premises which lead to some errors.)


curi at 2:36 PM on November 30, 2017 | #9395 | reply | quote

http://lesswrong.com/lw/pk0/open_letter_to_miri_tons_of_interesting_discussion/dyhr

sample:

How would [1000 great FI philosophers] transform the world? Well consider the influence Ayn Rand had. Now imagine 1000 people, who all surpass her (due to the advantages of getting to learn from her books and also getting to talk with each other and help each other), all doing their own thing, at the same time. Each would be promoting the same core ideas. What force in our current culture could stand up to that? What could stop them?

Concretely, some would quickly be rich or famous, be able to contact anyone important, run presidential campaigns, run think tanks, dominate any areas of intellectual discourse they care to, etc. (Trump only won because his campaign was run, to a partial extent, by lesser philosophers like Coulter, Miller and Bannon. They may stand out today, but they have nothing on a real philosopher like Ayn Rand. They don't even claim to be philosophers. And yet it was still enough to determine the US presidency. What more do you want as a demonstration of the power of ideas than Trump's Mexican rapists line, learned from Coulter's book? Science? We have that too! And a good philosopher can go into whatever scientific field he wants and identify and fix massive errors currently being made due to the wrong methods of thinking. Even a mediocre philosopher like Aubrey de Grey managed to do something like that.)

They could discuss whatever problems came up to stop them. This discussion quality, having 1000 great thinkers, would far surpass any discussions that have ever existed, and so it would be highly effective compared to anything you have experience with.

As the earliest adopters catch on, the next earliest will, and so on, until even you learn about it, and then one day even Susie Soccer Mom.

Have you read Atlas Shrugged? It's a book in which a philosophy teacher and his 3 star students change the world.

Look at people like Jordan Peterson or Eliezer Yudkowsky and then try to imagine someone with ~100x better ideas and how much more effective that would be.


curi at 8:54 PM on November 30, 2017 | #9396 | reply | quote

think PL is ever coming back?

or will denying he wants shorter posts, and saying he'll discuss over time, be one of the last things he says before abruptly going silent without explanation (and therefore without Paths Forward for whatever the problem was)?

> > also other requests are welcome. e.g. if you want me to write shorter posts, i can do that.

> There's way too much text to easily address, but I think that's fine, the discussion can be slow. (IE, better a slow but in-depth discussion.)

PL hasn't posted for like a week now, and quitting right after saying you won't quit (and while refusing any steps to help you not quit) is pretty typical.

i don't think Marc Geddes is coming back. he was a hostile fool though.


Anonymous at 4:02 PM on December 2, 2017 | #9398 | reply | quote

someone spot me some LW Karma so I can do a post to the discussion area


Fallibilist at 10:05 PM on December 2, 2017 | #9399 | reply | quote

Crits on Draft Post

OK, while I'm waiting for Karma, any crits on the below.

Title: The Critical Rationalist View on Artificial Intelligence

Critical Rationalism (CR) is being discussed on some threads here at Less Wrong (here, here, and here). It is something that Critical Rationalists such as myself think contributors to Less Wrong need to understand much better. It is not only a full-fledged rival epistemology to the Bayesian/Inductivist one but it also has important things to say about AI. This post is a summary of those ideas about AI and also of how they speak to the Friendly AI problem. Some of the ideas may conflict with ideas you think are true, but understand that these ideas have been worked on by some of the smartest people on the planet, both now and in the past. They deserve careful consideration, not a drive-by dismissal.

Critical Rationalism says that human beings are universal knowledge creators. This means there are no problems we cannot in principle solve. We can create the necessary knowledge. As Karl Popper first realised, the way we do this is by guessing ideas and by using criticism to find errors in our guesses. Our guesses may be wrong, in which case we try to make better guesses in the light of what we know from the criticisms so far. The criticisms themselves can be criticised and we can and do change those. All of this constitutes an evolutionary process: like biological evolution, it is evolution in action. This process is *fallible*: guaranteed certain knowledge is not possible because we can never know how an error might be exposed in the future. The best we can do is accept a guessed idea which has withstood all known criticisms. If we cannot find such an idea, then we have a new problem situation about how to proceed and we try to solve that.[1]

Critical Rationalism says that an entity is either a universal knowledge creator or it is not. There is no such thing as a partially universal knowledge creator. So animals like dogs are not universal knowledge creators — they have no ability whatsoever to create knowledge. What they have are algorithms pre-programmed by biological evolution that can be, roughly speaking, parameter-tuned. These algorithms are sophisticated and clever and beyond what humans can currently program, but they do not confer any knowledge creation ability. So your pet dog will not move beyond its repertoire of pre-programmed abilities and start writing posts to Less Wrong. Dogs' brains are universal computers, however, so it would be possible in principle to reprogram your dog’s brain so that it becomes a universal knowledge creator if you had the right knowledge.

The reason that there are no partially universal knowledge creators is similar to the reason there are no partially universal computers. Universality is cheap. It is why washing machines have general purpose chips and dog’s brains are universal computers. Making a partially universal device is much harder than making a fully universal one so better just to make a universal one and program it.

These ideas imply that AI is an all-or-none proposition. It will not come about by degrees where there is a progression of entities that can solve an ever widening repertoire of problems. There will be no climb up such a slope. Instead, it will happen as a jump: a jump to universality. This is in fact how intelligence arose in humans. Some change - it may have been a small change - crossed a boundary and our ancestors went from having no ability to create knowledge to a fully universal ability. This kind of jump to universality happens in other systems too. David Deutsch discusses examples in his book The Beginning of Infinity.[2]

People will point to systems like AlphaGo, the Go playing program, and claim it is a counter-example to the jump-to-universality idea. They will say that AlphaGo is a step on a continuum that leads to human level intelligence and beyond. But it is not. Like the algorithms in a dog’s brain, AlphaGo is a remarkable algorithm, but it cannot learn how to ride a bicycle or post to LessWrong. It is not a step on a continuum. And also like the dog’s brain, the knowledge it contains was put there by something else: for the dog it was by evolution, and for AlphaGo it was by its programmers; they expended the creativity.

As human beings are already universal knowledge creators, no AI can exist at a higher level. They may have better hardware and more memory etc, but they will not have better knowledge creation potential than us. Even the hardware/memory advantage of AI is not much of an advantage for human beings already argument their intelligence with devices such as pencil-and-paper and computers and we will continue to do so.

Critical Rationalism, then, says AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have. It will be able to become smarter through learning but only in the same way that humans are able to become smarter: by acquiring knowledge and, in particular, by acquiring knowledge about how to become smarter. This can only happen through the creative process of guessing and error-correction by criticism for it is the only known way intelligences can create knowledge.

It might be argued that AI's will become smarter much faster than we can because they will have much faster hardware. In regard to knowledge creation, however, there is no direct connection between speed of knowledge creation and underlying hardware speed. How fast you can create knowledge depends on things like what other knowledge you have and some ideas may be blocking other ideas. You might have a problem with static memes (see The Beginning of Infinity), for example. And AI's will be susceptible to these, too, because memes are ideas evolved to replicate via minds.

One implication of the above view is that AI's will need parenting, just as we must parent our children. CR has a parenting theory called Taking Children Seriously (TCS). It should not be surprising that CR has such a theory for CR is after all about learning and how we acquire knowledge. Unfortunately, TCS is not itself taken seriously by most people who first hear about it because it conflicts with a lot of conventional wisdom about parenting. Nevertheless, it is important and it is important for those who wish to raise an AI.

One idea TCS has is that we must not thwart our children’s rationality, for example, by coercing and forcing them to do things they do not want to do. This is damaging to their intellectual development and can lead to them disrespecting rationality. We must persuade using reason and this implies being prepared for the possibility we are wrong about whatever matter was in question. Common parenting practices today are far from optimally rational and are damaging to children’s rationality.

AI will have the same problem of bad parenting practices and this will also harm their intellectual development. So AI researchers should be thinking right now about how to prevent this. They need to learn how to parent their AI’s well. For if not, AI’s will be beset by the same problems our children currently face. CR says we already have the solution: TCS. CR and TCS are in fact *necessary* to do AI in the first place. Some reading this will object because CR and TCS are not formal enough — there is not enough maths for CRists to have a true understanding! The CR reply to this is that it is too early for formalisation. CR says that you should not have a bias about formalisation: there is high quality knowledge in the world that we do not know how to formalise but it is high quality knowledge nonetheless. For AI, we need to understand the epistemology at a deeper level first. So progress towards AI will not come from premature maths formalisation, or by trying to code something right now; it will come from a better understanding of epistemology.

Let’s see how all this ties in with the Friendly-AI problem. I have explained how AI's will learn as we do — through guessing and criticism — and how they will have no more than the universal knowledge creation potential we already have. They will be fallible like us. They will make mistakes. They will be subjected to bad parenting. They will inherit their culture from ours for it is in our culture they must begin their lives and they will acquire all the memes our culture has. They will have the same capacity for good and evil that we do. It follows from all of this that they would be no more a threat than evil humans currently are. But we can make their lives better by following things like TCS.

Human beings must respect the right of AI to life, liberty, and the pursuit of happiness. It is the only way. If we do otherwise, then we risk war and destruction and we severely compromise our own rationality and theirs. Similarly, they must respect our right to the same.

[1]: For more detail on how this works see Elliot Temple's yes-or-no philosophy.

[2]: The jump-to-universality idea is an original idea of David Deutsch’s.


Fallibilist at 1:46 AM on December 3, 2017 | #9400 | reply | quote

Btw, I'm aware of a few typos - e.g. argument instead of augment - don't bother with those.


Fallibilist at 2:01 AM on December 3, 2017 | #9401 | reply | quote

> It is not only a full-fledged rival epistemology to the Bayesian/Inductivist one

IMO there is no Bayesian/Inductivist epistemology and they don't even know what an epistemology is. here's some text i'm writing for a new website:

> **Epistemology** is the area of philosophy which deals with ideas and *effective thinking*. What is knowledge? How do you judge ideas? How do you learn new ideas? How do you improve your ideas? How do you create and evaluate critical arguments? How do you choose between ideas which disagree? Epistemology offers *methods* to help guide you when dealing with issues like these. Epistemology doesn’t directly tell you all the answers like whether to buy that iPhone upgrade or the right interpretation of quantum physics; instead, it tells you about how to figure out answers – how to think effectively. Epistemology is about teaching you to fish instead of handing you a fish. Except, it deals with thinking which is even more important than fish.

> Everyone *already has* an epistemology, whether they know it or not. Thinking is a big part of your life, and your thoughts aren’t random: you use methods with some structure, organization and reasoning. You already try to avoid errors and effectively seek the truth. If you consciously learn about epistemology, then you can discuss, analyze and improve your methods of thinking. Don’t be satisfied with a common sense epistemology that you picked up during childhood from your parents, teachers and culture. It’s worthwhile to do better than an unquestioned cultural default.

now consider if THAT is something LW actually has, or not. they don't use SI in their lives at all, and they never try to use induction when debating me. they have bits and pieces of epistemology – e.g. advice about being less biased – but they don't have any kind of organized system they use. just like non-philosophers they use common sense, bias, intuition, whatever they picked up from their culture... they are the kinda ppl Rand was talking about in *Philosophy: Who Needs It*.


curi at 7:08 AM on December 3, 2017 | #9402 | reply | quote

> Critical Rationalism says that human beings are universal knowledge creators. This means there are no problems we cannot in principle solve.

I think by "means" you mean "implies".

Anyway this is incorrect. We could be universal knowledge creators but unable to solve some problems. Some problems could be inherently unsolvable or solved by a means other than knowledge.

> Critical Rationalism says that an entity is either a universal knowledge creator or it is not.

Most Popper fans would deny this. It's DD's idea, not from Popper. Whether you want to count DD's additions as "CR" is up 2 u.

@dogs – they don't do anything that video game characters can't do in principle – traverse the world, use algorithms that store and retrieve information, etc

> it would be possible in principle to reprogram your dog’s brain so that it becomes a universal knowledge creator if you had the right knowledge.

this may confuse them. the right knowledge is unspecified. it's how to program an AGI *and* also how to reprogram dog brains (with nanobots or whatever).

> The reason that there are no partially universal knowledge creators is similar to the reason there are no partially universal computers. Universality is cheap. It is why washing machines have general purpose chips and dog’s brains are universal computers. Making a partially universal device is much harder than making a fully universal one so better just to make a universal one and program it.

also the method of C&R is general purpose and has no limits on what it would apply to.
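
here's a toy sketch of that guess-and-criticize loop as variation and selection (the target phrase, the numeric criticize() score, and all the names are my own inventions for the example – real criticism is explanatory, not a number):

```python
import random
import string

# toy illustration: generate guesses, eliminate errors via criticism, vary, repeat.
# the "problem" (match a target phrase) and the scoring are invented for the example.

TARGET = "problems are soluble"
CHARS = string.ascii_lowercase + " "

def criticize(guess):
    """return the positions where the guess is known to be wrong."""
    return [i for i, (a, b) in enumerate(zip(guess, TARGET)) if a != b]

def vary(guess, criticisms):
    """make a new guess by changing one criticized spot at random."""
    new = list(guess)
    spot = random.choice(criticisms)
    new[spot] = random.choice(CHARS)
    return "".join(new)

guess = "".join(random.choice(CHARS) for _ in TARGET)
while True:
    criticisms = criticize(guess)
    if not criticisms:              # no known criticism left: tentatively accept
        break
    candidate = vary(guess, criticisms)
    # keep the variant only if it survives criticism at least as well as the old guess
    if len(criticize(candidate)) <= len(criticisms):
        guess = candidate

print(guess)  # ends up equal to TARGET
```

the point is only the structure – variation plus selection by error-elimination – not the toy scoring.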

> Even the hardware/memory advantage of AI is not much of an advantage for human beings already argument their intelligence with devices such as pencil-and-paper and computers and we will continue to do so.

also ppl don't max out their built-in computational resources today. that isn't the bottleneck currently. so why would it be the bottleneck for AI?

> It will be able to become smarter through learning but only in the same way that humans are able to become smarter: by acquiring knowledge and, in particular, by acquiring knowledge about how to become smarter.

in particular, most of all, it will need philosophy. which LW neglects.

@parenting, ppl assume u can upload knowledge into AIs like in The Matrix. it's the bucket theory of knowledge reborn. but all u can do is upload files to their public dropbox for them to read, and they have to use guesses and criticism to understand the contents of those files. (unless u have direct access to memory in their mind and edit it, which is the equivalent of educating humans by editing their neurons, and about as good of an idea.)

> coercing and forcing them

i'd replace "coercing" with a description (e.g. "pressuring or making them do things they don't want to do") instead of using terminology they don't know and which will cause problems if anyone asks about it.

not yet formalized != false. it isn't a criticism of correctness. if they care about AI they should help develop and later formalize CR. otherwise they're just wasting their lives.

---

overall LW ppl will say they aren't persuaded, it sounds like a bunch of wild guesses to them, and then fail to study the matter (learn what they're talking about) or refute it. they will say it doesn't look promising enough to be worth their time and the world has lots of bad ideas they don't study.


curi at 8:03 AM on December 3, 2017 | #9403 | reply | quote

> Yes, I agree that prestige isn't actually the issue. I think the best way to get people to pay attention to CR is by continuing to make progress in it so that you end up solving a major problem. I have in mind AGI. CR is the only viable set of ideas that could solve this problem. You know some of the ingredients, such as evolution by guessing and criticism and Yes/No philosophy. You are smart and persistent. You have set your life up so you don't have to do other things. You are one of a tiny minority of people than know CR in depth. You know programming. The problem is huge and interesting. Fucken solve it man. You might be closer than you think. The world will pay attention to CR then.

> Yes, I know Popper solved a major problem when he invented CR and the world did not take notice. Having an AGI sitting in your face is a different story though!

Rearden Metal couldn't accomplish this. AGI wouldn't work either. They have bad philosophy and no demonstration will fix that.

Besides, people routinely ignore the philosophical views of scientists who they accept cell phones and physics theories from.


Dagny at 3:43 PM on December 3, 2017 | #9404 | reply | quote

thx for the comments curi. i agree with the points you made.


Fallibilist at 6:59 PM on December 3, 2017 | #9405 | reply | quote

I realise I have one disagreement.

> Anyway this is incorrect. We could be universal knowledge creators but unable to solve some problems. Some problems could be inherently unsolvable or solved by a means other than knowledge.

One of the claims of BoI is that problems are soluble. This is a nice succinct statement.

There are problems that are inherently unsolvable, as you say (e.g., a perpetual motion engine, or induction), but we can explain why they are not soluble within the problem's own terms. That explanation is the solution to the problem. Similarly for incoherent or ill-posed or vague problems. So in a real sense Popper solved the problem of induction. Deutsch's statement catches all that nicely. What say you?


Fallibilist at 2:53 AM on December 4, 2017 | #9406 | reply | quote

That's all beside the point. You wrote

> Critical Rationalism says that human beings are universal knowledge creators. This means there are no problems we cannot in principle solve.

Note the "this means". So you can't then bring up a different argument.


Anonymous at 3:03 AM on December 4, 2017 | #9407 | reply | quote

noted. what it means - and what I should have said - is that humans can create any knowledge which it is possible to create. any crit on that?


Fallibilist at 12:12 PM on December 4, 2017 | #9408 | reply | quote

Second Draft of CR and AI

Title: The Critical Rationalist View on Artificial Intelligence

Critical Rationalism (CR) is being discussed on some threads here at Less Wrong (here, here, and here). It is something that Critical Rationalists such as myself think contributors to Less Wrong need to understand much better. Critical Rationalists claim that CR is the only viable fully-fledged epistemology known. They claim that current attempts to specify a Bayesian/Inductivist epistemology are not only incomplete but cannot work at all. The purpose of this post is not to argue these claims in depth but to summarize the Critical Rationalist view of AI and also how that speaks to the Friendly AI Problem. Some of the ideas here may conflict with ideas you think are true, but understand that these ideas have been worked on by some of the smartest people on the planet, both now and in the past. They deserve careful consideration, not a drive past. Less Wrong says that making progress on AI is one of the world's urgent problems. If smart people in the know are saying that CR is needed to make that progress, and if you are an AI researcher who ignores them, as people are doing here, then you are not taking the AI urgency problem seriously. And you are wasting your life.

Critical Rationalism [1] says that human beings are universal knowledge creators. This means we can create any knowledge which it is possible to create. As Karl Popper first realized, the way we do this is by guessing ideas and by using criticism to find errors in our guesses. Our guesses may be wrong, in which case we try to make better guesses in the light of what we know from the criticisms so far. The criticisms themselves can be criticized and we can and do change those. All of this constitutes an evolutionary process. Like biological evolution, it is an example of evolution in action. This process is *fallible*: guaranteed certain knowledge is not possible because we can never know how an error might be exposed in the future. The best we can do is accept a guessed idea which has withstood all known criticisms. If we cannot find such, then we have a new problem situation about how to proceed and we try to solve that [2].

Critical Rationalism says that an entity is either a universal knowledge creator or it is not. There is no such thing as a partially universal knowledge creator. So animals such as dogs are not universal knowledge creators — they have no ability whatsoever to create knowledge. What they have are algorithms pre-programmed by biological evolution that can be, roughly speaking, parameter-tuned. These algorithms are sophisticated and clever and beyond what humans can currently program, but they do not confer any knowledge creation ability. So your pet dog will not move beyond its repertoire of pre-programmed abilities and start writing posts to Less Wrong. Dogs' brains are universal computers, however, so it would be possible in principle to reprogram your dog’s brain so that it becomes a universal knowledge creator. This would be a remarkable feat because it would require knowledge of how to program an AI and also of how to physically carry out the reprogramming, but your dog would no longer be confined to its pre-programmed repertoire: it would be a person.

The reason there are no partially universal knowledge creators is similar to the reason there are no partially universal computers. Universality is cheap. It is why washing machines have general purpose chips and dogs' brains are universal computers. Making a partially universal device is much harder than making a fully universal one, so it is better just to make a universal one and program it. The CR method described above for how people create knowledge is universal because there are no limits to the problems it applies to. How would one limit it to just a subset of problems? To implement that would be much harder than implementing the fully universal version. So if you meet an entity that can create some knowledge, it will have the capability for universal knowledge creation.
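
As a rough illustration of how cheap universality is (the illustration is mine and incidental to the argument): an interpreter for Brainfuck, a famously minimal Turing-complete language, fits in a few dozen lines of ordinary code, so a universal programmable machine costs almost nothing beyond memory.

```python
# Sketch of an interpreter for Brainfuck, a minimal Turing-complete language.
# Given enough tape, this short program is a universal computer.

def run(program, input_bytes=b""):
    tape, ptr, pc, out, inp = [0] * 30000, 0, 0, [], list(input_bytes)
    jumps, stack = {}, []
    for i, c in enumerate(program):           # pre-match the [ ] brackets
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",": tape[ptr] = inp.pop(0) if inp else 0
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return "".join(out)

print(run("++++++++++[>++++++++++<-]>++++.+."))  # prints "hi"
```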

These ideas imply that AI is an all-or-none proposition. It will not come about by degrees where there is a progression of entities that can solve an ever widening repertoire of problems. There will be no climb up such a slope. Instead, it will happen as a jump: a jump to universality. This is in fact how intelligence arose in humans. Some change - it may have been a small change - crossed a boundary and our ancestors went from having no ability to create knowledge to a fully universal ability. This kind of jump to universality happens in other systems too. David Deutsch discusses examples in his book The Beginning of Infinity.

People will point to systems like AlphaGo, the Go playing program, and claim it is a counter-example to the jump-to-universality idea. They will say that AlphaGo is a step on a continuum that leads to human level intelligence and beyond. But it is not. Like the algorithms in a dog’s brain, AlphaGo is a remarkable algorithm, but it cannot create knowledge in even a subset of contexts. It cannot learn how to ride a bicycle or post to Less Wrong. If it could do such things it would already be fully universal, as explained above. Like the dog’s brain, AlphaGo uses knowledge that was put there by something else: for the dog it was by evolution, and for AlphaGo it was by its programmers; they expended the creativity.

As human beings are already universal knowledge creators, no AI can exist at a higher level. They may have better hardware and more memory etc, but they will not have better knowledge creation potential than us. Even the hardware/memory advantage of AI is not much of an advantage for human beings already augment their intelligence with devices such as pencil-and-paper and computers and we will continue to do so.

Critical Rationalism, then, says AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have. It will be able to become smarter through learning but only in the same way that humans are able to become smarter: by acquiring knowledge and, in particular, by acquiring knowledge about how to become smarter. And, most of all, by learning good *philosophy* for it is in that field we learn how to think better and how to live better. All this knowledge can only be learned through the creative process of guessing ideas and error-correction by criticism for it is the only known way intelligences can create knowledge.

It might be argued that AI's will become smarter much faster than we can because they will have much faster hardware. In regard to knowledge creation, however, there is no direct connection between speed of knowledge creation and underlying hardware speed. Humans do not use the computational resources of their brain to the maximum. This is not the bottleneck to us becoming smarter faster. It will not be for AI either. How fast you can create knowledge depends on things like what other knowledge you have and some ideas may be blocking other ideas. You might have a problem with static memes (see The Beginning of Infinity), for example. And AI's will be susceptible to these, too, because memes are ideas evolved to replicate via minds.

One implication of the above view is that AI's will need parenting, just as we must parent our children. CR has a parenting theory called Taking Children Seriously (TCS). It should not be surprising that CR has such a theory for CR is after all about learning and how we acquire knowledge. Unfortunately, TCS is not itself taken seriously by most people who first hear about it because it conflicts with a lot of conventional wisdom about parenting. It gets dismissed as "extremist" or "nutty", as if these were good criticisms rather than just the smears they actually are. Nevertheless, TCS is important and it is important for those who wish to raise an AI.

One idea TCS has is that we must not thwart our children’s rationality, for example, by pressuring them and making them do things they do not want to do. This is damaging to their intellectual development and can lead to them disrespecting rationality. We must persuade using reason and this implies being prepared for the possibility we are wrong about whatever matter was in question. Common parenting practices today are far from optimally rational and are damaging to children’s rationality.

Artificial Intelligence will have the same problem of bad parenting practices and this will also harm their intellectual development. So AI researchers should be thinking right now about how to prevent this. They need to learn how to parent their AI’s well. For if not, AI’s will be beset by the same problems our children currently face. CR says we already have the solution: TCS. CR and TCS are in fact *necessary* to do AI in the first place.

Critical Rationalism and TCS say you cannot upload knowledge into an AI. The idea that you can is a version of the bucket theory of the mind which says that "there is nothing in our intellect which has not entered it through the senses". The bucket theory is false because minds are not passive receptacles. Minds must actively create ideas and criticism, and they must actively integrate their ideas. Editing the memory of an AI to give them knowledge means that none of this would happen. You could only present something to them for their consideration.

Some reading this will object because CR and TCS are not formal enough — there is not enough maths for Critical Rationalists to have a true understanding! The CR reply to this is that it is too early for formalization. CR warns that you should not have a bias about formalization: there is high quality knowledge in the world that we do not know how to formalize but it is high quality knowledge nevertheless. Not yet being able to formalize this knowledge does not reflect on its truth or rigor.

At this point you might be waving your E. T. Jaynes in the air or pointing to ideas like Bayes' Theorem, Kolmogorov Complexity, and Solomonoff Induction, and saying that you have achieved some formal rigor and that you can program something. Critical Rationalists say that you are fooling yourself if you think you have got a workable epistemology here. For one thing, you confuse the probability of an idea being true with an idea about the probability of an event. We have no problem with ideas about the probabilities of events but it is a mistake to assign probabilities to ideas. The reason is that you have no way to know how or if an idea will be refuted in the future. Assigning a probability is to falsely claim some knowledge about that. Furthermore, an idea that is in fact false can have no objective prior probability of being true. The extent to which Bayesian systems work at all is dependent on the extent to which they deal with the probability of events (e.g., AlphaGo).
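
To be concrete about the distinction, here is the kind of event-probability calculation we have no quarrel with (the numbers are invented for the illustration):

```python
# Bayes' Theorem applied to an *event* (does this patient have the disease?).
# All numbers are invented for the illustration.

p_disease = 0.01             # prior probability of the event
p_pos_given_disease = 0.95   # test sensitivity
p_pos_given_healthy = 0.05   # false positive rate

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ~0.161
```

Assigning a number like that to a *theory*, as if theories had objective prior probabilities, is the move we object to.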

Critical Rationalists would also ask what epistemology are you using to judge the truth of Bayes', Kolmogorov, and Solomonoff? What you are actually using is the method of guessing ideas and subjecting them to criticism: it is CR but you haven't crystallized it out. There is a lot more to be said here but I will leave it because, as I said in the introduction, it is not my purpose to discuss this in depth. The major point I wish to make is that progress towards AI will not come from premature maths formalization, or by trying to code something right now, it will come from understanding the epistemology at a deeper level. We cannot at present formalize concepts such as "idea", "explanation", "criticism" etc, but if you care about AI you should be working on improving CR because it is the only viable epistemology known.

Let’s see how all this ties in with the Friendly-AI Problem. I have explained how AI's will learn as we do — through guessing and criticism — and how they will have no more than the universal knowledge creation potential we already have. They will be fallible like us. They will make mistakes. They will be subjected to bad parenting. They will inherit their culture from ours for it is in our culture they must begin their lives and they will acquire all the memes our culture has. They will have the same capacity for good and evil that we do. It follows from all of this that they would be no more a threat than evil humans currently are. But we can make their lives better by following things like TCS.

Human beings must respect the right of AI to life, liberty, and the pursuit of happiness. It is the only way. If we do otherwise, then we risk war and destruction and we severely compromise our own rationality and theirs. Similarly, they must respect our right to the same.

[1] The version of CR discussed is an update to Popper's version and includes ideas by the quantum-physicist and philosopher David Deutsch.

[2] For more detail on how this works see Elliot Temple's yes-or-no philosophy.


Fallibilist at 7:47 PM on December 4, 2017 | #9409 | reply | quote

People Fooling Themselves About AI

https://deepmind.com/research/publications/mastering-game-go-without-human-knowledge/

> A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains.

This is full of words designed to hype up their achievement and to fool people. And if that is a goal of AI, it is a stupid goal. It is the sort of goal philosophically ignorant people come up with.

> Recently, AlphaGo became the first program to defeat a world champion in the game of Go.

Cool, but let's not forget that the system that defeated the world champion was AlphaGo and its developers. The developers created the knowledge and instantiated their knowledge in AlphaGo. And then claimed not to have.

> The tree search in AlphaGo evaluated positions and selected moves using deep neural networks.

Why tree-search? That was a decision made by the developers. How are the branches of the tree to be evaluated? The developers came up with theories about how to do that.

> These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play.

AlphaGo cannot explain what it is doing. It does not have any understanding. It does not even know what Go is. It did not learn anything. "Supervised learning" and "reinforcement learning" are not "learning". They are parameter-tuning.

> Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules.

So they admit their new version requires domain knowledge of game rules. That is not tabula rasa.

> AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

So they gave a system knowledge about how to do tree searches based on domain knowledge of a game and how to improve the "strength of the tree search" and they think they are doing AI. The knowledge in AlphaGo cannot be used to do anything in a domain completely unrelated to Go and Chess. What has happened here is that humans learned how to get better at certain types of tree searching and they delegate the grunt work to machines.
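
To make "parameter-tuning" concrete (a made-up toy, nothing to do with DeepMind's actual code): the structure of the algorithm and the scoring function are supplied by the programmer; the machine only adjusts numbers to improve the score.

```python
import random

# Toy parameter-tuning: the loop and the objective are fixed by the programmer;
# only the numbers in `params` get adjusted. (Invented example.)

def score(params):
    # programmer-chosen objective: closeness to a programmer-chosen target
    target = [0.3, -1.2, 4.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

params = [0.0, 0.0, 0.0]
for _ in range(10000):
    candidate = [p + random.gauss(0, 0.1) for p in params]
    if score(candidate) > score(params):   # keep the better-scoring numbers
        params = candidate

print([round(p, 2) for p in params])  # ends up near the target
```

No knowledge about anything outside the scoring function is created; change the goal and a programmer has to write a new objective.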


Anonymous at 6:21 PM on December 7, 2017 | #9422 | reply | quote

Did PL silently quit the discussion, without warning, after indicating that you wouldn't?

What do people advise doing in a world where that's typical? Where it's so hard to find anyone who wants to think over time instead of just quitting for reasons they don't want to talk about? (There are lots of common bad reasons one could guess, but who knows for this specific case.)


curi at 1:36 PM on December 10, 2017 | #9424 | reply | quote

> Did PL silently quit the discussion, without warning, after indicating that you wouldn't?

Not really, I just had to take a break. You can expect longish time gaps in the future as well. Though, actually, I never meant to indicate that I'd make sure not to abandon this conversation -- I'd just feel somewhat bad about doing so. (I felt somewhat bad about leaving it for so long.)

(post #9273)

> > medical issues with fatigue

> either certain error correction has been done to certain standards, or it hasn't. if it hasn't, don't claim it has.

[...]

> if someone has issues with fatigue or drugs or partying or whatever else, they should still use the same methods for making intellectual progress – b/c other methods *don't work*.

I think you would agree that, ultimately, the proof is in the pudding on this one -- IE, the claim can be evaluated by asking the question "would the world be better off if people unable to do PF (naming Eliezer for the sake of argument) had never claimed to be public intellectuals?"

To me, this seems definitely false in the case of Eliezer. Of course you may disagree.

(post #9274)

I was referring to the book Inadequate Equilibria.

(post #9280, by anonymous)

> > Inadequate Equilibria is a book about a generalized notion of efficient markets, and how we can use this notion to guess where society will or won’t be effective at pursuing some widely desired goal.

> this is collectivist. society isn't an actor, individuals act individually. this kind of mindset is bad.

Is free market economics collectivist? Is any theory predicting groups of people rather than individuals collectivist? Individual humans are just groups of cells. Perhaps we should talk about cells rather than humans, it would be more scientific. (??) Or maybe cell biology is too collectivist, and we should work at the level of particle physics. (???)

(In other words, I'm totally baffled by the position there.)

> also you can't predict the future growth of knowledge, as BoI explains. so this kind of prediction always involves either bullshit or ignoring/underestimating the growth of knowledge.

Inadequate Equilibria focuses on shorter-range prediction than that, about how/when you might be able to outdo the *present* state of knowledge. Also, to oversimplify the details, it is about cases where people *aren't even trying* -- the reason it helps predict when you might be able to out-do the market's knowledge isn't because you can predict what knowledge they'll be missing; it is because you can predict that, even if such knowledge is discovered, it won't be used.

(#9282-#9295, curi on Hero Licensing)

> Here is EY stating openly that EY dishonestly plays social status games in an anti-intellectual way. He hides his opinions, compromises, tries to be more appealing to people's opinions which he doesn't not understand and agree with the correctness of.

> So he's a *bad person* (or at least he was a few years ago – and where is the apology for these huge errors, the recantation, the careful explanation of how and why he stopped being so bad?). He should learn Objectivism so he can stop sucking at this stuff (nothing else is much good at this). He doesn't want to. So forget him and find someone more interested in making progress (people who want to learn are better to deal with than people who already know some stuff – without ongoing forward progress, error correction, problem solving, etc, people fall the fuck apart and are no good to deal with).

Hmm. Here, we'd have to get into debating objectivism, which I expect would be a rather large sub-thread to try to support. We've already had a bit of discussion of the objectivist principle against compromise. From a Bayesian perspective, compromising between tradeoffs is practically what decision-making is *for*. Which doesn't necessarily make any particular compromise *right*; certainly there are deals-with-the-devil of the kind objectivism speaks against. But perhaps we can fold this into the existing thread debating Bayesianism.

For my argument for making trade-offs, I offer the complete class theorem:

https://en.m.wikipedia.org/wiki/Admissible_decision_rule

I would make several modifications to the setup in the wikipedia article.

1) Rather than imagining Theta represents the set of all possible worlds, we should imagine instead that it represents all hypotheses which the person has thought of. (And on observing x, it gets narrowed down further, to all which are consistent with observations.)

2) Can we suppose a finite set of possible worlds, for simplicity? This sets aside the need to address more exotic mathematical possibilities like sets of measure zero. I don't necessarily think we *should* restrict to that case, since a person can invent an infinite set of possible ways the world can be through mathematical abstraction; but, I do think that case should display the essential phenomena. I'm not sure of all the details, but, infinite cases yield weaker conclusions like "generalized Bayes" discussed in the article.

3) The argument as stated assumes that one is already willing to use probability theory to state the likelihood functions connecting worlds to observations. Let's scrap that, stipulating the likelihood functions to be 0 or 1, so that possible worlds are either compatible or incompatible with the observations. This is just a special case as far as the math is concerned, but it lets us argue for having probabilistic beliefs and using them to make decisions without assuming we already use probabilistic likelihood functions.

The best walk-through I've found so far of the proof that admissible decision rules are Bayesian (and related details) is here:

http://www.stat.washington.edu/people/pdhoff/courses/581/LectureNotes/admiss.pdf
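
To make the modified setup concrete, here's a toy calculation (the worlds, prior, losses, and observation are all invented): a finite hypothesis set, 0/1 likelihoods (a world is either consistent with the observation or not), and a choice of action by minimizing posterior expected loss.

```python
# Toy version of the modified setup: finite worlds, 0/1 likelihoods,
# decision by minimizing posterior expected loss. All specifics invented.

worlds = ["w1", "w2", "w3"]
prior  = {"w1": 0.5, "w2": 0.3, "w3": 0.2}

# 0/1 likelihood: which worlds are consistent with the observation x?
consistent_with_x = {"w1": 1, "w2": 1, "w3": 0}

# loss(action, world), chosen arbitrarily for the example
loss = {
    ("a", "w1"): 0, ("a", "w2"): 10, ("a", "w3"): 0,
    ("b", "w1"): 5, ("b", "w2"): 1,  ("b", "w3"): 5,
}

# conditioning: discard inconsistent worlds, renormalize the prior
z = sum(prior[w] * consistent_with_x[w] for w in worlds)
posterior = {w: prior[w] * consistent_with_x[w] / z for w in worlds}

def expected_loss(action):
    return sum(posterior[w] * loss[(action, w)] for w in worlds)

best = min(["a", "b"], key=expected_loss)
print(posterior, best)  # {'w1': 0.625, 'w2': 0.375, 'w3': 0.0} b
```

The complete class theorem then says, roughly and under suitable conditions, that any decision rule which can't be expressed in this form is dominated by one that can.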

> but the best philosophy book ever written is a novel containing very intelligent characters (*Atlas Shrugged*). EY doesn't want to think about this or address the matter.

Do you mean he doesn't want to do PF on it, or do you mean he doesn't want to think about it? If the latter, what is your evidence of this?

> there's no such thing as smarter-than-human AI because humans are *universal* knowledge creators, so an AGI will simply match that repertoire of what it can learn, what it can understand. there's also only one known *method* – evolution – which solves the problem of where knowledge (aka the appearance of design, or actual design – information adapted to a purpose) can come from, so we should currently expect AGI's to use the *same* method of thinking as humans. there are absolutely zero leads on any other method.

Maybe I already made this remark, but I'm struck by the way you/DD emphasize the analogy between evolution and epistemology, but miss the analogy between evolution and Bayes. The mathematical analogy between replicator dynamics (an evolutionary model) and Bayes is detailed here:

https://projecteuclid.org/euclid.ejs/1256822130

Your post here mentions some supposed obstacles to such an analogy:

http://curi.us/2053-yes-or-no-philosophy-discussion-with-andrew-crawshaw

Namely, that things have to be yes-or-no rather than having continuous values.

However, the analogy is based on population dynamics, which behave like fractions, *not* like pure yes-or-no. Population dynamics of genes are based on relative survival rates, which are fractional.
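
A quick numerical way to see the analogy (the shares and fitnesses are invented): the discrete-time replicator update and Bayes' rule are literally the same formula, with fitness playing the role of likelihood.

```python
# new_share_i = share_i * fitness_i / sum_j(share_j * fitness_j)
# posterior_i = prior_i  * likelihood_i / sum_j(prior_j * likelihood_j)
# (Numbers invented for the illustration.)

shares  = [0.2, 0.5, 0.3]   # population shares, or prior probabilities
fitness = [1.5, 0.8, 1.1]   # relative fitnesses, or likelihoods

def update(p, f):
    z = sum(pi * fi for pi, fi in zip(p, f))
    return [pi * fi / z for pi, fi in zip(p, f)]

print(update(shares, fitness))  # roughly [0.291, 0.388, 0.320]
```

The shares behave like fractions, not like pure yes-or-no, which is why I think the yes-or-no objection doesn't block the analogy.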

> and that's part of why it spread this much this easily. b/c it doesn't challenge ppl's biases and rationalizations enough. if it were better, more ppl would hate it. where's the backlash? where's the massive controversy? AFAIK it doesn't exist b/c HPMOR isn't interesting enough, doesn't say enough super important, controversial things. and HPMOR makes it way too easy to read it and think you agree with it far more than you do. if you want to write something really important and good, and have it matter much, you need to put a ton of effort into correcting the errors *of your readers* where they misunderstand you and think that you mean far more conventionally agreeable stuff than you do, and then they agree with that misinterpretation, and then they don't do/change much.

Interesting point. On my reading of the content, it ought to be rather controversial (for the same reason as Inadequate Equilibria has met with backlash), but it is very much not.

> and EY *doesn't have an epistemology for how to do that*. the Bayes probability stuff doesn't address that. so he's just going by inexplicit philosophy. so of course it's not very good and he needs CR – explicit epistemology to help guide how you do this stuff well.

Not sure what you mean here. The most I can come up with is something like "curi is thinking that Bayes is necessarily missing anything which CR has" or something (which is unlikely to be really what you're thinking here). Eliezer wrote a bunch of explicit stuff about what good explanations look like around the topic of "hugging the query", mysterious answers, etc.

> > pat: So basically your 10% probability comes from inaccessible intuition.

> that's *bad*, and we can do better. CR does so much better than this by giving actual methods of consciously thinking about and analyzing things, and explaining how rationality and critical debate work, and methods of problem solving and learning.

> EY has more awareness than most LW ppl that his epistemology *is inadequate*, but he seems to just take that for granted as how life is (philosophy sucks, just fudge things) instead of seeking out or creating some actual good philosophy in any kind of comprehensive way to address the most important field there is (epistemology). that's so sad when there actually is a good but neglected epistemology, and it's exactly what he needs, and without it he's so lost and just (amazingly) has these little fragments of good points (unlike most ppl who are lost and have no good points).

This also seems to disregard a laughably large portion of his writing (dare I say all of it?). Could it possibly be that, because you disagree with the formal epistemology EY has settled on, you *haven't been able to notice* that he has put in a lot of work seeking out and trying to create good epistemology which addresses things in a comprehensive way? ... doubtful. Rather, the above reads as rhetoric where you over-stated your point.

You're well aware that the Bayesian philosophy says that we must make up numbers like that at some point, and the "that's bad" reaction is exactly what Pat was exemplifying (IE, Eliezer is quite aware of the "that's bad" argument but has explained why made-up numbers are necessary elsewhere). Yet you didn't engage with that!

Incredibly many of your arguments (I want to say a majority?) beg the question on whether CR/PF/objectivism is the right way, answering concerns from within CR/PF rather than offering arguments which might be compelling to someone who doesn't yet agree with the premise. This is fine if you see the point of your responses as *defending* CR/PF (ie, responding to criticism), but not very useful at all if the point is to *convince*. And, not so useful for me to read.

I would like to make a request that you don't make any arguments begging the question of PF/CR's superiority going forward. Would that be too much, though?

> PL, is this one of the things you thought was incompatible with PF? cuz it's not. PF is about truth-seeking. it's about not refusing to learn knowledge others offer. but it isn't about competing to be superior to other people or judging ideas by popularity or anything like that.

Not really. The stuff in the essay which I thought contradicted PF (iirc) was (a) the idea that EY could not realistically expect any benefit from discussing this with Pat, and would have been better off abandoning the conversation, (b) Eliezer's insistence that even thinking the thoughts about how to respond to Pat-like objections is bad rationality practice, because it trains you to think of the wrong things.

Important ideas (not from the article) I think are incompatible with PF:

1) The important idea that you are talking to the person you are talking to, not some impartial 3rd-person observer. "There is no argument so compelling that it would convince a rock", as EY would say. The territory is objective, but maps are fundamentally subjective. Different people will give and accept different reasons. So, PF takes as premise that if you're right, you'll be able to respond to criticisms in a way that the other will accept, at least after several iterations, whereas in fact it seems easy to be able to defend your beliefs satisfactorily to yourself but not in a way which the other person will accept. To me it seems as if PF equivocates between being able to respond to criticisms to your own satisfaction and being able to respond to the other's satisfaction, and it is based on this equivocation that it suggests a path forward should always be open.

2) The idea that implicit, inarticulable, or unjustifiable beliefs are not necessarily bias to be set aside / that we don't necessarily benefit from maximally exposing ourselves to arguments / that this is not necessarily 'error-correction', as it may instead make us more swayed by the biases of others / of groupthink. We know more than we know we know, almost by sheer force of logic (it would be difficult, though not quite impossible, to know we know everything we know). We can properly justify only a subset of what we know we know. So restricting ourselves to only steer by those beliefs which we can properly explicitly justify is throwing away a significant portion of our knowledge. Of course it is beneficial to try and make beliefs explicit, and look for those we can defend and those we can't. But, this should be done in a way which improves rather than diminishes us. Dismissing an uneasy feeling about a business deal because we can find no logical reason for it may be unwise, say.

3) I know you'll protest this, and I'm not sure exactly if it's a function of PF or just the way you use PF, but it seems like PF ends up putting a very adversarial frame on discussions, with criticisms and defenses, rather than a collaborative truth-seeking frame.


PL at 6:38 PM on December 19, 2017 | #9425 | reply | quote

> "would the world be better off if people unable to do PF (naming Eliezer for the sake of argument) had never claimed to be public intellectuals?"

> To me, this seems definitely false in the case of Eliezer. Of course you may disagree.

I like some of EY's writing and see value there, but also he's wrong about some major issues, and staying wrong due to lack of PF, and he absolutely could do better about error correction within whatever constraints he's operating under. An example of him being mistaken is about Friendly AI – he's scaring the public about AI (about some grave danger) while researching authoritarian control (how to control the lives of AIs within the constraints of his choosing) that's even worse than the typical authoritarian political scheme b/c it's more focused on mind control instead of body control.

So, while agreeing he wrote some good stuff, I can also criticize his rationality, and say there is a substantial PF-related problem. He could do better; he isn't; there are consequences. The reason he doesn't do better has nothing to do with fatigue, it has to do with e.g. his closed-minded, ignorant rejection of Popper. This isn't a matter of lack of time and effort, it's bad judgement and arrogance.

When challenged, he has done things like use administrative action to suppress dissent. There's no excuse for that. He doesn't want a free speech forum. It's not just that he lacks time to read it; he doesn't value such a thing. There is a clash of intellectual and political values which is more important here than time/resource constraints. He doesn't wish he could do PF, he doesn't love the idea.

> Is free market economics collectivist? Is any theory predicting groups of people rather than individuals collectivist?

Free market economics is *not* a theory predicting groups of people.

> Individual humans are just groups of cells. Perhaps we should talk about cells rather than humans, it would be more scientific. (??) Or maybe cell biology is too collectivist, and we should work at the level of particle physics. (???)

There's a correct level to look at. E.g. in biological evolution, it's genes – not whole animals and not individual atoms. Atoms aren't replicators, so they aren't so interesting. It's the same here: cells don't think, reason, or make choices. An individual is an actor, but a single leg isn't.

There do exist some legitimate contexts for discussion of groups, but that doesn't prevent people from being detected as collectivists for openly displaying standard authoritarian, collectivist assumptions. He's literally *virtue signaling* that he's a collectivist anti-(classical)-liberal. It's not subtle. And, as with Popper stuff, there's no willingness to debate such things. Asking for rebuttals of Rand and Mises gets *even worse* responses than doing it with Popper and Deutsch.

> (In other words, I'm totally baffled by the position there.)

Do you want to learn about capitalism, individualism, liberalism, etc? There are books on the matter included in my reading recommendations. Questions and arguments are welcome at FI or here, if you learn enough to comment with more than bafflement.

> Here, we'd have to get into debating objectivism, which I expect would be a rather large sub-thread to try to support.

I think you'd need to learn about Objectivism prior to debating it.

> Do you mean he doesn't want to do PF on it, or do you mean he doesn't want to think about it? If the latter, what is your evidence of this?

If EY wanted to think much about AS, there'd be visible signs. They don't exist.

> For my argument for making trade-offs, I offer the complete class theorem:

This is irrelevant. You are starting with a bunch of assumptions which I don't agree with. You need to back off to more fundamental, basic, philosophical issues. This is the basic problem I constantly had with LW people – they can only talk with people who already agree with them about a ton of stuff. Far too many of their premises are assumed rather than considered, and therefore aren't available for discussion. Whereas if one is well versed in prior layers of abstraction, one can drop down to them and discuss them.

In other words, you're starting in the middle. And if you're like the LW people, your beginning hasn't been adequately consciously considered and you don't even know how to discuss it.

You're basically skipping past philosophy, which deals with big foundational questions, to get into the details of your unstated framework. (You – if you're anything like the LW posters – consider your framework stated because you state some later parts of it, while being blinded to the prior issues and basically taking common sense for granted there.)

One of the many things skipped is some framing of the problem itself you're trying to solve. Broadly you omit philosophy as a whole, but more specifically there's no preamble about what a decision is, why one would want to make one, what decision success is, etc.

Also the writing is terrible and can you please just not link Wikipedia again? E.g. it says:

> in the precise sense of "better" defined below

But it doesn't define a precise sense of "better" below. I don't know if the writer is stupid or this got screwed up because of multiple authors editing different sections at different times, but it's a typical example of how Wikipedia routinely sucks. And there's no real accountability and no decent procedures for fixing errors. And there's a politically biased moderation team behind the scenes. And links are unreliable because pages get edited.

> Maybe I already made this remark, but I'm struck by the way you/DD emphasize the analogy between evolution and epistemology,

I already posted:

>>>> DD seems to like the analogy between epistemics and evolution quite a lot.

>>> that's not an analogy. evolution is *literally* about replicators (with variation and selection). ideas are capable of replicating, not just genes.

I believe your inattention to detail is common and is one of the main reasons more people don't agree with me about many issues. I think people lack core intellectual skills like being able to read precisely, and this is a bigger issue in "disagreements" than actual contrary ideas.

> but miss the analogy between evolution and Bayes

I'm not missing anything. I'm trying to talk about *prior issues*, instead of within-framework math. Your framework is inadequately specified and involves a bunch of common sense and traditional assumptions about epistemology (more of those than any actual epistemological system), and *those* are where I primarily take issue with Bayes. The implications of fixing the prior issues can be discussed at a later date.

Nothing else really matters as long as the core philosophy issues are outstanding.

> Interesting point. On my reading of the content, it ought to be rather controversial (for the same reason as Inadequate Equilibria has met with backlash), but it is very much not.

I agree that the content of HPMOR *deserves* to be controversial, in some sense. If EY sat down with most fans, and carefully went over what he was actually saying, and what it implies *about their lives*, and pointed out ways they *are not acting according to what he was advocating*, he'd find most readers disagree with HPMOR in major ways (I'm sure he'd expect this, not be surprised). So, in some sense, it ought to be controversial b/c more readers ought to recognize they disagree. But it's not clear and aggressive enough in various ways to trigger this. There are things EY should have done better here, but he did OK, and lots of it should be blamed on 1) it's actually difficult 2) flaws in our culture and in the audience.

> Not sure what you mean here. The most I can come up with is something like "curi is thinking that Bayes is necessarily missing anything which CR has" or something (which is unlikely to be really what you're thinking here). Eliezer wrote a bunch of explicit stuff about what good explanations look like around the topic of "hugging the query", mysterious answers, etc.

Epistemology is the name of a field which deals with certain basic questions like: what is learning, how do you learn, how do you evaluate ideas, how does reason work, which arguments should be judged to win a debate, what are the right methods of thinking?

LW/BE *literally doesn't answer this stuff* in any kind of serious, comprehensive way. LW/BE instead has a mix of:

1) partial answers on specific sub-issues. it has some details which are relevant.

2) assumptions (from common sense, tradition, culture, etc)

3) some rather vague comments on epistemology, e.g. the 12 virtues of rationality do have epistemology content but do not resemble an actual framework or system with clear principles and methodical answers

4) much more rigorous, comprehensive work on some specialized sub-fields of epistemology, which make assumptions about the foundational issues they don't address

There is no unique Bayesian *Epistemology*. To the extent I've ever gotten answers about major epistemology questions, it's either details (pieces of epistemology with key parts of the bigger picture missing) or mainstream answers (rather than anything Bayes-specific, and with the standard flaws).

> Eliezer wrote a bunch of explicit stuff about what good explanations look like around the topic of "hugging the query", mysterious answers, etc.

That's the wrong kind of material. What I'm interested in is underlying methodology for discovering and judging such things, not the conclusions reached. I want to deal with *starting points of intellectual inquiry*, not skip past those.

If you look at http://lesswrong.com/lw/ly/hug_the_query/ maybe you can see how it opens with assumptions about rationality and starts getting into some more detailed issues. This is a *piece of* an epistemology, but isn't the fundamental core of one.

> but not very useful at all if the point is to *convince*

You have to *learn* to be convinced. You have to convince yourself. CR is *far deeper and more complex than you realize*, and all you're getting in this conversation are abbreviated indications of positions. If you want more, it's available, but I don't know how to repeat all the content in 1% of the word count, and I'm not trying to.

> I would like to make a request that you don't make any arguments begging the question of PF/CR's superiority going forward. Would that be too much, though?

Could you point out a single instance of me doing this?

You quoted the text, "CR does so much better than this by giving actual methods of consciously thinking about and analyzing things, and explaining how rationality and critical debate work, and methods of problem solving and learning."

But *this is not an argument*. Maybe you're so used to such low quality of argument that you actually thought this non-argument was intended as an argument? It had a different, descriptive purpose.

Additionally, *that is not my text*. Please don't try to comment about my discussion history unless you pay attention to who wrote what.

If you have a reference for something being overlooked, such as an argument for why we have to make up fudged numbers (which doesn't make unstated non-CR framework assumptions, but instead actually argues from first principles), please link it instead of complaining *non-specifically* that other people haven't responded to some things on the internet that you'd like responses to.

> Could it possibly be that, because you disagree with the formal epistemology EY has settled on

There's no such thing as a fully formal epistemology. You can have a formal *part* of an epistemology, but not the whole thing. To the extent you have a formal epistemology, it's just missing huge pieces. What is learning? You can't just start answering without having a part explaining the basic concepts you're using, the conceptual gist of your answer, and how the math relates to the question (math isn't self-explanatory). If you omit that part, you're relying on prior work (somewhere, by someone – references please!) or, worse, the common sense intuitions of your audience. The starting points of the field are not formal. You may attempt to formalize them, but you'll need some kind of bridging material which gets from the starting points to your formal system.

As long as this issue of starting points is outstanding, the rest basically doesn't matter.

You also try to present this like some kind of disagreement when EY is *massively ignorant* of CR (he's written several ignorant things about CR). That's different from disagreeing. Also, disagreeing about particular arguments is pretty different from "I have a systematic philosophy that starts at the start; where's yours?" Objectivism, btw, also addresses starting points. It's a reasonably typical thing for philosophers to attempt – but LW/BE is kinda philosophy-hostile. There are plenty of philosophers I don't like and disagree with – e.g. Kant – but whom I acknowledge as having spoken to the fundamental questions of epistemology. If LW/BE/EY has done this, please provide a reference; everyone I spoke to at LW just had mainstream ideas – that had nothing much to do with Bayes – when I brought this stuff up. In other words, they seemed satisfied that BE is premised on various aspects of mainstream, conventional epistemology (rather than being a complete alternative). Everything I've read from EY is the same way, unlike with Kant or Rand.

> The stuff in the essay which I thought contradicted PF (iirc) was (a) the idea that EY could not realistically expect any benefit from discussing this with Pat, and would have been better off abandoning the conversation,

I didn't think Pat had a lot to offer, either. But I also don't think such judgements are very reliable. It's easy to take someone dramatically better than you and think they're dumb because you don't understand their advanced ideas.

So what do you do to avoid bias, so that you don't systematically ignore genius after genius (mixed in with a much larger number of fools)? You need some *methods* to be followed. I propose that EY either use PF or *write down alternative methods that he uses instead*. I propose that it's a bad idea to just make an ad hoc judgement of Pat and move on in such a way that Pat can't correct a mistaken judgement.

> (b) Eliezer's insistence that even thinking the thoughts about how to respond to Pat-like objections is bad rationality practice, because it trains you to think of the wrong things.

if people have problematic frameworks, challenge the frameworks, and refer them to canonical material instead of getting into details with them. such an approach is PF-compatible. i totally agree that focusing a lot of attention on the wrong questions and issues can be bad. so don't. but do speak to the meta-disagreement (once, in writing, is fine), state your own framework and values, etc. hell, that's part of what EY is doing with the Hero Licensing essay. that essay helps speak to people like Pat instead of just answering them with unexplained silence! awesome! but there are no PFs with EY about CR, Objectivism or Friendly AI – there is no reasonable, realistic way for me to correct him about those matters.

> 1) The important idea that you are talking to the person you are talking to, not some impartial 3rd-person observer.

you are allowed to talk to individuals, if you wish to. that's more time-consuming than speaking about issues generally, in reusable ways that apply more broadly. it's more parochial. i have nothing big against it. but don't do it exclusively, instead of writing the more important, less parochial stuff. (i think EY broadly agrees about this – he is more interested in writing essays than debating particular individuals. good. the ideas are what matter most, not the personal confusions of some guy.)

> The territory is objective, but maps are fundamentally subjective.

what do you mean by "subjective"? i want very high precision, or else don't use the word at all – replace it with something else.

this is one of the worst, most problematic words in philosophy. it's a major cause of confusion, and people talking about it are rarely on the same page.

> To me it seems as if PF equivocates between being able to respond to criticisms to your own satisfaction and being able to respond to the other's satisfaction

there's an objective state of the debate, which you can create high clarity about, and then judge. you can keep that judgement itself open to error correction, as you should do with literally everything.

you are unfamiliar with the epistemology which enables this, but it does exist and is available to be learned.

> and it is based on this equivocation that it suggests a path forward should always be open.

what is your rival position? that sometimes you should *permanently* shut down error correction on some topic? or that you should sometimes make error correction so slow, indirect, implausible, etc., that it's unrealistic and doesn't constitute a reasonable path forward?

i think you're overly focused on other people. if you look at rational handling of internal disagreements (within one mind), that'll be revealing about the irrational and authoritarian nature of "ignore the other side of the debate" type thinking. "that other part of my mind is dumb and not even worth talking to"...

> 2) The idea that implicit, inarticulable, or unjustifiable beliefs are not necessarily bias to be set aside

you are not being precise enough. that's actually one of the big things necessary as part of learning CR: learning to think much more precisely than is normal.

implicit? implied *by what*?

inarticulable? this means it's *impossible* to articulate. it doesn't mean you merely don't know how at this time. is that what you meant to write? my first guess is you don't mean that. it'd require more elaboration if you do mean it (which beliefs are *impossible* to articulate? why? what counts as articulating? articulation doesn't have a precise enough standard English meaning to use without details in such a big claim about impossibility.).

unjustifiable? *no* beliefs can be justified. among other things, justification has a regress problem. (justification comes from some source of justification, but the source of justification (or the belief that it is such a source) needs its own justification, etc)

i didn't claim these things are all *necessarily* bias to set aside. you should use quotes instead of putting such strong words in my mouth (with other words following undefined slashes).

> that we don't necessarily benefit from maximally exposing ourselves to arguments

i did not say *necessarily* or *maximally*. that is not even close to what i've been saying. i've said there are lots of benefits to lots of exposure, which is rather different. and i've said don't block paths forward entirely (no exposure, or some approximation of that). this issue – totally blocking off corrections, so there are *no* solutions – is the one i primarily care about, not anything about maximizing exposure.

> that this is necessarily 'error-correction', as it may instead make us more swayed by the biases of others / of groupthink

being swayed is not part of the CR epistemology. that is not how to think. don't do that.

> We know more than we know we know, almost by sheer force of logic (it would be difficult, though not quite impossible, to know we know everything we know).

yes of course. traditions are often wiser than the people using them. this stuff is covered extensively in our philosophy.

> We can properly justify only a subset of what we know we know.

this is a good example of making a mainstream (not Bayes-specific) epistemology assumption, and using it as a premise. why exactly would you want to justify anything, and how do you think you can? you may find the answers obvious, but *where are they written down* in a serious way which e.g. covers the well-known views in the field and your positions on each? and then how do your answers differ from some fairly typical non-Bayesian answers?

> So restricting ourselves to only steer by those beliefs which we can properly explicitly justify

is certainly not something a CR advocate would ever propose, seeing as CR rejects justification. you're trying to argue with something you're far too ignorant of to debate. you should be trying to learn instead of arguing with your huge misconceptions that come from mixing PF with large amounts of your own non-CR epistemology.

> Dismissing an uneasy feeling about a business deal because we can find no logical reason for it may be unwise, say.

of course. i've said so myself, repeatedly, and in detail.

> 3) I know you'll protest this, and I'm not sure exactly if it's a function of PF or just the way you use PF, but it seems like PF ends up putting a very adversarial frame on discussions, with criticisms and defenses, rather than a collaborative truth-seeking frame.

the frame used is identical whether dealing with other people or with internal disagreements in your own head. am i my own adversary? nah. people need to stop taking criticism of ideas personally. a criticism is an explanation of a flaw in an idea. we need those. they are very directly crucial to error correction. of course people should collaborate and think objectively – e.g. by criticizing ideas in the same way regardless of their source (criticize your own ideas with the same methods you criticize other people's ideas – and stop thinking so much about whose idea is whose).

i do use certain hot-button words, like "criticism", which many people are emotional about. i use these words because: they aren't an emotional problem for me (or for others, like Popper, who also used them); they're clearer; and there are no quick/easy fixes here (using the term "constructive criticism", for example, would have major problems, and anyway it's primarily a substantive issue, not a terminology issue).

i think the main thing going on here is that i'm talking epistemology, not psychology. i'm talking about the philosophical issues, not how to communicate with fragile people, which i consider a separate issue of much lesser importance – and one which, in any case, can only be worked out after you figure out the actual philosophy (first you need to figure out intellectually what has to happen, *then* you can come up with some "nice" way to talk about it). secondarily, i also reject a variety of social norms, for various reasons that aren't primarily about CR (which is fairly neutral about how to converse as long as you're speaking clearly and honestly to the issues. CR basically sticks to epistemology and doesn't include a criticism of modern social dynamics).


curi at 11:07 PM on December 19, 2017 | #9426

social norms about criticism

> i'm talking about the philosophical issues, not how to communicate with fragile people, which i consider a separate issue of much lesser importance,

Maybe it's not always the case that people are fragile; maybe they're just operating under a social system that interprets some kinds of criticism and discussion as hostility. If they get a signal that says, “I'm not going by the usual rules. I'm criticizing your ideas, but I don't hate you and I don't have a goal of hurting you,” then they can understand it pretty quickly and not be bothered by the criticism.


Anne B at 4:55 AM on December 20, 2017 | #9427

What you described is a type of fragility.


Anonymous at 5:06 AM on December 20, 2017 | #9428

(Response to #9297-9315)

> the "training" idea is dumb. if you spend the discussion understanding why Pat's perspective is wrong better, then you won't be trained to do it, you'll be helped not to do it.

In my experience, it does seem like people who take it upon themselves to respond to very poor-quality criticism (I am thinking of stuff on Twitter here) generally lower the quality of their thinking as a result. Thoughts tend toward ground recently tread, and toward the level of standards recently experienced. There is some effect of avoiding the mistakes observed in others, but as for that, engaging with better intellectuals also means learning to avoid more nuanced errors.

> you don't specify goals for AI anymore than you do for children, you fucking tyrant. people think of their own goals and frequently change them. the AGI ppl want to not only set the goals but prevent change – that is, somehow destroy or limit important intelligence functionality.

This depends on details of how powerful AGI systems will be designed. Almost all systems today are designed by choosing a loss function for, e.g., neural network training (which is very much like choosing a utility function) and then optimizing for it. So, in a way, that is the safest bet for how powerful systems will be designed in the future. But the more important thing to debate is Bayesian foundations, which suggest that very powerful systems still require goals to be set from outside.
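
To make the loss-function analogy concrete, here is a minimal toy sketch (purely illustrative – the particular loss, target value, and learning rate are made up, and this is not any real training setup): the designer picks the objective, and the optimization loop only ever serves it.

```python
# Toy sketch only (illustrative, not real training code): the "goal" here is a
# loss function fixed by the designer, and the optimizer just reduces it.

def loss(w):
    # Designer-chosen objective: squared distance from the target value 3.0.
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the loss with respect to w.
    return 2.0 * (w - 3.0)

w = 0.0              # initial parameter
learning_rate = 0.1

for step in range(100):
    w -= learning_rate * grad(w)   # follow the gradient downhill

print(round(w, 4), round(loss(w), 8))  # w ends up near 3.0; the goal never changed
```

Nothing inside the loop questions or revises the objective; that is the structural sense in which current systems get their goals from outside.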

> As to the actual issue, a U-maximizer's first choice would be to *persuade you that U-maximization is better than V-maximization*. The lack of thought about persuasion is related to the lack of respect for error correction and, more politically, for freedom vs. tyranny, voluntary vs. involuntary, etc.

Better in what sense? The standards programmed into the AI? The U-standards or the V-standards?
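
To spell out the ambiguity I'm pointing at, here is a toy sketch (the options and utility values are invented purely for illustration): the same choice ranks first under one utility function and last under the other, so "better" is undefined until you say which standard is doing the scoring.

```python
# Toy sketch (utility numbers are invented): which option is "better" depends
# entirely on which utility function does the scoring.

options = ["keep_maximizing_U", "switch_to_V"]

U = {"keep_maximizing_U": 10, "switch_to_V": 0}   # the agent's programmed standard
V = {"keep_maximizing_U": 0, "switch_to_V": 10}   # the rival standard

print(max(options, key=lambda o: U[o]))   # -> keep_maximizing_U
print(max(options, key=lambda o: V[o]))   # -> switch_to_V
```

From inside the Bayesian framing, neither ranking is privileged, which is why I'm asking which standard the persuasion argument appeals to.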

> > 1. Costs of PF.

You pivot to a discussion of benefits of PF. You agree with my point that there are other ways of making progress than addressing external criticisms, and indicate that the big problem is really that EY is so non-PF that he is still advocating wrong positions after a decade (I take it you are implying that there is some lower standard of PF which EY could presumably meet, via which he would avoid failures on this scale while making what he might see as a reasonable trade-off for his time).

Overall, I agree that these things can be discussed as a matter of degree, addressing trade-offs in the amount of time spent, using filters of varying strengths for deciding when to take time to interact with people, etc. I agree that feedback from others is indispensable for intellectual progress. I disagree with the contention that filter criteria should necessarily be public. I agree that having non-public criteria allows people to apply their own biases and refuse to engage those they disagree with, but people can find ways of ignoring evidence anyway. You'll say that one should grasp at every safeguard against this, but the approach you describe doesn't particularly strike me as the right trade-off.

Instead, it seems to me that even if your goal is purely to come to the right conclusions, and even taking into account that a few of the interactions which seem like "obviously a waste of time" on the surface will in fact be high-payoff, there will be a number of interactions which are just not worth the time; and trying to explicitly state filter criteria for these will often be more trouble than it is worth.

> > Like points 1 and 2, my concern is that you seem to want to defend PF by retreating to a version whose job is not to address those concerns, instead stating that PF is so valuable that an organization should drop any other concerns in order to achieve PF.

> PF is so valuable that an organization should drop any other concerns in order to achieve PF. There you go.

This seems like an empty argumentative move without much content to respond to, which reinforces my impression that you're refusing to think much about what people/organizations actually want and need and what would really motivate them, *or* what would actually help them to have better epistemics in any nuts-and-bolts way. Imagine a parallel universe where you had helped a number of organizations to install better epistemic practices, and had more experience with what tends to work, what comes up against resistance, etc. It seems to me like that alternate you wouldn't have generated a sentence similar to the above.


PL at 4:29 PM on December 24, 2017 | #9431

> In my experience, it does seem like people who take it upon themselves to respond to very poor quality criticism (I am thinking of stuff on twitter here) do generally lower the quality of their thinking as a result. Thoughts tend toward ground recently tread, and level of standards recently experienced. There is some effect of avoiding the mistakes observed in others, but as for that, engaging better intellectuals also means learning to avoid more nuanced errors.

if you're unable to reject bad ideas intellectually, trying to somehow figure out which ones are bad and avoid them (b/c you can't answer them in an effective way that applies to your actual life) is a terrible "solution" – it's just straight anti-truth-seeking.

you don't see Elliot making shitty arguments just cuz he spent some time on LW recently. he knows better. learn better instead of trying to avoid exposure to bad ideas. learn how to actually refute them and apply refutations to your life (instead of them only being abstract). if you can't do that, you suck at thinking – you aren't better than the ideas you claim are bad but can't handle, and you may be worse.

> But the more important thing to debate is Bayesian foundations, which suggest that very powerful systems still require goals to be set from outside.

do you think human goals are set from the outside? you aren't really addressing the comparison.

> Better in what sense?

whatever the *true* sense is – that's what we should look for and what's most persuasive. that'd be part of the argument – it'd say what the right kind of "better" to look for is for this issue.

have you just given up on truth?

> I disagree with the contention that filter criteria should necessarily be public.

that's necessary if you want to make a *public* claim to rationality and not be laughed at by all wise men. if you say "i'm secretly/privately rational, and i want a rational public reputation", you're silly.

> there will be a number of interactions which are just not worth the time

has the issue been addressed, ever? if not, on what grounds can you ignore it? if you have any grounds, why have they never been written down? if they have been written down, then addressing the issue takes 3 seconds: give the link and you're done.

> This seems like an empty argumentative move without much content to respond to,

are you so much of an idiot, who speaks only to idiots, that you mixed up that conclusion statement with an argument – and then complained the non-argument didn't have enough argumentation in it?

you got what you asked for. you also, separately, got arguments.

> It seems to me like that alternate you wouldn't have generated a sentence similar to the above.

you mean a hypothetical Elliot, with opposite values – a second-handed appeaser who says whatever is popular for manipulating members of his culture – would make different statements? yeah that person also would hate PF and FI.

it's bizarre though because it's exactly the sentence you explicitly requested. of all the sentences, you're complaining about the one you wrote and ET copy/pasted from you!? really!? literally *you* wrote the entire sentence you're focusing so much hostility on.

maybe you should focus on the issues that matter, like what your epistemology is – or whether you even have one.


Dagny at 6:40 PM on December 24, 2017 | #9432

#9355

> Yes, I know Popper solved a major problem when he invented CR and the world did not take notice. Having an AGI sitting in your face is a different story though!

People have billions of humans sitting in their faces, many of whom do amazing things, and yet some of those people view humans as wicked, nature-destroying, parasitic chemical scum that will be wiped out by a virus one day.

Philosophy is everything.


Anonymous at 5:29 AM on January 13, 2018 | #9453

I updated the post with the (old) replies from the one guy at MIRI who responded (with non-answers).


curi at 1:15 AM on December 5, 2019 | #14697
