Most of this paper is code/math rather than philosophy. I am only criticizing its philosophy. It may be a very good paper within its field.
Consider a real world phenomenon f that is being investigated by an agent M. M performs discrete experiments x on f. For example, x might be a particle diffraction experiment and f(x) the resultant probability distribution on the other side of the diffraction grating. By a suitable encoding of the experiments and results we may treat f as a function from N = {0,1,2,...}, the set of natural numbers, to N. A complete explanation for f is a computer program for f. Such a program for f gives us predictive power about the results of all possible experiments related to f. We are concerned about the theoretical properties of the agents which attempt to arrive at explanations (possibly only nearly correct) for different phenomena. In what follows we will conceptualize such agents as learners (of programs for functions).

The use of the word 'explanation' here is not how Popper uses it. This is because they are not philosophers and are not doing philosophy. I am not faulting them for that, but I was rather hoping for a critique of Popper's philosophy, which this is not. I discuss the word 'explanation' more later.
An inductive inference machine (IIM) is an algorithmic device which takes as its input a graph of a function N -> N, an ordered pair at a time, and, as it is receiving its input, outputs computer programs from time to time.
The use of the word 'induction' here *is* how I use it. Their use of induction puts the *data first*, and then "explanations" are created second, based on the data.
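To make that data-first picture concrete, here is a minimal sketch (my own illustration, not from the paper) of an IIM as code. The class and method names are hypothetical:

```python
# A minimal sketch of an inductive inference machine (IIM), as I
# understand the paper's definition. Not from the paper; names are mine.
# The machine receives the graph of f one ordered pair (x, f(x)) at a
# time and, from time to time, outputs a program (here: a Python
# callable) as its current "explanation" of f.

class IIM:
    def __init__(self):
        self.data = []  # ordered pairs seen so far

    def observe(self, x, fx):
        """Receive one point of the graph of f."""
        self.data.append((x, fx))

    def current_hypothesis(self):
        """Output a program consistent with the data seen so far.

        This trivial guesser just memorizes the data and returns 0
        everywhere else -- it is fully "consistent with the data" while
        explaining nothing, which is part of the point made below.
        """
        table = dict(self.data)
        return lambda x: table.get(x, 0)

# Example: feed it part of the graph of f(x) = x * x.
m = IIM()
for x in range(5):
    m.observe(x, x * x)
h = m.current_hypothesis()
print(h(3))   # 9 -- matches the data it saw
print(h(10))  # 0 -- an arbitrary guess; the data didn't determine it
```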
They assert their machines arrive at "explanations" (computer programs) which are correct or nearly correct using this inductive approach. This is, at least according to Popperian philosophy, impossible. Here are some reasons:
Any finite set of data is compatible with infinitely many theories. Only one is correct. The machine has no way to judge which theories are better than others. Therefore the machine cannot succeed. (Note: if we did have a theory telling us how to judge which are better than others, that would no longer be induction because all the content would be coming from this theory and not by induction from the data.)
There is no way to generalize data points into a theory. Imagine the data points on a 2-dimensional graph. A theory is a line on the graph (or the function which generates that line, if you prefer). I don't mean a straight line, it can curve around or whatever. A theory *consistent with the data* would have to go through every point. There are infinitely many ways to draw such a line. Any portion of the line between any two points, or after the last point, or before the first, can go absolutely anywhere you feel like on the graph at your whim while remaining fully *consistent with the data*. The points can be connected in any order. The data points provide no useful restrictions at all on which theories (lines) are possible. (IIRC this argument is in _The Logic of Scientific Discovery_).
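This underdetermination is easy to show concretely. Below is a sketch (my illustration, not from the paper or from Popper): take one polynomial through the data points and add any multiple of a polynomial that vanishes at every observed point, and you get infinitely many distinct theories that all fit the same data perfectly:

```python
# Given data points (x_i, y_i), the family
#   g_k(x) = g_0(x) + k * (x - x_0)(x - x_1)...(x - x_n)
# agrees with every data point for every value of k, because the
# product term vanishes at each observed x_i. So infinitely many
# distinct theories fit the same finite data.

xs = [0, 1, 2, 3]
ys = [0, 1, 4, 9]  # looks like y = x*x on these points

def vanishing(x):
    """Zero at every observed x, nonzero elsewhere."""
    p = 1
    for xi in xs:
        p *= (x - xi)
    return p

def theory(k):
    """One theory per value of k; all agree on the observed data."""
    return lambda x: x * x + k * vanishing(x)

for k in [0, 1, -7, 1000]:
    t = theory(k)
    assert all(t(x) == y for x, y in zip(xs, ys))  # fits every data point
    print(k, t(4))  # off the data they disagree wildly: 16, 40, -152, 24016
```

And these are only the polynomial theories; the argument above allows arbitrary lines, so the space of data-consistent theories is far larger still.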
Some people would say you should draw the smoothest line between the points, and avoid the bendy ones. This kind of sounds nice in English. But it's not so easy when you deal with real theories, especially philosophical theories. What is the smooth line to tell me the right theory about the morality of stealing? But also consider a field with lights that are turned off at 6am and turned on at 6pm, every day. If you make observations at noon and midnight daily, and draw straight lines between the data points, you will predict the lights are partially on in between your observations, which is wrong. When it comes down to it, no one has ever made a general-purpose theory of this sort (draw the smooth line) which works.
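Here is the lights example as code (my rendering of it, with hypothetical names), showing the "smoothest line" prediction failing between observations:

```python
# The field lights: off (0) from 6am to 6pm, on (1) from 6pm to 6am.
# Observations are taken at noon (off) and midnight (on).

def actual_lights(hour):
    """Ground truth: fully on or fully off, never in between."""
    return 1 if (hour >= 18 or hour < 6) else 0

# Observed data: (hour, state)
observations = [(12, 0), (24, 1)]

def smooth_theory(hour):
    """'Draw the smoothest line': linear interpolation between the points."""
    (x0, y0), (x1, y1) = observations
    return y0 + (y1 - y0) * (hour - x0) / (x1 - x0)

print(smooth_theory(18))   # 0.5 -- predicts the lights are half on at 6pm
print(actual_lights(18))   # 1   -- they are fully on at 6pm
```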
Further, the smooth line theory involves having a *theory first* (about the type of line most likely to correspond to a true theory), and then making guesses *based on the theory* and interpreting the data in light of the theory. So it's not really induction anymore.
In other words, because (following Popper) induction does not work, the inductive inference machine will not work.
The paper's abstract asserts that the way its induction machines are "Popperian" is that they make use of Popper's "refutability principle". Later the paper says:
Karl Popper has enunciated the principle that scientific explanations ought to be subject to refutation[23]. Most of this paper concerns restrictions on the machines requiring them to output explanations which can be refuted.

Unfortunately the word "explanation" in Popper's principle has a different meaning than the "explanations" which the machines output. In fact they are not creating any explanations at all in Popper's sense. So their machines do not follow his principle (except perhaps by loose analogy or metaphor).
Their sense of "explanation" is a correct program, i.e. one that can *predict* data points. But Popper's idea of an explanation is an English statement to *explain the underlying phenomenon*, not just to make predictions. The idea that scientific explanations are nothing more than instruments for making predictions is *instrumentalism*. You can find criticism of instrumentalism in _The Fabric of Reality_ chapter 1. Also in Popper (various places, like OK p 64-65).
The paper also talks about the reliability of their inductive inference machines. Their approach is justificationist: they attempt to *establish* the reliability of some knowledge (not as absolutely certain truth, just as reliable/partially-certain/supported/weakly-justified, whatever you want to call it). This is anti-Popperian. They do not provide criticism of Popper's arguments on the subject.