Suppose an approach to answering moral questions converges, in the limit, with the truth, but the calculations involved are more complex and require more computing resources than the right approach. Even in the limit, this would be a *wrong* approach, at the very least because wasting all those resources (as opposed to using the right approach) leaves fewer resources for avoiding mistakes, creating value, and so on.
Well before the limit, this lets us say that the non-utopian versions of consequentialism and deontology may well be convergent with true morality, yet still be wrong to hold or use.