A different consequentialism

I don’t know if the views I outline here are novel or not- I asked some professional ethicist friends and they weren’t sure. I know that G.E. Moore said somewhat similar things, but I still need to do some reading to discern how similar. That’s next on the list once I get over this fever.

I wanted to give an outline of my own ethical views, which could best be described as a non-utilitarian consequentialism, for two reasons: 1. because, if I am right, there are interesting implications for AI alignment, and 2. because too often utilitarianism and consequentialism are equated in popular discourse. In truth utilitarianism implies consequentialism, but not vice versa.

Let’s say someone asked you what specific features make something beautiful. Does symmetry make something beautiful? Does complexity? You explain that there isn’t really a specific list of features that will capture all the beautiful things and only the beautiful things. You might have a theory about what it means to say something is beautiful- maybe you think it means “I would like it under certain ideal conditions”- but this isn’t the same as a list of features of all and only beautiful things. Presumably such a list could in principle be drawn up, but it would be so long and disjunctive as not to count as a definition in any practical, humanly applicable sense. Of course it’s always possible that someone will find a parsimonious, workable set of criteria for beauty, but it seems unlikely- we have had thousands of years to look.

This is how I feel about the good. It’s no one thing, any more than beauty is “symmetry”, “complexity”, “novelty” or whatever else. Probably the most plausible attempt to define good in terms of a single thing is utilitarianism, yet it falls apart at the margins. Consider, for example, a universe transformed into nothing but endless brains-in-vats experiencing and re-experiencing simple but blissful and/or desire-satisfying moments. Is this good? I conjecture that every simple attempt to give a manageably short list of criteria for identifying the good will fail. Of course it’s possible that a simple enough definition exists and has so far gone undiscovered, but shouldn’t the burden of proof be on the advocate who thinks this is true to prove it, rather than on me to disprove it?

That doesn’t necessarily mean such definitions are useless, mind. Utilitarianism seems to work well in some cases. I agree with Robert Goodin that utilitarianism works best as a public philosophy- clear, reasonably precise rules for judging the actions of governments and other public actors. That these rules might fail at the margins is just reason to monitor their application carefully.

In no way does my approach to the good require us to abandon consequentialism. Consequentialism is simply the view that you should try to maximise the good, placing no special or agent-relative weight on whether your own actions are good. This is in no way incompatible with the anti-theory of the good I have outlined here.

Attentive readers might be wondering how this squares with my previous post, in which I pondered which ethical theories were the most and least susceptible to being used to rationalise whatever it is that you wanted to do anyway. It might seem that one unfortunate property of the anti-theory I’ve outlined here is that it is extremely susceptible to exactly that kind of rationalisation. There are huge free parameters both in the estimation of consequences and in the decision about which consequence would be most good.

I’ll concede this is a problem, but I have a 90% solution. In order to maximise accountability, my solution in public policy is to stick to utilitarianism in practice unless there is a very good reason to do otherwise. So we keep being utilitarian right up until the moment someone says “I have an idea to slightly increase overall utility but it will require us to destroy the Elgin Marbles”, or “Let’s drive a species extinct to increase mean income by a dollar”, or “Let’s tile the universe with wire-headed people”. Generally speaking utilitarianism will do alright. In the unusual cases, though, we allow ourselves the option to step back.

Certain readers might be wondering about the implications of this view for AI alignment. I guess there are broadly three ways to create a safe superintelligence: 1) create a superintelligence that understands, and supports, the human concept of good; 2) create a superintelligence that is obedient to its masters, and to the spirit of what they want, not just the letter; 3) some combination of 1 & 2. My argument, if correct, creates a problem for options 1 & 3, at least if we are trying to achieve 1 or 3 by giving an artificial intelligence necessary and sufficient conditions for identifying the good. Of course, if the machine learning explosion has shown anything, it’s that there are other ways artificial intelligence can grasp concepts. The problem, though, is that such pattern-recognition-based understanding is famously difficult to scrutinise or check.

If you enjoyed this article please consider joining our mailing list: https://forms.gle/TaQA3BN5w3rgpyqeA Also, a collection of my best writing between 2018 and early 2020 is available as a free e-book, “Something to read in quarantine: Essays 2018-2020”. You can grab it here.

Note to self: questions for further research include- does the concept of welfare or “good for a person” make sense on this conception of the good, where good is a feature of whole situations (evaluative holism)? Does the idea that some things are “intrinsic goods” make sense, if good is a feature of whole situations?
