My moral compass: Utilitarianism and fairness

I would like to dedicate this post to a discussion of my moral beliefs, in a broad, philosophical sense. As I alluded to in a previous blog entry (first paragraph), I have long since settled on a variant of utilitarianism. For those among you, dear readers, who do not know: utilitarianism is the philosophical position that those things are moral, or desirable, which maximize happiness, minimize suffering, or both. At first glance, this seems very sensible, but various problems may be raised in response, some of which I will cover later in this post. To begin with, however, I will describe my precise variant of utilitarianism in more detail. To do this, I will need to raise at least one possible objection to utilitarianism per se: the notion of fairness.

One might imagine that it is in principle possible to place all happiness in one half of individuals, and all suffering in the other half. If utilitarianism looks only at the total of happiness and suffering in all of existence, this would seem no more or less moral or desirable than a uniform distribution of happiness and suffering across all individuals. Personally, however, I do see a clear difference here, with the latter option being highly preferable to the former. Clearly, fairness comes into play. However, I do not actually see this as a problem with utilitarianism. Rather, it seems to be a question of how you quantify happiness and suffering.

First, I will need to clarify some of the definitions I will use in this blog. It is quite well established that the experience of happiness is rather relative. To a person who has known nothing but happiness, a certain absolute quantity of happiness (however one would determine such a quantity) might be worth less than that same quantity would be worth to a person who has seen much less happiness in their time. However, there is a linguistic problem with this comparison. After all, if you use “happiness” to mean the experience of happiness, then these persons would not really have seen the same amount of it at all, even if the same happy events happened to them both. Therefore, I will use “absolute happiness” or just “happiness” to describe happy events or circumstances in absolute terms, and “experienced happiness” to refer to the subjective happiness an individual feels due to such events or circumstances. The same distinction will be made for absolute and experienced suffering.

I posit that it is experienced happiness that is relevant to the concept of utilitarianism. What good, after all, is absolute happiness, if it cannot be enjoyed? In this way, there is a clear difference between the division of individuals into a group of “happies” and a group of “sufferers”, and the uniform distribution of happiness among all individuals: the second situation will have a much greater amount of total experienced happiness.

You might expect a person to adjust to their level of suffering in a qualitatively similar manner. However, this leads to another problem: suffering, unlike happiness, should be minimized. Going by the same logic as above for happiness, it would then actually be preferable to pile as much suffering as possible onto as few individuals as possible, so that the total experienced suffering would be lowest. This quite clearly clashes with our intuitions about morality and fairness. I see two possible solutions. One is that suffering does not, in fact, behave qualitatively like happiness, but instead counts more strongly the more of it you have: perhaps more and more suffering leads to despair and thus matters more and more. This would be gratifying, as we could stick to utilitarianism without any further additions. The other solution would be to say that utilitarianism per se does not suffice, and requires an additional, explicit notion of fairness that ensures suffering is spread out equally.

I have no clear preference for either of these solutions, although I suppose it would in principle be possible to determine experimentally whether the first one holds. The first is probably more elegant, but even the second is not so far removed from utilitarianism. After all, utilitarianism merely states that happiness should be maximized and suffering should be minimized; it says nothing about the required method of measuring and quantifying these emotions. You could say that even with this notion of fairness explicitly included, you still have a form of utilitarianism, one in which happiness and suffering are weighted based on how much of each is present in an individual.

This is, in fact, precisely what I believe. In the interest of fairness, both happiness and suffering should be spread as uniformly across individuals as possible. This will maximize experienced happiness, and it may indeed also minimize experienced suffering. Allow me to illustrate this concept more precisely with the help of some simple mathematical formulas. Let

H_i = h^x, \quad 0 < x < 1,

where H_i is the experienced happiness of individual i, and h is the total absolute happiness of this individual. The fractional exponent ensures additional happiness counts ever less strongly, without it ever becoming totally devoid of value.
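
To make the effect of the fractional exponent concrete, take x = 1/2 (a value chosen purely for illustration). Concentrating four units of absolute happiness in a single individual then yields less experienced happiness than giving two units each to two individuals:

4^{1/2} = 2 < 2^{1/2} + 2^{1/2} \approx 2.83.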

The total experienced happiness in existence can then be a simple summation of each H_i:

H_{tot} = \sum_{i=1}^{n} H_i,

where n is the total number of individuals capable of experiencing happiness in existence.

Similarly,

S_i = s^y, \quad y > 1,

where S_i is the experienced suffering of individual i, and s is the total absolute suffering of this individual. In this case, the exponent greater than 1 makes suffering count more strongly when more of it is present in any given individual.
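
The mirror-image illustration, again with a purely illustrative value of y = 2: concentrating four units of absolute suffering in a single individual weighs more heavily than spreading two units each over two individuals:

4^2 = 16 > 2^2 + 2^2 = 8.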

Again, we can sum the total experienced suffering in existence simply as follows:

S_{tot} = \sum_{i=1}^{n} S_i.

Now, all that remains is to find the distribution of absolute happiness and suffering that maximizes the term H_{tot} - S_{tot}!

It is quite easy to see that, under this model, the optimal distribution spreads both happiness and suffering uniformly among individuals. After all, given a certain amount of absolute happiness, it is preferable to start off by giving everybody a little, since the first bit of happiness counts the most strongly. If there is still happiness left over, it will again be given to everybody equally, to make optimal use of the still rather high payoff of absolute happiness, and so forth. Similarly, since it is desirable to minimize suffering, absolute suffering is best distributed equally across individuals, to make optimal use of the relatively low experienced suffering that comes with the first little bit of absolute suffering.
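
As a small sanity check on this claim, here is a minimal numerical sketch in Python. The exponents x = 0.5 and y = 2 and the budgets of absolute happiness and suffering are illustrative choices of my own, not values fixed by the model; the sketch simply compares a few candidate distributions over four individuals.

# Minimal sketch: compare H_tot - S_tot for a few candidate distributions
# of a fixed budget of absolute happiness and suffering over n individuals.
# The exponents are illustrative choices, not values fixed by the model.

x = 0.5   # 0 < x < 1: experienced happiness H_i = h_i ** x
y = 2.0   # y > 1:     experienced suffering  S_i = s_i ** y

def objective(happiness, suffering):
    """H_tot - S_tot for given per-individual absolute happiness and suffering."""
    h_tot = sum(h ** x for h in happiness)
    s_tot = sum(s ** y for s in suffering)
    return h_tot - s_tot

n = 4          # number of individuals
total_h = 8.0  # fixed budget of absolute happiness
total_s = 8.0  # fixed budget of absolute suffering

distributions = {
    "uniform":       ([total_h / n] * n,             [total_s / n] * n),
    "half and half": ([total_h / 2] * 2 + [0.0] * 2, [total_s / 2] * 2 + [0.0] * 2),
    "concentrated":  ([total_h] + [0.0] * (n - 1),   [total_s] + [0.0] * (n - 1)),
}

for name, (h, s) in distributions.items():
    print(f"{name:>13}: H_tot - S_tot = {objective(h, s):.2f}")

The uniform distribution comes out on top, and concentrating everything in one individual comes out worst (driven mostly by the convex suffering term), in line with the reasoning above.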

Of course, this is all very theoretical, and in practice it will not be possible to distribute happiness and suffering among individuals so deliberately. However, this model does allow any given set of choices to be analysed so that, given perfect knowledge (a minor obstacle, I will admit), the choice may be selected that achieves the utilitarianistically optimal distribution, taking into account the (additional?) principle of fairness.

The model leaves certain things as free parameters. The exponents x and y can be manipulated to change the weight placed on fairness. The quantities h and s represent total absolute happiness and suffering; the scheme by which these are quantified is left open, and can incorporate differences in sentience or consciousness between species.
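
To illustrate with numbers of my own choosing: as x approaches 1, the preference for spreading happiness disappears entirely, since 4^1 = 2^1 + 2^1; the further x drops below 1, the more strongly uniformity is rewarded. Likewise, the further y rises above 1, the more heavily concentrated suffering is penalized.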

It may be said that suffering and happiness should not be seen as separate variables, but rather as extremes on a single continuum, in which case the number of equations could be cut in half. However, I chose to leave them separate to allow more freedom in determining x and y individually. Additionally, merging these variables would disallow any distinction between much suffering offset by much happiness and little suffering offset by little happiness, which probably really are different in some way. It is probably best to spread both suffering and happiness equally among individuals, so that every individual has a bit of suffering to contrast their happiness against, but not too much.

Finally, let me come to some common objections to the philosophy of utilitarianism that are not remedied by this model of weighted absolute happiness and suffering. One such objection is its inability to take the value of life into account. After all, what is to stop a utilitarianistic society from killing its depressed members outright so as to limit the amount of suffering in the world? Well, the suffering that would cause to their loved ones, for one. Kill only those of them who have few loved ones remaining in the world, then, perhaps? Even assuming no additional suffering would be created, it seems intuitively obvious that this is an undesirable course of action.

I will counter this objection not by positing that inherent value should indeed be given to human (or other) life, but with more pragmatic considerations. Utilitarianism, in my view, should seek to maximize happiness and minimize suffering not just at a given point in time, but across all of time and space. It is possible that depressed individuals will in the future become happy, just as it is possible that happy individuals will fall into depression.

By itself, this counterargument seems lacking: in spite of these possibilities, the fact that these individuals are depressed now must mean that their expected lifelong happiness is lower, and killing them remains an attractive prospect to probabilistically improve the balance. This is where another practical deliberation comes into play: that of precedent. It is important to go about pretending that there is an inherent value to life even if there is none, in order to prevent ever more lenient applications of such reasoning, or even the justification of murder. This may sound like a terribly fallacious slippery-slope argument, but it need only reach as far as the former argument fails, which perhaps is not all that far.

The first argument, I think, is the stronger one: there may really be cases where killing an individual is the best course of action from a purely utilitarianistic point of view, but our imperfect knowledge means it is unlikely we will ever be able to reliably make that judgment, and so we must remain on the safe side.

Another objection to utilitarianism is that it fails to consider the value of (human) freedom to choose one’s own path. What if a person chooses circumstances that will end up making them unhappy? What if a person grows up under harsh conditions, but does not wish to leave because that is what they are familiar with? What if it would benefit the greater good to medically study some individuals, perhaps even people, against their will? What if, for this purpose, individuals are chosen who are already living a hard, unhappy life anyway?

Again, I think the answer, this time more convincingly, is precedent. In the face of uncertainty about the costs and benefits, it pays to be conservative. Furthermore, practically speaking, who determines the experienced happiness and suffering of individuals but the individuals themselves? Utilitarianism is interested in actual, not interpreted, experienced happiness and suffering.

Much remains uncertain even to me, and I am not positive that this model of utilitarianism, with its inclusion of the concept of fairness, is sufficient to truly be a moral framework. Even insofar as it is, I am unsure how practically applicable it is, given that it remains to be specified how to quantify happiness and suffering, even if we were to have perfect knowledge of where they occur (which we do not). Perhaps it really is necessary to additionally place value on life or freedom. However, it is my opinion that those raising these objections do away with utilitarianism too easily, without considering counterarguments to these problems.

I have not extensively studied utilitarianism, so it may well be that a model such as the one I have posited here, even with exponents x and y chosen explicitly below and above 1, respectively, already exists. Considering how well established the philosophy is, this would not surprise me. All the same, I wanted to post this blog entry to share my own thoughts on the subject, original or otherwise.
