It's easy to derive counterintuitive results from utilitarianism when it is taken as one's moral theory. The philosopher John Rawls attempted to remedy this defect in his essay, Two Concepts of Rules, by defining what he called “a practice”.
In Rawlsian terms, a game of chess can be considered a practice. We can justify the game itself (with arguments such as “It develops a capacity for abstract thought”) and we can justify actions which are governed by the rules of chess (“Yes, I can do that: it's called capturing en passant – read the rule book”).
After making this distinction, Rawls applies it in an attempt to resolve some of the more counterintuitive implications of utilitarianism. In particular, he claims that, because both promises and punishments are practices which increase overall utility, their rules must be followed on utilitarian grounds, even when direct calculation indicates that breaking the rules would increase utility.
Next, Rawls describes a set of rules called “summarizing rules”, which utilitarians should use because they save time and effort compared with direct calculation, thereby increasing utility. Because they do not define institutions, they are distinct from the rules which define a practice: they are formulated to prescribe behavior in preexisting situations, and do not define a situation itself. The situations to which they apply are thus logically prior to the rules, whereas the rules of a practice are logically prior to the situations which they define. Rawls suggests that these are merely rules of thumb, and so cannot, philosophically speaking, be described as rules at all. He does not deny, however, that their use is justified.
Summarizing rules are justifiable on the basis of utilitarianism because of the time and energy their use saves. Performing a utilitarian calculation is itself an action, and every action must have a cost, for any action requires that energy be expended and time consumed upon its execution; a calculation, being an action, therefore has a cost as well. Trivially, too, if x is an action which can be decided, there must be a method of deciding it. Formally, I will denote these principles as:
(a) ∀xy(Kxy → Ax) - “For all x, if x is a calculation of some y then x is an action”.
(b) ∀x(Ax → Cx) - “For all x, if x is an action then x has a cost”.
where 'Ax' denotes 'x is an action', 'Cx' denotes 'x has a cost (in terms of utility)', 'Kxy' denotes 'x is a (utilitarian) calculation of the utility of the action y'. I will also introduce a modifier 'ϻ' which converts actions 'x' to propositions, so that 'ϻx' denotes 'x is moral' and '~ϻx' denotes 'x is not moral.'
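These primitives can be made fully explicit by recording them as a small axiomatic theory in a proof assistant. The following minimal sketch is in Lean 4; the identifiers (Obj, prin_a, and so on) are illustrative names of my own, not anything standardized:

```lean
axiom Obj : Type                -- the domain of quantification
axiom A : Obj → Prop            -- Ax: "x is an action"
axiom C : Obj → Prop            -- Cx: "x has a cost (in terms of utility)"
axiom K : Obj → Obj → Prop      -- Kxy: "x is a calculation of the utility of y"
axiom M : Obj → Prop            -- ϻx: "x is moral"

axiom prin_a : ∀ x y : Obj, K x y → A x   -- (a) every calculation is an action
axiom prin_b : ∀ x : Obj, A x → C x       -- (b) every action has a cost

-- The composition the later argument relies on: every calculation has a cost.
theorem calc_has_cost : ∀ x y : Obj, K x y → C x :=
  fun x y h => prin_b x (prin_a x y h)
```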
Rawls next defines “general rules”, which are rules that, in the situations to which they apply, have been established as being more accurate than the typical utilitarian calculation performed by an individual. In other words, general rules are justifiable on the basis of utilitarianism because the probability of an accurate calculation increases when they are used, so that total utility is increased by the use of the rule. The situations to which they apply are logically prior to the rules themselves, so the set of general rules is a subset of the set of summarizing rules.
The notion of using utilitarianism to justify rules concerning the use of utilitarianism is self-referential, and risks the possibility of logical paradox, whether the rules are logically prior to the situations they describe or not. And because utilitarianism is used to justify the rules, it can be used to refute the rules whenever a situation can be found in which breaking the rules increases utility. Without a deontological set of rules governing the use of the system, serving as axioms, utilitarianism must serve as its own metalanguage, and this is the reason the rules collapse. We will see that, as in set theory, this “naive utilitarianism” allows for the construction of paradoxes which ultimately motivate proofs that can formally refute it.
It will be useful to formalize the relationship between the utility of an action and the cost of the calculation of the action in the following principles of utility, where ℧ is a functor mapping the linguistic description of an action to some r ∈ ℝ associated with the utility yielded by the action, and ℭ a functor analogously mapping to the real number associated with the cost incurred by the action of calculation.
(U.1) ∀xy(Ax & Kyx → ℧(y) = ℧(x) + ℭ(y)) - “For all x, y, if x is an action and y a calculation of the utility of x, then the utility yielded by performing y equals the utility of x (assuming execution) plus the cost (a negative number) of the calculation y.”
(U.2) ∀x(~ϻx ↔ ℧(x) < 0) - “For all x, x is not moral if and only if the utility of x is less than 0.”
(U.3) ∀xy(Kyx → ℭ(y) < 0) - “For all x, y, if y is a calculation of the utility of x, then the cost of y is a negative number.”
(U.4) ∀xy(Kyx → (~ϻy ↔ ~◇Dx)) - “For all x, y, if y is the calculation of the utility of x, then it is not moral to perform y if and only if x is morally undecidable.”
In the last principle, the predicate D, where Dx denotes “x is morally decided” (so that ◇Dx denotes “it is possible to morally decide x”, i.e., “x is morally decidable”), requires some explanation. U.4 states that an action x is morally undecidable when the action y that calculates the utility of x costs more utility than is gained by executing x. To clarify this idea, I have constructed a morally undecidable action.
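Before examining that construction, a toy calculation (with purely illustrative numbers) shows how these principles interact. Suppose that executing an action x would yield ℧(x) = 1, while its calculation y costs ℭ(y) = −2. Then by U.1, ℧(y) = 1 + (−2) = −1 < 0, so y is immoral by U.2; and therefore, by U.4, x is morally undecidable, even though x itself has positive utility.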
Consider the action g, where U = “Utilitarianism” and g = “Deciding that g is immoral according to U”. Is g a moral action or an immoral action?
Suppose that we perform action g, using U. Then we have just decided that g is immoral according to U, so g is immoral, and we have just performed g, which implies that we have just committed an immoral act. Thus g is immoral according to U. Similarly, if we use U to decide g either way, we will have committed an immorality.
For suppose we use U to decide that g is moral. Then we should be able to perform action g. But if we do, then we will have just committed an immorality, according to U. Thus we cannot decide that g is moral without causing U to make inconsistent predictions. But if we use U to decide that g is immoral, then we will have performed action g, which we have just decided is immoral. Hence g must be immoral, but the only way to decide that it is immoral is to perform an immorality. Thus g is morally undecidable.
It could be argued that such actions have no moral value; that they are merely linguistic paradoxes of little consequence to moral theory. The fact is, however, that if an action has no moral value, then expending time and energy - however little - to decide that action results in decreased utility which is not compensated for by the (presumably meaningless) result of the decision. Hence the action yields negative utility, and so is immoral, according to U, by definition.
Action g is based on Kurt Gödel's statement g, with which he proved the incompleteness of any sufficiently powerful formal system S, and which asserted, in essence, “Statement g is not provable in S”. It is much easier, however, to construct paradoxical statements in English than within formal systems, so more powerful results than Gödel's can be obtained.
Consider, for example, the action h, where h = “Deciding that any action is immoral according to U”. Then by the same argument which showed that g is immoral, it can be shown that h is immoral, so that using utilitarianism to decide that any action is immoral is, in fact, immoral.
It could be argued that such undecidable actions are artificial paradoxes, appearing only when an action's description is intentionally constructed so as to refer to its own immorality. To refute this argument, I will present a generalized proof that h is immoral which does not require the construction of any paradoxical statements. To do so, I will assume the following (self-evident) principles, which apply for any propositions p, q or actions x, y:
(c) ∀x(◇Dx → ∃y(Kyx)) - “For all x, if x is morally decidable, then there exists an action y such that y is the calculation of the utility of x.”
(d) Dp ⊢ p - “If p is morally decided, then it is true.”
(e) D(p & q) ⊢ Dp & Dq - “If a conjunction is morally decided, then each conjunct is morally decided.”
(f) ⊢p ⇒ ⊢□p - “If p is a theorem, then p is necessarily true.”
(g) ~∀x(ϻx → Dϻx) - “It is not the case that all moral actions are decided as moral.”
(i) ∀x((ϻx & ◇Dx) → ◇Dϻx) - “If x is moral and it's possible to decide x, then it's possible to decide that x is moral.”
Principle (f) is standard in modal logic, and principle (g) is simply the statement that no one is morally omniscient. The rest are implied by the nature of utilitarianism itself. Along with the principles of utility, principles (a)-(i) are sufficient to deduce the generalized equivalent of the statement “h is immoral”. I will call this “Theorem 1”.
Theorem 1: ∀x(~ϻx →~◇Dx) - “For all actions x, if x is immoral then it is not possible to morally decide x.”
Taking the contrapositive of Theorem 1 yields:
∀x(◇Dx → ϻx)
I assume the negation and derive a contradiction, establishing Theorem 1 by reductio ad absurdum. (The quantifiers range over actions throughout, so conditions of the form Ax, such as the Aa which U.1 requires at step (9), hold automatically.)
(1) ~∀x(◇Dx → ϻx) Assumption [for reductio]
(2) ∃x~(◇Dx → ϻx) Quantifier Exchange
(3) ∃x(◇Dx & ~ϻx) Negated Arrow
(4) ◇Da & ~ϻa Existential Instantiation
(5) ◇Da, ~ϻa & Elimination
(6) ∃x(Kxa) Principle (c)
(7) Kba Existential Instantiation
(8) Ab, Cb Principles (a) and (b)
(9) ℧(b) = ℧(a) + ℭ(b) U.1
(10) ℧(a) < 0, ℭ(b) < 0 U.2, U.3
(11) ℧(b) < 0 Follows trivially from (9), (10)
(12) ~ϻb U.2
(13) ~◇Da U.4
(14) ~~∀x(◇Dx → ϻx) Reductio [(13) contradicts (5), negating (1)]
(15) ∀x(~ϻx → ~◇Dx) ~~ Elimination, Contrapositive
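For readers who want the reductio checked mechanically, here is a self-contained sketch of Theorem 1 in Lean 4. As in the proof above, the domain is taken to consist of actions, so the predicates A and C are suppressed; the identifiers (Act, decidableM, util, and so on) are illustrative names of mine, the omega tactic discharges the arithmetic, and the numbered comments track the corresponding steps of the proof:

```lean
axiom Act : Type
axiom moral : Act → Prop              -- ϻx
axiom decidableM : Act → Prop         -- ◇Dx: "x is morally decidable"
axiom K : Act → Act → Prop            -- K y x: "y is a calculation of the utility of x"
axiom util : Act → Int                -- ℧
axiom cost : Act → Int                -- ℭ

axiom prin_c : ∀ x, decidableM x → ∃ y, K y x            -- principle (c)
axiom u1 : ∀ x y, K y x → util y = util x + cost y       -- U.1
axiom u2 : ∀ x, ¬ moral x ↔ util x < 0                   -- U.2
axiom u3 : ∀ x y, K y x → cost y < 0                     -- U.3
axiom u4 : ∀ x y, K y x → (¬ moral y ↔ ¬ decidableM x)   -- U.4

-- Theorem 1: ∀x(~ϻx → ~◇Dx)
theorem theorem1 : ∀ x, ¬ moral x → ¬ decidableM x := by
  intro a hImm hDec
  cases prin_c a hDec with
  | intro b hK =>                                     -- steps (6)-(7)
    have h9 : util b = util a + cost b := u1 a b hK   -- step (9)
    have h10 : util a < 0 := (u2 a).mp hImm           -- step (10), via U.2
    have h10' : cost b < 0 := u3 a b hK               -- step (10), via U.3
    have h11 : util b < 0 := by omega                 -- step (11)
    -- steps (12)-(13): the calculation b is immoral, so a is undecidable,
    -- contradicting hDec and closing the reductio.
    exact (u4 a b hK).mp ((u2 b).mpr h11) hDec
```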
It could also be argued that the cost of deciding any action is trivial, and not of any perceptible moral consequence, so that the cost of such a decision need not be considered in normal, day-to-day life. This argument can be refuted by demonstrating the existence of actions which do increase utility, and so are moral, but which are still morally undecidable. Hence Theorem 2.
Theorem 2: ∃x(ϻx & ~◇Dϻx) - “There exist actions which are moral, but which cannot possibly be decided to be moral.”
To establish deductively that such moral but morally undecidable actions exist, I will assume the negation of Theorem 2, i.e., ~∃x(ϻx & ~◇Dϻx), which is logically equivalent to ∀x(ϻx → ◇Dϻx): “For any action x, if x is moral, then x is morally decidable (as moral)”, and prove Theorem 2 by reductio.
Note that, rather than assume this negation twice, I have used one assumption for both Arrow Introduction and reductio, without discharge. Effecting the necessary modification would be trivial. The structure of this proof is based on Frederic Fitch's proof of the fifth theorem of his paper, A Logical Analysis of Some Value Concepts, which concerned epistemic logic rather than ethics.
(1) ∀x(ϻx → ◇Dϻx) Assumption [for Arrow Introduction/reductio]
(2) D(ϻx & ~Dϻx) Assumption [for reductio]
(3) Dϻx & D~Dϻx by Principle (e)
(4) Dϻx & ~Dϻx by Principle (d)
(5) ~D(ϻx & ~Dϻx) Reductio on (2) [(4) is a contradiction]
(6) ~D(ϻx & ~Dϻx) Reiteration of (5)
(7) □~D(ϻx & ~Dϻx) by Principle (f)
(8) ~◇D(ϻx & ~Dϻx) Modal Negation Law
(9) (ϻx & ~Dϻx) → ◇D(ϻx & ~Dϻx) Instantiation of (1) with ((ϻx & ~Dϻx) / ϻx)
(10) ~(ϻx & ~Dϻx) Modus Tollens with (9) and (8)
(11) (ϻx → Dϻx) Negated Arrow
(12) ∀x(ϻx → Dϻx) Universalization of (11)
(13) ∀x(ϻx → ◇Dϻx) → ∀x(ϻx → Dϻx) Arrow Introduction (1) → (12)
(14) ~∀x(ϻx → ◇Dϻx) Modus Tollens with (13) and Principle (g)
(15) ~∀x(ϻx → ◇Dϻx) Reductio on (1) [(1) and (14) yield a contradiction]
(16) ∃x~(ϻx → ◇Dϻx) Quantifier Exchange
(17) ∃x(ϻx & ~◇Dϻx) Negated Arrow
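Since the structure is Fitch's, the core of this proof can also be machine-checked. Below is a sketch in Lean 4 in which I model the modality semantically, Kripke-style, with every world accessible from every other: “possibly p” simply means “p holds at some world”. Under that simplification, principle (f) and the Modal Negation law become definitional rather than axiomatic, and only (d) and (e) need to be assumed. All names (World, Dec, dia) are illustrative; the theorem shows that the negation of Theorem 2, taken as a premise, collapses ◇D into D, which principle (g) then refutes:

```lean
axiom World : Type
-- Dec p w: "it is morally decided, at world w, that p".
axiom Dec : (World → Prop) → (World → Prop)
-- Principle (d): whatever is decided is true.
axiom dec_fact : ∀ (p : World → Prop) (w : World), Dec p w → p w
-- Principle (e): deciding a conjunction decides each conjunct.
axiom dec_and : ∀ (p q : World → Prop) (w : World),
    Dec (fun v => p v ∧ q v) w → Dec p w ∧ Dec q w

-- "Possibly p": p holds at some world (all worlds mutually accessible).
def dia (p : World → Prop) : Prop := ∃ w, p w

-- If every truth is possibly decided (the negation of Theorem 2),
-- then every truth is already decided, which principle (g) denies.
theorem collapse (h : ∀ (p : World → Prop) (w : World), p w → dia (Dec p)) :
    ∀ (p : World → Prop) (w : World), p w → Dec p w := by
  intro p w hp
  apply Classical.byContradiction
  intro hnd
  -- Step (2): Fitch's conjunction, "p holds but is not decided".
  have h2 : dia (Dec fun v => p v ∧ ¬ Dec p v) :=
    h (fun v => p v ∧ ¬ Dec p v) w ⟨hp, hnd⟩
  -- Steps (3)-(5): no world can decide that conjunction.
  cases h2 with
  | intro v hdv =>
    have hpair := dec_and p (fun u => ¬ Dec p u) v hdv
    exact dec_fact _ v hpair.2 hpair.1
```

In this setting the Reiteration and necessitation steps (6)-(8) disappear into the semantics: what is proved for an arbitrary world holds at every world, and ~◇ is just the De Morgan dual of the existential over worlds.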
There may be various arguments against the significance or soundness of these results, but the final theorem, Theorem 3, cannot be so easily dismissed, and it does not rely on the other two. I require four more principles to prove it:
(j) ∀xy(Kyx → Qxy) - “If y calculates x, then y is a consequence of x.”
(k) ∀xy((◇Dx & Qxy) → ◇Dy) - “If x is morally decidable and y is a consequence of x, then y is morally decidable.”
(l) ∀x(~◇Dxₙ (∀n ∈ ℕ)) - “It is not possible to morally decide a set of (countably) infinite cardinality.”
(m) ∀P∀x((Px₀ & (Pxₙ → Pxₙ₊₁)) → Pxₙ (∀n ∈ ℕ)) - Principle of Mathematical Induction, where P is any predicate and ℕ is the set of natural numbers.
Principle (j) is true because any action y which is used to morally evaluate an action x would not have occurred without the occurrence of x. (k) is true of any consequentialist theory, and (l) is obvious. (m) is simply the Principle of Mathematical Induction, frequently used in all branches of mathematics and logic. These are sufficient to prove Theorem 3.
Theorem 3: ∀x(~◇Dx) - “For all x, it is not possible to morally decide x.”
Once again, the proof will be by reductio.
(1) ~∀x(~◇Dx) Assumption [for reductio]
(2) ∃x(~~◇Dx) Quantifier Exchange
(3) ∃x(◇Dx) ~~ Elimination
(4) ◇Da Existential Instantiation
(5) ∃y(Kya) Principle (c)
(6) Kc₀a Existential Instantiation
(7) Qac₀ Principle (j)
(8) ◇Da & Qac₀ & Introduction on (4), (7)
(9) ◇Dc₀ Principle (k)
(10) ◇Dcₙ Assumption [for Arrow Introduction]
(11) ∃y(Kycₙ) Principle (c)
(12) Kcₙ₊₁cₙ Existential Instantiation
(13) Qcₙcₙ₊₁ Principle (j)
(14) ◇Dcₙ & Qcₙcₙ₊₁ & Introduction on (10), (13)
(15) ◇Dcₙ₊₁ Principle (k)
(16) ◇Dcₙ → ◇Dcₙ₊₁ Arrow Introduction on (10), (15)
(17) ◇Dcₙ → ◇Dcₙ₊₁ Reiteration of (16)
(18) ◇Dc₀ & (◇Dcₙ → ◇Dcₙ₊₁) & Introduction on (17), (9)
(19) ◇Dcₙ (∀n ∈ ℕ) Principle (m)
(20) ∃x(◇Dxₙ (∀n ∈ ℕ)) Existentialization of (19)
(21) ~∀x(~◇Dxₙ (∀n ∈ ℕ)) Quantifier Exchange
(22) ∀x(~◇Dx) Reductio [(21) contradicts Principle (l), negating (1)]
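Finally, the regress that drives this proof can be exhibited concretely. The Lean 4 sketch below recasts principle (l) as “there is no infinite chain of calculations, each morally decidable” (my rendering of “no countably infinite set can be morally decided”); the chain c₀, c₁, c₂, … of steps (6)-(19) is built by recursion, with Classical.choose standing in for Existential Instantiation and Nat.rec supplying the induction of principle (m). All identifiers are again illustrative:

```lean
axiom Act : Type
axiom decidableM : Act → Prop              -- ◇Dx
axiom K : Act → Act → Prop                 -- K y x: "y is a calculation of x"
axiom Q : Act → Act → Prop                 -- Q x y: "y is a consequence of x"

axiom prin_c : ∀ x, decidableM x → ∃ y, K y x              -- (c)
axiom prin_j : ∀ x y, K y x → Q x y                        -- (j)
axiom prin_k : ∀ x y, decidableM x → Q x y → decidableM y  -- (k)
-- (l), recast: no infinite chain of decidable calculations exists.
axiom prin_l : ¬ ∃ f : Nat → Act,
    (∀ n, K (f (n + 1)) (f n)) ∧ (∀ n, decidableM (f n))

-- An action packaged with the fact that it is decidable.
abbrev DecAct := {a : Act // decidableM a}

-- One step of the regress: pick a calculation of a decidable action;
-- by (j) and (k) the calculation is decidable in turn.
noncomputable def step (x : DecAct) : DecAct :=
  let h := prin_c x.1 x.2
  ⟨Classical.choose h,
   prin_k x.1 (Classical.choose h) x.2
     (prin_j x.1 (Classical.choose h) (Classical.choose_spec h))⟩

-- The chain of calculations-of-calculations, built by recursion.
noncomputable def chain (x : DecAct) : Nat → DecAct :=
  fun n => Nat.rec (motive := fun _ => DecAct) x (fun _ ih => step ih) n

-- Theorem 3: ∀x(~◇Dx).
theorem theorem3 : ∀ x, ¬ decidableM x := by
  intro a ha
  apply prin_l
  refine ⟨fun n => (chain ⟨a, ha⟩ n).1, fun n => ?_, fun n => (chain ⟨a, ha⟩ n).2⟩
  -- Each link holds because chain (n+1) was chosen as a calculation of chain n.
  exact Classical.choose_spec (prin_c (chain ⟨a, ha⟩ n).1 (chain ⟨a, ha⟩ n).2)
```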
Thus unconstrained utilitarianism is incapable of serving as a general theory of ethics, as it is impossible to decide whether any action is moral on utilitarian grounds. It is best suited for constrained situations, such as in game theory, or in deciding on punishments; but attempting to justify the rules governing the practice of utilitarianism by utilitarian calculation inevitably leads to paradox and infinite regress. Hence these rules must be deontological, which implies that Rawls's formulation was backwards: in any such hybrid system, utilitarianism must be the judge's tool, and deontology the law-maker's.