A Utilitarian Code That Usually Prohibits or Does Not Require Optimific Behavior

ABSTRACT

I defend the utilitarian goal by arguing that although people differ in their capacity for sympathy, almost everyone has enough to make it reasonable to favor the most beneficial morality even if it is not in their own best interests, since very little sympathy is required when the conflict is between your own welfare and that of millions of other people. Favoring a morality differs from following it, so I also consider the problem of motivation.

The next question is what kind of morality would be best suited to attaining the goal. Act utilitarianism is far too demanding to be viable. For example, it requires you not only to refrain from killing innocent persons but also to save all of the lives that you could, both near and far, which would leave very little time for anything else. Instead I defend a kind of rule utilitarianism whose rules concerning help and harmful side effects only require sacrifices when others would gain more than you would lose, though the demands should be greater for harmful environmental side effects. Other rules, however, are strict in the sense of only allowing optimific violations in extreme cases, as with promises, theft, and killing innocent persons who would lose less than you would gain; and, paradoxically, only a limited area requires optimific behavior. Though it is often objected that utility can't be measured, I offer two ways of doing so.

In sharp contrast with common versions of rule utilitarianism, the present version has rules that prohibit optimific behavior except in extreme cases and rules that do not require it because they allow room for self-interest, leaving only a much smaller role for the requirement of optimific behavior where that requirement is both feasible and appropriate. The prohibitions include, among others, torture, theft, murder, other forms of deliberate harm, and the violation of promises. Rules for most forms of help and the avoidance of harmful side effects leave room for self-interest (and therefore supererogation) because a requirement of optimific behavior would be too demanding to be viable. For instance, we are only required to save lives when the gain to others would be a number of times our personal sacrifice, although this applies less to help in emergencies. Though this is sometimes ignored, it is clear that a moral code must be viable to be beneficial, and both act utilitarianism and some naive kinds of rule utilitarianism are too demanding to be viable. Nevertheless, I argue that despite this kind's prohibitions, reduced requirements, and the limited scope of its requirement for optimific behavior, it is the most beneficial kind of morality.
It is also clear that utility-oriented ethicists need to defend the utilitarian goal. To do so I observe that although people differ in both their actual and potential amounts of sympathy, almost everyone has enough sympathy to make it reasonable to favor the morality for their society whose effects on the world would be most beneficial even if it would not be ideal personally. (I say for their society because various factors would make different moralities most beneficial in different societies.) The reason is that very little sympathy is required when there is a conflict between your own welfare and that of millions of other persons, though since favoring a morality differs from following it, I also discuss the problem of motivation. This answers the objection that there hasn't been a successful defense of the utilitarian goal. I consider later the objection that utilitarianism abandons justice for utility, and the objection that it does not give sufficient weight to projects.

Graded demands for personal sacrifices
This version of rule utilitarianism is viable because it limits the overall amount of sacrifices by grading its demands. There are four levels of rules plus a level for supererogation. The rules at the first level, such as those for keeping promises and not killing innocent persons, are strict though not absolute. It might seem that while a utilitarian theory needn't always require optimific behavior, to be true to its name even its most demanding rules should never prohibit it. I would certainly like to agree that optimific violations are never wrong, but once it is recognized that utilitarians should favor the most beneficial morality, the question is whether the inclusion of rules that only allowed violations in extreme cases would, as I argue, be more beneficial.
The strict level includes those areas where compliance is both important and for the most part consistent with going about our affairs. Though strict rules are always fully demanding, they are less restrictive than the less demanding rules at the third and fourth levels. A rule is demanding to the extent that following it may require sacrifices on particular occasions even if such occasions hardly ever arise, while it is its overall effects on our behavior that determine the extent to which a rule is restrictive. Accordingly, a strict and therefore fully demanding rule that seldom hampers us isn't as restrictive as a rule that is less demanding but does so frequently. For instance, a strict rule against killing innocent persons is far less restrictive than a much less demanding rule to save lives.
Second level rules are fully demanding: they must be followed whenever doing so is optimific. It might seem that this would include the rule against killing. However, that rule should be strict rather than one that allows you to kill whenever you would gain more than the victim would lose; and since the same holds for other forms of deliberate harm, while rules regarding help usually are not fully demanding, most rules, strange as it may seem for a utilitarian code, do not fall in this category, save for two important exceptions.
The rules at the third level concern help and the avoidance of harmful side effects, particularly those concerning the environment. Those at the fourth are still less demanding. They concern those forms of help and side effects that are less important and therefore are not covered at the third level.
Sacrifices are supererogatory when they go beyond the demands of the rules. As I use “sacrifice” you make a sacrifice whenever you act in a way that goes against your interests even if not making it would be highly immoral. Though there are limits to the amount of personal sacrifice that can be required for help and the avoidance of harmful side effects, the state can to a considerable extent compensate for this.
Besides making the code viable, grading is decidedly beneficial. There is a limit to the amount of sacrifices that we can be led to make and a morality should use this “sacrifice pool” as efficiently as possible. Next, I present a more thorough discussion of the four levels and supererogation.

The strict level
A rule that allowed optimific violations instead of being strict would, for example, let you break your promise to pay someone to paint your house in order to donate the money to a charity where it would do more good. Nevertheless, it follows from the extremely valuable institution of promising that you should pay. A violation would only be permissible if you needed the money for some urgent and important purpose such as buying medicine to save a life, and even then you should pay as soon as you no longer needed the money for such a purpose.
More important, there should be a strict rule against deliberately harming innocent persons even if it would be optimific because your gain would exceed their loss. This would obviously include killing and would even apply to killing miserable persons who don't want to die.[1] Without such a rule there would be insecurity and incessant conflict. And there should also be a strict rule against the use of torture. Here too the benefits of the institution banning torture would outweigh the losses of optimific violations. It is worth adding that all the act utilitarians I know of, including for many years myself, deviate from their view by following the strict rules whenever they apply.
As I've observed, a strict rule should not be so restrictive as to be unviable, and it should be sufficiently important to outweigh its restrictiveness. This clearly includes the rules against deliberately harming innocent persons and torture. The rule regarding promises is less important but far more restrictive: you can live a normal life without ever deliberately harming an innocent person or using torture, but there are likely to be numerous occasions when promise keeping is restrictive. Nevertheless, its importance outweighs its restrictiveness. The value of the institution both to you and to others justifies a strict rather than a merely fully demanding rule.
The rule for promises is also more complex than the other rules. The commitment is seldom absolute since most promises involve the implicit acceptance of reasons for failure to comply, as in breaking a luncheon engagement to save a life. There are also promises that should not be treated as binding, such as a promise to commit a murder. It would be wrong to make the promise, but fulfilling it would be even worse. Nor is a promise binding if you are forced to make it by immoral means. Nevertheless, save for extreme circumstances, legitimate promises should be kept.
There should also be strict rules against most forms of discrimination in determining which sorts of persons should be allowed to associate with one another.[2] This should include, among others, racial and sexual discrimination.

Killing to save lives
Although the rule against killing is strict while the requirement to save lives isn't even fully demanding, there are at least five reasons for condoning killing to save a sufficiently larger number of lives, as in throwing a man in front of a trolley to derail it in order to prevent the deaths of five other persons; and the same holds when the ratio is between far larger numbers of persons. This is in contrast with the view that it would be wrong to kill even one person, while we should accept the death of one person as a side effect of saving five by redirecting a trolley to a track on which it would kill one person instead of five.
1.     Although a strict rule against killing is viable because its requirement for sacrifices isn't, like a requirement to save lives, too restrictive, in this kind of case the agent is willing to make the sacrifice of going against his or her natural revulsion to killing.
2.     Protection from deliberate harm is usually more important than help to prevent harm, but not in this kind of case.
3.     As I have observed, a strict rule is needed to protect us when the killer's gain would exceed the victim's loss and to protect persons whose lives aren't worth living but who want to survive: in both cases we need a strict rule to avoid insecurity and conflict. Nevertheless, a rule that allowed you to kill one person to save a considerably larger number of persons would be beneficial.
4.     It would be rational to accept a practice that entailed the risk in exchange for the greater likelihood of survival in such a situation.
5.     A rule that allowed the practice would not lead to constant conflict.
6.     It is also worth noting that though Robert Nozick has objected that harming one person to avoid harm to five others overlooks the fact that he is a separate person who has just one life to lead and who would receive no compensating benefit for being harmed, as Samuel Scheffler observed, the same would apply to each of the five others.
I should add that while harming an innocent person or persons to prevent greater harm would be commendable, our natural revulsion to killing and other forms of deliberate harm makes refraining from killing excusable.

Deontology and conflicting intuitions
If instead, like a majority of writers on applied ethics, you relied on deontological intuitions, there is the problem that deontologists often disagree. In the case in question a few hold that it would always be wrong to kill to save lives; many, that it would be wrong in the trolley case; while some would regard it as permissible. Here as elsewhere deontologists face the problem of conflicting intuitions about supposedly self-evident ethical rules.[3] The lack of agreement also counts against the view that, despite the lack of natural ethical qualities, the rules are justified by our apprehension of self-evident ethical truths that concern non-natural ethical qualities. Furthermore, while this objection would apply even if there were such qualities, there are ample grounds for rejecting G.E. Moore's ad hoc hypothesis that they do in fact exist.[4]
And although most deontologists hold, like rule utilitarians, that consequences should be considered in cases where rules conflict or following them would have bad consequences, they are definitely not entitled to consider them in determining the nature of supposedly self-evident rules. Nevertheless, since they are surely influenced by consequences, a utilitarian code with strict rules and graded demands would fall within the range of all but the most extreme deontological codes.[5] Since the same applies to actual moralities, my reluctance to accept counter-intuitive views leads me to regard this as a decided advantage. And besides supporting, for example, the almost universally accepted view that killing is not on a par with letting die,[6] I can take sides on controversial issues without defending views that all actual moralities would reject.
Consider, as another example, the highly controversial issue of abortion. Though I defend abortion, I agree with its opponents that there is a strong case against it because some abortions prevent the existence of persons with lives worth living. Paradoxically, this concession makes a defense of abortion more rather than less persuasive: opponents are far more likely to be receptive if you agree that in such cases this is a serious objection to abortion.
A common response to the last point would be that while you ought to prevent the existence of a miserable person because the person will be sorry if you don't, preventing the existence of a happy person is morally neutral because there won't be a person to be sorry that you did. But if you don't prevent it the person will be glad, and if being sorry counts negatively by showing that the outcome is bad, being glad should count positively by showing that the outcome is good. Also suppose that the contrast was instead between preventing the existence of unhappy persons and not preventing the existence of happy persons. In this case the unhappy persons would not exist and therefore would not be glad that we had prevented their existence, so it would follow by a similar inference that it is preventing the existence of unhappy persons that is morally neutral. Since advocates of the neutrality thesis would certainly reject this conclusion they should, to say the least, question the theory it was used to defend.
As for abortion's defense, it is widely held that abortion is only justifiable if the mother's life is at stake or there is a major threat to her health. However, there can be other serious losses for the mother and for those fathers who accept their responsibility, plus the fact that many women would lead more useful lives without children than with them. There are also the poor prospects of children of single, impoverished, or unwilling parents, and the fact that many women would have happier children if they waited until they were better equipped to care for them. Furthermore, besides causing many female deaths and injuries and decreased respect for the law, banning abortion would lead to a rapid increase in the population of an already overpopulated world and entail an increase not only in poverty and starvation but also in the already daunting threat of global warming. In contrast to this utilitarian defense the debate is usually deontological, which, besides ignoring the dire overall effects of banning abortion, leaves us again with conflicting intuitions.
Though the issue of homosexuality is equally controversial, it is easier to defend because I can neither think of nor know of a strong case against it. A number of arguments are designed to show that it is immoral, but none of them are successful. Though it would take too much space to consider all of them, the most common objection, namely that it is unnatural, fails because the percentage of homosexuals is roughly the same in all cultures, and though homosexuality is unusual, an unusual characteristic needn't be bad and many are good.
Furthermore, the condemnation of homosexuals harms them in a variety of ways, and a ban on marriage encourages promiscuity and the spread of AIDS.
Another objection is that homosexuals don't have children, but it is in fact very important to reduce reproduction in an overcrowded world. It may seem misguided to use overpopulation to defend the rights of homosexuals, on the grounds that rights should only be based on the interests of their holders, but it is worth adding anything that strengthens a right, and much more so if the right is challenged. In short, the immorality lies instead in the condemnation of homosexuality.
Returning to the general subject of deontology, though the agent-relative/agent-neutral distinction is sometimes used to distinguish deontology from consequentialism, most forms of rule utilitarianism, including mine and Richard Brandt's, are agent-relative. Another view is that a theory is deontological if it includes other factors as well as consequences in determining whether an action is morally permissible, but this would, for example, make any form of rule utilitarianism that treated killing as worse than letting die count as deontological. I hold instead that, despite deviations in actual practice, deontology is committed to the view that ethical rules are self-evident while consequentialism bases them either directly or indirectly on utility.

The fully demanding level
There should be a fully demanding rule against raising animals in a way that, as on factory farms, keeps their lives from being worth living. This is very important for two reasons. First, it applies to an enormous number of animals, particularly chickens and hens. Second, since we don't count a human being's level of intelligence as relevant to the importance of his or her pleasure and pain, the same should hold for animals even if most animals are less intelligent than most human beings. Furthermore, to take an extreme case, we wouldn't regard the pleasure and pain of an idiot as less important than that of a normal human being.
Nevertheless, the rule against factory farms should be fully demanding, and in that sense optimizing, rather than strict. The reason is that it wouldn't, even on the whole, be beneficial to prohibit them in cases where adherence to the rule against them would not be optimific, whereas the strict rules against harming innocent persons and stealing apply even to cases where those practices happen to be optimific. Unfortunately, however, while our natural inclination to care for our children keeps the requirement for their care from being restrictive, and the same holds for animals kept as pets, this usually does not extend to our concern for other animals.
There should also be a rule requiring parents to provide adequate care for their children, which is not overly restrictive since it is limited to their children, and most parents have a natural inclination to comply. However, the rule should not be strict or even in this case fully demanding. Falling somewhat short should be permissible if, as in the case of politicians, it is necessary to take time away from your children to promote considerably more beneficial causes.
Another, less beneficial, natural inclination is for the rich to give most of their fortunes to their children. Charitable donations would usually be far more beneficial, and children need love, approval, and a good education rather than riches. Even so, though choosing charities instead would be commendable, a feasible rule restricting this parental inclination couldn't be very demanding. Also, here as elsewhere morality can require us to exercise our intelligence to the best of our ability in making major decisions. We should determine which charities are most beneficial and treat them as serious contenders, even though it might be supererogatory to choose such a charity rather than one to which you were more attached emotionally.

The moderately demanding and least demanding levels
Sacrifices are only obligatory at this level when the benefit for others would be a number of times greater than the sacrifice. Furthermore, there is a rough upper limit to the total amount of sacrifices required. A viable morality couldn’t require us to devote our lives to saving lives, or to preventing suffering and promoting the happiness without which life isn’t worth living. Instead it must grant us free time to promote our own welfare whether in the form of personal projects (a subject I consider later) or in other ways.
One reason for making such rules less demanding is that if they were fully demanding they would be far too restrictive to be viable. Nevertheless, while the demand to save all the lives we could, both near and far, would be overwhelming, this doesn't extend to saving the lives of persons who are near us, particularly in an emergency. Another is that while we need to be confident that most people won't kill us or rob us, will keep their promises to us, and so on, we do not need to be confident that most people will help us, even if it is by saving our lives. I would be willing to visit a museum in a country where I wouldn't be helped but not if it was in a country that allowed killing.
We also need rules regarding harmful side effects. Like those for help, they are more restrictive than rules against deliberate harm, but for side effects, general conformity is important. We don’t need to have most people disposed to help us, but we do need to have them disposed to avoid harming us even if it is through side effects. Fully demanding rules regarding side effects wouldn’t be viable, but general compliance is important and sometimes crucial.
This applies particularly to harmful environmental side effects. Industrialization has made the problem acute. The world's most serious problem, and probably the most serious ever, results from the side effects of excess production and consumption, and it is increased by overpopulation. I mean, of course, global warming and the various other threats to our environment that may lead not only to vast amounts of suffering and deprivation but even to the end of most forms of life on our planet. Accordingly, we need to revise our morality by stressing the need to avoid harmful environmental side effects even though doing so is sometimes quite sacrificial.
Though rules requiring help and the avoidance of harmful side effects must be less demanding to keep them from being too restrictive, there should be two levels. Since it is extremely important to avoid harmful environmental side effects, the rules should be more demanding than those for other sorts of side effects, and rules regarding the more important kinds of help should be more demanding than those for lesser kinds.
Their varying demands explain in part the neglect of less demanding rules. Another reason is that we are less likely than with fully demanding rules to use the standard ethical terms to express disapproval when they are violated. There are, however, other ways of expressing disapproval: sometimes a frown or tone of voice is sufficient. The same holds for expressions of approval. The main function of a moral code is to tell us how to assign approval, acceptance, or disapproval in the most beneficial ways, the first and last in varying degrees.
The need for revising our practices in regard to the environment also extends to the law. The state, besides imposing legal requirements on individuals and more on corporations, can impose costs and restrictions on the more harmful kinds of production. This applies particularly to the production and use of automobiles, but also to the production and use of trucks owing to the extensive reliance on trucks instead of trains, even though trains should receive federal support because they cause less pollution.
Furthermore, the state can share the burden in additional ways that would be too demanding to be feasible for most corporations and individuals. Among other things, it can aid the development of environmentally friendly technologies and, given the need to expand various services while reducing production, impose further costs and restrictions on the more harmful kinds of production. This is only possible, however, if our leaders and enough of the public can be convinced that such restrictions are morally justifiable. This would not be difficult if future generations could speak, and the same holds for altering both our morality and our laws to give due respect to the welfare of animals. In both cases we can try to speak for them, but it is far easier to gain an audience for such things as lower taxes, the death penalty, or opposition to abortion and to homosexuality even in an overpopulated world.

Supererogation and the role of praise and blame
Any sacrifice, even a relatively minor one, is supererogatory if it exceeds the demands of the four levels. An advantage of supererogation is that supererogatory behavior merits praise, which provides much more motivation than freedom from blame. However, help and the avoidance of harmful side effects needn't be supererogatory to justify praise, and the same holds both for telling the truth when it is decidedly against your interest and for keeping a promise that is unusually hard to fulfill. Sometimes freedom from blame is sufficient, but praise is often in order.
When rules aren’t fully demanding the amount of demand is inevitably vague. This includes the boundaries between third and fourth level rules and between fourth level rules and supererogation.

The significance of intention and motivation
Jonathan Bennett (1995) holds that an action’s rightness or wrongness only depends on its outcome so that your intention, and by implication your motivation, can only determine whether you deserve praise or blame. Note first that praise and blame both from others and from you would be extremely important even if they only concerned your intention and motivation. Second, intention and motivation also play a role in the layman’s judgments of rightness and wrongness in, among others, the following kinds of behavior:
1.     It would be worse for x to kill y intentionally than if x’s motivation was limited to accepting y’s death as a side effect.
2.     While intentionally bringing about y's death by preventing y from saving himself would be almost as bad as killing y, it would be permissible if x only accepted y's death as a side effect of self-preservation; it might even be permissible as a side effect of attaining a lesser objective; and in all three cases x's motivation would at least make it less bad.
3.     For the same reason it might be permissible for x to let y die to avoid a minor sacrifice, and even if it wasn’t it would be less bad than for x to let y die because x intended to have y die.
These judgments of the layman are not mistaken. It is more important for a morality to oppose behavior that is malevolent or even indifferent to harm than to support behavior where the intention is benevolent. That is why killing intentionally is worse than killing when the death is a side effect, why the same holds for bringing about a death, and why letting die intentionally is worse than doing so in order to go about your affairs. If instead only the outcomes were relevant to rightness or wrongness, the alternatives would be on a par morally in all three cases.

The nature and measurement of utility
Though the problem of measurement is almost always left to economists, their interpretation of utility in terms of our economic interests doesn't require those interests to be subjectively rational[7] in the sense of being capable of retaining your approval after critical scrutiny plus a vivid idea of the matter in question. With utility construed economically its maximization is correlated with the maximization of consumption. It consists in getting what you want even if it provides little or no satisfaction, and it is often a matter of keeping up with your peers. If we choose instead subjectively rational interests, pleasure and pain are likely to play a far more prominent role. There is room too for extra-hedonic values, but they are not, as G.E. Moore maintained, objective non-natural qualities, and even after critical scrutiny they would vary for different individuals. Though I accept extra-hedonic values, I believe that most people are more concerned with pleasure and the avoidance of pain and that if we were subjectively rational (S-rational) there would be even more agreement.
Promoting pleasure is commonly regarded as far less important than preventing pain. This might perhaps be true in the sense that we can be more effective in preventing pain. But the reverse might be true, and there could be lengthy empirical arguments on both sides. In any case my present concern is with the claim that pleasure is less important than pain, and there is a short and simple refutation, namely, that we are all willing to undergo pain to get pleasure, and that for most of us, aside from benefits for others, a life with pain but without pleasure would not be worth living.
A more plausible claim is that once our physical needs are satisfied other things provide very little pleasure. It is true that consumerism involves the multiplication of objects designed for enjoyment, but they are often costly and ineffective. However, the arts, both fine and popular and including movies, are surely important sources, and there are relatively inexpensive ways of providing access to them. It is also true, unfortunately, that pleasure is intermittent more often than pain, and that pain is often longer lasting than pleasure. However, pleasure can distract us from both physical and mental distress, and its anticipation can have an overall effect on the emotional tone of our lives.
Nevertheless, besides pleasure we also value periods that are hedonically neutral or that even include some pain.[8] This is very important because a large part of our lives falls in these two categories. Sir A.J. Ayer once observed to me that he knew of no one who had more pleasure than pain. Though I am more optimistic, I agree to the extent that many of us, perhaps even a majority, have more pain than pleasure. Nevertheless, the value of periods in these two categories can both outweigh a considerable excess of pain and add to the value of the lives of more fortunate persons. Furthermore, there are also extra-hedonic values, though there is far less agreement about them, and concern for a given value isn't always S-rational.
Turning to the measurement of what may be called experiential utility, though a direct approach would be extremely problematic, I can offer two alternatives.
First, an important advantage in measurement is that it always involves amounts of time, because temporal measurements are objective and can be fairly accurate. Furthermore, the amount of time of a good or bad period counts as much as the extent to which it is good or bad. For instance, sixty minutes of pain of a given intensity is sixty times as bad as a minute of pain of the same intensity. In contrast it is much harder to determine the amounts to which one or the other prevails when the experiences are simply good or bad. For instance, if you work at a bad job it is easy to determine the number of hours a day, but hard to determine how bad it is in terms of your series of experiences. Nevertheless, we can cope with the problem by asking such questions as how many days you would be willing to spend working at the less disagreeable job A to avoid having to work for a day at the more disagreeable job B.
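To make the arithmetic explicit, here is a minimal sketch of how such time-weighted comparisons could be tallied. The numbers, the unit of "badness per day," and the indifference judgment of three days at job A for one day at job B are hypothetical illustrations, not anything reported above.

```python
# A hypothetical sketch of the time-weighted comparison described above.
# The numbers are invented; the point is only the form of the calculation.

def total_disutility(badness_per_day: float, days: float) -> float:
    """Time-weighted disutility: a period counts in proportion to its length,
    so sixty days of a given badness are sixty times as bad as one such day."""
    return badness_per_day * days

# Suppose someone is indifferent between three days at the less disagreeable
# job A and one day at the more disagreeable job B. Taking a day at job A as
# the unit of badness, a day at job B is then roughly three units.
days_of_A_per_day_of_B = 3
badness_A = 1.0
badness_B = badness_A * days_of_A_per_day_of_B

print(total_disutility(badness_A, days=5))   # 5.0: a five-day week at job A
print(total_disutility(badness_B, days=1))   # 3.0: one day at job B
```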
It counts too that the problem of measuring the extent to which experiences are good or bad does not apply to hedonically neutral periods and applies far less to periods that are hedonically neutral save for small amounts of pain. This is important because such periods constitute a large and valued part of our lives.
Second, there is a rough way of weighing a life’s good parts against its bad parts. I take days as the parts—our initial response to longer intervals such as years would often be less confident—and count days on which you would prefer to be unconscious as bad, and days on which you would prefer to be conscious as good. For instance, apart from lost pay, a seamstress working in a sweatshop might prefer to spend her workdays in dreamless sleep and most of her other days awake; and the same would unfortunately apply to a large proportion of other workers, even including many supposedly fortunate professionals.
We could, for example, ask our seamstress to keep a record, rating a day that was just bad enough to be worth avoiding at minus one, and rating a worse day at minus two, minus three, or whatever, when she would be indifferent between enduring it and enduring that number of minus-one days. And we could rate good days at plus one when she would be indifferent between being conscious on such a day plus a day at minus one and sleeping through both, and compare the values of good days in the same manner as bad days. Then, using this record, we could ascertain the total value of her good days and bad days to determine both whether her life had been worth living or avoiding and the extent to which one or the other prevailed. Further, even without such a record we could make rough estimates as to the judgments a person would make.
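Purely as an illustration of how such a ledger might be totalled, the sketch below assumes made-up ratings for a single week; nothing here is data from an actual record.

```python
# A hypothetical day-ledger for the test described above. The ratings are
# invented for illustration; in practice they would come from the person's
# own indifference judgments.

def life_value(day_ratings):
    """Sum the ledger: bad days are rated -1, -2, ..., good days +1, +2, ...
    A positive total suggests the period was on balance worth living, a
    negative total that it was worth avoiding, and the size of the total
    indicates the extent to which one or the other prevailed."""
    return sum(day_ratings)

# Five workdays the seamstress would rather sleep through and two days off
# she is glad to be awake for.
week = [-1, -1, -2, -1, -1, +1, +2]
total = life_value(week)
print(total)  # -3: on balance a bad week
print("worth living" if total > 0 else "worth avoiding" if total < 0 else "neutral")
```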
These tests are not hedonic. Besides the reasons that I’ve given for holding that periods with somewhat more pain than pleasure can be valuable, some people would sometimes choose to be conscious during periods in which there was much more pain than pleasure. Nevertheless, I believe that most people are to a large extent concerned with pleasure and pain.

Utilitarianism, justice, and increased concern for the poor
For many the most important objection to utilitarianism is that it abandons justice for utility. Turning first to justice in rewards and punishments, the conflict is with retributive justice rather than with the kind of justice that defends rewards and punishments by their utility. It is agreed that determinism is not compatible with the ultimate kind of freedom needed to justify retribution, but it is claimed that such freedom is compatible with indeterminism. However, as Hume observed, an uncaused action would not be free, nor would being partially uncaused make it free. We have a kind of freedom but not the kind needed to justify retribution. Though Peter Strawson's (1974) defense of retributivism is right to the extent that our reactive attitudes are innate and play an inevitable and often beneficial role in our daily lives, in major matters we should override the desire for retribution. Punishments should be regarded instead as an evil needed to deter behavior leading to greater evils.
It is also argued that rejecting retributive rewards and punishments keeps utilitarians from treating us as individuals. Utilitarians needn’t, however, deny that we are individuals to reject retributive rewards and punishment for individuals. And rule utilitarians can accept justice in the form of impartial rules that encourage beneficial behavior by rewards and discourage harmful behavior by punishments.
John Rawls’s (1999) main objection is instead that utilitarianism is unjust because maximizing utility gives insufficient weight to the interests of the poor. His view is based on a fictitious contract that we would accept behind a veil of ignorance that concealed our own status. But since a fictitious contract cannot bind us, he needs to give a different reason to accept the contract. He could argue instead that justice requires equality except insofar as rewards are needed for work that is beneficial to the rest of a society. However, even by Rawls’s test, justice does not require equality, since, despite Rawls, we would be willing behind the veil to accept the risk of a loss to improve our overall prospects. Furthermore, it would not be a major loss because utilitarians can use the law of diminishing returns to support the interests of the lowest class far more than in almost all societies.
This law counts both against a sales tax, because of its impact on the poor, and for unusually progressive taxation. It also justifies, besides food stamps and increased Medicaid, provisions for housing and means for raising poor children in ways that, besides improving their prospects, benefit the rest of society by making them less likely to become criminals.
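For readers who want the diminishing-returns point in numbers, the following sketch uses a logarithmic utility-of-income curve purely as an assumed illustration; the text does not commit to any particular curve, and the incomes and transfer amount are invented.

```python
import math

# An illustrative model of diminishing returns: utility rises with the
# logarithm of income, so an extra dollar matters far more to the poor.
# The curve and the figures are assumptions for illustration only.

def utility(income: float) -> float:
    return math.log(income)

def utility_change(income: float, delta: float) -> float:
    """Change in utility when income moves from `income` to `income + delta`."""
    return utility(income + delta) - utility(income)

transfer = 1_000
rich_income, poor_income = 500_000, 15_000

loss_to_rich = -utility_change(rich_income, -transfer)
gain_to_poor = utility_change(poor_income, transfer)

print(round(loss_to_rich, 4))                 # ~0.002
print(round(gain_to_poor, 4))                 # ~0.0645
print(round(gain_to_poor / loss_to_rich, 1))  # ~32: on this assumed curve the
                                              # poor gain dozens of times what
                                              # the rich lose
```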
And turning to the positive side of their lives, even if after a far more beneficial distribution of income the poor still couldn't afford expensive clothing, expensive paintings, or attendance at expensive concerts or sporting events, they could afford both used clothing that was originally expensive (which many prosperous people know enough to buy) and reproductions of art at thrift stores. They could also hear music on the radio, enjoy free sporting events, and watch others on used television sets; and there should, of course, be far more commercial-free radio and television than is currently available. Furthermore, their education could lead them to encourage these practices by showing their benefits, and using their money more wisely would enable them to work fewer hours and/or at more agreeable jobs.
There should also be among numerous other things heavy gas taxes to support more and cheaper public transportation. The last has for drivers the advantage of decreasing traffic, and for the poor it decreases the sometimes exaggerated disadvantage of doing without a car, which can be remedied to a considerable extent by the use of an inexpensive used bike. And there is, of course, the extremely important advantage of reducing pollution, whose effects can be even worse for the poor. Note, however, that these observations regarding financial distribution do not extend to the distribution of pleasure and pain. Though the loss or the gain of a thousand dollars matters far less for a rich man than for a poor man, we don’t downgrade the importance of a rich man’s pleasure or pain vs. that of a poor man.
In any case both the need for wartime drafting and the need for punishments as a necessary evil to deter crime set precedents for maximizing utility at the cost of losses for some. Furthermore, given these remedies for the poor, their losses can be far less than imprisonment or fighting in a war, which, besides the risk of death or serious injuries, involves both mental and physical effects and aftereffects. It is worth adding, however, that drafting has several advantages. It doesn't impose an undue hardship and strain on the existing forces; it doesn't discourage future enlistments; it doesn't discriminate in favor of the rich; and even more important it provides a reason for even the rich to oppose unnecessary wars.

Graded moralities with a central role for projects
The codes of Samuel Scheffler (1994) and Bernard Williams (1985) are not deontological; they have three levels; and their most demanding level resembles mine in its rule against inflicting harm. Furthermore, Scheffler agrees that a moral code must, in my terms, be viable, and that, save for commitments, it is always permissible to maximize utility. The main differences between our views are that, besides rejecting consequentialism, their primary concern is with their second level, which allows "an agent-centered prerogative" for projects (we aren't required to sacrifice them unless they would be harmful to others), and that they don't use the need for viability to justify the prerogative for retaining projects. Also, though they reject consequentialism, their focus is on act utilitarianism, so they may be more sympathetic to a version with strict rules and graded demands, which is much closer to their "hybrid" view than to act utilitarianism. And since they might also agree with my addition of less demanding rules regarding help and the avoidance of harmful side effects, I concentrate on the significance they ascribe to projects.
For both philosophers, projects are one's deepest personal concerns around which one has built one's life (ibid., p. 7), such that we have what I would call a third level prerogative to tend to them, and such that abandoning them involves a loss of integrity, by which I believe they mean the condition of being whole or undivided. Also, instead of holding that the prerogative should be greater for some projects than for others, they seem to treat all projects as on a par.
I maintain instead that some projects, such as becoming an excellent artist, novelist, composer, scientist, or philosopher, should rank higher than the numerous projects whose merit would seem to lie in whatever gratification they afford. These include, among others, many forms of collecting, hunting and fishing, and the acquisition of skill at sports or games. If instead you believed that their only merit was hedonic and would abandon them for some other more gratifying activity, they would not count as projects. There are also the all too common projects of acquiring wealth or power as ends in themselves. In contrast, many people are most concerned with the more reasonable personal objective of having lives with more pleasure than pain, which would not count as a project. I say personal objective since it is consistent with devoting a major part of your life, or even most of it, to helping others despite the loss of time for enjoyment.
Furthermore, having accepted projects as in effect extra-hedonic values, it would be hard to reject other extra-hedonic values, and it would seem that the cultivation of some kinds of extra-hedonic values needn’t be your deepest personal concern to be more valuable than many projects.
Though I agree with Scheffler that a morality should leave ample room for projects, I question the special, extra-hedonic significance they ascribe to them. This is appropriate for some projects, but hard to defend for others.
They might limit the term “project” to activities that do have extra-hedonic value, but it would be their extra-hedonic value rather than their status as projects that made them special. Furthermore, it would follow that most people do not have projects, while Scheffler and Williams believe that projects play a central role in most of our lives.
Nor would it do to claim that projects are especially important because people should be free to do what they want to do, since the same applies to simply enjoying yourself. Still another claim might be that we need projects to provide meaning for our lives. But, as Peter Singer has observed, the best sort of projects for that are those concerned with the welfare of others. Death is in store for all of us, and one way of alleviating our dismay is to concern ourselves with matters that will benefit others after we are gone.
Furthermore, the special status ascribed to projects needs a defense, and claiming that we care about our projects more than anything else in our personal lives would only enable them to hold that projects have more utility. Though they could, of course, hold that their value is objective rather than dependent on what we care about, I would be surprised if they did.
Note too that the view that your projects are your deepest personal concern around which you have built your life doesn’t leave room for abandoning projects in favor of new ones, say stamp collecting for a project with extra-hedonic value, or, as I consider next, a project that would benefit others.
Suppose, for instance, that Jane would prefer to be an artist, though she would be a minor one and only a few people would enjoy her works. Nevertheless, when Jane becomes aware of the enormous amount of suffering in factory farms, she chooses, instead, to devote her life to opposing them even though her talents will not be fulfilled and the work will be tedious. However, to be fair I should add that though here too she would sacrifice the condition of being whole or undivided, which might seem to make her behavior wrong, Scheffler holds that it is always permissible to, in my terms, maximize utility.
Nevertheless, Williams (1995) would object that the suffering of animals should count less than ours because as human beings it is reasonable for us to care less about the suffering of other species. We would not, however, downgrade suffering in a non-human species that resembled us in other respects. Nor would it do to argue instead that they differ from us in various respects, particularly their lesser intelligence. I have observed that we wouldn’t downgrade the importance of the suffering of an idiot and a similar response would apply to other characteristics, including, for example, aesthetic sensitivity, and in some but not all animals, possession of a moral code. In fact, though we count the suffering of idiots as much as that of normal human beings, the intelligence of many animals differs less than that of idiots from the intelligence of normal human beings.

More on the defense of the utilitarian goal
Williams (1985) would still object that consequentialism is indefensible. I concur to the extent of being dissatisfied with the traditional defenses. In response to Sidgwick (1906) and currently Shelly Kagan (1998), he observes that since one's concerns aren't those of the universe it isn't rational to take its point of view. I agree, and I also reject the universalization argument.
The problem with the latter is that since most people wouldn’t obey the utilitarian command, issuing it would commit you to far more sacrifices than would be repaid by sacrifices from others. Thus no one would issue it from an egoistic point of view, and the argument would be superfluous if we were fully benevolent. A Kantian might respond that the reason for issuing the command is purely cognitive so that our benevolence or lack of it is irrelevant. But if we didn’t care about either our own welfare or that of others, there would be no point in issuing any universal command.
Also, I must reject Sidgwick's view (1906) that, instead of depending on what we care about, not only the disinterested point of view of the Universe but also that of a complete egoist is objectively rational. For in morality it is a question of subjective rationality, which depends on what one could care about as well as on true beliefs, rather than of objective rationality; and for most of us it isn't subjectively rational (henceforth S-rational) to be either totally egoistic or fully benevolent. Nevertheless, the utilitarian goal is defensible on the grounds that it is S-rational for anyone with even a minute capacity for sympathy to favor the most beneficial morality even if it would not be ideal personally.
Objective rationality is purely cognitive, so we should all accept objectively rational assertions, unless, for example, rejecting a true but dismal forecast would increase your chance of recovering from an illness. Though S-rationality requires objectively rational beliefs, it also depends on what you could care about, so something may be S-rational for x but not for y. A desire or a decision is S-rational if you would not disapprove of it after critical scrutiny based on any a priori or empirical considerations that could influence it, plus as vivid an appreciation as possible of the matter in question. I say as vivid as possible because there are limits to what we can envisage. There are, however, ways to compensate for this. For instance, while you can't envisage the suffering of a million people, you can envisage the suffering of a single person, envisage the suffering of two persons and regard it as twice as bad, and conclude inductively that you would regard the suffering of a million persons as a million times as bad if you could envisage it.
Though a choice or desire may be S-rational for x but not for y, I suspect that if we were S-rational our differences would be far less than one might expect. And while the requirements for S-rationality are so extensive that we can never be sure that we have met them, this also leaves ample room for considerations leading to increased agreement.
For instance, besides having the obvious sorts of information, we should know whether a desire results from irrational indoctrination. For if it does, and the desire isn't defensible on other grounds, recognizing this may enable you to either eliminate the desire or override it in making decisions. Some innate desires are also vulnerable to rational criticism. This applies particularly to the desire for retribution, since neither determinism nor indeterminism is compatible with the kind of freedom needed to justify it; but in this case it is virtually impossible to eliminate the desire, so we must be satisfied with overriding it in major decisions even though the expression of retributive attitudes in lesser matters is often beneficial.
I once assumed that, despite my use of a different term, my concept of subjective rationality was the same as Richard Brandt's concept of cognitive psychotherapy (1979). I was, however, mistaken. Brandt held in effect that a desire isn't vulnerable to criticism if it can survive cognitive psychotherapy. This would, however, include the innate desire for retribution, which, as I've observed, you are likely to retain to at least some extent even though you disapprove of it because the desire isn't subjectively rational. Furthermore, the same holds for many other desires, including desires based on indoctrination or conditioning. Though the requirement for your approval or disapproval to be subjectively rational is still a kind of emotivism, it is very different from the original kind. Let me add that there are many respects in which I admire Brandt's work, and, like Peter Singer, I believe that he was for a long time America's and probably the world's foremost ethicist.
Turning to the argument in defense of the utilitarian goal, first it is probably S-rational for almost all of us to make sacrifices for others if their gain would be millions of times as great as our own loss. (This even includes those total egoists whose egoism is not S-rational.) Second, this also applies to voting since even if the best public policy would not be ideal for yourself, the overall gain would be enormously greater than your personal loss. Though it is against your interest to vote for progressive taxation if you are rich, millions of people are involved, so, given the law of diminishing returns, your loss would be minute in proportion to the overall gain. Thus with even a slight capacity for sympathy, it is S-rational for even the rich to vote for progressive taxation, and the same applies to voting in general.
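The structure of this ratio argument can be put schematically. In the sketch below the welfare units, the weight w a person places on others' aggregate welfare relative to their own, and the figure of ten million beneficiaries are all hypothetical; the point is only how small w needs to be.

```python
# A schematic, hypothetical rendering of the ratio argument above: with even
# a minute weight on others' aggregate welfare, the policy comes out ahead.

def minimum_sympathy_weight(personal_loss: float, aggregate_gain: float) -> float:
    """Smallest weight w on others' aggregate welfare (relative to your own)
    at which w * aggregate_gain outweighs your personal loss."""
    return personal_loss / aggregate_gain

# Suppose a policy costs you the equivalent of 5 units of welfare while
# giving an average gain of 1 unit each to 10 million other people.
w = minimum_sympathy_weight(personal_loss=5, aggregate_gain=1 * 10_000_000)
print(w)  # 5e-07: only a minute capacity for sympathy is needed to favor it
```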
Some of us would, of course, if we were S-rational, require a higher ratio of gains to losses than others, and while it would be S-rational for most of us to require more favorable ratios if the sacrifices were great, here too our ratios would differ. Such differences, however, are not significant in deciding not only how to vote but what sort of morality to favor. Nor is it important that we can only form a rough idea of what ratios would be adequate for us. In both cases anything but virtually complete egoism would suffice.

The motivation for moral behavior
Starting with the motivation of billionaires, the personal utility of their funds decreases so rapidly after the first few millions that sacrifices would be both slight in themselves and minute in proportion to the gains for others. And when one adds that the sacrifices would be outweighed by providing a subjectively rational kind of meaning to their lives, it seems clear that even in terms of self interest it would be S-rational for most billionaires to make them. Still more important, a similar case can be made for political leaders and other powerful persons whose decisions affect large numbers of people.
Most of us, however, are neither very rich nor very powerful, and though it is usually in our long-term interest to be moral, morality sometimes requires sacrifices which the ratio argument cannot support. Thus, even if we were S-rational, following the morality that we favor would require other sorts of motivation.
These include the following: favoring a morality provides motivation for following it; it is in our long-term interest to acquire moral habits even though they may lead us to make sacrifices; we are influenced by conscience, i.e., self-directed reactive attitudes, and by praise or blame based on the reactive attitudes of others; and for many there is the desire to be virtuous as an end in itself. Conscience, praise, and blame are needed even more when it is in our long-term interest to be moral but we don't realize that it is, or we aren't moved by our long-term interests; for many even the threat of Hell does not suffice.
Furthermore, though I’ve been mainly concerned with the sacrifices that rules may require and the extent to which they can be restrictive, the kinds of behavior they prescribe tend on the whole to be personally rewarding. Among other things they can give meaning to our lives. Singer makes this point in defense of act utilitarianism’s fully demanding rule, but it also applies to moderate demands. You can, for instance, find meaning in giving away most of your assets but still, like most act utilitarians, retain for yourself and your family funds that would benefit others several times as much. Some readers may still accept act utilitarianism as their personal morality. But while they grant that they will fall far short of fulfilling its demands, they fail to recognize that since there is a limit to the amount of sacrifices they are willing to make they should use this “sacrifice pool” as efficiently as possible.
I have argued that our morality needs to be revised, particularly for behavior with environmental side effects. But since the influence of philosophers is limited and there are numerous moralities even within a single society, one can’t hope for anything approaching unanimity on a new morality. Nevertheless, even a partial shift can be beneficial, and at least some political leaders, especially Al Gore (1992), have urged increased concern for what is arguably the most crucial issue, namely the environment. Whether the shift will be sufficient to preserve an inhabitable world for future generations is an open question.
In conclusion, my main project has been to determine what sort of morality would be most beneficial. It strikes me that this is the most important problem for all utilitarians as well as other consequentialists. It is a far more extensive project than defending the utilitarian goal, and I’ve only outlined such a code’s general structure. The more extensive project will require a book, and even that will leave ample room for further work by others.




[1]This doesn’t count against killing an infant with horrible prospects or against euthanasia when it is both requested and beneficial. For a defense of euthanasia see (Singer 2002).
[2]I say most because this wouldn’t apply, for example, to children associating with criminals.
[3]Though the agent-relative agent-neutral distinction is sometimes used to distinguish deontology from consequentialism, most forms of rule utilitarianism including Richard Brandt’s are agent-relative. Another view is that a theory is deontological if it includes other factors as well as consequences in determining whether an action is morally permissible, but this would, for example, make any form of rule utilitarianism that treated killing as worse than letting die count as deontological. I hold instead that deontology treats ethical rules as self evident while consequentialism bases them either directly or indirectly on utility.
[4]For Peter Strawson’s critique of non-natural ethical qualities see (1949).
[5]This explains why the present kind of utilitarianism is closer to most deontological codes than to act utilitarianism.
[6]For an exception see Bennett (1995).
[7]Though I’ve assumed that my notion of subjective rationality was the same as Richard Brandt’s (1979) despite my use of a different term, I was mistaken. Unlike Brandt, I hold that a desire can survive what he calls cognitive psychotherapy, but, like the innate desire for retribution, not be subjectively rational because the kind of freedom needed to justify it is incompatible with indeterminism as well as determinism. Thus, for example, you are likely to retain to some extent your desire for retribution but disapprove of the desire because we can't have the kind of freedom needed for you to approve of it. Furthermore the same holds for desires based on indoctrination or conditioning. This is a version of emotivism, but it is very different from the original version since it requires your approval or disapproval to be S-rational.
[8]A good way to see if you value periods that involve hardly any pleasure but a significant amount of discomfort or pain is to ask yourself if you would prefer to sleep through such a day. Our evolution based dread of death makes this better than asking if you would prefer to have such a day, week or longer period instead of dying.