Unforeseeable Consequences

D. A. Lloyd Thomas’s paper, ‘Consequences’ (Analysis, March, 1968) includes a clear summary of a traditional criticism of act utilitarianism (which I will refer to henceforth as AU):
The act utilitarian holds that an act is right if it is that action which has the best consequences out of the alternatives available. Thus, if we are to know which action is right we need to know what the consequences of each of the alternative courses of action will be. Quite often we have a good idea as to what the immediate consequences of an action will be. But the consequences of an action may extend on into the future indefinitely, and we cannot foresee what these consequences will be. Hence one never knows whether any action one performs is right, and so act utilitarianism is unworkable (p. 133).
The first thing to note is that if the conclusion that AU is unworkable is to be as damaging as is generally thought, it must imply roughly that AU is not an effective aid in the guidance of conduct, i.e., that no benefit is to be derived from trying to maximize good consequences. It would seem then that AU would be workable in the relevant sense so long as better results can be achieved by following its precepts than by failing to do so.

If this is true, then the argument as presented is fallacious because one would not need to know that certain actions would have the best consequences for it to be beneficial to follow AU. Probability would suffice. In fact, less than this would do. One doesn’t need to know that it is probable (that is, that the probability is greater than 50 per cent) that a given action will have the best consequences for it to be worthwhile to perform it. A number of actions might well be possible, none of which would be assigned a probability of greater than 50 per cent of having the desired result, yet one might have a significantly higher probability than the rest—in which case it would clearly be worthwhile to select that one.
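The arithmetic of this point can be made concrete with a toy sketch (all actions and probabilities below are invented for illustration, not drawn from the paper): even when no available action has a better-than-even chance of producing the best consequences, one action may still clearly dominate the alternatives.

```python
# Toy numbers, invented for illustration: three mutually exclusive
# actions, each assigned a probability of producing the best available
# outcome.  None exceeds 50 per cent, yet "b" plainly dominates.
actions = {"a": 0.20, "b": 0.45, "c": 0.35}

# Choose the action with the highest probability of having the best
# consequences, even though that probability is below one half.
best_action = max(actions, key=actions.get)
print(best_action)  # prints "b"
```

The choice of "b" is worthwhile not because "b" is probably right (its probability is under 50 per cent) but because it is significantly more likely to be right than any alternative.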

The main objection to our knowing that an action will have the best consequences is that its consequences will extend beyond the foreseeable future. It might be thought that this objection would apply also to the matter of probability. But we would have a good reason for choosing the action that seemed most likely to have the best consequences in the foreseeable future so long as we could determine that there was no reason to prefer actions that would not have the best consequences in the foreseeable future to those that would. But if the unforeseeable future is indeed unforeseeable, it would follow logically that there could be no such reason.
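The cancellation at work in this argument can be sketched as follows (the actions and values are invented for illustration): if the unforeseeable future gives us no reason to expect it to favour one action over another, then whatever common expectation we assign it drops out of the comparison, and the foreseeable values settle the choice.

```python
# Invented foreseeable values for two available actions.
foreseeable = {"a": 10.0, "b": 7.0}

# If the unforeseeable really is unforeseeable, we have no reason to
# assign its contribution a different expectation for different
# actions; call the common expectation u.  Whatever u is, it cancels
# out of the comparison, so the foreseeable values settle the choice.
def best(values, u):
    totals = {act: v + u for act, v in values.items()}
    return max(totals, key=totals.get)

# The same action wins no matter what value u takes.
assert all(best(foreseeable, u) == "a" for u in (-1000.0, 0.0, 1000.0))
```

This is only a sketch of the logical point: absent any reason to expect the unforeseeable to discriminate between actions, it cannot alter which action it is rational to prefer.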

It might be suggested that the unforeseeable consequences of our actions are more important than the foreseeable ones. This would mean that the main consequences of our actions would be a matter of chance. But even if we granted this, since we would still have to do something, we would have every reason to act in terms of the consequences we could foresee.

Another supposition might be that the so-called unforeseeable consequences aren’t altogether unforeseeable, and that we can foresee enough of them to infer that one would secure better overall results if one didn’t try to get the best (so-called) foreseeable results. But this would simply require a more subtle method of prediction.

Or again it might be supposed that consequences aren’t the only thing that determines the rightness of an action and that if we can’t tell very much about consequences we should decide what to do in terms of something else—e.g., in deontological terms. But this requires us to suppose that it is false that consequences are all that count and thus introduces an entirely new line of criticism.

At this point it may be in order to make some additional comments on the view that AU is unworkable if we can’t know that certain acts are right. One could, of course, use ‘workable’ in a sense such that the claim was true analytically. But it would then become relatively uninteresting, for the majority of act utilitarians have after all admitted that we can never be sure that an action is right. Thomas suggests that only at the end of the human race could one be sure that an action would have no more effects, so that only then could a utilitarian determine whether an action was right. He then observes:
One essential function of the word ‘right’ is that it is used to guide actions. If it is only when there are to be no more actions that it can be said that an action is right, then the function of the word ‘right’ has been changed (p. 134).
Thomas is wrong in supposing that the word ‘right’ could no longer be used to guide actions, for we could still choose to do the thing that would have the highest probability of being right. However, he is undoubtedly correct in supposing that an AU analysis of ‘right’ would involve some change in its meaning. If AU is to be defensible, it must be treated as a proposed revision of our ethical thinking and not as a description of it. Our unrevised ethical thinking is partially deontological, and this, together with other factors, makes it less implausible to claim knowledge of rightness as that term is ordinarily used than as act utilitarians would have us use it.

Some philosophers have attempted to save AU from the objection of unforeseeable consequences by suggesting that many of the events for which an action is a necessary condition are not consequences of it. As ‘consequence’ is ordinarily used, this may well be true, but we have just seen that act utilitarians do not need this way out. In addition, there are reasons why it should be avoided. Consider the argument that the chance that your child will turn into a barbaric dictator needn’t be taken into consideration in determining whether or not to conceive him because if he became a barbaric dictator, this would not be a consequence of the act of conceiving him, even though the act was a necessary condition for his becoming a dictator. It is correct that an act utilitarian might well agree that such a possibility could in a sense be ignored in reaching a decision, but the reason would be its extreme unlikelihood and the fact that it would be balanced by the possibility of his becoming a benefactor. In general, an act utilitarian would want us to take into account as many of the events for which our action would be a necessary condition as possible. His decision would then be subjectively right if it were right in terms of the conditions he could be expected to take into account, whereas to be objectively right it would have to be right in terms of all the events for which his action was a necessary condition. An individual would not be blameworthy for doing something that was not objectively right so long as he did the thing that was subjectively right. We could, of course, never be sure about objective rightness, but, as we have already seen, we could make numerous probability judgments in regard to it. The distinction between subjective and objective rightness may well not be reflected in ordinary usage, but as I said before, AU is plausible only as a revision of our ethical thinking, not as a description of it.

Zeno Vendler raises another objection to AU by showing that it is incorrect to speak of foreseeable consequences. However, it is possible to save the AU position by translating it—as I have already done—into a claim about those events for which an action is a necessary condition. Many of those events would be unforeseeable, but my argument has been that this does not keep AU from being workable.

R. I. Sikora, ‘Unforeseeable Consequences’, Analysis, Vol. 29, No. 3 (Jan., 1969), pp. 89–91. Published by Oxford University Press on behalf of The Analysis Committee. Stable URL: http://www.jstor.org/stable/3327638