Sinister: Totally without utility.

sdf26 at xxx.edu
Thu May 24 17:27:22 BST 2001


Matt wrote:

  "we learned that Utilitarianism is 
  assigning values to certain situations and deciding what was morally right 
  based on what came out with a higher number of points.  I think the idea of 
  assigning a value to joy or sadness is preposterous.  Who is to decide what 
  is more joyous to another person than it is for another?  Does that make 
  sense?"

In a sense this is correct, for it does show that utilitarianism doesn't provide 
a practical guide for day-to-day choices about what to do, but it's a rather 
unfair critique of utilitarian positions in meta-ethics.  The meta-ethical 
utilitarian is going to say that the practical difficulty of *actually* 
assigning value isn't the issue; the issue is simply that 'utility' (some 
abstract measure of human happiness or quality of life) is the only notion that 
can ground a system of ethics.  Utilitarianism is very compelling from a 
meta-ethical perspective (we're doing meta-ethics when we wonder whether there 
really is such a thing as 'right' or 'wrong', 'good' or 'bad', and, if so, what 
it is that makes something right and another thing wrong) because it is very 
difficult to argue that other ways of providing a foundation for an ethical 
system actually do so.  Moral anti-realists seem to have the upper hand against 
Kantian practical-reason-based approaches, for instance, for these rely on
contentious claims about human nature, and full-out realist positions fall prey 
to Mackie's argument from queerness (If 'morals' or 'values' are real, what kind 
of a thing are they?...They must be a very queer sort of thing, indeed.)  So, it 
can seem that we are forced into a utilitarian position (e.g. an action is good 
if it contributes to utility, and better if it contributes more) or into an 
anti-realist position (e.g. let's face it, "right" and "wrong" are imaginary 
human constructs...if we think about it, we realize there really are no such 
things).

But utilitarianism is cumbersome as a decision-making tool, because
of the very problem Matt mentions: assigning value to states of human happiness 
seems totally arbitrary.  Is a burst of intense joy better than a lifetime of 
mediocre satisfaction?  Furthermore, should we not include animals?  Is a human 
death a good thing if 50 million rabbits are thereby given a simultaneous 
orgasm?  Again, the meta-ethicist will simply shrug her shoulders and say, "I 
don't know and it doesn't matter...the point is that in general and in 
abstraction, the moral worth of an action depends on its utility."

There are more serious problems with utilitarianism, however.  For example, we can
contrive thought experiments in which really "bad" things (that is, things we
are very tempted pre-reflectively to call evil or bad or morally vicious) result 
in unexpected utility.  Suppose that we had an omniscient computer, and this 
computer enabled us to chart out a finite history of the world from any point in 
time onward, and to determine how the course of history depends upon the 
occurrence of any past event.  I could ask it, "What would my life be like in 
2030 if I hadn't brushed my teeth yesterday, and what will it be like given that 
I did?"  I could then compare the two, and see what contribution to my future 
well-being my tooth-brushing made.  Of course, we'd expect the contribution to 
be quite small, and it would in that case be difficult to determine whether 
or not the action contributed to my happiness or well-being.  Now, suppose we ask
the computer, "What will the world be like in 2030, and what would it have been 
like had the Nazis never ravaged Europe?"  Imagine the computer tells us that 
the world in 2030 will be utopic, with peace and prosperity enjoyed by all 
people, but that - unexpectedly and due to some unfathomably complicated chain of
circumstances - had the Nazis never existed, the world would be in a state of
utter decay and devastation.  What would the utilitarian say to that?  Whether 
we can realistically assign value to happiness or not, it seems totally obvious 
that the world in 2030 post-Nazis would be much, much better than it would be
had they never existed (it doesn't take any precise or controversial assignment 
of 'value' or 'utility' to license the claim that peace and prosperity for all 
is much better than a hellish nightmare on earth).  This happiness (or misery) 
will be enjoyed (or suffered) in 2030 by, say, 10 *billion* people, and the 
price we'd have paid (or trade-off we'd have made) in the 1940s was a (by
comparison) minimal (say) 10 *million*.  It seems that the utilitarian is forced 
to admit that the Nazis did a good thing - after all, the whole world will (ex
hypothesi) profit enormously from their actions.  But that seems a crazy thing 
to say, for we are inclined to hold onto the belief that the Nazis' actions were 
incontrovertibly evil - and therefore it seems that a notion of moral value 
founded on human utility isn't a notion of moral value the majority of us 
recognize as such.

Philosophers tend to get themselves into this kind of
situation all the time: their arguments seem to push them into a position on the 
issue of "X", which position prompts the rest of us (even those of us who are 
philosophers) to say, "Well, if *that's* what you mean by "X", then clearly we 
just mean different things."  In my mind, this latter problem is the sort on 
which Utilitarianism beaches itself - the problem of precisely assessing 
utility, however, isn't really so serious.
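
Incidentally, the "points" arithmetic Matt describes can be made concrete in a few lines of code - with the caveat that every number below is invented out of thin air, which is of course exactly the objection:

```python
# A toy sketch of the utilitarian calculus from the thought experiment above.
# All utility values are made up for illustration; the arbitrariness of the
# assignments is precisely the practical problem discussed in the post.

def total_utility(outcomes):
    """Sum (population, per-person utility) pairs into a single number."""
    return sum(pop * u for pop, u in outcomes)

# Hypothetical 2030 scenarios: the history containing the atrocity...
world_with_atrocity = [
    (10_000_000, -100),      # the victims in the 1940s, heavy negative utility
    (10_000_000_000, +10),   # peace and prosperity for 10 billion in 2030
]
# ...versus the counterfactual history without it.
world_without = [
    (10_000_000_000, -50),   # decay and devastation for the same 10 billion
]

# The calculus declares the atrocity-containing history "better" - which is
# exactly the conclusion most of us refuse to accept.
print(total_utility(world_with_atrocity) > total_utility(world_without))
```

Whatever numbers you plug in, as long as the 2030 difference dwarfs the 1940s cost, the sum comes out the same way - which is the point: the problem isn't the arithmetic, it's what the arithmetic licenses.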

Hope you didn't read all this if you found it boring...you certainly weren't 
obliged!

--Sam        
+-------------------------------------------------------------------------+
        +---+  Brought to you by the Sinister mailing list  +---+
     To send to the list mail sinister at missprint.org. To unsubscribe
     send "unsubscribe sinister" or "unsubscribe sinister-digest" to
     majordomo at missprint.org.  WWW: http://www.missprint.org/sinister
 +-+       "sinsietr is a bit freaky" - stuart david, looper           +-+
 +-+  "legion of bedroom saddo devotees" "peculiarly deranged fanbase" +-+
 +-+    "pasty-faced vegan geeks... and we LOST!" - NME April 2000     +-+
 +-+  "frighteningly named Sinister List organisation" - NME May 2000  +-+
 +-+               Nee, nee mun pish, chan pai dee kwa                 +-+
+-------------------------------------------------------------------------+


