The following is a mixture of my own thoughts and thoughts from “The Moral Course of Thinking” in Gathered for the Journey: Moral Theology in Catholic Perspective, ed. David Matzko McCarthy and M. Therese Lysaught (Grand Rapids, Michigan: Eerdmans Publishing Company, 2007), pp. 1–19.
Two of the most popular approaches to ethics in modern philosophy are utilitarianism and deontological ethics, both of which are normative theories. Normative theories of ethics are those that offer a principle as the key criterion by which actions are determined to be good or bad.
The more common of these two approaches today is probably utilitarianism. The strength of this view can be seen, for example, in the influence of ethicist Peter Singer, professor of bioethics at Princeton University. One of the leading ethicists of our day, Singer works from a paradigm for ethics that is thoroughly utilitarian. It leads him to some very counter-intuitive opinions about what is right and what is wrong. He argues, for example, that killing handicapped infants is the best thing to do if the parents will have a second infant who has the prospects for a happier life (Peter Singer, Practical Ethics, 2nd ed. [Cambridge: Cambridge University Press, 1993], pp. 181–91). How does he come to such a conclusion? To understand this, you need a basic understanding of the utilitarian philosophy of ethics.
What is utilitarianism?
“Utilitarianism is the moral doctrine that we should always act to produce the greatest possible balance of good over bad for everyone affected by our actions” (9). By this criterion, actions considered by themselves are morally neutral—it all depends on their consequences as to whether they are good or bad. Apart from consideration of such consequences, actions are neither blameworthy nor praiseworthy.
Because of this criterion, it is often the burden of utilitarian thinkers to convince their readers—against their better intuitions—that the reason we call certain desires or actions “good” or “bad” is not because they are good or bad in themselves but because we associate good or bad consequences with such actions. Thus, we come to think of them as good or bad actions, when in reality the actions are not good or bad but are widely believed to have good or bad consequences. (NOTE: In a previous post, I showed how one utilitarian took on the ambitious task of convincing his readers that the desire to torture other human beings is not wrong.)
At this point, I need to make a qualification. Many people (myself included) would probably incorporate some degree of utilitarianism into their criterion for ethics. For example, although I personally believe that certain actions are inherently wrong (apart from evaluation of their consequences), I would still allow for an action’s degree of wickedness to increase or decrease depending on its consequences.
For example, it’s a bad thing for a man to rape and beat a woman (regardless of consequences), but it’s even worse if as a result of the brutality, her unborn daughter is killed and the rape victim who survives gets AIDS. This makes the crime much, much worse.
I also believe that consequences are built into the very logic of why we label actions as inherently right or wrong in the first place. For example, adultery is wrong because it hurts the person who gets cheated on, creates the risk of irresponsible baby-making, and introduces the risk of STDs into an otherwise risk-free marriage (if both partners entered that marriage without any STDs). Adultery is always an injustice, and it is wrong in itself. Yet at least a great part of the reason that it is always wrong (regardless of context) is due to its destructive consequences. I happen to think the dichotomy between actions being inherently right or wrong versus their being right or wrong based on consequences is a bit overdone.
With this caveat on the table, then, let me proceed to distinguish what I call the utilitarian factor (the incorporation of consequences into one’s ethical thinking) from utilitarianism. While some might consider it a good thing to keep consequences in mind when making moral choices, utilitarianism has the burden of claiming that this criterion is the exclusive ground for judging the merit of all ethical action. On the basis of this distinction, then, I will sometimes refer to utilitarianism as exclusive utilitarianism.
What’s wrong with utilitarianism?
McCarthy and Lysaught rehearse some of the standard criticisms of utilitarianism, for which I have given my own articulation and creative names. They run as follows:
1) The Inevitability of Arbitrariness—It has no way to objectively determine the nature, importance, and value of consequences. To put it another way: How do we know which consequences are “good” and which are “bad”? Which consequences count most? Whose opinion of what counts as a “good” or “bad” consequence matters most? Failure to give coherent and rational criteria for answering such questions spells decisive defeat for the whole theory of exclusive utilitarianism. It seems to need something else to help it out. That is why I personally think that the utilitarian factor is legitimate when considered as part of the picture, but exclusive utilitarianism always leads to arbitrary judgment of consequences, and therefore arbitrary ethics.
2) The Contrary Intuition—It often undermines our common sense and moral intuitions, often demanding certain actions that rub our conscience the wrong way. For example, what if I knew I could cheat on my wife with my female boss without her ever finding out in order to get a raise, which would have “good” consequences for my family (less financial stress, my wife could cut back to part time to spend more time with the kids, the kids could benefit from more parental care, I could save more money for the kids for college, etc.)? My gut tells me: Don’t do this, it is wrong, wrong, wrong. But utilitarianism tells me it’s like a math problem (good consequences = good action).
3) The Omniscience Requirement—Sometimes it is impossible to know the totality of the potential (much less the actual) consequences of one’s actions. Sometimes what looks to us like a disaster turns out to be a blessing in disguise. We get fired, only to realize later that the new job we attain as a consequence pays better and is more enjoyable. On the flip side, sometimes we think something is going to turn out great, but in the end it is a big letdown. If these small-scale experiences in the lives of ordinary people demonstrate how difficult it is to know the consequences of certain actions, how much more difficult must it be for people whose decisions affect an entire nation (e.g., the President) to judge the full weight of the consequences of their decisions?
I agree with McCarthy and Lysaught that these criticisms are decisive and that the wide variety of contrary opinions on the same ethical questions among exclusive utilitarians “makes clear that the theories are not doing a good job accounting for what actually shapes moral judgments” (12).
Since the Enlightenment, unaided reason has so often attempted to bypass the God question and arrive at a “neutral” criterion for judging right from wrong through autonomous reason (without bringing “religion” into the question). In my opinion, the New Enlightenment is this: the Old Enlightenment has proven to be bankrupt as a foundation for ethics. Maybe the God question is relevant after all.
If there were some sort of omniscient utilitarian being, that would solve all the problems, wouldn’t it? Oh wait, never mind, this would only be true if that omniscient being either constantly communicated to us all in a consistent, clear, and obvious manner, or perhaps never gave us free will! Darn, so close!
Wait, I think I’ve got it now: let’s write a few books and convince people they are the word of God, and simply have the ethics contained therein be the best available at the time, so as to convince people of their correctness. Though we will never have the opportunity to update them without the threat of war between religious factions, that shouldn’t be a problem, as no such thing has ever happened before, after all. One might think that as the writers of these books, we might be tempted to slip in a few rules for our benefit or which make it easier for us to control the masses, but that would be foolish, as we, the authors of these books, are all ethically pure.
Thanks for your cynical and skeptical comments, Red Scourge!
I’m not quite sure I understand the theological ideology behind a “utilitarian being” (as opposed to a utilitarian ethical paradigm) but it certainly sounds interesting. A god who is the greatest good for the greatest number of people maybe? Anyway … I certainly understand the flavor of your skepticism: inconsistent multiplicity of claims to “revelation,” the plethora of ambiguity of their meaning within each religious tradition, the philosophical tensions of the coexistence of God, evil, and free will.
Do you really think the “ethics” of ancient Judaism or Islam were “the best available at the time”? Perhaps some would find this comment disagreeable, as it entails a highly subjective value judgment on your part while simultaneously requiring a wide range of intimate knowledge of religious mythologies and ideologies (to which ancient ethics were intimately attached) of the Fertile Crescent and the rest of the ancient world. For example, was the ethic of the Torah better than the ancient Code of Hammurabi?
I’m not clear on what historical referent you have in mind when you talk about the “threat of war” that necessarily attends all religious “updating.” I can’t think of any ancient culture that didn’t regularly update its religious ideology, but I’m not so sure this always (or even most of the time) involved the threat of war. Ancient Mesopotamian cosmology, for example, tied up in religious belief systems and appropriated in different religions of the Fertile Crescent, was constantly evolving, but I don’t know of any consistent historical landmarks of war plotting each development.
I do know that religion and many ancient sacred writings and religious ideologies were used as tools of manipulation from the top down in hierarchical and monarchical structures of government. Perhaps this is most obvious with the powerful empires (e.g., the Pharaoh must be worshipped as a god; Caesar must be worshipped as a god). The southern kingdom of Judah used its sacred writings (some of which have been preserved in the Jewish Bible) as a way to consolidate a religious identity separate from the northern kingdom of Israel, as each of these areas (Judah and Israel) developed different religions over time. Judah’s kings often used religion to reinforce class distinctions, justify their wars and other atrocities, etc.
At any rate, these are familiar themes of theology, philosophy, and the sort of dialogue that takes place between religious scholars and secular scholars of religion, theologians and atheists, historians of all stripes, religious and skeptic philosophers, etc.
You say in your article that several actions are “inherently wrong (apart from evaluation of their consequences)”
How are they wrong separately from their consequences?
A utilitarian argues that murder is wrong because a person would suffer as they are killed.
If the person doesn’t die, i.e. the consequence is removed, how is the action still wrong?
I’ve perhaps not explained this well, but saying that actions are right/wrong independent of their consequences implies that the action has some sort of moral character INDEPENDENT of its consequences, as if it would still be wrong even if somehow the consequence didn’t happen.
I hope this question made sense and you can clear this up for me?
Thanks. It’s a great question. There are several ways to approach an answer.
Mr. Pedo Saves Amy’s Life :: A 45-year-old man named Charles Pedophilia kidnaps a 5-year-old girl, Amy Innocent, from the city of New Orleans just before Hurricane Katrina, intending to rape, torture, and kill Amy before disposing of the body. As it turns out, the police catch him just after he flees across the Arizona state line and away from the destructive weather. Amy’s family all die in the floodwaters of Katrina, but because she was kidnapped and taken out of harm’s way, it turns out her life was saved (unintentionally, of course, considering Mr. Pedo’s motives). Although at various points along the ride Mr. Pedo violates the young girl with light petting, he does not get a chance to fully explore his psychopathic lust for Amy before he is captured. Her psychological trauma, although serious, is certainly treatable; the good news is that she is alive. If she had been with her family in her hometown, she would’ve died in the floodwaters.
If the newspaper headline in Arizona read “Charles Pedophilia Saves Life of Potential Hurricane Victim,” and the kidnapping was seen only in terms of its consequences, we would have to say that although Amy suffered psychological trauma, this is a much lesser evil than if she had perished with her family and potentially suffered even more in the 24-hour period before she and her family finally breathed their last (trapped for hours with broken bones and profuse bleeding, for example).
This story raises several questions:
1) How can we really measure Amy’s psychological trauma (which may last all her life) against the brief trauma of the floodwaters? Is there any objective way to do this? Perhaps Amy will grow up to be a criminal and will harm others her entire life. If this is the case, we should wish she had perished in the floodwaters (if we think about consequences only). This would make the story above morally ambiguous for those who read it that morning in the newspaper. They can’t say, “Great! It’s a good thing she was rescued!” They would have to reserve judgment entirely until the end of her life (and actually longer than that, for the consequences of her actions throughout her adult life may have generational consequences too).
2) Even if we judge based on the immediate consequences only (though this would be an inadequate judgment for a utilitarian) of her life being saved vs. the trauma of being kidnapped, and were somehow able to objectively conclude that saving her life was a greater good than her suffering, we have to ask: Was this intentional kidnapping by Mr. Pedo good or bad? If the apparent consequences, judged objectively, show that his action was, in the big scheme of things, good (in the sense explained above—in terms of its apparent consequences), we would have to say Mr. Pedo’s actions were good—in spite of his intentions.
So …. to answer your question: What would make his action wrong if the consequences were good? Well, my argument is an argument from common sense, moral intuition, and conscience. In general (there are exceptions to everything) people just know (or “sense” or “intuit”) that Mr. Pedo’s actions were bad based on his motives, even if they coincidentally ended up causing something good to happen.
Does that make it more clear?
P.S. – I look forward to your reply, Mr. D., but I will be tied up for 4-5 weeks, and my counter-reply (if you reply) will have to wait until mid-August.
1) The Inevitability of Arbitrariness – This isn’t a point against Utilitarianism. This point is like saying the whole of quantum physics is wrong because we can’t simultaneously measure the position and momentum of a particle, and that therefore we should just fall back to Newtonian physics for measuring the subatomic. It’s better to have something that’s right and inaccurate, rather than something that’s wrong and appears to be accurate (but really gives worse answers). My point is, to say something is wrong, you need to say that the actual thing is wrong, not that it’s inaccurate. This point almost assumes that all moral theories are equally valid, so we may as well choose the one that gives the easiest predictions.
2) The Contrary Intuition – Just because something goes against intuition, that doesn’t mean it’s wrong. Many people will have built up completely wrong intuitions in life, like the idea that it’s alright to murder for fun. Yet just because these views are intuitions, they aren’t given any more validity. The only validity in this point is that if you break down certain natural intuitions (like the desire not to kill) in order to fulfil a utilitarian cause, that may mean you won’t be as averse to doing that thing in the future (because you’ve desensitized yourself to that intuition), even when not following a utilitarian cause (for instance, you kill to save people’s lives, but then you start killing more because you’ve desensitized yourself to the act). This could work both ways, though; it could help people build up more correct intuitions in life. You also need to keep in mind that if someone is about to do an action and they figure out whether the action is correct in a utilitarian way, utilitarianism dictates that they would also need to take into account any psychological consequences on themselves.
3) The Omniscience Requirement – This is very similar to point one. Someone can’t know everything, but they may as well get the best idea possible, determined from the consequences, and act on that, rather than pull something out of thin air.
All of the above points of yours attack the efficiency and usability of utilitarianism; however, they don’t actually attack the theory itself, and if the theory still stands, it’s a direct consequence that utilitarianism still stands. Still, it was a good and interesting article. 🙂
Thanks for your counter-critique of my post. It is a very hearty defense of Utilitarianism! 🙂
We appear to be coming at the subject matter with very different presuppositions, however. I’m afraid I don’t really believe that any ethical theory can be “proven” right or wrong (as your comments imply). Ethical theories have to be weighed carefully according to critical principles of reasoning, but they can’t be proven right or wrong, so we are left to have a critical dialogue about which ethical theory seems to make more sense in light of a wide range of phenomena. Still, here are my specific responses to some of your thoughts:
1. :: The Inevitability of Arbitrariness ::
“It’s better to have something that’s right and inaccurate, rather than something that’s wrong and appears to be accurate.” Your critique here is the same as your critique against (3) where you say: “Someone can’t know everything, but they may as well get the best idea as possible, determined from the consequences, and act on that, rather than pull something out of thin air.” Your arguments here are circular. They presume that all other ethical theories are “pulled out of thin air” or assume they are “wrong and appear to be accurate.” Now, if one is already convinced of utilitarianism, she will give you an “amen,” but for those of us who are not, you are arguing in a circle, presuming the very thing you are setting out to argue for. In this way, your argument’s persuasive punch is limited to those who already agree that Utilitarianism is right, and all other ethical theories are spurious.
2. :: The Contrary Intuition ::
“Many people will have built up completely wrong intuitions in life, like the idea that it’s alright to murder for fun.” :: Yes, we call these people psychopaths; we fear them; we lock them up, give them medicine, treat them as having a sickness, and often cut them a break on sentencing because they are believed not to know the difference between right and wrong due to some mental illness. They are the exception, not the norm. I mentioned in my post that there are exceptions. People’s moral intuitions are not always right, but when the majority of a society agrees that murder (or rape, or racism, etc.) is wrong and should be illegal and punishable by very strict standards (even execution, or the “death penalty,” in some places), and when the majority of other societies around the world share this same intuition (and again I am willing to grant exceptions), then such an intuition must be reckoned with in any theory of ethics. But Utilitarianism allows the possibility that the torture and murder of infants and adults could in many circumstances (so long as they appear to our limited judgment to have good consequences) be considered morally acceptable or even praiseworthy. In my story “Mr. Pedo Saves Amy’s Life” (see previous comments on this thread), we can see that Utilitarianism rubs against some of our most commonly shared moral intuitions very poignantly at crucial junctures in the out-workings of human affairs, which casts doubt on the theory.
3. “All of the above points of yours attack the efficiency and usability of utilitarianism; however, they don’t actually attack the theory itself.”
I’m afraid you have made your distinction too eagerly. One method of arguing against a position is to ask, “Ok. So what if we assume this theory is true? Let’s explore the consequences of the theory as applied to specific situations and see how well it accounts for ethical phenomena.” A fancy label for this line of reasoning is reductio ad absurdum. It explores the demerits of a position by critically examining its absurd consequences for human life (kind of ironic, huh? … judging by rational consequences an ethical theory based on consequences). LoL! Anyway … ethical theories can be called into question (or, as you say, “attacked”) indirectly as well as directly, however you like, but all such “attacks” are on the theory itself and are designed to cast doubt on the theory by exploiting its unacceptable or impractical implications. There is more than one way to skin a cat. Or, to use ancient battle imagery: a city can be taken by storming the main gate and scaling the city walls, or by waiting for the people inside to die of thirst or famine. In the latter case, the city’s own military strategy, although not an intentional or direct attack on itself, results in defeat all the same.
Thanks for such a prompt and well done response.
Firstly, when I talk about a moral theory being right or wrong, this is what I mean:
A moral theory will have a logical argument (syllogism) for it, e.g., in the case of utilitarianism (just a very basic one, which is quite self-evident):
1) Pleasure is good, and pain is bad.
2) Only pain and pleasure, and things that lead to pain and pleasure, can be measured as moral or immoral.
3) It’s better to have more good things in total and less bad things in total.
4) Therefore, an action is better the more it is done in order to cause more pleasure (and less pain).
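Just to make the decision rule in premise (4) concrete, here is a toy sketch of the “greatest balance of good over bad” calculation. The function names, actions, and utility numbers are all invented for illustration; whether such numbers can be assigned at all is exactly the measurement question under dispute in this thread.

```python
# Toy model of exclusive utilitarianism's decision rule:
# pick the action whose summed pleasure-minus-pain, across everyone
# affected, is greatest. Utilities here are invented placeholders.

def net_utility(effects):
    """Sum signed utilities (pleasure positive, pain negative) over everyone affected."""
    return sum(effects.values())

def best_action(options):
    """Choose the action with the greatest balance of good over bad."""
    return max(options, key=lambda action: net_utility(options[action]))

options = {
    "keep promise":  {"me": -1, "friend": 3},               # net +2
    "break promise": {"me": 2, "friend": -4, "others": 1},  # net -1
}

print(best_action(options))  # prints: keep promise
```

Note that everything contentious is hidden in the numbers: change the assumed utilities and the “right” action flips, which is the arbitrariness worry raised in the original post.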
Now, when I say “prove” utilitarianism is wrong, I mean show that either one of its basic assumptions is wrong or that the argument doesn’t follow from the assumptions. I also hope it’s obvious that, in its above form, utilitarianism is only compatible with itself, i.e., it and another moral theory can’t both be completely right at the same time (another theory can be partially right, though, as you said above: the utilitarian factor).
1) Yes, I do assume here that utilitarianism is right, and I’ll show you why I do so:
1) If something has an argument for it that appears to follow, and a person finding fault with it doesn’t show fault in the argument, then the thing hasn’t been proven “wrong” by that person.
2) If something still logically stands to be the only correct thing on its specific continuum (as I discussed above), then, no matter how arbitrary that thing is, that doesn’t change the fact that it’s right.
That is, I assume utilitarianism to be right because, in this case, it has an obvious argument for it that hasn’t been shown to be wrong. Therefore, if it hasn’t been shown to be wrong in any way, then the “inevitability of…” argument is like saying, “Hey, let’s dump all of quantum mechanics because, although we haven’t shown it to be wrong, it’s a bit hard to use, and you can’t be completely accurate with it. To Newtonian physics (for measuring the subatomic)!”
I’m not really assuming Utilitarianism to actually be correct; I’m just assuming that you haven’t shown it to be actually wrong, and therefore, since it has an obvious argument for it, in this forum it’s still assumed to be correct.
2) You’re right, that guy with that intuition is a psychopath. But everyone has intuitions that are wrong, quite frequently. That doesn’t matter, though, because you haven’t actually given any reasons why these intuitions matter at all, and I can’t think of any reasons apart from the one I mentioned above. Intuitions are influenced by things such as society, the people around you, various media, basic beliefs, the colour of the walls of your bedroom, and where your dog goes to the toilet. They are the most subjective and inaccurate measure of anything, unless someone else is measuring your own subjectivity.
3) The argument you used above wasn’t reductio ad absurdum; it was a logical fallacy called reductio ad ridiculum. That is, reductio ad absurdum shows something to be wrong if it contradicts itself when assumed true, or entails something that is actually wrong, like the claim that the whole earth is made of fire. Reductio ad ridiculum only tries to show that something is wrong by showing that if it’s true, it has ridiculous or strange consequences that may contradict intuition. This doesn’t show something to be wrong at all.
As I said before, if something has an argument for it that follows, and that argument hasn’t been shown to be wrong, and if that thing is the only thing that can be correct in its specific type of thing (moral theories) (that is, assuming it’s true), then, no matter how arbitrary that thing is, it’s better than everything else.
Thanks for considering my comment(s). Gee, these responses are going for ages now. A few more posts, and we’ll be writing encyclopaedias. 🙂
I wouldn’t consider your syllogism “self-evident,” for it has a debatable premise: namely, that pleasure is good. It’s not that I disagree and think that pleasure is necessarily bad, it’s just that I don’t think any ethical theory is that simple in the end. What if the “pleasure” we are talking about is the “pleasure” of a sadist who likes to torture people for fun? Would her pleasure be good? Or what if Hitler saw himself as creating “pleasure” for the superior race? Does that make his ethical motivation justified? What if men outnumbered women 5 to 2, and men decided that the greatest pleasure for the greatest number of people would be for men to be able to make women their slaves and sex toys instead of promoting equality? Would that be a good thing? These questions will likely elicit from you more qualifications for your theory that will show that for utilitarianism to really work, it must be presented in a more sophisticated fashion.
The problem here is that your premises are subjective, and therefore simply can’t be “proven” wrong the way a police department would “prove” that a bag of white dust was indeed cocaine or the way scientists “prove” through repeated testing that certain results obtain under strictly specified conditions. I’m afraid in the field of ethics, we are dealing with something quite different that requires a great deal of inductive reasoning rather than chiefly deductive reasoning. As I mentioned before, however, utilitarianism can be shown to “not work” by examining its uncertain meaning (How do we measure pleasure? What are the most important kinds of pleasure that should take precedence in ethical considerations? On what objective basis can we decide which pleasures are of a better quality?) and by its unacceptable consequences (e.g. Mr. Pedo is a good man [see comments in thread above]).
[on a side note: What’s the “argument” that “appears to follow” for utilitarianism’s premise that “pleasure is good”?]
Well … again, I’m not trying to “prove” that utilitarianism is wrong. Our definitions of “proof” are very different, and I think it’s important to keep the language of “proof” out of discussions like this, where we aren’t dealing with scientific experimentation. What you call “proof” (namely, showing that another person’s contrary arguments can be rebutted) is what I simply call a rejoinder (or counter-critique), and whether someone’s counter-critique “shows” or “proves” anything will itself turn out to be difficult to “prove” or “show,” for any rejoinder can be counter-critiqued as well. This is the process of critical dialogue, and different people will conclude differently at various points in critical discussions whether any of the dialogue partners has “proven” or “shown” something to be the case conclusively. Deductively speaking, just because someone can’t come up with any good arguments against an ethical paradigm doesn’t mean that paradigm is right or best. Nevertheless … I’m confident that more open-minded readers will be persuaded that I have found considerable holes in the utilitarian argument by exploring its consequences and ambiguities. That was part of the aim of my post: to explore its weaknesses. I’m not persuaded you have been able to successfully overcome these problems with your hearty defense (although I enjoy the exchange, because it helps clarify things).
I’m afraid you are back to assuming your position is right here. For this argument to work, the utilitarian paradigm has to “still logically stand,” but as I have argued, its premise is ambiguous and its consequences unacceptable. The fact that the utilitarian paradigm has no resources to measure the qualities of pleasure objectively, and is therefore left to make such decisions arbitrarily, is not the same as dumping quantum mechanics because it’s hard to apply. First of all, I’m not “dumping” anything (remember, in my post I argued for a utilitarian factor, so I’m not rejecting utilitarianism entirely, just exclusive utilitarianism). Secondly, a more fitting analogy is this: it’s more like questioning your dad when he says, “Son, gettin’ a job is more important than gettin’ some fancy college degree!” How does the dad know? He doesn’t. How is he judging? By his subjective experience and values.
What’s more important for the desperate, poverty-stricken ghetto dweller: making sure his family has food to eat for the next month, or making sure he doesn’t afflict his conscience for the next month (or however long) by holding a pistol to the head of a wealthy suburbanite to take his wallet? Asking the question “What’s more important in life?” is, in the utilitarian paradigm, nothing more than asking “What is most pleasurable?” But answers to these questions when raised in specific life circumstances (as I have done in a few examples above), although fundamentally crucial for the ethical paradigm to “work” in everyday life decisions, appear to be quite arbitrary and subjective. You can’t just dismiss my argument here about the inevitability of arbitrariness by recourse to an analogy about concluding something is wrong because it’s “hard to use.” My argument is different, and I’m afraid your crass analogy oversimplifies it and shows that you haven’t fully reckoned with the weight of my concern or the subtlety of my argument.
Now here lies our real contention: namely, whether the arguments in my post (or as extrapolated in the comment thread) have any merit.
I’m suggesting that the argument from “contrary intuition” is a shot against the persuasiveness of the utilitarian paradigm, since it will be counter intuitive in a deeply morally disturbing way for most people (remember my story about Mr. Pedo in the comments thread above). My story about Mr. Pedo doesn’t logically prove it’s not true, but it does mean it’s not going to be persuasive to those who value their deeply ingrained moral intuitions that motives must count (to which I might add: motives must be weighted heavily in any ethical paradigm—in some cases even more than consequences), and persuasive ability is important for any ethical paradigm’s adoption. Ironically, as it turns out, for all your talk about the utter subjectivity (and therefore unreliability) of moral intuitions, you end up needing them even in the utilitarian schema to decide questions about pleasure quality (as I mentioned above).
Here I have to wonder what books you are reading. I’m getting my definition from Dr. David Zarefsky, the Owen L. Coon Professor of Argumentation at Northwestern University. But just to be sure, I looked it up in a peer-reviewed philosophy encyclopedia (http://www.iep.utm.edu/reductio/). Definitions emphasize its positive use in defending a position by arguing that its denial has absurd consequences, but it can also be used (as Dr. Zarefsky says) to critique a view without necessarily defending a specific alternative position, simply by showing that the view has absurd consequences. In either case, whether to defend one’s alternative view or just to attack another, the reductio ad absurdum (as the wording implies) is a method of indirect reasoning that attempts to reduce a position to absurdity.
My best argument against utilitarianism, however, is this: it does not bring the greatest amount of pleasure to the greatest number of people. LoL! 😉 (that’s sort-of a joke, but I am actually curious as to how a utilitarian could critique such an argument based on utilitarian principles?).
Thank you for your reply. Firstly, I think you will find that if you scroll down a bit in your peer-reviewed source, it will explain the finer differences between reductio ad absurdum and reductio ad ridiculum – but that’s really irrelevant. Colloquially, reductio ad absurdum can refer to a great many phenomena, so who cares (just to be clear, though, let’s stick to the stricter definitions, so there isn’t any miscommunication).
Also, you appear to have misinterpreted my arguments in some cases, so just be careful to read and understand thoroughly, even though the responses are so long.
Clarifications about my arguments:
When I say “prove,” I use the term loosely. I’m using it just to make clearer the difference between actually calling into question the basic principles of something (for which I use the term “prove wrong” if the calling into question is sound) and, in contrast, questioning its usability or pointing to consequences that someone might find a bit strange, counter-intuitive, etc. (reductio ad ridiculum). Also keep in mind that, in its strictest definition, reductio ad absurdum does actually call into question the basic principles of something, namely by showing it to be inconsistent with itself, that it contradicts itself (reductio ad ridiculum, however, doesn’t work the same way).
Secondly, throughout this whole argument you appear to have assumed I’m a “utilitarian” (unless I’ve misinterpreted), and that I support utilitarianism. I don’t. I haven’t come to any concrete conclusions about ethics (I like to keep my mind open), but there is one thing I support: the truth, and logical arguments and points, and in this case, that’s all I’m arguing for.
Thirdly, when I said self-evident, I didn’t mean self-evidently true, I meant it was self-evidently the basic argument for (in its case) pleasure utilitarianism, which I just picked because it’s the most common.
Fourthly, I agree; there is a lot more to an ethical theory than what I presented. It has its basic principle(s), which come from its basic argument, and on top of these it has all its finer points (like: how does utilitarianism apply to this situation, or, is this a form of pleasure?). However, all that determines whether an ethical theory is sound or not is its basic argument, and only after you’ve determined that do you get to its finer points, which don’t affect the basic argument. (E.g. you can conclude that 1 + 1 = 2, and then you can go further into the ramifications of that, like 2 + 2 = 4, etc. But you don’t need to know about that part to conclude that 1 + 1 = 2.)
Then, there are also the finer points that lead to the actual acceptance of the premises, which will probably have arguments for them also. Yes, if I wanted to present utilitarianism in a totally from-the-ground-up manner, I would do that, but my only point in showing that syllogism was to show what it means for the basis of an ethical theory to be “called into question”: to show one or more of its premises to be wrong. Of course, you don’t have to do that in a strictly syllogistic fashion; you can just do it casually, as we are doing now.
So; yes, there is more to an ethical theory than its basis, but you only need to show its basis (when I say “basis”, I mean basic argument for it) to be true/flawed to show the actual idea of the ethical theory to be right or wrong. And, yes, there can be more depth to its basis than you think.
To give an example of what I mean of “proving” (I use the term loosely) something wrong, or showing something not to be sound, here’s an example argument for an ethical theory:
1) I like rocks, they remind me of butterflies.
2) Therefore, an action’s “rightness” is determined by how many rocks it carves into the shapes of butterflies.
Try and show it to be wrong. Seriously, do it. (I’ll give you a clue: the conclusion doesn’t follow from the premise.) That’s what I mean by “proving” something wrong. And yes, I do use the term rather loosely, for the reason I stated above. Seriously, it’s not the end of the world.
“The problem here is that your premises are subjective, and therefore simply can’t be “proven” wrong the way a police department would “prove” that a bag of white dust was indeed cocaine or…” et cetera, et cetera.
No, of course you can’t *empirically* prove an ethical theory to be wrong, but you can show its conclusion not to follow from its premises, or one of the assumptions or arguments for/of the premises to be wrong (which is what I’ve been talking about this whole time).
Furthermore, if an ethical theory does have some sort of purely subjective assumption, and then tries to treat that assumption as an objective one in (e.g.) its conclusion, then that is a flaw in its reasoning, and you can use that as grounds to say that the basic argument for the ethical theory is wrong. If the conclusion does follow from the premises, then the argument is valid. What you are saying is entirely consistent with what I’m saying: if a theory tries to portray something subjective as objective later on, you can use that to show it to be wrong, because that’s a flaw in its basic argument; it’s an inconsistency, that is, it doesn’t follow. You haven’t, however, done that with utilitarianism.
“Just because someone can’t come up with any arguments against a paradigm doesn’t mean it’s best….”
Of course it doesn’t, and you’ll find that if you actually read and understood everything I said, I never once said that (I’m not sure, were you implying I have? I’m sorry if I’m mistaken.)
When I used the term “dump the whole of quantum mechanics”, I’m using that as an analogy for the term, “dump (heavily implied: exclusive) utilitarianism.” I would have thought that would have been evident in the context, considering that’s what we’re actually arguing about. I’m sorry if the term “dump” sounded a bit strong. Just replace it with “lay aside”, or something like that.
Now (onwards), you’ll find that if you reread my “crass analogy” it says:
“Hey, let’s dump all of quantum mechanics because, although we haven’t shown it to be wrong, it’s a bit hard to use, and you can’t be completely accurate with it. To Newtonian physics (for measuring the subatomic)!”
Despite your limited quoting it doesn’t only say, “it’s a bit hard to use.” And, yes, incidentally it does have a whole context and argument behind it.
Now, the key words are “…you can’t be completely accurate with it…”.
Let’s compare that with your “The inevitability of Arbitrariness” argument, and see how crass my analogy is:
You say: “It has no way to objectively determine the nature, importance, and value of consequences.” (This sums up the basic points of your arguments)
I say (in my analogy): “…you can’t be completely accurate with it…”
Now, they look completely different, don’t they? Well, they aren’t. They use different words, but the idea that I’m demonstrating in my analogy is completely applicable (in the way I meant it to be). Observe:
Now, a consequence of the theory of quantum physics is that you can’t simultaneously measure the exact momentum and the exact location of a particle. However, you can try and get as accurate a measurement as possible by rebounding particles off other ones.
Now, a consequence of the theory of utilitarianism (depending on the type; let’s just assume pleasure utilitarianism) is that you can’t objectively measure the exact nature or value of the effects of an action (although the theory is based on an objective idea*). However, you can make a (in some cases, at least) very close approximation by estimating what effect the consequences of the action will have on the level of pleasure of the beings affected (which, in 99% of situations, isn’t as hard as you think).
Read through those two comparisons (seriously do it again). See what I meant in my analogy yet? I really, really, hope you do.
Pretty much, both have ways to determine the nature of something, but because of their theories, they can’t always be completely accurate, but they can still make a (often close) approximation. Now reread my argument with the analogy in my second comment, and hopefully you’ll understand it.
*Before you go all “it’s subjective”, keep in mind that the theory at its conclusion is: cause as much pleasure as possible. Pleasure = activation of certain receptors in our brains. And “as much as possible” therefore = cause as much activation of these receptors as possible. Objective idea; just too hard to objectively measure.
Onwards. You argued, “What’s more important for the desperate poverty stricken ghetto dweller: making sure his family has food to eat for the next month, or making sure he doesn’t afflict his conscience for the next month…” et cetera. (This is really just another reductio ad ridiculum.)
Here you ignore something that I actually talked about in my first comment (although I don’t blame you for missing it). Go and reread (the final sentence of) the 2nd section of that, and hopefully you’ll see, if not, read on:
Utilitarianism takes into account the consequences for all beings, and if you’re about to do something, like murder, that will psychologically scar you, then you also need to take into account the effects on yourself. If the effect of your potential psychological scarring looks like it’ll outweigh the other good and bad consequences, don’t go ahead (that is, if you determine it in a utilitarian way). Utilitarianism also takes into account the consequences for the person doing the action, not just everyone else.
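A minimal sketch of the accounting I have in mind (all the names and numbers here are invented purely for illustration; nothing below is a fixed utilitarian calculus):

```python
# Toy sketch of the utilitarian accounting described above: the agent's own
# psychological cost counts just like everyone else's. All figures are invented.

def net_utility(effects):
    """Sum the signed pleasure/pain effects on every being affected,
    including the agent performing the action."""
    return sum(effects.values())

# Hypothetical action: robbing someone to feed one's family.
effects = {
    "family_fed": +40,      # relief of a month's hunger
    "victim_harmed": -25,   # loss and fear inflicted on the victim
    "agent_scarred": -30,   # psychological scarring of the agent himself
}

# Under this toy pleasure-utilitarian rule, act only if net utility >= 0.
print(net_utility(effects))  # -15: the agent's own scarring tips the balance
```

The point of the sketch is only structural: the agent’s own suffering sits in the same ledger as everyone else’s, so a sufficiently scarring act can be ruled out on utilitarian grounds alone.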
“For this argument to work, the utilitarian paradigm has to “still logically stand,” but as I have argued, its premise is ambiguous and its consequences unacceptable…” et cetera.
(I hope you understood my re-explanation of the analogy above, because that’s relevant to this.)
The utilitarian paradigm does actually still logically stand (as I, incidentally, have argued too); of course its premise is going to be ambiguous. That’s what leads to there being a great many different kinds of utilitarianism that measure “goodness” in different ways, and each of those more specific ones measures it far more accurately than the very, very basic utilitarianism. Furthermore, although something can’t be objectively measured (what ethical theories can be objectively measured?), it can be measured by other means that come very close to objective measuring, as I explained above (in the analogy part of this response). The “ambiguous” point doesn’t actually have any effect on the logical standing of utilitarianism.
The consequences points (so far) don’t either. Something’s logical standing can only be judged by its consequences if they lead either to a contradiction with itself or to something impossible (like a square circle). You haven’t done either of those; you’ve just shown that you don’t like some of the consequences of utilitarianism**, that they may contradict intuition, that they aren’t that pleasant, etc., none of which affects the actual logical standing at all. (It’s like saying: relativity leads to time dilation; that contradicts my intuition and I think it’s a ridiculous idea; therefore it’s wrong.*** Obviously, this has no effect on relativity’s logical standing at all.) For more clarifications on this, refer to the intuition clarification below.
**You’ve also misinterpreted utilitarianism a bit. Under pleasure utilitarianism, that Pedo guy did the wrong thing, on two levels. However that’s a totally different argument, and isn’t relevant right now.
***I’m sorry if this sounds like I’m trivializing your argument (that isn’t my intent). It was just the easiest example to give. It is, however, the exact same idea for the purposes of this point.
Finally, intuition. A moral theory’s ability to be convincing (that is, appeal to intuition) to people doesn’t have any effect on whether it’s right or wrong at all; and that’s all I’m concerned with.
If a hypothetical moral theory is right, hopefully people would eventually start accepting it rationally, though not intuitively, and that’s, for the most part, all that you’d have to do, as at least all their rational decisions will be affected. By the time the next generation comes, the theory would have been absorbed into the culture, and it probably would have become more a part of people’s intuitions****. *****But I don’t really care about that; all I care about is the moral theory itself, because if a moral theory is right, then, in almost all situations, it’s better than other moral theories.
Furthermore, the idea of utilitarianism already has quite a strong base in people’s intuitions. When someone decides whether an action is right, on the fly, often what they do is weigh up the various consequences, just like in utilitarianism. The idea of the level of goodness and badness of consequences is very strongly in people’s intuition. Now that I think about it, utilitarianism, for 99.9% of the actions it leads to, appeals more to one’s intuition than almost any other moral theory. Hmmm
****This point may sound a bit weird, but it’s happening all around us. Books, movies, plays, conversations, political viewpoints, are all affected by moral theories and philosophies, and can affect, to a certain extent, our intuitions. I know all those episodes of “Doctor Who” have.
*****The above point about the person living in a ghetto is slightly relevant, and also the 2) point in my first comment.
Also, I just want to add: there is a difference between the subjectivity of one’s intuitions and that of deciding the “rightness” of an action under utilitarianism. Namely, one’s intuitions are based on the purely contingent things that have shaped one’s subjectivity, and they have no objective basis. Utilitarian reasoning, however, does have an objective basis (as I explained above), and can come quite close to the truth through reasoning that is itself objective but is based on subjective experiences (pleasure and pain, I mean). Although it does depend on the type of utilitarianism.
I think that’s everything, but this comment has gone on for so long, I feel it’s too much to add any more (tell me if I’ve missed anything of relevance, and if you don’t understand anything I’ve said, please go ahead, and explain why). Good work if you’ve got this far. And extra praise if you go through this comment again to see if you missed anything, and understood everything (In a comment as long as this, it would be easy to miss, or misunderstand, something). Anyway;
Your Thoughts? 🙂
Thanks for your thorough response! This has become a very involved dialogue, but it’s also the sort of dialogue that must inevitably ensue in critical discussion: clarifying of terms and arguments, etc. takes more time than most people are patient enough to invest, but I’m glad you have invested the time for our exchange! I trust we will eventually see more clearly our areas of disagreement vs. agreement.
The only reason the terms became important to me in our exchange was because you made the distinction between reductio ad absurdum and reductio ad ridiculum in order to suggest that my argument was a “logical fallacy” and was not an attack against utilitarianism. If we don’t keep this in mind, we will lose sight of the progression of our dialogue here. What I would like to know, then, is why reductio ad ridiculum (I’ll grant you the term) is a logical fallacy? I’ve never heard it labeled as a logical fallacy, nor have I ever seen it categorized this way in any philosophy textbooks.
I think I understand the way you are using the term now, but my concern is this: You don’t have to “prove” (in your sense of the term) that a position is self-contradictory for it to be implausible and unpersuasive. For you to expect me to “prove” (in your sense of the term) that exclusive utilitarianism is “self-contradictory” in order for my arguments to be considered as casting doubt on exclusive utilitarianism’s plausibility is unfair, since my aim was never to “prove” (in your sense of the term) it wrong in the first place.
Nothing I have said implies this, so I think maybe it’s just your impression. I wasn’t assuming you were an exclusive utilitarian.
If all you mean by “self-evident” is that the basic utilitarian argument is clear, and not that it’s true, I would partially agree, for I even think you did a good job summing it up in your previous comments:
My concern is twofold:
1) It’s not clear what “pleasures” are more important or what “actions” would necessarily lead to “more good.” Even if “pleasure” could be defined objectively and every person wore a helmet that counted how many pleasure receptors go off in the brain immediately following each of their actions, we would still need to know how many other people’s pleasures were affected through such actions. Even if we could do this (as implausible as it is), we would still need to take into account the full historical scope of all of its consequences. How the actions of one generation affect future generations is notoriously difficult to predict (although hindsight is, as they say, 20/20). The consequences of one’s (or society’s) actions may be reversed after one, two, or three generations from being “good” to being really “bad.” All this makes the bullseye very unclear. Therefore, in spite of the fact that the “argument” for exclusive utilitarianism is clear, the terms of the argument are still ambiguous. What is “pleasure”? How does one measure “pleasure”? What is the “greater pleasure”? What are the “actions” referred to in the argument that cause “greater pleasure”? Although no ethical theory is free from ambiguities, the basic argument for, say, Judaism could be considered much more clear (i.e. obeying the Torah (the ten commandments) is good, and disobeying them is bad. These commandments are: honor your mother and father, don’t lie, don’t murder, don’t commit adultery, etc.). Furthermore, what will give great pleasure to one person or society may not give any pleasure at all (or much less) to another.
[Adding 6/1/15:] So then, to know what “actions” are referred to in the argument, you would need to perceive the distinct point of view of all others whom you know will be potentially affected by the consequences of your actions, in order to know what will cause pleasure for someone with their unique cultural/intellectual/spiritual/ethical/tribal/national perspective. [End Addition]
As I pointed out in my examples, the ambiguity of the terms in the utilitarian argument (including the phrase “actions that produce more pleasure”) touches on practical decisions in everyday life, for the consequences of our actions are very unpredictable. Just because some scientists can measure receptors in the brain in controlled situations doesn’t mean that those wanting to follow the utilitarian ethical paradigm can accurately judge whether our actions will produce such receptor activity, or whether they will produce the greatest amount of it for the greatest number of people over the longest period of time. It’s too hard to judge with confidence unless we are willing to be incredibly presumptuous.
And remember: it’s not our motive that counts in exclusive utilitarianism; only the consequences. Whether someone made a reasonable guess about the consequences of their action becomes irrelevant for judging its rightness or wrongness (remember my story about Mr. Pedo).
2) You make it seem like the soundness of the argument for utilitarianism makes it impenetrable by the arguments I have presented. Let’s remember that soundness [here I meant “validity”] of argument (correct “form”) is never a good reason for accepting the conclusion. For example:
1. Pain is good, pleasure is bad.
2. Only pain and pleasure, and things that lead to pain and pleasure are measurable to be moral or immoral.
3. It’s better to have more good things in total and less bad things in total.
4. Therefore an action is only better the more it is acted in order to cause more pain (and less pleasure).
This argument is just as sound [here I meant “valid”], but comes to the opposite conclusion. It’s the premises of the argument that I’m questioning, not the soundness [here I mean “validity”] of the argument. Sound [here I mean “valid”] arguments can be counterfactual, as Zarefsky puts it.
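The validity-vs-truth point can be made vivid with a toy sketch (the action names and scores are invented for illustration): the very same valid decision rule, fed opposite premises about what counts as “good,” endorses opposite actions.

```python
# Same valid inference rule, opposite premises, opposite conclusions:
# validity of form never guarantees the conclusion.

def best_action(outcomes, value):
    """Valid rule: given a value function over outcomes, pick the action
    whose outcome scores highest under that value function."""
    return max(outcomes, key=lambda a: value(outcomes[a]))

# Hypothetical pleasure scores for two actions.
outcomes = {"comfort a friend": 5, "stub your toe": -3}

pleasure_is_good = lambda p: p   # premise: pleasure is good
pain_is_good = lambda p: -p      # opposite premise: pain is good

print(best_action(outcomes, pleasure_is_good))  # comfort a friend
print(best_action(outcomes, pain_is_good))      # stub your toe
```

Both runs use identical, flawless reasoning from premises to conclusion; only the premises differ. That is exactly why questioning the premises, rather than the form, is a legitimate line of attack.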
If in your opinion all it takes for an ethical theory to “stand” after criticism is that it not be self-contradictory, I think you hold too low a standard for ethical theories, for then all ethical theories that are not “self-contradictory” would be equally valid. As I said before, there is more than one way to skin a cat. You don’t have to prove an argument is self-contradictory in order to undermine its credibility as a plausible ethical paradigm, [Added 6/1/15:] and by plausible ethical paradigm, I mean one that is designed for actual use by humans, not a theory to hang on the shelf as “non-self-contradictory” or “formally valid in form.” [End Addition]
I wasn’t attributing the statement’s contrary to anything you said; I was just pointing it out. In the context of our discussion, I think what I want to say is: just because you can’t make a case against the soundness of an argument doesn’t mean the argument is “good” overall, for as I have stressed, an argument can be undermined in other ways.
Accuracy is measurable only if the bullseye is clear. Although the argument for exclusive utilitarianism is “sound,” I have argued that the terms of the argument are unclear. My argument isn’t just that you can’t be completely accurate, for this would assume the bullseye is clear but the archer’s aim is not perfect. Rather, my argument is that accuracy itself in this paradigm is ambiguous and unclear. To continue the analogy: one can’t shoot at a bullseye unless one knows where the bullseye is fixed.
Actually, it’s not too strong as long as you are only referring to exclusive utilitarianism, not what I call the “utilitarian factor.” My point wasn’t that the word was too strong, but that your analogy was more misleading than helpful since it oversimplifies my argument about arbitrariness to “This doesn’t work well, so let’s dump it.” As I mentioned before, if the bullseye was clear but the archer imperfect, this would not be a good case against an ethical paradigm, but if the bullseye itself cannot be fixed with any confidence to a certain location, the archer will have nothing to aim at in the first place. One operating with an exclusive utilitarianism not only has no way to objectively determine the nature, importance, and value of consequences, but also has no way to know the extent of those consequences. Compare the two:
Your summary of my argument: “Hey, let’s dump all of quantum mechanics because, although we haven’t shown it to be wrong, it’s a bit hard to use, and you can’t be completely accurate with it.”
My actual argument: Hey, let’s dump exclusive utilitarianism, since there is no way to objectively (not “accurately”) measure with confidence whether the consequences of our actions will produce the most pleasure for the greatest number of people over the long haul, and that is all that counts in exclusive utilitarianism. This holds even though the form of the argument is sound, since soundness of argument never guarantees the plausibility of an argument overall (as I have pointed out), and since if we used only soundness of argument to justify our ethical paradigm, we could justify contradictory claims (as I illustrated with my “pain is good, pleasure is bad” argument).
Again, notice that “objective” isn’t the same thing as “accurate.” Is it better for Utilibob that he has a steady production of 10 pleasure receptors in the brain a day throughout his life and lives to be 90, or is it better that he have a steady production of 30 pleasure receptors a day (thus being much more pleased each day) but only live to be 30? In both scenarios Utilibob’s total pleasure receptor production is the same, but how can we judge which version of Utilibob’s life is “best” without some other standard for measuring? This affects very real practical questions in life, like: do we want to take risks in life to ensure we live to the full each day, or do we want to avoid risk as much as possible in an attempt to ensure a longer life? Some might say: “I’d rather die happy at 30 than live moderately happy for 90 years.” Others might look at the tragic accident of a risk taker (someone who is an adrenaline junkie, for example) and say: “That might be really fun, but it’s not worth the risk.” In exclusive utilitarianism, there is no other standard (or at least no other objective standard implied by the argument) for measuring questions like these, and thus the very meaning of the term “greater pleasure” is ambiguous and must ultimately be defined arbitrarily, unless the exclusive utilitarian borrows from some other argument, at which point she would no longer be an exclusive utilitarian, but would be using criteria foreign to the utilitarian argument.
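The Utilibob tie can be checked with trivial arithmetic (using the receptor counts from the example; “receptors per day” is of course a fictional unit):

```python
# Utilibob's two hypothetical lives: identical lifetime pleasure totals,
# so an aggregate-pleasure criterion alone cannot rank them.

def lifetime_pleasure(receptors_per_day, years, days_per_year=365):
    return receptors_per_day * years * days_per_year

long_mild = lifetime_pleasure(10, 90)      # 10 receptors/day for 90 years
short_intense = lifetime_pleasure(30, 30)  # 30 receptors/day for 30 years

print(long_mild, short_intense)            # 328500 328500: an exact tie
# Any ranking of the two lives must appeal to some further criterion
# outside the aggregate total itself.
```

Since the totals tie exactly, any preference between the two lives smuggles in a standard beyond “maximize total pleasure,” which is precisely the arbitrariness at issue.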
Not only this, but without having a way of measuring the total consequences of one’s actions (e.g. guessing only the intended immediate consequences of one’s actions without knowing their unintended effects on all future generations), the method is not only inaccurate, but impossible. Accuracy is something that takes place when the bullseye is clear; in exclusive utilitarian ethics, the bullseye is not clearly fixed. Furthermore, the archer’s hit cannot be measured objectively without borrowing from other criteria.
In quantum physics you can still approximate relatively accurate measurements, because rebounding particles off other ones gives an indirect (but objective) criterion for judging. There is no such criterion in exclusive utilitarianism (remember my example of Utilibob).
I’m afraid we will have to agree to disagree here. For a society or a person to think they can even approximate all the consequences of their actions for all generations, when in many cases those actions have unexpected and unintended consequences that end up affecting the happiness or misery of others, or of masses of others, is (in my opinion) incredibly presumptuous.
We could take my example with Mr. Pedo and transpose it to societal actions as a whole. Did the Euro-centric men who were stealing human beings from Africa to enslave them in America know that these slaves would eventually be set free in a country that would become the most wealthy nation in the world? What if more generations and more people descended from such slaves reap more pleasure in the long run (than their ancestors reaped pain from slavery) due to the wealth that came to America? People just never have the ability to see that far into the future or know what will happen, and I think one can only convince oneself that such judgments are possible to “approximate” if one has a naïve understanding of the full complexities (and mysteries) of how consequences work. Yet everything that will happen is entirely relevant, and even necessary, to judging what will cause “more pleasure,” and therefore to the rightness or wrongness of a given action or set of actions of a society. These are just a few examples, but you get the point, I’m sure. And don’t forget: if the actions of these men who stole slaves do turn out to cause the wealth of their descendants, which causes more pleasure than the pain inflicted by slavery, these men (in an exclusive utilitarian ethic) would have to be honored as heroes, for their actions were good.
I’m not sure why you conclude that my example “ignores” something that all my comments assume. My point in the ghetto dweller example is this: the ghetto dweller must weigh the pain of his family suffering from hunger and lack of clothes (as well as the psychological suffering that attends such poverty) against the suffering of his own psyche (and the effect this will have later on his relations with his family, which is hard to tell) and the suffering of the suburban dweller who loses his credit cards and cash (and whatever consequences this will have on his life and his family’s life, if he has a family), all in addition to the consequences this will have (whether foreseeable or unforeseeable, intended or unintended) for all future generations. These are both difficult for him to judge and (as I have argued) even impossible for him to judge. When you say 99% of cases are easier than I think, again, I have to conclude your understanding of the complex nexus of actions and consequences must be naïve (please don’t take offense).
I think your purview is too narrow (again, please don’t take offense). Sure, just because following an ethical theory through to its logical consequences (and saying “Mr. Pedo saves the day!” or “Euro-centric slave traders were heroes,” for example) is deeply disturbing to the majority of people’s consciences, it doesn’t necessarily logically follow that the theory is wrong. But I’m not as fixated on whether something such as this necessarily means the ethical theory is wrong. [Adding 6/1/15] (Here I mean logically self-contradictory.) It seems like you are applying terms like “good,” “right,” and “correct” to the ethical argument for utilitarianism, then using “good,” “right,” and “correct” vs. “wrong,” “flawed,” etc., interchangeably for “valid or invalid in terms of the argument’s form,” and applying it not just to the argument for utilitarianism, but to the ethical paradigm of utilitarianism. I would argue (1) that the goodness of a paradigm encompasses much, much more than merely whether its supporting argument is formally valid once you grant the premises and assume the terms of the premises are clear, and (2) that I would make a distinction between a logical argument and an ethical paradigm, just as I would between an outlook and one’s argument or reason for that outlook. [End Addition] I have more considerations in my ethical outlook than mere logical necessity. Among others, persuasiveness, plausibility, and various practical considerations must all be considered in the end.
I appreciate your engagement, and critical discussions like this always benefit everyone who is willing to be open minded. I know I have clarified (in my own mind) many of these matters as we discuss them. I hope you are also enjoying and benefiting from this exchange.
More Thoughts? 😉
Firstly, you have an extremely tenuous grasp on the meaning of the word “sound”. Go and read this: http://www.iep.utm.edu/val-snd/ , then go and reread my previous comment because I used the word a fair bit. Actually, reread my previous comment a lot of times. Because your previous comment implies that you didn’t.
Moral theories. Theories that determine what it means for an action to be right or wrong. They all rest on an argument (they aren’t pulled out of thin air). The respective arguments for them are what determine whether a moral theory is correct or flawed. No two moral theories can both have correct arguments (correct = true premises and a conclusion that follows from the premises) at the same time, because if they did, they would be the same moral theory, or one of them would be a subset of the other.
Hence, if you want to show a moral theory to be flawed, you NEED to show its argument to be flawed, because otherwise anything else you can throw at it is feckless, given that you haven’t shown it to be wrong (I hope it’s obvious that no flawed theory can predict things more accurately than a correct theory. If it isn’t, I feel sorry for you.) That means, to actually place any scratch on the metaphorical armour of a moral theory, you need to:
1) Show one of its premises to be wrong.
2) Show the argument not to follow.
3) Show the argument to be circular (which is fairly equivalent to 1))
4) Show the consequences of the argument to lead to the impossible, to contradict itself, or to something that’s obviously untrue (which leads to the implication that the argument doesn’t follow or its premises are flawed)
Pretty much: if an argument’s premises are true, and the argument follows, then (obviously) the conclusion is true (or the argument sound, if you’ve bothered to find the correct definition yet). The above four ways are ways to show otherwise, and hence show its argument to be flawed.
I also hope you know what I mean by the argument “following” (i.e. it being valid. I hope you also know what valid means. It says in the link at the start of this comment).
I didn’t say that the only way to show an argument to be wrong was to show it contradicted itself. Did you even read my last comment? (I honestly have no idea where you got that from. )
Now, have you done any of 1-4? No. You haven’t. Therefore the conclusion of utilitarianism still stands and ergo, your argument that your arguments have anything on utilitarianism is invalid.
What doesn’t show an argument to be wrong (I hope these are obvious):
1) Show that the argument leads to counter-intuitive consequences, or ones you don’t like – this is fallacious because whether someone dislikes an argument or finds it counter-intuitive obviously has no effect on its logical standing. Your intuitions and your feelings do not follow the laws of logic (duhhh), and therefore aren’t a way to tell that an argument is wrong.
2) Show that something has no way of objective measurement. Firstly, as you clarified in the above comment, you didn’t argue that utilitarianism is inaccurate, just that it can’t objectively measure things. You went from that to “therefore one can’t measure whether the consequences of our actions will produce the most pleasure for the greatest amount of people over the long haul with confidence”, which, obviously, is totally different from “therefore one can’t accurately measure what utilitarianism defines to be right or wrong”. I mean, come on, they mean the same thing, but, CLEARLY, as you explained, they’re different. Duh. You can’t deny that two things that mean the same thing have different meanings. Seriously? They mean the same thing. Go and reread my explanation in the analogy part of my previous comment, and actually understand what I write. That analogy explanation pretty much explains this point.
3) Claim that its premises are ambiguous. Actually, this is a valid method; you can’t determine whether the premise of an argument is right or wrong if you don’t know what it actually means. But, in this case, you haven’t actually done this. You’ve done the good ol’ “argue something that correlates closely to a rational argument, but isn’t actually a rational argument”. For your argument has gone: utilitarianism doesn’t tell you the different values of pleasures. No, it doesn’t. All it tells you is “you need to maximise positive feelings (pleasure)”. That’s not ambiguous. We all know what positive feelings are. We all know what negative feelings are. We don’t need to actually know which feelings are of greater value to determine the correctness or incorrectness of that premise; we just need to know theoretically. (Like in chemistry: to know whether a chemical of type A will react with a chemical of type B, if there is a relevant common link between the respective types that tells you whether they react, then you can use that to determine the reactivity without actually knowing all the individual chemicals in types A and B.)
Just so you know, you can determine if a pleasure is greater than another if people find it more pleasurable in the long and short terms, and if they’ve experienced both pleasures (I suppose you’d also have to take into account any possible biases).
Finally, I’m not naive. To know whether an action is right or wrong, you don’t need to know every single consequence; you just need to know enough to tell whether the action will cause more pleasure or more pain (than other possible actions). You don’t need to know every single minor consequence; usually the actual major consequences, the ones that matter, are obvious. Nearly all actions that I did last week have been forgotten and no longer have any effect this week, and all the ones that are still having an effect are those with effects that I foresaw. Yes, 0.5% of the time an action will cause a significant unforeseen consequence, but this isn’t a world where being kind to someone leads to living on the street (and when it does, it’s foreseeable); usually a relevantly major negative consequence has a relevantly major negative action to go with it. This is debatable, though, and not relevant until we get past the point that I posited at the beginning of this comment. From personal experience, though, I have only very rarely had a significant consequence from an action that was entirely unforeseeable. As I said, this is irrelevant until we get past the point at the start of this post. Just keep in mind: no ethical theory can tell the future; you’ve got to choose the one that’s the most correct and use it as well as you can.
There’s a whole lot of stuff irrelevant to the main point in contention that I could bleat on about; my main, relevant point, though, is the one at the start of this comment. Read and understand it. Also, about skinning the cat – I get it, there’s a whole lot of ways to skin a cat. But you need to actually explain how the ways come to skin the cat at all, because otherwise it just looks like a hopeful rain dance.
Sorry, wrote this comment in a bit of a mood. Hope I didn’t seem hostile. Yes, this exchange is enjoyable, but I’m starting to get tired at all the back and forth-ing and having to write a few thousand words each time, and it gets tiring when most of my points still aren’t understood. Anyway. Read and understand everything in this and all my previous comments. Peace out.
Perhaps we will eventually have to admit that reaching a mutual conclusion would take too much discussion; all these points are like rabbits: every point in contention spawns five offspring with each exchange.
Sorry it took me so long to reply to your last thread post, but I’m just now finding the time. Thanks again for your contribution to the dialogue here!
Among other things, your last round of comments seem to imply two things: 1) I’m not reading most of what you write, so I need to go back and re-read everything you said several times, and 2) none of my arguments have any validity whatsoever.
As for point 1, I think this strategy is a conversation stopper. It’s also condescending. Miscommunication and the need for clarification are common aspects of dialogue, and I don’t think it’s very charitable for you to assume that everything you said was clear and if there is a problem in communication it must be that the other dialogue partner simply isn’t reading what you wrote. You’ve done a good job in many of your previous posts clarifying points you made that I misinterpreted. For example, by “self-evident” you didn’t mean “self-evidently true,” and you were using the word “prove” in a very generic way, etc. These types of clarifications have helped our dialogue progress to where it is now, and it would be a shame for you to now begin assuming that I’m not even reading your comments, and to instruct me to read them over and over again until I “get it.”
As for point 2 … well, this is where the dialogue picks back up.
Please forgive me for the solecism. By “sound” I meant “valid” as in “valid in form,” as in “the conclusion necessarily follows from the premises.” I have gone back and put the right word in play [in brackets to preserve my original mistake even when correcting it] in my previous comments. With this terminological blunder aside then, my argument still stands, and based on your most recent comments, I think you would agree, for as I hope to show, you admit that the premises of an argument must be true (or at least plausible) in order for an argument to be sound overall (and now I am using the word “sound” in its proper way). It follows that the validity of an argument is no reason to accept its conclusion if the premises are questionable. But more on that in a moment.
Please forgive if I have misinterpreted your argument, but your previous comment seemed to imply that an ethical paradigm’s logical standing rested only on its argument not being a logical contradiction (which doesn’t address whether the premises are true or false, questionable or unquestionable), for you said:
By “something impossible” here you seem to mean something logically impossible (I get this from your example of a square circle which is a logical impossibility). From this comment I had the impression that all you cared about was whether the argument for Utilitarianism was valid, and that so long as it wasn’t a self-contradiction, it still “stands.” Another place where I think I got this impression was where you said:
Here you seem to rest the case entirely on validity of argument without concern for plausible premises. That is why I used the opposite premises (pain is good and pleasure is bad) to show that the validity of the argument or its non-contradictory form are no reason to claim that the argument “stands” in spite of my criticism.
Another place you appear to do this is here:
Leaving aside the circular aspect of the latter part of your comment here about “assuming it’s true,” and about something being “the only thing that can be correct in it’s specific type of thing” (a comment I am not sure how to understand), you seem to continue to stress only the validity of argument here.
And here again:
Although now I am confident I have misinterpreted you, all these comments served to give me the (wrong) impression that you believed that if my arguments against Utilitarianism haven’t shown the argument to be invalid, I had no case against Utilitarianism. I hope you can be sympathetic here, and see how I might have gotten that impression.
Now if I comb through all of your comments for help on this, I can see that at other places you appear to grant that validity of argument is not sufficient for the soundness of an argument.
Ah! There we have it. If I translate your term “assumptions” in the first comment to “premises,” then I can conclude the following: although at some places you appear to stress validity (or “form”) of argument only, you didn’t really intend to imply that the only way to argue against a moral theory is by showing that the argument doesn’t follow from the premises. You do intend to maintain that if one has a case against the assumptions of an ethical paradigm (or its premises), then such a case could undermine the soundness of the ethical paradigm, even if the argument for such a paradigm is valid.
But you appear to contradict this in many other places (as I have shown by my quotations above). To give an example from your most recent comments:
But if what I have established so far holds (and you appear at other places to agree), then one does not NEED to show its argument to be flawed, for this is not the only way to undermine an ethical paradigm: one can also attack the premises. Thus, although at some places you appear to grant this, at other places you say things that appear to contradict it.
But this is not the only place you appear to be inconsistent. In your last round of comments, you say:
Besides the observation that I have made several arguments (not just one) against utilitarianism, you have already granted that at least one of my arguments follows the indirect method of reasoning against a view known as Reductio ad ridiculum. We saw from our mutual checking of the peer-reviewed philosophy encyclopedia that this type of argument is categorized under the broader rubric of Reductio ad absurdum, and is not categorized therein as a logical fallacy. Which brings me to another point:
Some of our dialogue is being held up by your lack of response, for you argued:
I granted you the more specific term Reductio ad ridiculum (and we can leave aside whether Reductio ad ridiculum is considered a type of Reductio ad absurdum), and I challenged you to provide an argument for why such a form of reasoning should be considered a “logical fallacy,” for it is a common logical form of argument (enough to have its own name!) and yet I’ve never seen it categorized as a logical fallacy. The ball was left in your court, then, to show that Reductio ad ridiculum is a logical fallacy, but you didn’t respond at all to this point; you just kept telling me to go back and re-read everything you had already said.
That brings me to another point: instead of fully engaging with my arguments, in many cases you are shortcutting the exchange by jumping to the denouncement of them. I have provided several arguments that jumpstarted our discussion, about which we have continued to dialogue. A great part of my case comes from arguments against the premises of utilitarianism, and another part tries to show that utilitarianism is impossible. I labored to illustrate the ambiguity in the most important parts of the premises (What is “pleasure”? What is the “greater pleasure”?) and argued that even if we had an objective way of defining these, applying them (which is what the theory of utilitarianism is all about) would be impossible. Rather than sustained, satisfying engagement with these arguments, you have now resorted to denouncements:
Here you are assuming what you are supposed to be showing: that my arguments don’t undermine the premises (# 1 in your list) and that utilitarianism is not impossible (possibly #4 in your list if we can go beyond mere logical impossibility to practical impossibility).
Here are at least two of my arguments in summary form, which I hope to clarify further in the comments that follow them.
:: The Inevitability of Arbitrariness ::
• In order for an ethical paradigm to be sound or intelligible, its key terms must be at least relatively clear (otherwise they will be defined arbitrarily).
• Several of the key terms in the premises for the Utilitarian argument are not even relatively clear. (remember all my examples and illustrations of this)
• Therefore, the ethical paradigm is unsound or unintelligible.
:: The Omniscience Requirement ::
• The Utilitarian paradigm requires for us to do what will cause “greater pleasure (and less pain)”
• In order for us to do what will cause “greater pleasure” we must know what the consequences of our actions will be.
• In order to know what the consequences of our actions will be, we would need to know immediate and future consequences.
• But it is impossible to know the full range of all future consequences for any given action or set of actions by an individual or society.
• Therefore, we cannot know what the immediate and future consequences of our actions will actually be.
• Therefore, we cannot know what will cause the “greater pleasure.”
• Therefore, we cannot do what is required from us in the Utilitarian paradigm.
• Therefore, utilitarianism is impossible.
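The impossibility premise above can be made concrete with a bit of back-of-the-envelope arithmetic. The following sketch is my own illustration, not part of the original exchange, and the branching and horizon figures are invented; it only shows how quickly the space of consequence-chains explodes even under modest assumptions:

```python
# Hypothetical illustration: how fast the space of future consequences grows.
# "branching" = possible follow-on situations opened by each action (invented),
# "horizon"  = how many steps ahead we try to evaluate (invented).
branching = 3
horizon = 20

# Each step multiplies the number of distinct consequence-chains by `branching`.
outcomes = branching ** horizon
print(outcomes)  # 3486784401 -- over three billion chains from one starting point
```

Even granting the utilitarian perfect knowledge of each local transition, scoring every chain becomes hopeless long before the horizon reaches a human lifetime, which is the practical face of the omniscience problem.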
As far as showing the premises to be ambiguous, you admit that:
[Adding 6/1/15:] When you say “All it tells you is ‘you need to maximize positive feelings (pleasure)'” Well, that’s not a fair summary of what exclusive utilitarianism tells you. That could summarize my utilitarian factoring: you need to consider c as well as a, b, d, e, and f (with c being utilitarianism). Here is a fairer summary: the only criterion for right and wrong in light of all possible actions to be taken is which one will cause more total pleasure receptors to go off in the immediate and indefinite future.
The word “pleasure” doesn’t stand alone in the argument—it has to be understood as a certain kind of pleasure, namely, the “greater” pleasure. So it’s not just that I’m arguing that we don’t know what pleasure is—I am willing to grant you for the sake of argument that pleasure can even be mathematically determined based on number of pleasure receptors generated as a consequence.
You seem to be stretching here. I hope now you can see, even based on your own last comment, that judging such things can boil down not only to intuition or prejudice but also to false memory, as the science of psychology has demonstrated that people’s memories are often reconstructions of the past rather than pure recollections: we take parts of what we actually remember and fill in the gaps with what we intuitively imagine.
But I have not simply argued that utilitarianism doesn’t tell you the different values of pleasure, but that especially the key term “actions that cause the greater pleasure” in the premises is ambiguous for two reasons: it is impossible to know what “actions” produce the “greater pleasure,” and I also argued that even if we define “pleasure” objectively in terms of how many pleasure receptors are produced in the brain, this still leaves ambiguity (remember the “safe” person vs. the “adrenaline junky” argument, where I illustrated how an equal amount of such pleasure receptors over a given person’s lifetime would still leave morally ambiguous two radically different approaches to life—and here by “morally ambiguous” I simply mean that when choosing between these two options as a utilitarian, one would have no means of deciding which to follow).
If an ethical paradigm (designed to guide action) is only clear in theory, but not when we begin to apply the terms of the premise to a given situation, its design has failed.
Now since you agree with me that it’s impossible to know the future, I cannot see how you can escape my argument that it’s impossible to know what “actions” would produce the “greater pleasure,” since to know this, we must know all the possible future consequences (whether expected or unexpected, immediate or far off) of every action or set of actions of an individual or society on all future generations (whether they are affected intentionally or unintentionally). For utilitarianism to “work,” then, you would have to know these future consequences. Now unless you intend to amend the utilitarian theory to exclude long-term future consequences from the moral criterion, you have a serious problem with the theory, for its moral criterion is rendered impossible to use, since whether any given action produces or does not produce the greatest pleasure for the greatest number of people over the longest period of time is completely ambiguous without knowledge of the total consequences.
Your argument that “you can determine if a pleasure is greater than another if people find the pleasure more pleasurable in the long and short terms, and if they’ve experienced both pleasures” is not only a bit redundant, but does not reckon with the impossibility of consulting all possible future generations who will have been affected by the acts and actions of individuals and societies in the past and present. Furthermore, consulting them may not do any good, since, as you admit, “you’d also have to take into account any possible biases.”
I feel as though you have receded from the utilitarian criterion when you say:
Incredibly, you are here assuming that unpredictable consequences are 1) minor and 2) unimportant for judging whether an action (or set of actions) of an individual or society produces the “greatest pleasure” in the long run. But as my examples have already shown, unpredictable consequences are not always minor, and they can turn out to be very important for judging whether a given action or set of actions produces “greater pleasure” in the long run. For example, who could’ve predicted that Mr. Pedo’s actions would end up saving a girl’s life? Who could’ve predicted that slave traders would, over generations, potentially end up producing “more pleasure than pain” for the future generations of enslaved Africans in America? In these scenarios it’s the unpredictable (and often ironic) events of history that make knowing all future consequences necessary for the utilitarian paradigm. You can’t arbitrarily say “those don’t count” and then denounce my argument. You have to deal with them.
If you think that only predictable consequences are major or important, you have not shown this to be the case, and my examples have shown how the opposite appears to be the case in the concrete unfolding of actions and their consequences. Your ex-nihilo statistical claim that “0.5% of the time, an action will cause a significant unforeseen consequence” is, in my opinion, naïve and unfounded, arbitrary and implausible (where did you get it?). It does not salvage utilitarianism from this argument. Your examples from personal experience (“from personal experience, though, I have only very rarely had a significant consequence from an action that was entirely unforeseeable”) are subjective and parochial. We have to take into account human experience as a whole up to this point in history as best we can, not just our own subjective and limited experience. As my example of the slave traders’ unintended consequences shows, history tells us that in many cases, relevant, important, and major future consequences can be unpredictable. And as my example of Mr. Pedo also shows, consequences unpredictable to the individual and counter to his or her intentions can fundamentally change whether the action they are performing turns out to be “good” or “bad.”
I conclude that utilitarianism, even if “clear” in theory, is not clear in the application it is designed for. I conclude that utilitarianism is not just “inaccurate” but impossible. I conclude that unpredictable consequences can be important, and since people cannot know the relevant consequences of their actions, they will be left (in the utilitarian paradigm) to ultimately operate off biases rather than knowledge (biases that will be arbitrary).
Here our dialogue has regressed to cutesy rhetoric. If there has been miscommunication and lack of clarity, I confess that I am partly responsible, but I certainly don’t think that what I’m doing is a “hopeful rain dance”! LoL!
Sometimes dialogues like this are wearisome because with every exchange new misunderstandings occur. But if old misunderstandings from previous comments are thereby clarified, progress is (I think) still being made. I hope my last round of comments will take us a step further.
“Here you are assuming what you are supposed to be showing:…”
No. I’m assuming what I have shown, what you haven’t shown, and what you have yet to show, which you eventually try to show in your comment, which I show to be flawed below.
Also, in regards to all the miscommunication: when I say I’ve shown an argument to be wrong, etc., I mean the “argument”, and the “argument” = premises that follow to form a conclusion. Not just the validity. So to show an argument to be wrong, you can show it to be invalid, but you can also show its premises to be wrong.
Also, the assumptions of an argument = the premises. Depends on the context though.
So, I’ve decided to only target the actually relevant stuff in this discussion, as it’s really tiring me out to try and respond to everything. So, the only point I’m covering is: do any of your arguments fulfil 1–4? No. They don’t (unless you have an incredible amount of writing hidden in ellipses). Observe:
First relevant part (to do with ambiguity in utilitarianism):
“But I have not simply argued that utilitarianism doesn’t tell you the different values of pleasure, but that especially the key term “actions that cause the greater pleasure” in the premises is ambiguous for two reasons: it is impossible to know what “actions” produce the “greater pleasure,” and I also argued that even if we define “pleasure” objectively in terms of how many pleasure receptors are produced in the brain, this still leaves ambiguity (remember the “safe” person vs. the “adrenaline junky” argument, where I illustrated how an equal amount of such pleasure receptors over a given person’s lifetime would still leave morally ambiguous two radically different approaches to life—and here by “morally ambiguous” I simply mean that when choosing between these two options as a utilitarian, one would have no means of deciding which to follow).”
Okay, this point is utterly ridiculous. Just because two different options in utilitarianism are morally equivalent (junky vs. safe person), that is nothing against utilitarianism. How on earth does the fact that two situations are morally equivalent under utilitarianism count against it? Does it matter that one has no means of deciding? No. One decides the same way as when I’m choosing between my two favourite colours of handkerchief; just go one way or the other. Although, generally, utilitarianism would go with the safe person, as one generally reaches a greater total amount of pleasure if one isn’t on a continuous roller-coaster of ups and downs; the greater pleasure lies in just plodding along happily (think Epicurus), because extravagant pleasure often just leads to greater sadness when you lose that pleasure, and to the greater pain that lies in the unfulfilled desire you incited by eating too much chocolate (or other, somewhat more lethal substances).
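The Epicurean point here can be put in toy numbers (mine, purely for illustration; every hedonic score below is invented): a steady moderate pleasure can out-total a boom-and-bust pattern once the crashes are counted against the highs.

```python
# Invented hedonic scores, one per day, for two styles of life.
steady = [5] * 10              # ten days of modest contentment
rollercoaster = [10, -4] * 5   # alternating highs and hangovers

print(sum(steady))         # 50
print(sum(rollercoaster))  # 30 -- the "safe person" wins on this toy tally
```

Of course, the numbers could be chosen to make the rollercoaster win instead, which is exactly the ambiguity under dispute in this exchange.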
Second relevant part:
Your argument that “what pleasure actually is is ambiguous”: I covered this in my previous comment, and you actually quoted the part where I responded to it. You didn’t argue against it, so maybe you conceded it. Pretty much, we know enough about what “positive feelings” or “pleasure” is/are to determine whether the premises that mention “pleasure” are correct or incorrect.
So the above two things I covered show that you haven’t actually shown the premises of utilitarianism to be ambiguous:
Therefore, the second premise of your “inevitability of arbitrariness” argument as set out in your above comment has not yet been shown to be true.
Onwards to your “The omniscience requirement” argument, as set out above.
For this, I will simply target the second premise (and there appears to be a bit of a misunderstanding here):
“In order for us to do what will cause “greater pleasure” we must know what the consequences of our actions will be.”
This is where it gets a lil’ complicated. You see, utilitarianism more specifically says, “cause as great an amount of pleasure, and as little suffering, as possible” (you accept that, right? It depends, but that’s usually implied, at least). The key word is “possible”. You see, it’s impossible for us to know the future, so we just have to act in order to cause as great an amount of pleasure as possible according to what we CAN do. We just have to do what will cause the greatest amount of pleasure according to what’s in our sphere of control. What I mean in a syllogism:
1) Utilitarianism requires us to cause as great an amount of pleasure as “possible”, and as little suffering as “possible”.
2) It is impossible for us to tell the future.
3) It is therefore impossible to know what exactly the consequences of our actions will be.
4) It is therefore impossible for us to cause the greatest amount of pleasure possible according to the known consequences of our actions, since we can’t know them (always keeping in mind the possibility of a piano falling from the sky. Jokes.)
5) Therefore utilitarianism doesn’t expect us to know the consequences of our actions, just to act with the purpose of causing the greatest amount of pleasure possible.
Cogitasne? (Latin for “do you understand?” Sorry. In a multilingual mood today.)
So, since premise no. 2 of your second syllogism isn’t true, your second argument isn’t correct.
In conclusion, premises from both your syllogisms in your above post are incorrect. Therefore you haven’t fulfilled any of 1-4 and therefore your argument that your arguments have anything on utilitarianism is incorrect.
Good day. Also, I must compliment you on your syllogisms. I quite like the way the second one followed. It was very eloquent, less the second premise. 🙂
Actually my bad. It’s more of the first premise that’s flawed. Depends how you look at it. Anyway, hopefully you still understand my response to that anyway.
Thanks to your persistence, I believe we are getting somewhere important now, as is evidenced by our last round of comments.
That helps clarify things tremendously. Thanks.
Thank you sir. It’s good to see that we can appreciate each other’s rational discourse, and as my next point will show, I have also appreciated one of your critiques:
1. :: Are the Premises Ambiguous? ::
“Utterly ridiculous” is an exaggeration, but it’s definitely not my best argument. Every moral theory still leaves the agent in situations where they must choose between two apparently equal goods, so I’ll grant your objection sympathy here.
It is worth pondering, however, how radically different lives (the life of a ruthless mobster, the life of a philanthropist, the life of a drug abuser and pimp, the life of a saint or monk) could all potentially be seen as causing roughly the same number of pleasure-receptors over the lifespan of the person considered, and could therefore be considered equally ideal (that is, if the quality of “good” is defined only in terms of pleasure receptors). Of course, the extent of “good” requires that we also take into account how their lives affect others, but the ambiguity that exists at the individual level is only compounded when its scope is widened to consider the pain and pleasure of all those affected by the individual’s actions. One must first ask, “What experiences or what kind of life leads to the greatest pleasure?” before one can ask, “What actions can I take that will produce these experiences or this kind of life for the greatest number of people over the longest period of time?” The utilitarian criterion for moral action thus turns out to be incredibly accommodating of a wide variety of lifestyles usually considered morally at odds (or at least morally disparate). It’s not a knock-down argument against utilitarianism, but it is worthy of note and of practical concern for anyone trying to live by the utilitarian criterion, for judgments about what will maximize pleasure for other persons or groups implicitly rely on judgments about what causes pleasure for individuals.
Much of my discourse is prefaced with “but even if” arguments, which shows my willingness to allow you (for the sake of argument) to take certain things for granted. But I have let you off the hook too easily on a few of your attempts to critique my argument from ambiguity. For example, you said the premises are not ambiguous because you could define “good” in terms of pleasure and pleasure in terms of pleasure receptors in the brain. I went along with it for the sake of argument, but there are a few critical questions that must be asked which pose problems for this way of thinking:
1) Have scientists really learned to accurately measure pleasure quantity in the way that you propose? Let us not forget that the human brain is a great mystery to scientists still, and although we are learning more with the passing of each decade, our understanding is still largely undeveloped. Of course we have identified certain brain activity that indicates pleasure receptors produced in the brain that release dopamine, but can scientists actually quantify precisely such differences in pleasure experiences between two people experiencing similar kinds of pleasures? Take, for example, how scientists compare two types of experiences and suggest they cause “similar” reactions in the brain by showing pictures of the brain where active areas are highlighted in colors (http://www.huffingtonpost.com/2011/05/18/falling-in-love-with-art-_n_861812.html). This seems to me a quite primitive grasp of brain activity compared to being able to measure exactly how many pleasure-receptors are produced in the brain in a given period of time. This guy just produced 80 pleasure receptors simply by putting on his favorite song, but it only produced 40 for his friend who was present and enjoyed the music, but not as much. Have scientists really developed a tool that can measure pleasure quantity similar to a calculator so that pleasure quantity can be so precisely and neatly compared? If you know they have, please tell me where to read about it, but until then, I think your grounds for “objectivity” in defining and measuring pleasure may be quixotic after all.
2) Utilitarianism also has to be able to discern and quantify pain, because it requires us to judge whether the pleasure produced will outweigh the pain produced. Have scientists really reduced this to a calculus as simple and straightforward as you have suggested, and as clear-cut as we would need it to be to arbitrate between experiences? Are there “pain receptors” produced by the brain that scientists can count (19 produced in this scenario, 67 in that one, 960 in another)?
3) And what about comparing the two (pleasure and pain)? Have scientists really developed a way to objectively measure quantity differences in pleasure and pain, correlated to easy-to-calculate numbers that could be compared like a greater-than or less-than math problem (60 > 50, 700 < 900)?
4) Furthermore, keep in mind that not all chemicals are created equal; some are more potent than others (compare dopamine with deadly venom, for example, and there is an obvious difference in quality and potency). Measuring the physical quantity of pleasure receptors vs. (hypothetical) pain receptors, if there is such a thing, may therefore be an entirely misleading enterprise anyway. This makes what will produce the most pleasure and least pain ambiguous, and thus makes key aspects of the premises ambiguous.
5) Yet even if scientists did one day have all this down to such an exacting science, and had determined, for example, that pain receptors and pleasure receptors were equivalent in potency, then for just one individual to judge what produces the greatest amount of pleasure for her (let alone for all those affected by her actions!), she would have to possess an accurate, state-of-the-art mobile PRC (pleasure-receptor calculator) in order to objectively measure the consequences of her actions.
6) Even then she could never know for sure that some alternative action she could have chosen (whether one she contemplated as an alternative or one she never contemplated) wouldn’t have unexpectedly produced a significantly higher number of pleasure receptors (and fewer pain receptors), for such possible alternatives could not be measured with the calculator; they would be hypothetical, unrealized consequences. This makes it impossible to know whether one is acting in a way that produces the greatest possible pleasure for the greatest number of people over the longest period of time.
7) How would such a person, with her own personal PRC, judge what will make other people experience pleasure, since what makes one person happy might be boring or even depressing for someone else, and vice versa? (Consider: if you let evangelicals, for example, define happiness and pleasure, they might say a relationship with God will produce the deepest, most abiding, and most consistent pleasure in people. Empirical studies have tended to show, regardless of whether there is a God or not, that religious people on the whole tend to be happier (= more pleasure receptors). Imagine what they would do with utilitarianism: to convert as many people to God as possible would produce the greatest amount of pleasure for the greatest number of people! That brings me back to my point about how incredibly accommodating utilitarianism could be. A secularist utilitarian would cringe at this application of the utilitarian ethical paradigm, but it underscores the fact that many different actions, or sets of actions, may not have any inherent pleasure-producing power; they may largely (or in many cases entirely) depend on the individual’s brain chemistry, or, we might say, on her individual preferences, penchants, peculiarities, and appetites.) And remember, judgments about what will maximize pleasure for other persons implicitly rely on judgments about what causes pleasure for individuals.
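The imagined calculus in the points above can be put in concrete form with a few lines of code. Everything here is invented for illustration (the `net_hedonic_value` helper, the receptor counts); no such measurement instrument exists, which is precisely the point being argued:

```python
# Purely illustrative sketch: pretend we could count "pleasure receptors"
# and "pain receptors" as the dialogue imagines. Every number below is
# invented; the sketch only shows what the utilitarian calculus would
# require if such measurements existed.

def net_hedonic_value(pleasure_units, pain_units):
    # Assumes pleasure and pain share one common unit of equal potency,
    # which is exactly the assumption questioned in point 4 above.
    return pleasure_units - pain_units

# Two listeners hear the same song (the hypothetical from point 1):
listener_a = net_hedonic_value(80, 0)   # imagined count: 80
listener_b = net_hedonic_value(40, 0)   # imagined count: 40

# The utilitarian comparison then reduces to a simple inequality ...
print(listener_a > listener_b)  # True

# ... but only given a shared, commensurable unit. Nothing in current
# neuroscience supplies the inputs to this function, and unrealized
# alternative actions (point 6) could never be measured at all.
```

The code runs, of course; the argument is that its inputs are fictions, so the tidy `>` comparison the theory needs is unavailable in practice.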
2. :: Transitional Comment ::
Although I’ve mostly revisited the ambiguity argument in detail above, I don’t really need it. For the sake of argument, I could still grant you that the premises of the utilitarian argument are not ambiguous, for even then utilitarianism is still not only incapable of being “accurate,” but is actually impossible.
3. :: Is Utilitarianism Impossible? ::
First, this argument does not rescue utilitarianism from my criticism that utilitarianism is impossible. To know what will cause the greatest amount of pleasure and the least suffering possible, one would still have to know all the possible consequences of one’s actions on all affected over the longest period of time. Such knowledge is still impossible.
Or are you trying to turn utilitarianism into a motivation-based ethical paradigm? This would mean that as long as the moral agent thinks what she is doing will cause the greatest amount of pleasure for the greatest number of people over the longest period of time, her actions are right and good. But this rests the moral criterion for good action on motive (that is, on whether the agent thinks what she is doing is best, regardless of what the actual consequences of her action will be). In this case consequences are no longer important for the agent; all that matters is that she is motivated by thinking her action will cause the best result.
4. :: What is Utilitarianism Anyway? ::
Finally, and perhaps most importantly, as I suspected, you are now retreating from your position by the way you have attempted to define utilitarianism’s “good” and “bad” actions apart from their actual consequences, in terms of mere possibility, or by limiting relevant consequences to only those within your own personal power to control (which raises a whole new set of philosophical questions about what should be considered “within our control” or our “sphere of influence”). Utilitarianism is a brand of consequentialism that defines actions entirely in terms of their consequences (not their possible consequences). See, for example, the Stanford Encyclopedia’s definition (http://plato.stanford.edu/entries/utilitarianism-history/), or The Cambridge Dictionary of Philosophy, 2nd ed., Robert Audi, gen. ed. (New York, NY: Cambridge University Press, 1999), 942-944. Or you can just check a standard introductory philosophy textbook such as Ed. L. Miller’s widely used Questions that Matter: An Invitation to Philosophy, 4th ed. (New York, NY: The McGraw-Hill Companies, Inc., 1984), 445 ff. (I have the 4th edition, but the most recent edition is found here: http://www.amazon.com/Questions-that-Matter-Invitation-Philosophy/dp/0073386561/ref=sr_1_1?ie=UTF8&qid=1349176364&sr=8-1&keywords=Questions+that+Matter%3A+An+invitation+to+philosophy.)
5. :: Relevance Rubric ::
I take it the point about how you failed to back up your claim that my reductio ad ridiculum argument is supposedly a logical fallacy wasn’t relevant to our discussion? It concerns one of my arguments about how utilitarianism leads to absurdity (and here I do not mean logical absurdity, as in self-contradiction, as we have already discussed; but that doesn’t make it irrelevant, since we have to consider more than mere logical necessity or logical contradiction, as you have agreed). Utilitarianism makes Mr. Pedo a hero and certain slave traders admirable for their good deeds.
:: Conclusion ::
I conclude: what causes “more pleasure and less pain” is not as straightforward as you suppose. But even if it were, utilitarianism would still be impossible, and this holds true even if we take your new definition (which is really an abandonment of utilitarianism’s consequentialist criterion). It holds true also of utilitarianism as it is defined in encyclopedias, philosophical dictionaries, and textbooks. Therefore, my syllogism still stands, and your definition sneaks in an aberration of the actual definition of utilitarianism in order to pry it out from the grips of my syllogism.
All I need to undermine utilitarianism as an ethical paradigm (designed to be followed) is to show that (apart from its being impractical, inaccurate, absurd, ambiguous) it is actually impossible. In the course of our dialogue, it all seems now to hinge on the actual definition of utilitarianism—specifically on whether its criterion for “good” or “bad” behavior is based on the actual consequences of actions (or the hypothetical actual consequences of some alternative set of actions). I believe the answer is found abundantly in philosophy books, encyclopedias, and dictionaries (as I have shown). Do you have any sources to show utilitarianism is not a form of consequentialism? (that is, that utilitarianism after all does not use all actual consequences of actions as the only criterion for “good” or “bad” decisions?)
I close with an excerpt from The Cambridge Dictionary of Philosophy, 2nd Ed., pg. 942.
This response will have two broad sections. Firstly, I will cover why the fact that the doer is forced to try to predict the consequences of her actions isn’t a sound point against utilitarianism. Tied in with this, I will also cover whether utilitarianism actually expects the doer to be able to tell the future. Secondly, I will cover the point of relevancy: why some points aren’t relevant, and also a point at which you misinterpreted one of my arguments.
Firstly in regards to the whole future side to it:
Utilitarianism makes the claim that the only thing that determines how good an action is, is its consequences (more on this later).
It follows that, if this claim is true, all ethical theories that don’t judge actions by their consequences are nonsensical.
Thus, all ethical theories that are sensical will always force the doer to try to predict what the consequences of her actions will be, as it’s only the consequences that matter (by this criterion).
It follows, then, that you can’t fault an ethical theory for forcing the doer to make a prediction about the future; if it didn’t, it would be nonsensical. It’s a necessity.
Thus, if you really wanted to judge a theory by the fact that it forces doers to predict the consequences of their actions, you’d have to dispute the claim that the consequences of an action are all that matters. You tried to dispute this claim in your first comment about the Pedo to “D.”, but you never actually formed an argument, apart from telling a story. The implied argument in that comment is covered below.
You haven’t disputed the claim that actions can only be judged by their consequences; thus, all your arguments that centre around it forcing the doer to make a claim about the future are flawed.
“But E, you assume what you should be trying to show in your above argument. You assume that the only thing that determines how good an action is, is its consequences.”
No. The thing is, I’m not trying to show utilitarianism to be right; I’m trying to show that your arguments against it are wrong. What I’m actually showing is that, since you haven’t shown X part of utilitarianism to be flawed, your argument that utilitarianism is flawed because of Y fails, because Y depends on X. And anyway, if you did actually show X to be flawed, you wouldn’t have to worry about Y, because you would have shown utilitarianism to be wrong anyway.
Note: X = the claim that the consequences of actions are all that matters. Y = the claim that utilitarianism forces the doer to predict the consequences of her actions.
“But, E, my argument was really that utilitarianism expects the doer to actually be able to tell the future, and since this is impossible, utilitarianism is impossible.”
Ah, my young Padawan, there is a difference between a theory defining how good an action is by its consequences, and defining how good someone is for her actions. Utilitarianism does indeed define an action’s goodness (the action itself) by its consequences; however, it does not follow that it judges a person by the consequences of her actions. As you inadvertently quoted above in your comment, Mill said that the utilitarian paradigm for a person is: “…always act so as to produce the greatest happiness.” Utilitarianism judges people by how closely they follow its paradigm (duh), and that paradigm says that they should “act so as to produce the greatest happiness.” People can’t tell the future, so it would be grossly unfair for utilitarianism to judge people otherwise, and fortunately, nearly all forms of it don’t (some would, and I agree those forms would be flawed).
Thus, utilitarianism, as you neatly demonstrated in your quoting above, judges people by how closely they follow its paradigm, and judges actions themselves in a different manner: by their consequences. Two somewhat different things; one is derived from the truth of the other.
“But, E, doesn’t that mean that utilitarianism is simply a motivationally based ethical theory, not a consequentialist one? But that wouldn’t be utilitarianism! *refers to various encyclopaedias*”
Well, firstly, I think J.S. Mill would know, but I’ll answer: no. Utilitarianism does judge actions by their consequences, and that is its whole basis. As quoted from the Shorter Routledge Encyclopaedia of Philosophy:
“Consequentialism assesses the rightness or wrongness of actions in terms of the value of their consequences.”
And that is exactly what utilitarianism does. However, judging actions is a completely different thing from judging the people themselves; people can’t tell the future, and they can only do so much. Why, were it otherwise, it would be completely unethical for people not to turn themselves into gods and start changing the universe for good; however, they can’t. Thus, utilitarianism, as J.S. Mill said above, judges people differently than it does their actions.
Furthermore, utilitarianism doesn’t judge people just by their motives; it judges them by how much they rationally ask “what will cause the greatest pleasure/happiness?” and come to a rational answer, and then it also judges them by how much they act to produce this greatest happiness.
“But E, surely how good the actions of someone are, and how the actual person is judged, are the same thing?”
No. The judgement of actions looks simply at their effects. However, when judging a person, a plethora of other factors comes into play (factors that obviously can’t just be ignored): she doesn’t know what the consequences of her actions will be; she may be disabled in some fashion; maybe she doesn’t even have the possibility of doing a good action, just a choice between an action that’s bad and one that’s worse. Thus, you need to assess the two things completely differently, and judge the person by how much she “…act[s] so as to produce the greatest happiness.” How she acts “so as to,” not how much she actually causes.
“I’m still not convinced.”
Well, form a rational argument that follows and doesn’t misinterpret my arguments, and I’ll consider it.
Thus the above dialogue and argument, in case you didn’t notice, shows that your arguments about judging utilitarianism by the fact that it assumes the doer can tell the future, and about my having changed the definition of utilitarianism, are flawed. The actual truth is this: you didn’t know what utilitarianism was in the first place; you just went ahead and misinterpreted it. It was an understandable misinterpretation, though.
Now, onto the second section of my response, covering your misinterpretation of an argument, and my explanation of relevancy.
I will respond to the part of your comment regarding my argument that: “….you said the premises are not ambiguous because you could define “good” in terms of pleasure and pleasure in terms of pleasure receptors in the brain.”
Now, before I go into the way you’ve misinterpreted my argument, I’ll just soften the blow by saying: I appreciate your efforts to understand what I write, even if you don’t end up doing so.
Okay, firstly, I never said that. I only brought up the pleasure receptors to demonstrate that determining what is good under utilitarianism does in fact have an objective basis, and that it is in this way that it differs from the subjectivity of our intuitions (there was more to this argument, but I’ll leave it at that). This was in response to your point that “for all my talk about intuitions being subjective, utilitarianism is very subjective too” (paraphrase), and I was showing how utilitarianism is different from the subjectivity of our intuitions.
My actual argument was this: we know enough about what pleasure is to determine whether the premises using that term (in the argument for utilitarianism) are true or not (and, obviously, that’s the greatest level of clarity we need to determine the truth of utilitarianism). Hence, we can determine whether the conclusion of utilitarianism is true or not. You don’t appear to dispute this; an extremely significant part of your argument centres around the ambiguity of the conclusion (how the doer can’t know the consequences of her actions, etc.) rather than around the premises being too ambiguous to be determined true or not. This brings me to the relevancy part: why some of your points aren’t relevant to whether utilitarianism is the best ethical theory.
On Relevancy (I’ve covered this point a few times in above comments, just so you know. Although in different ways.):
My basic argument is this: whether these points are sound or not has no effect at all on whether utilitarianism is flawed or not. The points I address are:
1) Different lives can be viewed as having the same pleasure.
2) Scientists still don’t know how to measure pleasure.
3) It’s almost impossible to discern and quantify pain and pleasure.
4) The doer can’t judge all the consequences of her actions.
5) The doer can’t judge what makes specific individuals happy.
6) The theory leads to consequences that contradict intuition or are hard to accept. (Reductio ad ridiculum. See end.)
To do so, I will demonstrate two different hypothetical situations: one where all of these points are true and utilitarianism, also true, is unaffected; and one where utilitarianism is false, and these points still have no effect on it. I will thus show that these points are utterly and totally irrelevant to the argument for utilitarianism.
So, firstly, let’s assume that utilitarianism is true, and all of your above six points I’m targeting are also true to a significant degree. Thus, what is good is defined as what causes the most pleasure.
However, as a result of the truth of your six points above, it’s almost impossible to determine what is right and wrong, and thus people are very confused: they don’t know what to do, and they don’t like which actions turn out to be ethical. But this doesn’t change the fact that utilitarianism is right, and pleasure and pain are still thus defined as the only things that are good and bad. And because utilitarianism itself is still true, no other ethical theory can usurp it.
All that has happened as a result of your six points is this: what is good and bad is very hard to measure, and thus deciding which actions are good and bad is quite hard. It has had no effect, however, on the actual truth of utilitarianism itself, and it has given no more soundness to any other ethical theory; utilitarianism is still as true as it would have been if these six points of yours weren’t true.
Let’s go to the next hypothetical situation: utilitarianism is false, and these points are true. Once again these points obviously have no effect on the truth of utilitarianism, as it’s false anyway, and they don’t even make it any more false.
Thus, in all situations, both assuming utilitarianism to be true, and to be false, your above six points clearly have no effect on it at all.
“But, E, my points show utilitarianism to be flawed at a point where it is neither wrong nor right, therefore you haven’t covered all possible situations! As, if utilitarianism is true, then it’s true, but if it’s false, then it’s false, so these situations don’t allow for my points to have any effect.”
Ah, my young Padawan (I just love that phrase. It’s not meant to be condescending. It’s just awesome.). I suspect the point at which it’s neither true nor false is the point at which you mean it is still being decided whether utilitarianism is flawed or not. This is, however, the point at which I define utilitarianism to be assumed true, ie. you assume the argument to be true until you find a flaw in it, for whatever reason. Your above six points only show that the usability, appeal to intuition, etc. of utilitarianism isn’t perfect, and thus have no effect on the premises or the validity of the argument in that situation.
“But, E, once again, you are assuming what you should be setting out to show; that these points don’t find a flaw in the argument of utilitarianism. Your argument is circular.”
No. My argument isn’t circular. My argument would only be circular if its conclusion were that “your points don’t find a flaw in the argument of utilitarianism.” As it is, my conclusion is that “your points have nothing against utilitarianism.” For, you see, you have demonstrated no way in which any of those above six points does any of the following:
1) Show one of its premises to be wrong.
2) Show the argument not to follow.
3) Show the argument to be circular (which is fairly equivalent to 1))
4) Show the consequences of the argument to lead to the impossible, to self-contradiction, or to something that’s obviously untrue (which would imply that the argument doesn’t follow or that its premises are flawed)
I hope it is blindingly obvious that if you go up and look at those six points, all of them talk about the application of the conclusion of utilitarianism, and none of them fulfils any of the above four requirements.
“But, E, some of those six points show utilitarianism’s premises to be ambiguous. Thus indirectly fulfilling point 1. Namely I’m talking about the ambiguity in pleasure and pain.”
As I have argued previously, we don’t need to know every single type of pleasure or pain; in fact, we don’t even need to know one. All we need to know to determine the truth of a premise is that pleasure = positive feelings and pain = negative feelings. And we know that. Everyone knows that. We know more than enough to determine the truth of utilitarianism’s premises from this information. It’s all theoretical. For example, you don’t need to know exactly what that explosive material you’re throwing into the fire is for the purposes of knowing how far to get away from it; you only need to know how explosive it is, and whether it’s explosive at all. The same goes for determining the truth of some of the premises of utilitarianism.
“But, E, you’re assuming that I don’t fulfil point 4, for I have shown that utilitarianism, in terms of practicality, is impossible.”
This argument doesn’t follow at all. Observe:
*Man in desert*
1) I need water or I will die.
2) It is, in terms of practicality, impossible for me to get water.
3) Therefore, since the conclusion that I need water leads to a practical impossibility, I don’t need water. Hurrah!
*Dies of Thirst*
It is the exact same argument that you’re making. Observe:
1) I need to cause maximum pleasure or I’m immoral.
2) It is practically impossible for me to cause maximum pleasure.
3) Therefore, since the conclusion that pleasure is good leads to practical impossibility, goodness isn’t defined in terms of pleasure. Woo hoo!
*Goes and kills people* (That was a joke)
Of course, as I have explained above, utilitarianism doesn’t lead to practical impossibility anyway, for it’s clearly a different thing for an action to be good and for a person to be judged good by her actions.
“But, E, you didn’t incorporate the effects of my six points enough into your hypothetical situations!”
Yes I did. Observe:
1) Different lives can be viewed as having the same pleasure. (This means that people would be unsure what to do)
2) Scientists still don’t know how to measure pleasure. (This would mean that people who go to scientists for answers couldn’t be helped with deciding what things are pleasurable, and what aren’t)
3) It’s almost impossible to discern and quantify pain and pleasure. (Confusion; people don’t know what to do.)
4) The doer can’t judge all the consequences of her actions. (Once again, confusion, what should I do?)
5) The doer can’t judge what makes specific individuals happy. (Confusion.)
6) The theory leads to consequences that contradict intuition or are hard to accept. (Reductio ad ridiculum. See end.) (People saying: I don’t want to do that!)
However, even if these are true, they only have the above effects (which I summed up in my hypothetical situations). These effects obviously have no bearing on the truth of utilitarianism, whether you assume it to be true or false, and the truth of these six points gives no more soundness to any other ethical theory. For, I hope it is blindingly obvious, any ethical theory that is right is better at deciding what is right than any ethical theory that is wrong.
“E, you still haven’t explained why “Reductio ad ridiculum” is a logical fallacy!”
Well, as above, I have explained why it is totally irrelevant to whether utilitarianism is right or wrong. Whether something contradicts intuition, or leads to strange or unwanted consequences, has no effect at all on the logical standing of an argument. Do your intuitions follow the laws of logic? Does what you like follow the laws of logic? No, of course they don’t. You said:
“….doesn’t make it irrelevant since we have to consider more than mere logical necessity or logical contradiction, as you have agreed.”
No. It is still irrelevant. I only ever agreed that whether something has an effect on something’s logical standing is determined only by (prepare for a tautology) whether it has an effect on its logical standing. And obviously intuitions, and things you don’t like, have no effect at all on something’s logical standing.
“But, E, intuitions are relevant in the manner that they affect how much people like the theory.”
Firstly, utilitarianism is the strongest ethical theory in its appeal to people’s intuitions. People are constantly weighing up: what will the consequences of this action be? Will it cause sadness, hurt, or pain, or happiness? (ie. they’re using the utilitarian paradigm.) Yes, sometimes utilitarianism does contradict people’s intuitions, but such intuitions can be overcome, given enough eventual acceptance of it over generations. Just think: the ethical intuitions of people today have changed significantly from a few centuries ago.
Furthermore, the first thing I’m concerned with is: is the ethical theory correct? For once this is resolved, it doesn’t matter if people don’t follow some of the more intuition-defying aspects of “X ethical theory that is correct”; if they at least follow the parts that don’t contradict intuition, that’s better than their following any other ethical theory that’s wrong anyway. Duh.
Now I conclude:
I have shown all of your points to be fatally flawed, whether from irrelevancy, from failing to follow, or from misinterpretation of my arguments or of utilitarianism. I have covered everything from the problem of future prediction, to how utilitarianism judges people, to why certain points are irrelevant.
Finally, I want to thank you. Yes, you. For admitting that you got something wrong in your previous comment. That is quite a virtuous thing to do, and I have an increased level of respect for you for that. I greatly admire anyone that does that.
I just realized; I made a fair few typos in the above comment, so beware!
Also, I just thought of another possible objection you may have to my above points.
“E, isn’t your claim that utilitarianism asserts that ‘nothing other than the consequences matter in determining the rightness of an action’ contrary to your claim later in the comment that one judges the rightness of an action differently from judging the actual person by her action, because other factors come into play? Aren’t these other factors external to the consequences?”
No. The claim that only the consequences matter was only in regard to judging the rightness of an action, not also in regard to judging the actual person, where other factors can come into play. I explained this in my above comment. To show what I mean by how judging the rightness of an action differs from judging the person herself, I’ll give an example:
Mary-Lou accidentally knocks over a very expensive vase, and it falls and breaks. The consequences of the action cause great sadness, as someone loses a prized possession, and thus the action is quite bad. However, it is a totally different thing to judge Mary-Lou by this: it was in fact a total accident, and she was actually trying to be very careful around the vase, because she knew that breaking it would have bad consequences; she was acting with total regard to the utilitarian paradigm, following it exactly. Hence, Mary-Lou isn’t judged as harshly as the action itself. I hope the difference is clear.
I just want to make clearer how utilitarianism is significantly different from motivation-based ethical theories, and I will do so below.
So, the basic aim and purpose of an ethical theory is to show what the best courses of action are, thus allowing people to use such theories to decide what choices they should make.
So, utilitarianism would say that the best course of action is the one that causes greatest pleasure (something along such lines).
A motivation-based ethical theory would say something like: the best course of action is the one committed with the greatest amount of love (ie. motivation).
Thus, when someone was deciding what to do with utilitarianism, it would tell her that the best action that she can do is the one that causes the greatest pleasure. (ie. consequential)
On the other hand, the motivation-based ethical theory I gave above as an example would say that the best action she can do is the one she does with the greatest love (ie. motivational, and completely different from the above).
Thus, I hope you see that they are significantly different from each other. The judging of people is itself external to the basic aim of the ethical theory, which is the answering of the question: which actions are right, and which are wrong?
When I didn’t respond to your last comment, it was initially because pressing priorities left me little time to do the kind of critical thinking necessary to reply. But the great thing about blogs and posts is that you can still come back and pick up a conversation right where you left off. Further, I found that the lapse of time was beneficial in giving me a clearer picture of how our dialogue seems to be reaching an impasse, which gave me hope for continuing our exchange if you are up to it. Another reason I decided to do this is that this post has quickly climbed the charts of hits per day, so I see it on my stats whenever I check them, at or near the top of most viewed. Prior to our dialogue, it was not even close to one of my more popular posts, so I think your contribution to the discussion has been helpful for readers (although I seriously doubt even 20% of the readers are actually reading through the entire exchange, as it is quite tedious, as most critical discussions must, in my opinion, inevitably be if they are to be based on a careful examination of the meaning of terms, technical distinctions, and deconstruction).
In any case, I hope you are doing well, that you will be open to revisiting this topic and dialogue, and that a fresh look at an aged disagreement will do the same for you as it did for me: allow fresh insight about where our communication might have broken down. I’ve done my best to diagnose this, and I attempt to show below that you confused our discussion by officially agreeing with me about certain things but then adhering to them inconsistently in practice (more on that below). I’m open, though, to a rebuttal pointing out areas where the culpability lies with me.
With that said, I pick things up where we left off …
I’m not sure what I did this time to cause my position to be misunderstood, but I never argued that requiring the prediction of future consequences is necessarily a strike against an ethical paradigm, so your entire first section is misdirected. I argued that exclusive Utilitarianism’s narrow criterion of “only consequences count” makes the paradigm impossible to follow, since the immediate and future consequences are unknown. And I would add that they are unknown both before and after a given action, so long as the future exists, for there are many conceivable scenarios in which an action’s last consequence overturns all prior judgment by outweighing all prior consequences in the quality and quantity of accumulated pleasure or pain.
Wait. The conclusion doesn’t follow from the premises unless you slip in the sly assumption “if this claim is true.” How did you get from your premise, that exclusive Utilitarianism’s criterion is consequences, to your conclusion that all ethical theories not using that criterion are nonsensical? I think you really dropped the ball here, for the only way to reach your conclusion is by assuming the very point at debate: whether Utilitarianism’s claim is true. Not to kick you while you are down, but my argument did not address what is sensical or nonsensical, but what is possible or impossible. So your argument is not sound here, and it does not even address my point; it misunderstands it.
Unless I’m misunderstanding the first sentence (quoted above) in taking it as a statement rather than a question, your wording here is kind of a big deal. I am making a distinction between ethical theories where only consequences count and ethical theories where only the agent’s intended consequences count. If the first sentence is intended as a statement (and it ends with a period), you seem to agree with my understanding of what exclusive Utilitarianism actually uses as its exclusive criterion: actual consequences, which are logically related to the action in question in different ways. The intended consequences come first logically, as the motive of the agent, but the actual consequences come second and are distinct from that motivation as effect is from cause.
Looking past the abyss that stands between your premise and conclusion (addressed above), I now need to make further distinctions between a person’s imagination of the possible consequences of various possible actions, that person’s judgment concerning which of those actions is best, that person’s motive for acting, the choice to act, the physical action of the body, and the immediate and future consequences of the act. The criterion of Utilitarianism is fixed exclusively on the last two of these (immediate and future consequences). Intention is thus irrelevant. Consequences are the only relevant factor, as you have already agreed.
But again, I never judged exclusive Utilitarianism the way you are suggesting, as I have noted above. I’m afraid you are boxing shadows of my arguments rather than my arguments. If I were making that argument I would be contradicting myself: I have already stated that I believe motive is significant in determining the rightness or wrongness of actions, as illustrated by my story of Mr. Pedo. Now motive implies an envisioning of predicted consequences inherent to an intelligible ethical act. Also, the story was not supposed to contain a syllogism; it illustrated a claim I made concerning the relevancy of motive to the criterion of a given ethical paradigm. It is not fair to count it against me that my illustrations of a claim do not make the type of arguments that demonstrate Utilitarianism to be self-contradictory, any more than you should count it against me for not making claims in my illustrations. They each have their place.
Since you admit there is much that must go into judging any given ethical paradigm beyond the validity of form in a supporting argument (and paradigms might have multiple and various arguments), an argument for my claim is not irrelevant simply because it does not undermine the logical validity of the form of the Utilitarian syllogism, thereby “proving” (in your sense of the word) it to be logically self-contradictory, though your rebuttals tend (inconsistently) to presume otherwise. Ethical discussion does not exclude arguments of a different type. It seems you are still gravitating toward thinking that only a certain type of argument is even capable of being relevant, or of “penetrating the armor” of the Utilitarian paradigm, as you put it. It will not help to mislabel my arguments as non-arguments, for ironically, this fails to meet the criterion of a counterargument and is more like an attempt to restrict the discussion by arbitrarily redefining terms, which enables you to neglect engaging my arguments for what they are rather than what you want them to be: namely, a syllogism that demonstrates by logical necessity that Utilitarianism is self-contradictory in its form of argument.
I have established above that you have misread my argument. It may be flawed, but you have to interpret it correctly before you can aim to soundly counter it. This in turn makes the first few of your anticipations of my possible objections off the mark. We are herein speaking past one another.
Now I have argued that Utilitarianism is impossible—that is, as an ethical paradigm intended to be the exclusive criterion for people’s actions to be counted as “good.”
If an argument’s premises are doubtful, so is its conclusion. One of the premises requires thinking that consequences (and only consequences) are what make actions measurable as moral or immoral. I have labored to show that the ethical paradigm, even if true, is impossible as an ethical paradigm (that is, one intended to be lived by) by virtue of this premise, given that the consequences are immeasurable by the subjects the paradigm is intended to serve: future consequences cannot be measured by humans who do not know the future. Your response here is to say (paraphrasing): “But you haven’t shown the form of the argument to be invalid.” We are spinning our wheels here.
Ah, my witty logician! You have just committed the fallacy of equivocation. When you say “a theory” in the first half of this sentence, you mean the theory in which the Utilitarian criterion excludes other criteria and has only actions as the ethical object of evaluation (only the consequences of actions count for the evaluation of the action in question), which is how Mill’s ethical theory of Utilitarianism is understood. But judging actions and judging people are two different things, and there is nothing in exclusive Utilitarianism (as you have defined it in your syllogism) that would provide the necessary framework for judging persons apart from judging the total consequences of their actions.
Furthermore, “to so act as to produce” certain consequences is different from “to so act as to be motivated by the right intentions” (in this case, the intention to produce good consequences), for the former still bases its criterion on what is produced by an action, while the latter (your forced interpretation) imposes something new, which must bring in new criteria and become inclusive of some other ethical paradigm. The equivocation fallacy (in case you didn’t see it already) comes in here: in the second half of the sentence, you still have that same theory in play as the subject of your verb “defining,” but you have changed its meaning to refer to something alien to what we have been discussing. That is, you use it to refer to a theory that judges the goodness of a person (rather than a person’s actions), and judges the person in question on non-consequential grounds, not the grounds of exclusive Utilitarianism, about which we have been in mutual agreement numerous times, and even (inconsistently) in your last reply.
So now you are saying either (1) that some other criterion is ethically relevant for the exclusive Utilitarianism we have been discussing (a contradiction in terms, since we agreed that exclusive Utilitarianism excludes other criteria in favor of consequences only), or (2) that the Utilitarian paradigm itself somehow includes other ethical criteria, such as a different object of ethical evaluation and different criteria for evaluating that object (the object no longer being actions but people, and the agents being judged not by the consequences of their actions but by their intentions or motives to produce the best consequences).
You commit this fallacy again when you say “Utilitarianism judges people by how much they follow its paradigm,” for Utilitarianism, as we have agreed up to this point, has actions as the proper object of ethical judgments, not people, and uses the exclusive criterion of consequences, not motives.
Now in the quotation above you restate my point in a way that makes it sound less credible. Whereas I have argued that the idea of consequences implies immediate and future consequences, and that such consequences are unknown, you keep summarizing my argument as if I were blaming people for not knowing the future. That is not exactly what I am saying, and the difference is not merely semantic. I am not blaming people; I am blaming the ethical criterion for defining the merit or demerit of an action exclusively in terms of consequences. And I am blaming it because I understand the word “consequences” logically to include immediate and future consequences, which makes the ethical paradigm of exclusive Utilitarianism (the proper object of which is human action, not human motive) impossible for people to follow, as it requires inaccessible knowledge for judging the aforementioned actions.
Now if exclusive Utilitarianism is impossible, as I have argued and you have failed to counter (indeed, you have implicitly agreed), then another ethical theory (one you mistakenly call Utilitarianism by stepping completely outside the framework you have already established as the basic paradigm) that judges people’s motives as having ethical worth based on how well they “follow” what is impossible to follow is bound to judge all people as bad people, for all will always fail to follow an impossible paradigm. So your new strategy of parsing things does not work.
I’m sorry to beat a dead horse, but your use of language here is confused. People might judge a person based on how well they think the agent is following the utilitarian paradigm, but that is not the same thing as Utilitarianism judging people, for it does not judge people but actions, as we have established. You are reverting again to your equivocation fallacy and confusing the discussion.
Another caricature of my type of reasoning:
Well, in light of your use of language above, you have equated two very different ethical theories by calling them both Utilitarianism. But surprisingly, when answering this hypothetical objection, you implicitly point out your own equivocation fallacy and contradict yourself by what you say next:
Before, you had Utilitarianism judging people on the one hand and actions on the other, based on two distinct and separate criteria; but here you briefly escape your fallacy and return to reason. Now we are back where we were before: Utilitarianism is impossible because “consequences” logically includes immediate and future consequences, which people cannot know and therefore should not be expected to base their decisions on. Again, this is not the fault of the person, as you seem to understand it in your summaries of my argument, but a fault of the ethical paradigm.
I believe you are presenting a novel idea here about how to understand Mill’s phrase, as I have pointed out above: his “so as to” refers not to intentions but to productions (i.e., consequences). To act so as to produce certain consequences is not the same as to act so as to intend certain consequences. So it appears you have committed the equivocation fallacy partially on the basis of another fallacy of interpretation, against the judgment of the dictionaries and encyclopedias I quoted. I believe you are now kicking against the goads, as the proverb goes.
I find it ironic that so much of your argument changes the subject (from Utilitarianism to a theory that judges motives instead, as I have shown above), misinterprets my arguments (as if I were blaming people for trying to predict future consequences because I am too myopic to realize the ethical theory requires it), and ends with a triumphalist response to rebuttals you imagined me making rather than ones I actually made. This is a clear instance of boxing shadows, slam-dunking on a kiddy goal, clotheslining scarecrows made of hay, etc.
Well, form a rational argument that follows and does not misinterpret my arguments, and I will be open to considering it thoughtfully.
Again, I never said that. It is an unforeseen consequence of the theory, not an intentional flaw. I would not say the argument assumes anything; I would rather say that the theory requires (whether this contradicts the beliefs of its followers or not) that its followers know what the consequences of an action will be, which logically requires knowledge of unintentional immediate and generational consequences potentially lasting indefinitely. Historians love to show how unintended consequences have deeply affected history. Historiography sheds light on the naïveté of your perspective, which does assume something (as I have pointed out in previous replies): namely, that consequences are the sort of thing that fall mostly within the agent’s intentions and can therefore be more or less predictable in spite of the agent’s prejudices and limitations.
Well, now we are truly at an impasse, but it is helpful to point out exactly what that impasse is: we have been working all along with different definitions of Utilitarianism. The only difference is that I have been consistent and stuck with one definition, whereas you have shifted yours and committed the equivocation fallacy when pressed concerning Utilitarianism’s impossible criterion, which requires omniscience. But how can we judge who is right? When two people differ over a definition, the dispute is best settled by deference to the fields of study to which the definition belongs, in hopes of finding a consensus in that field. I have provided you with some sources, which you have misinterpreted (as I have shown above by pointing out the difference between action-production and action-motive). That leaves the ball in your court to quote some sources that define Utilitarianism the way you have when pressed: a way that allows it to treat multiple criteria (the consequences of actions, the motives of actions, and the judgment of persons according to their motives or actions).
Here is my summary of how this part (not the whole) of our dialogue is going:
Me: The terms of the premises are unclear because we don’t know what the greater pleasure is.
Your Reply: The premises are true because we know what pleasure is.
I hope you can see there is still a disconnect here, for my argument was that the concepts in the premises are ambiguous and unclear, especially the combination of “greater” with “pleasure” applied to a concept as comprehensive as “consequences” (whether intended or unintended; whether immediate, future, generational, or indefinitely extending), whereas your counter here argues that the premises are “true” rather than that they are “clear.” The concept “greater pleasure” cannot be “true” or “false” because it is not a proposition. NOTE: There is an important difference between the clarity of a single word’s meaning and referent and the clarity of a combination of words and their referent. I am arguing that the premises are unclear not because of the former but because of the latter. A kid might know what chocolate is, but may not be clear on exactly what a “future chocolate surprise” refers to, or whether it denotes more chocolate than he will gain if he goes to the store to buy 6 chocolate bars. “Future chocolate surprise” in this analogy stands for future consequences, which (as I have argued) cannot be predicted, given the limitations of human intelligence.
Here you rehearse your basic argument again as a counter to my point that you have ignored my counters to your specific points, such as the accusation that my reductio ad ridiculum was a logical fallacy. The time you spend recounting your argument, owing to your perception that my arguments misunderstand yours, would be better spent offering rejoinders to specific points. This is true especially in light of your own misunderstandings of both my arguments and your own. By saying you misunderstand your own arguments, I refer especially to the equivocation fallacy that has confused the discussion (see my arguments above), but also to your earlier expectation that I understand your repeated statement that I had not “proven” Utilitarianism false, which you explained in terms of my not having provided an argument showing that Utilitarianism’s basic argument was invalid in form, even though you either meant something different or changed your mind in other places.
In critical dialogues like this, both sides inevitably misunderstand their opponents at times, and rebuttals will often miss their intended target, hitting shadows and straw men. It is just part of critical dialogue. The culpability can lie with the one making the argument (for not being clear, or for equivocating on meaning) or with the reader (for distorting the argument of her interlocutor). But with every misfire and rebuttal, we have the potential to get closer to the truth, unless the dialogue participants make straw men out of the rejoinders too, creating an indefinite vicious cycle of straw to tear down rather than addressing the concerns of their dialogue partner as they have been stated.
Once again, you have missed my argument, for I am arguing not that Utilitarianism is hard to follow or “almost impossible,” but that it is impossible in principle. Further: the referents of “greater pleasure” are ambiguous, and an argument in which pain and pleasure are reversed (with pain being the greatest good) would meet the same criterion your argument rests on (validity of form), which makes two opposite ethical theories equally valid. The paradigm is also unpersuasive because it fails to reckon with motive and intention and so runs contrary to our most basic sense of culpability; it judges not people but actions, in spite of your equivocation fallacy (established above), which you appear to have used in an attempt to remove the force of my syllogism, which I restate below:
:: The Omniscience Requirement ::
• The Utilitarian paradigm requires us to do what will cause “greater pleasure (and less pain).”
• In order for us to do what will cause “greater pleasure” we must know what the consequences of our actions will be.
• In order to know what the consequences of our actions will be, we would need to know immediate and future consequences.
• But it is impossible to know the full range of all future consequences for any given action or set of actions by an individual or society.
• Therefore, we cannot know what the immediate and future consequences of our actions will actually be.
• Therefore, we cannot know what will cause the “greater pleasure.”
• Therefore, we cannot do what is required from us in the Utilitarian paradigm.
• Therefore, utilitarianism is impossible.
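For what it’s worth, the chain above can be compressed into a short propositional sketch. The letters are my own shorthand, not part of the original exchange: let D stand for “we can do what will cause the greater pleasure,” K for “we can know the consequences of our actions,” and F for “we can know the future consequences of our actions.”

```latex
% A propositional sketch of the Omniscience Requirement (my own notation).
% D = we can do what will cause the "greater pleasure"
% K = we can know the consequences of our actions
% F = we can know the future consequences of our actions
\begin{align*}
\text{P1:}\quad & D \rightarrow K && \text{(doing requires knowing the consequences)} \\
\text{P2:}\quad & K \rightarrow F && \text{(consequences include future consequences)} \\
\text{P3:}\quad & \neg F          && \text{(the full range of future consequences is unknowable)} \\
\text{C1:}\quad & \neg K          && \text{(from P2 and P3, by modus tollens)} \\
\text{C2:}\quad & \neg D          && \text{(from P1 and C1, by modus tollens)}
\end{align*}
```

On this sketch, C2 is the claim that the paradigm’s requirement cannot be met, which is the sense of “impossible” at issue throughout.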
Now although you agree that one must consider many things in deciding whether an ethical theory should be viewed as flawed, you keep regressing into the narrow category of whether the conclusion follows from the premises, for when I make arguments of another type, you object that they still do not show any problems with the ethical theory. So in the end, your decisive evaluative terms (“true or flawed,” “right or wrong,” “sound or unsound,” etc.) keep coming back to this point. When I argue that the terms and referents in the syllogism are ambiguous, and that the same argumentative form per se can make a contradictory paradigm just as “true” by your terms, you do not think it is a problem. When I show it is impossible to know what the “greater pleasure” is, you distort my argument or shift to a motive-based theory and call it “Utilitarianism,” thereby equivocating on what has turned out to be the most fundamental contention between us: What is Utilitarianism? Once you have used the equivocation to escape my syllogism, you take up my own view again as it suits you, and declare that Utilitarianism is, after all, exclusively concerned with actions and their consequences. I do not think you did this on purpose; I just think it is natural to use arguments one has not fully thought through when eager to offer a rejoinder to an argument that, at least on the face of it, appears to follow.
When I emphasize how impossible exclusive Utilitarianism is, you use random percentages with no scientific basis to argue it is not as hard as I think, or you distort my argument and summarize it as if I were simply saying it is hard (not impossible), as if I were arguing merely that the archer has a hard time hitting the bullseye, when I am arguing that the bullseye is so ambiguous that it could be on opposite sides of the room, or even out of the archer’s range entirely and therefore not visible at all. If the bullseye cannot be clearly seen, we are ever at risk of committing the Texas sharpshooter fallacy.
The logic which makes Utilitarianism stand in your view is the same logic that contradicts it, for as I have shown, we can make the same argument by simply reversing the words “pleasure” and “pain.” Thus, the logic by which you appear to consider it true or sound, I have used to show that its opposite is just as true or sound (by your terms, not mine). I have provided this logic below for your convenience:
1. Pain is good, pleasure is bad.
2. Only pain and pleasure, and things that lead to pain and pleasure are measurable to be moral or immoral.
3. It’s better to have more good things in total and less bad things in total.
4. Therefore, an action is only better the more it is done in order to cause more pain (and less pleasure).
Now this argument, remember, has the same form as yours. So unless you can find a criterion beyond “validity” (that is, valid argumentative form), you will be unable to refute it. Refuting it will involve questioning the premises or the terms involved.
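The point about shared form can also be put schematically (again, this is my own notation, not something from our earlier exchange): validity is preserved under uniform substitution of terms, so whatever value predicate we plug in, the inference goes through identically.

```latex
% Schematic form shared by both syllogisms (my own notation).
% G is any value predicate; substituting G = pleasure yields the
% Utilitarian argument, and G = pain yields its mirror image.
\begin{align*}
\text{P1:}\quad & G \text{ is good; its opposite } \bar{G} \text{ is bad} \\
\text{P2:}\quad & \text{only } G,\ \bar{G},\ \text{and what leads to them count as moral or immoral} \\
\text{P3:}\quad & \text{more total good and less total bad is better} \\
\text{C:}\quad  & \text{an action is better the more it is done to cause } G \text{ (and less } \bar{G})
\end{align*}
```

Since validity of form cannot distinguish the two instantiations, something beyond form (the truth and clarity of the premises) has to do the discriminating work.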
This illustration is poorly suited for a discussion in which an ethical principle is shown to be logically impossible to follow in any known circumstance, as delightfully witty as your syllogism here is. By “known circumstance” I intend to rule out extravagant hypotheticals, such as humans eventually acquiring the ability to know all future events further down the chain of evolutionary development. The following scenario is therefore better suited for illustrative purposes (if that is what you’re going for):
1) I need to use the wings growing out from my shoulder blades to fly to the nearest town, or I will die.
2) It is, in terms of practicality, impossible for me to fly, because I don’t have wings growing out of my back.
3) Therefore, it is impossible for me to use the wings on my body in order to live.
No no … remember. This is where you were making a desperate move away from my syllogism, and were unwittingly falling into the equivocation fallacy.
So … I assume you mean “right or wrong”? In any case, I take it that you are implicitly retracting your argument that I committed a logical fallacy? To say the answer to this question is irrelevant is a curious claim to me. It is one thing to argue (as I have already agreed) that this type of argument does not show Utilitarianism to be a logical contradiction, or does not prove the premises false. But if you still wish to categorize my argument as a logical fallacy, please clarify. I would really prefer you just come out and say whether you retract this or whether you defend this categorization of my argument. It looks like a cop-out when I ask whether you retract your accusation and your response is to change the subject and say it is irrelevant.
You need not “hope” I get this distinction. In fact, I pointed out this distinction before you did, because you shifted your description of Utilitarianism toward a motivational theory (if not exclusively so), and the distinction was foundational for my argument that you have committed the equivocation fallacy. It thus illustrates your inconsistency: you define Utilitarianism one way when trying to make it sound unique and distinct from motivational theories of morality and ethics, but then borrow from a motivational paradigm when it suits you to answer my syllogism concerning the impossibility of Utilitarianism (as I have provided quotations and analysis for above).
Moving on … I’m curious what you think about the “truth” of a hypothetical argument (not one I’m committed to) that I think might meet the same criterion of “truth” that you have held so high as the standard for rationality (i.e. validity of form).
1. Living for God is good, living for self is bad.
2. Only living for God and living for self, and things that lead to these are meaningful as moral or immoral.
3. It’s better to have more total good in life and less total bad in life.
4. Therefore an action is only moral or immoral, good or bad, the more or less it increases or decreases the degree to which one lives for God or lives for self.
I can imagine some initial objections one might have to this setup. For example, pain and pleasure are scientifically measurable, but “living for God” is not (not to mention that the whole concept of “God” is scientifically objectionable). But of course one could, for the sake of argument, grant a non-objectionable definition of God compatible with science (such as “ultimate reality,” including but going beyond the spectrum of reality that the human senses are fine-tuned for: invisible light, sound frequencies beyond the ability of the human ear to perceive, and a great host of other realities besides those fine-tuned to the human senses). In any case, the ambiguity of this combination of terms could understandably cast doubt on the “truth” (in your terms) of the conclusion.
However, if that is the case, I think you will better understand my own reservations about your argument, and how the ambiguity of terms casts doubt on the conclusion of an argument regardless of whether its form is valid, whether the argument is contradictory, or whether the conclusion follows *if* one grants the premises. Moreover, that only things that are “measurable” are moral or immoral is also a built-in assumption of your own syllogism. It is highly questionable whether this is a safe assumption, and it should not be allowed into the syllogism uncritically. Furthermore, it is possible to argue that “pain” can be pleasurable, so that placing these as opposites in your syllogism is questionable (at least on the face of it). Pain is one of the pleasurable fetishes of kinky sex, for example. And if pain is a necessary ingredient in many forms of pleasure (some considered among the highest forms of pleasure by many accounts, including scientific ones that show the powerful happy chemicals released during sexual arousal and climax), it calls into question the black-and-white hard lines of distinction and dualism your syllogism requires.
Further, there are as many concepts of what it means for something to be “most good” or “most pleasurable” as there are competing religious ideologies for defining the term “God” (even if overlap can be detected). In any case, the syllogism above is not offered here as an argument but as a possible aid to understanding, born of curiosity: I wonder whether you think the argument above is valid in form, and whether the premises can be “proven” (by your standards) wrong.
In your next reply, I hope you will address especially my most important claim here, on which everything else hangs: that you have committed the equivocation fallacy. I initially wanted to make this critique my only reply, but I could not help pointing out other problems that are not at the core of our disagreement. As far as I am concerned, until we are both talking about the same thing when we talk about “Utilitarianism,” we do not actually disagree; we are simply working from different definitions as our starting points. If Utilitarianism simply refers to acting with the correct motive (intending to produce the greatest happiness for the greatest number, that is, to maximize pleasure and minimize pain for the greatest number of people), then what we have in Utilitarianism is a scientific specification or clarification of what it means to “love thy neighbor as thyself.” For then Utilitarianism can be seen as providing a hard (as opposed to soft) guideline on what will be loving for “thy neighbor”: namely, to make “thy neighbor” happy according to a material, scientific measurement of happiness. And if this motive drives the action, the action will be moral regardless of whether its consequences turn out as the Utilitarian practitioner intended.
Furthermore, we would then need to consider more than just actions; we would need to consider persons. Since no single act defines one’s character, ideally we would need a person habituated in the practice of acting out of the right motive, which leads to a properly utilitarian disposition to act according to what causes the greatest happiness for everyone potentially affected by the action. Once the Utilitarian character was in place, the chances of securing the Utilitarian goal (acting out of the correct motive) would be much higher, due to the neurological connections molded by this type of behavior, making it much more natural for the adherent to act out of this motive in the private secrecy of the mind.
And now we should be able to see how helpful religion might be in creating a culture where this type of ethic can be developed into maturity: be a part of your community and get to know others who form the larger population most likely to be affected by your actions; give yourself a “dose” every Sunday of inspiring messages about how important it is to love your neighbors and do what is best for them; create rituals that inspire love of your neighbors, and practice them to further habituate your thinking to the point that your own identity is bound up with the concept; create metaphors and poetic prose that speak to you as a human being and not just a dry scientific analysis of chemicals in the brain; and so on.
This “religion” may or may not have “God” as a required belief, but if it did, this God would know who out there was acting from the proper Utilitarian motive whether anyone else knew or not, creating a remarkable incentive to act properly—that is, to act with the proper motive. Who knows, maybe we will actually agree once we are both starting with the same set of definitions, and we could agree that if Utilitarianism is “A,” then “C” and “F” follow, but if Utilitarianism is “B,” then “D” and “E” follow. I don’t think we will agree on every fine point, but if we can get to an agreement about what Utilitarianism actually is, we may see that our differences are marginal, rather than substantial.
T h e o • p h i l o g u e
I’m really interested in utilitarianism and how it relates with theistic ethics because I see it as the only way a naturalist can account for objective moral truths.
I understand some of the criticisms you have of the theory, but it seems to me that epistemic and practicability arguments will plague any theory of ethics. On a divine command theory, for example, we cannot claim to always know God's commands, or that they are always clear-cut.
I think an argument against the arbitrariness of it can be more powerful, but I don't see why a naturalist can't argue as follows:
(1) Nobody wants to feel pain. If given a button to press, knowing that it would cause us pain, no one would press it.
Thus, we can agree that pain is intrinsically bad and well-being is intrinsically good.
(2) We can reject solipsism, which means we can rationally believe that other people feel pain just as we do, and if we empathize with people, we will understand that we shouldn't cause pain to them and should promote their well-being, because we can view them as an extension of our consciousness.
(3) Therefore, this axiomatic truth can be true: "It is morally right to maximize well-being and minimize pain."
As far as I can tell, this sounds convincing to me as an explanation of how well-being can ground "goodness" and how pain can ground "badness."
Yet I have this unshakeable intuition that this theory of ethics leaves something out: teleology. When we say that the world is full of evil, we are implying that there's a sense in which the world should've turned out differently. But on the theory I presented above, the universe doesn't care how we behave. It is blind, pitiless, and indifferent.
But I can’t seem to connect the dots and find out how to resolve the strong intuitions I have for these two arguments that are in tension with one another. I was hoping if you could help!
When discussing a meta-theory of ethics like utilitarianism, we are discussing a theory that attempts to define teleology itself. That is, what makes something "evil" or "good"? So the debate over whether utilitarianism is true is also a debate about teleology. However, I think your intuition could be seen as distinctly human and based on human nature. That things "should've" turned out differently is an expression of unfulfilled potentiality in human nature with respect to the "final" or "ultimate" or "chief" aim of that nature. But then we must ask: what is the "goal" or "aim" of human nature? Aristotle argued that only one answer gets to the bottom of human nature: happiness. We can always find a higher purpose or ultimate motive or intention or goal for our individual actions and decisions in life (I work so I can pay the bills and afford x, y, and z; I want paid bills and x, y, and z because I want q, r, and s), but when we ask what we want out of life itself, the answer is happiness (or at least, so argued Aristotle). Part of how we know this is by realizing that it doesn't make sense to apply the normal teleological questions to happiness: once you reach the aim of happiness, it doesn't make sense to ask, "Now for what purpose do we want to be happy?" The buck stops there. Therefore, he argued that human nature is primarily and most basically aimed at happiness, in such a way that all our actions and motives are aimed at happiness in one way or another. Now you can take this in a natural direction (evolution determined that people who are happy are more fit for survival, and happiness is the most decisive factor in the struggle for survival) or a religious direction (e.g. Thomas Aquinas, who argued that God made humans to be happy, and only in God does the soul find its final rest and repose of joy and peace, so that happiness and God are just two different ways of viewing the same "end" of human nature).
Does that at all help with your question?
Hi there, thanks for replying!
I understand that Aristotle's view of ethics has this sort of teleology attached to it, but I'm trying to deal specifically with utilitarianism (I realize I wasn't too clear in my previous post).
If you haven’t changed your views on utilitarianism, I was trying to get your thoughts on how you would further develop your “arbitrariness objection” against a critic who would argue that goodness and badness can be derived from well being and pain respectively.
Sorry again if I wasn’t phrasing myself very well!
I think I see what you are getting at now. First, what is painful in the short term may be pleasurable in the long term, and vice versa, making it not obvious which actions will produce the greatest balance of pleasure over pain in terms of immediate and future consequences. Second, and more devastating, it would actually be impossible to know all future consequences, making *exclusive* utilitarianism impossible. Many scenarios can be imagined where unintended consequences turn what might otherwise be considered a sacrificial act of well-intentioned love (such as dying while rescuing babies from a fire) into a great evil, or vice versa (e.g. my story about Mr. Pedo in the comment thread above). Third, you should read the dialogue above between "E" and me, where this was discussed at great length. I think everyone intuitively acts so as to produce the greatest pleasure over pain in general, but exclusive utilitarianism disregards motive in favor of setting the criterion on consequences only. This is a major problem: it is both counter-intuitive, as I have shown in the comment thread above (and we should also consider that having one's conscience plagued by guilt is not pleasurable, so this must be weighed as well), and impossible, since so much about the consequences of our actions is completely out of our control. A more defensible position would make good intentions a key criterion; this way we are not culpable for consequences outside of our control.
I hope that helps.
I'll get to it! Thank you! It was helpful.