Happy Friday, Gardeners. I am curious to hear other Gardeners' thoughts on the view of desert that I hold. The view is (perhaps deceptively) simple:
(D) S ought to be treated as a P if S is a P.
Principle (D) captures what I refer to as "ultimate desert". Ultimate desert is about what S ultimately deserves – this could run opposite to what we think S deserves in cases where we don't have all the facts about S. This could easily be framed in terms of the traditional heaven-or-hell desert.
Since there are, of course, lots of cases where we don't have all the facts, the first principle needs a more practical corollary:
(D*) S ought to be treated by O as a P if O is warranted to believe that S is a P.
Principle (D*) captures what I refer to as "warranted desert". Warranted desert is about how O ought to treat S with respect to what O believes to be true about S. In cases where O's warranted beliefs about S run contrary to facts about S, warranted desert could yield moral imperatives that run contrary to the moral imperatives derived from ultimate desert. Obviously (D*) would not suffice to answer the heaven-or-hell question.
Ultimate desert implies a 3rd person omniscient perspective (viz., the God's eye view), which is something we lack outside of thought experiments. In real life scenarios, we have to make the best effort we can to uncover the facts and inspect our reactions to the images that those facts represent, and thus warranted desert is the domain we primarily operate in. Both (D) and (D*) are sufficiently minimal to allow compatibility with different interpretations of "how a P ought to be treated" (ethical theory will play the determining role here).
The Survey
While it is easy to imagine a whole dissertation being written on the tension between ultimate and warranted desert, I want to move past that tension for now. Instead, I want to collect your considered observations on the application of (D):
- S ought to be treated as a P if S is a P.
- S is a P.
- Therefore, S ought to be treated as a P.
Given such a scenario,
- Does #1 seem false to you? Why?
- Does #1 seem incompatible with determinism? Does #2?
- Does #1 seem incompatible with indeterminism? Does #2?
And also of (D*):
- S ought to be treated by O as a P if O is warranted to believe that S is a P.
- O is warranted to believe that S is a P.
- Therefore, S ought to be treated by O as a P.
Given such a scenario,
- Does #1 seem false to you? Why?
- Does #1 seem incompatible with determinism? Does #2?
- Does #1 seem incompatible with indeterminism? Does #2?
First Hypothesis
My first hypothesis is that our views generally do move from desert to responsibility. Because of this, I suspect that one's views on value ultimately function as the underlying catalyst that motivates us to frame a concept of responsibility that coheres with our value theory.
If we don't believe that statements of value can be true or false, it will be impossible to present an acceptable account of desert. And if we cannot offer an account of desert, we will be left with the impression that responsibility cannot possibly get off the ground.
A corollary to this hypothesis is that those who are realists about responsibility are also realists about value.
Desert and Free Will
My gut tells me that, "#1 is true in both scenarios, #1 is compatible with determinism in both scenarios, and #1 is compatible with indeterminism in both scenarios." From this, my head tells me that, "since desert is compatible both with determinism and indeterminism, then responsibility has to be compatible both with determinism and indeterminism."
From here, it is relevant to bring up my view of free will:
(FW) S has free will if O would be able to reliably apprehend counterfactuals about S by observing S.
This principle requires a corollary to make it relevant to most of the usual conversations regarding free will:
(FW*) S is free with respect to A if O would be warranted to believe counterfactual P about S on the basis of combining O's observation of A with O's prior observations of S.
While I'm not going to plumb the depths of (FW) and (FW*) here, I do want to illustrate that they both nicely capture our operative intuitions in things like manipulation cases: in manipulation cases, O cannot reliably apprehend counterfactuals about S precisely because there is manipulation going on.
Now, I know that this debate has been raging for a VERY long time, and I know that it would be a very difficult project to try and convince anyone who disagrees to change their minds. However, setting aside those issues for the moment, I would like to present another question:
Do you think that someone who holds (D), (D*), (FW) and (FW*) (or very close approximations of these) has sufficient means to determine their stance on the compatibility-of-free-will-with-determinism question?
When considering this question, I ask you to set aside personal biases, especially any skeptical biases, and attempt to imagine such a person with fresh eyes. You may have strong disagreements with this person, but the question is about that person's beliefs and not about your own.
Moral Responsibility
Even though the definition of Free Will presented here may seem too novel and thus patently wrong, I want to recommend hesitating to make that conclusion. My sole interest in defining Free Will is simply to satisfy some mental math that could be stated roughly as:
Moral Responsibility = Desert + Free Will
And as,
Moral Responsibility* = Desert* + Free Will*
(This is why I said initially that the potential disagreement between semi-compatibilists and me on the compatibility-of-free-will-with-determinism question may be just semantics.)
It is of interest here to note that from a God's eye view, moral responsibility is unnecessary for desert: the omniscient observer simply has the facts about S and thus about what S deserves.
For many years, I have found myself growing more and more confused about the basis for some of the fundamental divides that have been established historically in the conversation-about-free-will.
Presently, I find that I can make an adequate account for certain kinds of skeptics: I can see why someone who rejects value realism would reject (D), (D*) and moral responsibility; and I can see why someone who accepts value realism (generally) would accept (D), (D*) and moral responsibility.
What I am having trouble with is that I am finding it increasingly difficult to see how someone who accepts (D) and (D*) would not be motivated to accept something like (FW) and (FW*) as sufficient to account for Moral Responsibility and Moral Responsibility*. Likewise, I am finding it increasingly difficult to imagine someone who accepts (D), (D*), (FW) and (FW*) and yet answers, "No," to the compatibility-of-free-will-with-determinism question.
The skeptic about value realism aside, is there something obvious I am missing?
Second Hypothesis
My best guess is that there are competing values running through our psychology. I have focused on whether something of value might be missing from this picture, and the only thing I can come up with seems unrelated to Desert and more related to Wish Fulfillment: we are creatures that regularly and routinely wish that we could change our circumstances.
Take Frankfurt's concept of 2nd order volitions for example: we often want to want differently than we presently want. Wouldn't it be better if we could just "force" those 2nd order volitions to become 1st order volitions? Or in more mundane terms, wouldn't it be grand if we could have everything we want? This motivation seems to boil down to, in a word: Discontentment, and in a phrase, "If I'm not content to want X, then I shouldn't be held responsible for wanting X."
To phrase this as a hypothesis, I hypothesize that incompatibilism in general derives its psychological motivators not from considerations about desert but from considerations about discontentment. (Perhaps that suggests a partial explanation as to why there happen to be fewer compatibilists in harsh climates/locales?) I could see how this could motivate an entirely different concept of desert; however, I find that I cannot fathom endorsing it.
Moreover, it seems that the better part of that discontentment can be explained simply by acknowledging the gap between (D) and (D*) without resorting to incompatibilism. I am at a loss to accept an attempt to modify (D) along these lines:
(DI) S ought to be treated as a P if S is a P and S wants to be a P.
Despite the many obvious ways of restating (DI) to better capture the incompatibilists' intuitions, that whole line just seems hopelessly mired. It seems more productive to fold this kind of concern back into (D) and (D*): for example, we could recognize that people who are murderers and want to be murderers ought to be treated differently than people who are murderers and do not want to be murderers (or any number of other morally relevant distinctions). From this perspective, the stalwart incompatibilist's efforts to "remedy" (D) and (D*) seem misplaced.
Conclusion
I realize I may sound dismissive of incompatibilists, but until I am persuaded to accept an alternate concept of desert (or at least recognize a strong contender) that entails answering, "No," to the compatibility-of-free-will-with-determinism question, this review does seem fair-minded.
If there isn't a strong competitor to (D) and (D*) that entails answering, "No," to the compatibility-of-free-will-with-determinism question, then incompatibilism seems to be lacking a solid foundation. Without this foundation, I wonder whether we cannot simply outright dismiss incompatibilist interpretations of alternate possibilities and ultimate sourcehood.
Mark,
1. I tend to agree with #1-3. To the extent I'm an anti-realist about free will, I don't think that anti-realism hinges on value/desert anti-realism.
I do think, though, that there is a remarkable correlation between moral skepticism and anti-realism about free will. Double's work is an example of this. Josh Greene's is too.
I think this happens for several reasons:
1. Such people are just more skeptically inclined in general.
2. More importantly: these skeptically minded people are still disturbed by meta-control, because they don't regard how the world turned out as a happy coincidence.
In other words, moral skeptics entertain a much larger scope of possible worlds to live in: e.g. the world in which we all derive pleasure from torturing babies all day long. To the skeptic, there's nothing morally wrong with that world. So, the fact that we don't live there, and never had a choice about being the sort of person who loves to torture babies, feels constraining.
Most compatibilists are normal and healthy enough to not worry about such things. The compatibilist thinks: "Well, I never chose to be the sort of person who doesn't like to torture babies, but that's ok, because torturing babies is wrong and it all turned out right in the end. I'm still where I would have wanted to be, even if the choice had been mine. It's a happy coincidence."
Skeptics don't believe in those sorts of happy coincidences. Meta-control really bothers us.
2. As for your definition of FW:
A. You can't mean that fw depends on the observer. What you seem to mean is that fw depends on whether a hypothetical observer, of ordinary human ability, would be able to reliably determine counter-factuals about X.
The notion of "a hypothetical observer of ordinary human ability" is troublesome.
B. But more importantly: your definition comes far short of traditional compatibilist accounts of fw.
For example, I can reliably predict counter-factuals of computer programs on my laptop. If I study the software, and so on, my prediction can be pretty sophisticated. But nobody thinks the software on my laptop has free will. So there has to be a lot more to free will.
3. As for your comments regarding incompatibilists and discontentment, I think you are really onto something true and profound. See also my comments above regarding happy coincidences.
I know that Gary Watson has been working on a paper on self-acceptance for years now, but has yet to share it with us. I suspect that, when he does present it, it will be consistent with your theory that incompatibilists are less content with their lives.
Consider the alleged fact that compatibilists are more social. They could be more social simply because they are happier and more self-accepting.
Think of self-acceptance in the terms of "happy coincidences" I discussed above.
Here is a metaphor: Everyone is at the Christmas tree. And they are opening up presents. The presents, however, are their thick selves.
The compatibilists open up their thick selves and think, "Alright! Exactly what I wanted!"
The incompatibilists think, "Oh no, I don't really like this self."
Or they think, on an even deeper level, "Alright, I love this self, but the only reason I love it is because that's the thick self I was given---one that loves itself. If I had been given a different thick self, I might have felt differently!"
Now ask these recipients about free will. The compatibilist says, "well, I didn't really choose the thick self I have. But who cares? It's a happy coincidence. I love the life I have, so I'm not disturbed by the fact that I didn't choose."
The incompatibilist says, "no. My gift sucks. I'm not letting you choose next time. I want to choose. And even if I do love my gift, it's because the gift makes me love it, and not because I chose to love the gift. So it's hopelessly circular to take comfort in the fact that the gift turned out alright."
Posted by: Kip | July 26, 2009 at 06:23 AM
Kip,
Thanks for the comments.
Regarding my definitions for (FW) and (FW*), you are correct. They are meant to imply a hypothetical observer, and they would have been less confusing had they been stated explicitly in terms of a hypothetical observer of ordinary ability.
However, the rest of the worries about using observers in these definitions can be put to bed by noting that these definitions are strictly externalist in nature: they do not imply any volitional belief formation on the part of the observer. Rather, they imply simply that if the observer's belief-producing faculties are properly functioning, then warranted beliefs about S will be produced in the observer through those faculties. In this respect, I am importing quite a bit from reliabilist epistemological theory, but I do not think the theory I am framing depends on reliabilist epistemology being true in the general sense.

Regarding meta-control, if (D) and (D*) are true, they should put to rest any concerns about meta-control and "fairness". If the meta-controller succeeds in creating S such that, according to (D), S deserves to be treated in harsh ways, perhaps the meta-controller will also deserve to be treated harshly according to (D).
If the meta-controller only succeeds in manipulating S to *look* like a P without S *being* a P, then the meta-controller may have created a situation where (D) and (D*) will produce conflicting results: to observers it may look like S is a P, but S really is not a P. In this case, at least S can take solace in not really being a P and not really deserving to be treated like a P.
Regarding free will and references to counterfactuals, my definitions of free will are certainly revisionist to an extent. I believe there are many things that would separate man from animal and man from machine, but I am not sure that "free will" is one of them. As I define it, free will is (roughly) the passive ability for S's character to be expressed in the physical world.
I would define S's moral character as morally significant counterfactuals about S. I am very skeptical that animals or machines could support those kinds of counterfactuals. However, the definitions of free will do not contain this qualification because free will is obviously broader than moral desert.
In cases where the counterfactuals expressed by S are not morally significant, there is no relevance to moral desert. However, the definition (D) is not strictly limited to moral desert either: if S is a knife and S would easily cut through skin, then it might be fair to say that S deserves to be treated with care and respect in order to avoid causing harm (especially to one's self). So here we have an inanimate object that deserves to be treated a certain way based on the definition of free will provided here -- maybe that strikes you as weird, but it strikes me as progress.
So, I suppose this is a sort of "deflationary" theory of free will, and yet it is meant to capture most everything that compatibilists desire to obtain from free will. But, you can label (FW) and (FW*) with whatever terms suit you more. For instance, I would be happy to adopt alternate vocabulary for (FW) and (FW*) and add a supplemental account about how an agent with a "will" expresses itself and call *that* an account of free will. The more important point is these definitions seem sufficient to ground moral responsibility in both deterministic and indeterministic worlds.
Finally, regarding discontentment, I believe this attitude, as real as it may be, is fundamentally mistaken and perhaps even treatable. However, I would want to say more about this in a dedicated piece.
Posted by: Mark Smeltzer | July 26, 2009 at 02:50 PM
Kip,
Nothing to say in response? Come on, I'm talking about taking the steam out of causa sui here. Surely that must irk you a bit. In fact, if the picture I am painting here is roughly correct, it means that folks like Nietzsche made some fundamental errors in assessing moral responsibility.
For example, Nietzsche could have offered such an account due to his commitment to person-types: his prized warrior class, for example, embodying what Nietzsche viewed as the highest form of freedom, sees fit that people who endorse lesser forms of freedom ought to be spat upon. If there are certain types of people who will broadly agree on certain moral principles, then these groups simply will have answers to the "how a P ought to be treated" question, given that the answer flows from their nature (sounds sort of Strawsonian now, doesn't it?).
As long as there is a minimally sufficient account of how these affects relate to (D) and (D*), we have all the ingredients for fulfilling the rest of the account laid out above. In which case, we would have secured quite a stronghold for Compatibilism.
Posted by: Mark Smeltzer | July 31, 2009 at 12:47 PM
Mark,
There is not much more to say. You offer certain analyses/definitions of free will. On first glance, they look pretty far from the mark. But: (1) I come to the debate with an incompatibilist/libertarian/causa-sui understanding of free will and (2) nobody seems to have a clear idea of what free will is at the edges, and nobody seems to agree on what free will is.
And the comments about "free will" apply to "moral responsibility," too. I tend to think that the issues regarding the one tend to track the other. Both of those are terms that ordinary people hardly ever use in ordinary speech.
If I wanted to, I could try to poke holes in your definitions all day long. That's been the standard incompatibilist response for a long time: "well, manipulated agent X satisfied your compatibilist conditions but still isn't free/responsible. So there." That's probably not going to get us very far though: you'll either revise your definitions to accommodate my examples, or we'll get to a point where you look at something and say "that's free will" and I look at it and say "no, it's not." Then how will we prove each other wrong?
At that point, I think the best thing to do would be to collect a large amount of data on how people define free will, and how people respond to various questions about whether something constitutes free will. That data hasn't really been collected. But when it is, I think it will show that there is so much variation in how people define free will, and so much admitted ignorance as to what the term means, that we won't be able to settle the question.
So, after all that, what's left? I think what would be left is:
Exploring why people thought that free will didn't exist, and exploring why people thought that free will existed despite determinism. Were some people A. more vulnerable to cognitive biases? B. more engaged in wishful thinking? C. prejudiced by whatever philosophers they read before coming to the debate? How does autism spectrum disorder affect beliefs about free will? Depression? Self-acceptance? Extroversion? The fundamental attribution error?
I think answering those questions will be much more interesting---and much more fruitful---than answering the question: who was right and who was wrong. I think the odds are tiny indeed of finding a mathematical proof that will persuade compatibilists, or incompatibilists, to lay down their swords and admit defeat.
In other words, I suspect that the free will problem is more like Hilbert's 21st problem than his 7th.
Posted by: Kip | August 01, 2009 at 06:52 PM
Kip,
My primary contention is about a certain way of looking at desert and its implications to the moral responsibility debate. So, if you're going to poke holes in anything, let's start with (D) and (D*).
My contentions about defining moral responsibility and free will are framed purely in terms of this concept of desert -- so there's no wiggle room for whether I've got the terms pegged "correctly". I'm not interested in whether the terms are "correct" -- I'm interested in whether they perform the work of connecting our actions, warranted beliefs about others and ourselves, and desert.
If the definitions are sufficient to accomplish those goals, I'd be happy to hang my hat on those things and yield the floor to whoever wants to continue the what-does-free-will-really-mean debate.
The definition of desert that is suggested here is impervious to any of the manipulation scenarios that I have ever encountered in the literature, but please surprise me. That was the point of my taking the effort to post.
If there is anything obvious I am missing, I would like to hear about it before I waste any time writing a more substantial account.
Posted by: Mark Smeltzer | August 02, 2009 at 02:50 AM
Mark,
Along the narrower lines that you've guided me towards, I can see at least two objections:
1. Perhaps there are no oughts. Josh Greene, Richard Double and J. L. Mackie defend versions of moral skepticism that would seem to prevent your theory from getting off the ground.
2. Perhaps even if we grant (D), it does not get the compatibilist what he wants. Consider Charles Manson.
One person might say "Charles Manson is a non-defective human being exercising their pure free will, choosing to hurt and kill innocent people. Therefore Manson deserves to be treated as such a person. And that treatment involves Manson suffering."
But it seems that Bertrand Russell and Richard Dawkins could just as well say: "Charles Manson is a defective human being, who didn't exercise any free will, and chose to hurt and kill innocent people because of his defective programming. Therefore Manson deserves to be treated as such a person. And that treatment involves curing and rehabilitation."
If I'm right about the above (that D applies equally to staunch compatibilism as to radical free will anti-realism), then I'm willing to grant D. Is there any reason why I should be afraid to?
Posted by: Kip | August 02, 2009 at 07:50 AM
Kip,
Thanks for the thoughtful reply.
One of the most significant things about (D) is that it is going to be really hard for anyone to deny it consistently.
For instance, even a thoroughgoing nonrealist like Rorty is going to have a hard time denying that an exacto knife ought not be used to clean one's ears. In a debate once with Umberto Eco, Rorty tried to defend his nonrealism against Eco's criticisms and was forced to defend the idea that an object like a screwdriver had no natural function and instead derives its "purpose" from conventions only. Rorty claimed that one might as well choose to use it to clean one's ears. Now, as funny as that might sound, I think the point would have been even less plausible if they had been debating the natural function of an exacto knife -- while it could be argued that using a screwdriver to clean one's ears is not strictly a violation of nature, I don't see how the same argument could be advanced against using an exacto knife for the same purpose.
In short, (D) depends on a minimally sufficient account of realism about natural function. As such, any account that relies upon (D) will obviously not gain much ground in circles that deny such a possibility. Regardless, that would not deter me from advancing the account since most of us are not afflicted with that form of skepticism.
Lastly, (D) does not seem to presuppose a universal moral realism that entails how a P ought to be treated objectively. Different folks could have different interpretations of what S deserves. In this case, I would appeal to something like Darwall's second-personal sense of moral claims, and posit a notion of moral authority. In cases of conflict surrounding interpretations of (D), it would be up to the recognized moral authority to yield a verdict. Even a Theist or Polytheist could accept an account like this, since the God(s) would bear the maximal moral authority. Perhaps even Rorty could accept (D) insofar as he would be free (of conceptual restraints) to suggest very broad interpretations of how a P ought to be treated.
Finally, regarding someone like Manson, let's suppose the following three premises are true:
- (D), (D*), (FW) and (FW*) are true (setting aside for the moment that we agree that (FW) and (FW*) are perhaps mislabeled).
- Charles Manson is the type of person who would kill another person in inexcusable circumstances and/or for inexcusable reasons.
- We have sufficient evidence that we are warranted to believe (2).
What follows from the conjunction of these premises regarding how Manson deserves to be treated? Only that he deserves to be treated by the community as a type-of-person-who-would-kill-another-person-in-inexcusable-circumstances-and/or-for-inexcusable-reasons (or IK for short) ought to be treated.

Does that mean an IK should be scorned? Should an IK be pitied? Should an IK be praised? Should an IK be hospitalized? Should an IK be put to death? Should an IK be treated with honor and celebrity? (There are people who endorse all of these options.) What these questions have in common is that they are all ethical questions, and I do not think an account of Compatibilism has to answer them. To put it another way, these are all very important questions for a worldview to answer, but it seems we could be Compatibilists regardless of the answers to these ethical questions.
I don't see any reason why anyone should be afraid to endorse (D) and thereby Compatibilism, but I may be a bit biased there. The more important question is, if you find nothing troubling here, would it cause you to reconsider whether you are in fact an Incompatibilist?
Posted by: Mark Smeltzer | August 04, 2009 at 01:36 PM
Mark,
1. I have no trouble denying that the exacto knife ought to be used, or not used, for any purpose. Even if that is a rare position, I'm perfectly happy being in the company of Double, Greene and Mackie.
2. You seem to grant that D is consistent with the "medicalized"/no-free-will view that Dawkins and others have suggested. In that case, I have no trouble accepting D.
But note that, in 2, I don't grant compatibilism. At the end of your post, you say that "I don't see any reason why anyone should be afraid to endorse (D) and thereby Compatibilism."
But Compatibilism doesn't follow from D. D says that "S ought to be treated as a P if S is a P." If all of the persons in the world are only people without free will, who do good or bad only according to their programming, and we treat them as such, D is satisfied. But compatibilism is not.
Posted by: Kip | August 04, 2009 at 07:33 PM
Kip,
Do you object to the precursory analysis of (D) and (D*) that I offer which leads to the endorsement of moral responsibility? I am perfectly fine if you want to label this account as semi-compatibilist, because if there's a variety of free will that this account doesn't secure, then it is a variety (I find) not worth wanting.
Secondly, (D) does not depend on there being an answer to the "how a P ought to be treated" question -- that is a project separate from an account of agency. In this regard, it may be that Double, Greene, Mackie and Nagel were rationally more closely aligned with (semi-)compatibilism than they had realized.
The purpose of my thread here is to get any and all feedback that may be helpful in determining the next step of this project. Obviously, there needs to be a thorough account of (D), and there needs to be a subtle defense of the transition from (D) to (D*), and from (D*) to (FW) and (FW*). I see (FW) and (FW*) as representing an (extremely) minimalistic form of Compatibilism, and it seems straightforward how to extend (FW) and (FW*) to resolve some of the major worries raised by folks like G. Strawson, Spinoza, and Nietzsche (e.g., the worry that mentally we are merely observers of our own actions and that our actions are not being wrought by our conscious thoughts, which seems to erode the concept of "control" bundled up with free will). I think, in the end, the account would endorse more of their ideas than it would reject, and yet it is from the outset a defense of moral responsibility.
Posted by: Mark Smeltzer | August 04, 2009 at 09:03 PM
Mark,
When you say “(D) S ought to be treated as a P if S is a P” what exactly do you mean by treated as a P? My suspicion is that you mean treated in the way that a P ought to be treated. But then (D) reduces to a tautology.
Please precisely define the meaning of treated as a P.
Posted by: Brian Parks | August 07, 2009 at 08:16 PM
Mark,
With my moral realist hat on, I see no objection to your initial account of D, etc.
But I still don't see how even "it may be that Double, Greene, Mackie and Nagel were rationally more closely aligned with (semi-)compatibilism than they had realized."
If Double, Greene, etc., say that people don't have free will and should be treated as if they have a disease, then they are endorsing neither compatibilism nor semicompatibilism.
Posted by: Kip | August 07, 2009 at 10:46 PM
And just to be clear, I think Double is more of a conservative type non-realist about free will (a la Nichols in After Incompatibilism), who would hardly endorse the more extreme "medicalized" position that I do.
Posted by: Kip | August 07, 2009 at 10:54 PM
Brian,
Taking your suggested line, (D) could be expressed as:

(D2) S ought to be treated in the way that a P ought to be treated if S is a P.

I see no significant difference between (D) and (D2). I also see a substantive definition in both cases -- not mere tautologies.

Taking (D) and (D2), we have the juxtaposition of two ideas: S's nature and the ethical stance toward P. (D) and (D2) function as the bridge(s) between those two ideas to yield the ethical stance toward S.
To put this in context, we could have a set of ethical commitments that goes something like this:
- Murderers ought to be despised
- Good fathers ought to be esteemed
Now, let's suppose we have an S who is both a murderer (P1) and a good father (P2). How does S deserve to be treated?

It won't be enough to say that S is a P1, because that is not a complete picture of S. S is both a P1 and a P2. So, we would need to answer the ethical question, "How ought a P1 & P2 be treated?"
According to (D) (and (D2)), S ought to be treated as a {P1, P2} because S is a {P1, P2}. Thus, it is very significant that (D) and (D2) contain the phrase "if S is a P" at the end. One thing that I am debating with myself is whether (D) needs to be revised to say "if and only if S is a P" in order to fully capture this idea.
The primary significance of starting with desert lies in the suggestion that desert is the horse that pulls the cart of moral responsibility.
According to someone like Galen Strawson, there is the suggestion it should go the other way around: (MR GS) S deserves to be treated as a P if S is a P and S is MR for being a P. If we accept (D) on its own merits, before we begin to consider the question of moral responsibility, we will have powerful reasons to reject a principle like (MR GS).
I believe that desert is a cross-cutting concept that applies to many types of objects, some of which are (moral) agents. For instance, I gave some examples in this thread about knives deserving to be treated with respect and the like. Since moral responsibility presumably applies primarily to moral agents, it makes sense that desert should be defined first since it is the broader concept. Thus, even if G. Strawson's alternative approach seems intuitive to us at first, the realization that desert is broader may be sufficient for us to change our minds and focus on defining desert first.
If we accept the ordering that is suggested here by putting desert first, we will have blocked moves such as these: S deserves to be treated as a P if S is a P and S is ____ for being a P. How could we possibly fill that blank? Obviously knives are in no way responsible for being sharp, potentially dangerous objects, and yet they deserve to be treated with respect for their potential lethality. I don't think there is an acceptable way to fill in that blank, which is why (D) excludes it entirely.
Kip,
That is a great question and one that I would love to tackle, but I am afraid I will have to disappoint for now. Answering that question would require a much more in-depth survey of each philosopher's writings and their relation to the proposal being considered here.
My best attempt at a short reply would be that I surmise that there is little in the way of argumentation in these philosophers' writings that would prevent them from accepting much of what is said about (D), (D*), (FW) and (FW*). Even if that is the case, and we could rightly label them as (semi-)compatibilists, it does not entail that they would think that moral responsibility is actual.
Most of these philosophers are also very worried about moral luck. The precursory theory offered here about moral responsibility, which is grounded on the possibility of having warranted beliefs about the nature of others, does not entail that we actually have warranted beliefs about the nature of others.
Things like moral luck could be levied as a challenge to the suggestion that anyone is morally responsible, since moral luck may prevent us from forming warranted beliefs about others, but that is not the same thing as challenging (semi-)compatibilism directly.
Posted by: Mark Smeltzer | August 08, 2009 at 01:33 PM
Mark,
You say: “(D2) S ought to be treated in the way that a P ought to be treated if S is a P
I see no significant difference between (D) and (D2). I also see a substantive definition in both cases -- not mere tautologies.”
Consider “(D3) S ought to be treated in the way that all members of the class P ought to be treated if S is a member of the class P.” Surely, you will agree that (D3) is a tautology.
What do (D) and (D2) add to the obviously tautological (D3)?
You say: “According to someone like Galen Strawson, there is the suggestion it should go the other way around: (MR GS) S deserves to be treated as a P if S is a P and S is MR for being a P. If we accept (D) on its own merits, before we begin to consider the question of moral responsibility, we will have powerful reasons to reject a principle like (MR GS).”
One problem that I have with your account is that it seems to lead to the absurd conclusion that things like exacto-knives can be morally responsible.
Consider:
(D EK) K ought to be treated as an exacto-knife ought to be treated if K is an exacto-knife.
(FW EK) K has free will if O would be able to reliably apprehend counterfactuals about K by observing K.
(D EK) is tautologically true. (FW EK) is true given our (arguably) indeterministic universe.
Given your formula “Moral Responsibility = Desert + Free Will” it seems to follow that an exacto-knife can be morally responsible.
I think there is a valid challenge here not just to your account of moral responsibility but to all compatibilist accounts. If individuals in a strictly deterministic universe can be morally responsible for things, why can’t atoms, molecules, thermostats, and so on be similarly responsible?
Superficially, the compatibilist answer will probably be "Because atoms, molecules, thermostats, and so on are not conscious."
But that just delays the real question: what do conscious deterministic processes offer over and above unconscious deterministic processes that magically opens the door for moral responsibility?
Posted by: Brian Parks | August 09, 2009 at 01:08 AM
Brian,
As you have stated (D3), it is not a tautology. Perhaps if you move the "if" to the beginning, it would be more obvious:
(D3) If S is a member of the class P, S ought to be treated in the way that all members of the class P ought to be treated.
However, if by labeling it as a tautology you simply mean that it seems obviously true, then I agree with you! Glad you're on board :)
Regarding treating mundane object as "morally responsible", keep in mind the strict sense in which (FW) and (FW*) are defined. The function of moral responsibility is providing a bridge for getting to what S deserves when our beliefs about S's nature are obtained indirectly, and (FW) and (FW*) are those bridges. For example, we can directly obtain warranted beliefs about how sharp a knife is by lightly touching its blade. (FW) and (FW*) do not seem the least bit relevant in this case. Hence, I seriously doubt whether (MR) or (MR*) are relevant to the knife. But even if they are, so what? In other words, I am not sure why it should be considered problematic for the view if moral responsibility happened to be broader than we had thought it was.
I've already granted to Kip that (FW) and (FW*) are extremely minimalistic, and in order to actually put forward an account of strictly agential free will, much would need to be said about the interplay between an agent's nature, its conscious mind, its actions, and the meaning of those actions with respect to what they tell of the agent's nature. I am interested in putting forward an account like this, but I don't see it as strictly necessary in order to talk about the broader concept of how desert functions with respect to objects that we cannot directly obtain information about (agents are surely of this type).
Posted by: Mark Smeltzer | August 09, 2009 at 01:16 PM
Mark,
You say: “As you have stated (D3), it is not a tautology. Perhaps if you move the "if" to the beginning, it would be more obvious: (D3)If S is a member of the class P, S ought to be treated in the way that all members of the class P ought to be treated.”
No Mark, it is a tautology.
Consider:
(A) S is a member of the class P
(B) There is a way W that all members of the class P ought to be treated
(C) S ought to be treated in way W.
When forced into precision, D2 and D3 say: if (A) + (B), then (C). But that claim is true simply in virtue of the stated meanings of (A), (B) and (C)! Look at the meanings!
If S is a member of the class P, and if all members of the class P—i.e., S, T, U, and so on—ought to be treated in way W, then S ought to be treated in way W! But we already said that S ought to be treated in the way W when we said that all members of the class P—i.e., S, T, U, and so on—ought to be treated in way W!
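Brian's point here can be checked mechanically: once (A) and (B) are in hand, (C) follows by instantiation alone. A minimal sketch in Lean — the names `OughtW` and `d3_is_logical` are my own labels, and treating "ought to be treated in way W" as an opaque predicate is an assumption of the sketch:

```lean
-- (A) S is a member of the class P          : hA
-- (B) every member of P ought to be treated
--     in way W                              : hB
-- (C) S ought to be treated in way W        : the conclusion
theorem d3_is_logical {α : Type} (P : α → Prop) (OughtW : α → Prop)
    (S : α)
    (hA : P S)
    (hB : ∀ x, P x → OughtW x) :
    OughtW S :=
  hB S hA  -- pure instantiation; no further content is used
```

Note that the proof term does no work beyond instantiating (B) at S, which is one way of cashing out the claim that all the lifting is done in establishing (A) and (B).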
You say: “However, if by labeling it as a tautology you simply mean that it seems obviously true, then I agree with you! Glad you're on board :)”
Not only is it obviously true, it’s synthetically empty. It adds nothing that isn’t already given in the meaning of its phrases.
100% of your lifting will be done establishing (A) and (B). Once you’ve established (A) and (B), there is no lifting required to establish (C).
You say: “Regarding treating mundane object as "morally responsible", keep in mind the strict sense in which (FW) and (FW*) are defined. The function of moral responsibility is providing a bridge for getting to what S deserves when our beliefs about S's nature are obtained indirectly, and (FW) and (FW*) are those bridges.”
What?!
So, if I can somehow get direct knowledge of your nature before you engage in any behavior, I can start punishing and rewarding you then?
Here is an interesting scenario for you to address:
Suppose a bad neuroscientist injects you with a chemical designed to irreversibly modify your brain chemistry and give you the nature of a horribly violent criminal. During the injection, a good neuroscientist enters the lab, sees what the bad neuroscientist is trying to do, and kills him in your defense. The good neuroscientist removes the injection and then conducts futuristic imaging of your brain to determine if you have in fact developed a violent criminal nature. “Oh no”, he says, “It's happened! You’ve become a violent criminal! The condition is irreversible!”
The good neuroscientist knows that you have the nature of a horribly violent criminal. He gained this knowledge directly, through futuristic imaging of your brain.
Does he need to dabble with the ‘bridge’ of moral responsibility in order to start punishing you? Does he need to wait for you to actually do something wrong? On your view, no. He can get right to it. Make you suffer. You deserve to suffer—after all, you have the nature of a violent criminal.
That’s absurd.
You say: “For example, we can directly obtain warranted beliefs about how sharp a knife is by lightly touching its blade. (FW) and (FW*) do not seem the least bit relevant in this case. Hence, I seriously doubt whether (MR) or (MR*) are relevant to the knife. But even if they are, so what? In other words, I am not sure why it should be considered problematic for the view if moral responsibility happened to be broader than we had thought it was.”
If a view implies that exacto-knives can be morally responsible, I consider that to be a problem.
At the same time, I don’t think a compatibilist can really give a principled explanation for why, in a deterministic universe, lower animals, insects, plants, and exacto-knives are not morally responsible for the behaviors they manifest, and yet human beings are.
You say: “I am interested in putting forward an account like this, but I don't see it as strictly necessary in order to talk about the broader concept of how desert functions with respect to objects that we cannot directly obtain information about (agents are surely of this type).”
Well, I do have to say, your approach is highly creative and novel. Supposedly, that's a good thing for philosophers seeking publication. Still, if I'm being honest, I think it fails miserably ;-) You have to get rid of those reductios--the prepunishment reductio and the exacto-knife reductio.
Posted by: Brian Parks | August 10, 2009 at 02:02 AM
Brian,
If you think that your (A) and my (D) are tautologous, then who cares? All that would mean is that (A) is a restatement of (D) and that (D) is a restatement of (A). If this is the case, then as long as (A) is true it means that (D) is true. So, I could say here that your point about tautologies is moot. (Though I disagree that (D) is a restatement of (A), it seems neither-here-nor-there with respect to the question of whether (D) is true.)
Regarding prepunishment, the account is not committed to saying that it would be possible for anything short of God to obtain direct knowledge of persons. Moreover, this account is agnostic regarding whether a person's nature is malleable -- it could easily turn out to be the case that the answer is no.
Regarding exacto-knives and moral responsibility, keep in mind that we're using precisely defined terminology here. It isn't fair to conclude that "the theory is false because it implies that exacto knives are morally responsible". The theory implies something like this: "if someone learns that an exacto knife has the potential to cause grievous bodily harm by being cut by one, that person ought to treat that object with the appropriate respect reserved for sharp, dangerous objects." That's all the account is committed to saying, and I do not see anything troubling in that statement.
We could easily imagine someone who had never seen an exacto knife finding one lying on a workbench and cutting his hand while picking it up. It would have been possible for him to have obtained direct knowledge of the knife's cutting potential by simply observing the appearance of the blade, but that does not mean that the subject actually did so. And in this case, the subject did not do so and happens to have learned about that cutting potential only by observing that he has been cut. So, this (in a way) seems to satisfy (FW*), and thus it would be fair to treat the exacto knife as a member of a certain class of objects (viz., sharp objects) based on this newly obtained information.
When an exacto knife cuts, it means that the knife is dangerous. Persons are much more complex than exacto knives. When a person does something bad, it could mean any number of things. Moreover, just because a person has not done anything bad, it does not mean a person is good. The underlying nature of a person is much harder to "detect" than that of an exacto knife.
Posted by: Mark Smeltzer | August 10, 2009 at 02:35 PM
Mark,
“If you think that your (A) and my (D) are tautologous, then who cares? All that would mean is that (A) is a restatement of (D) and that (D) is a restatement of (A). If this is the case then as long as (A) is true it means that (D) is true. So, I could say here that your point about tautologies is moot. (Though I disagree that (D) is a restatement of (A), it seems neither-here-nor-there with respect to the question of whether (D) is true.)”
To avoid belaboring the point, I’ll just say this. If you can establish that “S is a member of the class P”, and that “There is a way W that all members of the class P ought to be treated”, then I will gladly grant you that “S ought to be treated in way W.”
As for your original conjecture,
(D) “If S is a P then S ought to be treated as a P”
I think you should change it to something like,
(D4) “The way that S ought to be treated is entirely a function of the kind of nature that S has.”
That, in my view, is a more effective way of expressing what I think you mean to say.
You say: “Regarding prepunishment, the account is not committed to saying that it would be possible for anything short of God to obtain direct knowledge of persons.”
But still, you are committed to saying that God is justified in prepunishment. More importantly, you are committed to the claim that God can create punishment desert in an individual out of thin air, without any prior involvement from the individual. All he has to do is create the individual with an evil nature.
As a moral anti-realist, I am quite confident that the property that we call 'desert' does not exist as a real, objective property of anything in the universe. It is just a way of putting a word on certain retributive feelings that our organism has evolved to experience, feelings that the universe does not share.
However, even with my moral realist hat on, I have to admit that nothing could be more counter-intuitive to me than the claim that a person can ultimately deserve pain and punishment simply because God made the person to have an evil nature. If that is what your theory implies, then in my view it is already dead-on-arrival.
You say: “Moreover, this account is agnostic regarding whether a person's nature is malleable -- it could easily turn out to be the case that the answer is no.”
Right, and that’s still a problem because it could just as easily turn out that the answer is yes. If that is the case, then absurdity immediately follows: God—or a person with access to sufficient technology—can generate desert in an individual purely through manipulation. If, for example, I manage to get inside Jesus’ brain, tweak things around, and give him Hitler’s nature, then right then and there He will deserve whatever Hitler deserves.
On a side note, is it not a bit peculiar that the possibility that a person’s nature can change creates problems for a theory of desert and moral responsibility? One would think that if evil people could theoretically change their natures, that this would strengthen the argument for treating them in a certain way, i.e., for punishing them for what they do. On your view, the existence of such an ability causes the entire theory to collapse into absurdity.
Posted by: Brian Parks | August 12, 2009 at 03:41 AM
Brian,
Sounds like you are getting hung up on the ethics side of things, which is certainly irrelevant to the question of moral responsibility. An account of desert and moral responsibility does not need to specify what an agent deserves. Rather, the account just needs to establish how an agent deserves.
Regarding prepunishment, there are (at least) two types of prepunishment. The first is one that I don't think anyone should have a problem with in principle: if it is possible to know what an agent deserves now, then it is fair to treat that agent now according to what he deserves now -- and I do not think there is sufficient reason to believe that it raises any practical problems either. The second kind of prepunishment is one that is not compatible with (D): if it is possible to know what an agent will deserve in the future (because his nature will have changed from what it is now), then it is fair to treat that agent now according to what he will deserve (once his nature becomes what it will be like) in the future. This second form of prepunishment is incompatible with (D), and I reject it on that basis. The second form seems to be the kind that Smilansky raised as a criticism of compatibilism in his recent paper. I'm not sure which kind you have in mind.
Moreover, I am not sure what role you want prepunishment to play in your criticism. Maybe it is like this: either (D) is false or prepunishment in principle is possible; prepunishment is not possible; therefore (D) is false? If this is how your argument is supposed to play out, how would you defend the premise that prepunishment is not possible? To defend that premise you would need to offer an alternative account of desert -- an alternative to (D). Otherwise, the criticism rings hollow. (Especially considering that the first kind of prepunishment is simply a restatement of (D).) Moreover, the first premise in this argument is false regarding the second type of prepunishment: the truth of (D) is compatible with the falsity of the second form of prepunishment.
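For what it's worth, the argument form being reconstructed here is valid as stated (a disjunctive syllogism), so any dispute really does come down to the premises rather than the form. A quick Lean sketch, with propositional names of my own choosing:

```lean
-- Reconstruction of the conjectured criticism:
--   (P1) either (D) is false or prepunishment is in principle possible
--   (P2) prepunishment is not possible
--   (C)  therefore (D) is false
-- The inference is valid; everything turns on whether (P1) and (P2) hold.
theorem reconstructed_argument (D_holds prepunishment_possible : Prop)
    (p1 : ¬ D_holds ∨ prepunishment_possible)
    (p2 : ¬ prepunishment_possible) :
    ¬ D_holds :=
  p1.resolve_right p2  -- eliminate the right disjunct using (P2)
```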
Regarding whether an agent's nature can change, I reject your claim that the account becomes absurd if people's natures can change. Rather, if (D) is true and people can change their natures, it simply means that what they deserve changes over time. However, according to (D*), how people deserve to be treated in practice is a function of how their nature appears to an observer -- this allows for observational change, regardless of whether the person's true nature is changing. So, this account seems theoretically compatible with both types of change. If an agent's nature can change, it surely makes things more difficult for us to determine who deserves what, but that is a practical problem.
This account is an attempt to put forward an explanation of how desert, and ultimately moral responsibility, actually works. Therefore, the goal is to see just how far we can get with those concepts -- I'm not doing it the other way around.
Anyway, thanks for the comments. I personally don't see anything troubling in the points you raise, but I will be sure to add these concerns to my agenda of points to address in more detail.
Posted by: Mark Smeltzer | August 12, 2009 at 11:45 AM