Consider the following manipulation scenario.
I implant in your brain a radio-controlled neurological device that allows me to manipulate all of the psychological forces (PFs for short) that guide you in your choices—that is, all of the feelings, emotions, sensitivities, motivations, dispositions, desires, aversions, beliefs, and so on that hold sway in your mind. Using the device, I can make you feel pleasure, pain, guilt, pride, calm, anxiousness, anger, compassion, and so on, each in response to whatever stimuli I specify. I can make you love a certain kind of food, I can make you hate members of a certain race, I can make you romantically attracted to a certain person, I can do all of that. More importantly, I can control the degree and intensity of each of your states of mind.
The one thing that the device does not allow me to control, however, is your ability to choose. You retain that ability. Though I determine your psychological inputs, you determine the choices that follow from those inputs. They remain “up to you.” You pull the trigger on them.
Now, suppose that I use this device (and other hypothetical powers that I have) to steer you into the exact same external situation that Bernie Madoff was in immediately prior to one of his immoral choices--let's say in an office somewhere in Manhattan, deliberating whether to push "enter" on the computer and transfer a potential client's funds into the Ponzi scheme. I then give you the same internal PFs--the same feelings, emotions, sensitivities, motivations, desires, beliefs and so on that were present in his mind at the time of the choice. In physical terms, I put your brain in the exact same neurological state that his brain was in, and I let things go from there to see what happens.
The question is, do you make the same immoral choice that he made? If our universe is deterministic, or more specifically, if the human brain is a deterministic physical system, then the answer is necessarily yes. The question then becomes, are you morally responsible for making that choice?
In my view, we have to answer no. If we answer yes, we force ourselves into an absurd conclusion: that, if given access to the requisite technology, I would be able to make you morally responsible for whatever I want—a theft, a rape, a murder, a holocaust, anything. On such an analysis, science would in principle be able to afford me total, unfettered control over your moral destiny. That is not an acceptable conclusion.
On a theoretical level, we can bite the bullet and accept that you would be morally responsible for making the choice. But what will we say when someone actually goes through with this experiment in real life, when someone actually manipulates a brain—maybe my brain or your brain—in the stated way, so that a real person—maybe you or me—actually ends up facing a life in prison or an eternity in hell? Will our view change? I certainly hope that it will.
If we are being honest with ourselves, we have to answer no. But if we answer no, if we say that you aren’t responsible for making the immoral choice, then how can we say that Madoff is responsible for making that choice? It’s the same choice, is it not? The only difference is that in your case, a person brought about the PFs that led you to make it, while in Madoff’s case, Nature brought them about. But why would that difference make a difference as to your guilt or innocence? Why would the fact that your PFs were brought about by a personal force rather than by a non-personal force absolve you of moral responsibility for a choice that you yourself obviously made?
It seems clear to me that if I put you in Madoff’s exact circumstances and I give you a choice, and you end up making the same choice, then you deserve the same judgment and punishment for that choice that he deserves. You took the same test and scored the same grade. If you passed, then he passed. If you aren't responsible, then neither is he.
To compatibilist gardeners: how would you respond to this manipulation case? Are you morally responsible for the choice? If not, how can Madoff be responsible? If so, how can a conception of moral responsibility be intuitively plausible if it allows one person to guarantee that another person will become guilty of something?
Shifting the focus to libertarianism, it is commonly assumed that manipulation cases do not threaten the libertarian position in the way that they threaten the compatibilist position. This assumption, in my view, is wrong. Libertarianism offers no meaningful advantages over compatibilism when it comes to problems of manipulation.
On a superficial level, how might a libertarian attempt to escape from the problem? Well, if our universe is indeterministic, or more specifically, if the human brain is an indeterministic physical system, and I put you in Madoff’s exact internal and external circumstances, I won’t be able to guarantee that you will make the same choice that he made. Your choosing differently than he did will be compatible with your having a brain in the same state that his brain was in (with the same PFs). So, by accepting that you would be responsible for the choice, a libertarian doesn’t have to deal with the dilemma of my being able to use technology to seal your moral destiny. The fact of indeterminism prevents that dilemma from emerging.
But this is hardly a solution to the problem. To overcome the obstacle of indeterminism, all I have to do is run the experiment over and over again. If you choose differently than Madoff the first time through, I can just steer you back into Madoff’s circumstances, remanipulate your brain and your PFs to match his, and then release you, allowing you to choose again.
The fact that Madoff made a certain immoral choice means that there is a non-zero probability that a brain in the state that his brain was in will arrive at that choice. Thus, given enough trials with your brain in that state, you will eventually make the same choice that he made. It may take a million trials, but we're going to get there eventually.
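To spell out the arithmetic behind "given enough trials": if each rerun carries some non-zero probability p of producing the immoral choice, then the probability of that choice occurring at least once across n independent reruns is 1 - (1 - p)^n, which approaches 1 as n grows. Here is a minimal illustrative sketch in Python; the value of p is a hypothetical placeholder, not a claim about actual brains.

    # Illustrative only: p is a made-up per-rerun chance of the immoral choice.
    # The point is just the arithmetic: any non-zero p pushes the cumulative
    # probability toward 1 as the number of reruns grows.
    p = 1e-6
    for n in (10**3, 10**6, 10**7, 10**8):
        prob_at_least_once = 1 - (1 - p) ** n
        print(f"n = {n:>9}: P(immoral choice at least once) = {prob_at_least_once:.6f}")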
What happens when we do get there? Say I run the experiment 1,000,000 times, and on the first 999,999 times, you make a different choice, a moral choice. But on the last time, you make Madoff’s choice. Are you any less responsible for that choice than Madoff was for his original choice? Obviously not, and therefore you will deserve the same judgment and punishment for that choice that he deserves.
Does the outcome of the previous 999,999 choices affect whether you are morally responsible for the 1,000,000th choice, the one that was immoral? It certainly would be puzzling for someone to answer yes. Think about how ridiculous the same answer would be if made in the reverse direction. “Yeah, I made a bad choice the first time through, I killed somebody, but I’m not morally responsible for it, because when you reran the scenario 999,999 times, putting me in the same psychological circumstances and giving me additional chances to kill someone, I chose not to kill every time.” Congratulations, but you still made a choice to kill somebody!
And so I pose the question to libertarians: If I can successfully run the manipulation scenario an infinite number of times, then how can the fact of indeterminism help anything? If I run the Madoff manipulation scenario 1,000,000 times on a person, and on the last run, they make the immoral choice, are they morally responsible for having made that choice? Do they deserve the same judgment and punishment for that choice that Madoff deserves for his? If not, why not? If so, do you think it is problematic that your conception of responsibility allows one person to guarantee that another person will end up guilty of something?
Hi Brian,
As long as the impossibility of your thought experiment does not prevent you from producing exactly the same circumstances, why not assume you have the power to turn me into Madoff (or a Madoff clone)?
If I am not exactly Madoff, have you left even a shred of my character and values, habits and preferences, my current feelings and desires?
If not, have you then not reduced me to Madoff?
Assuming you leave even a tiny difference, I think I can show you how your 1,000,000 replays (a variation of Peter van Inwagen's Mind argument in his 2000 article "Free Will Remains a Mystery") can be resisted.
Your random outcomes (and van Inwagen's) are based on the radical and mistaken Libertarian view held by Bob Kane, Laura Ekstrom, Mark Balaguer, and others that there is random indeterminism directly in our decision.
Randy Clarke calls this freedom in the decision "centered" and "directly free" in his Libertarian Accounts of Free Will.
My more "modest" libertarianism, like Dennett's and Mele's, uses indeterminism only in the generation of alternative possibilities.
If I had that shred of character left, I would never do what Madoff did - certainly not for random reasons. And if I did, as you see I would not be responsible.
Additionally, my free generator of ideas might come up with something creative and new whatever my prior life experience, so there is even hope for a real Madoff (too late now?).
Your whole scenario is another example of the flawed Randomness Objection in the Standard Argument Against Free Will.
Posted by: Bob Doyle | July 01, 2009 at 01:58 PM
Bob,
Thanks for the response.
You say: “If not, have you then not reduced me to Madoff?”
No. I can’t reduce you to Madoff because Madoff is in jail at the MCC right now. He can’t be in jail at the MCC and in an office in Manhattan at the same time.
You say: “If I am not exactly Madoff, have you left even a shred of my character and values, habits and preferences, my current feelings and desires?”
I don’t know. I’ve given you Madoff’s exact PFs and I’ve put your brain in the exact state that his brain was in prior to one of his immoral choices. I don’t have exhaustive knowledge of your present PFs or your present brain state, nor do I have exhaustive knowledge of the PFs and the brain state that Madoff exhibited prior to the choice in question, so I can’t specify the extent to which there will be overlap or preservation. Hopefully, the overlap and preservation will be minimal ;-)
I suspect that your response will be that you will no longer be you under this kind of change. You will die, and some other being that is numerically distinct from you will emerge. Others have made this response in discussing manipulation; personally, I find it evasive and unpersuasive.
As an intuitive challenge to the response, ask yourself: if I told you that, after I manipulate your PFs and your brain state, I am going to torture the resultant being, i.e., the being that I claim is still you and that you claim is an entirely new person, would you be scared for yourself, scared of the prospect of being tortured? If you would no longer be you after the manipulation, then you shouldn’t be any more scared than you would be if I told you I was going to torture a total stranger—after all, that’s exactly what I would be doing on your view, is it not?
The response that you would become a numerically different person under the manipulation raises the question: what are "you"? Personally, I am skeptical about the existence of the self, but if I had to answer the question, I would propose that "you" are a center of subjectivity that experiences phenomena. On this proposition, it is entirely conceivable that you would continue to exist through the change. You would still be there, the same center of subjectivity; what would be different would be the feelings, desires, motivations, beliefs, moods, and so on that you experience—the "what it’s like" of your mental existence.
To frustrate the scenario, one might claim that certain PFs are intrinsic to your "essence", such that if you were to lose them in a manipulation scenario, you would die and a new person would emerge. But this claim creates numerous problems. First, there is no non-arbitrary way to specify what your "essence" is. Is it "loving spinach", "hating onions", "caring about others", "having a tendency to be nice", "having an inclination to be selfish", "having a disposition to get angry"? What is it? Depending on how we answer this question, each of us will have died many times in our organic lives, because as individuals we are constantly changing and developing, sometimes in radical ways. Second, the view leads to unacceptable conclusions in the arena of moral responsibility. If Madoff uses drugs in prison and develops a mental illness—say schizophrenia or Alzheimer’s—and his personality completely changes as a result, the human being in his space will be an entirely new and distinct human being, no longer the Madoff that is in prison right now. We will therefore have to release him. Are you ready to accept that? If so, you might as well encourage prisoners to permanently damage or alter their brains, as that will be a sure-fire way to get out of jail.
As for the questions I posed in the original thread, I’m not clear as to what your answers are. I will ask them again to clarify.
You say: “If I had that shred of character left, I would never do what Madoff did - certainly not for random reasons.”
What if you didn’t have that shred of character left? Assume that I put your brain in the exact same physical state that Madoff’s brain was in immediately prior to some immoral choice that he made. Given the nature of the brain, is it certain that you would make the same choice that he made?
If it is not certain, then suppose I repeat the above scenario over and over again ad infinitum. Is there any chance that you will eventually choose what Madoff chose?
You say: “And if I did, as you see I would not be responsible.”
I don’t understand this statement. Are you saying that if I manipulated you as above, and you made the same choice as Madoff did, that you would not be responsible for the choice? If so, why would you not be responsible? Why would you be any less responsible than Madoff?
Posted by: Brian Parks | July 01, 2009 at 05:22 PM
Brian,
If I have not a shred of my own character left and my "adequately determined" will is selecting from among possibilities for action generated from "the same feelings, emotions, sensitivities, motivations, desires, beliefs and so on that were present in his mind at the time of the choice," I think you are very close to getting me to act like Madoff - and being responsible for my Madoff decisions.
May I ask if you have replaced all my memories and experiences?
If not, since my Cogito Model for the generation of alternative possibilities uses random combinations and variations on my past experiences, it is possible that I/Madoff will come up with alternatives that Madoff might never have produced.
This does not guarantee that the new me/Madoff will choose any of these possibilities, given that I assume my character and values have all changed.
So I need to know whether you have replaced all my memories and past experiences, and whether you have programmed in the new habits and preferences (the "fixation of belief" as Charles Sanders Peirce calls it, the "self-forming actions" of Bob Kane, etc.) that are consistent with the past me.
If these are all changed, that is what I meant by changing me into (reducing me to) not the physical Madoff now in jail, but shall we say a Madoff clone?
Anyway, the shred of character I referred to would be the character and values (beliefs), the habits and preferences that have developed from my past experiences.
If all of these are gone, I think I must grant your manipulation is complete.
Please see my Cogito Model of free will to understand how free generation of alternative possibilities does not reduce our agential control and responsibility that is the result of evaluation and selection from those partially random options.
Cogito Model
Posted by: Bob Doyle | July 02, 2009 at 03:54 PM
Kip, I see your sci-fi scenario and raise you another one.
In the future, humans meet an intelligent space-going race who live in hives with one queen and many non-reproducing offspring of the queen. Humans debate whether each organism should be considered a "person", or only each hive, collectively. Humans taking the latter view point to the extraordinary degree of cooperation and groupthink among the members of a hive.
After a while, humans discover that the queen in fact controls the psychological forces (PFs) of her daughters, just as you have described above. Her control is seamless and perfect. Doesn't this clinch the argument in favor of one person per hive?
In your scenario, you have created a tidy little hive with one queen and one worker bee. Blaming the worker bee for stinging you makes about as much sense(*) as blaming the right half of a normal human's brain when his left hand smacks you. Yes, the right brain did it, voluntarily, and even has enough brainpower to do it all on its own. But it's not all on its own, and that's important. There's a more apropos, more well-delineated unitary system, a larger whole agent to blame.
* Less sense, actually. The right brain is an equal partner, the worker bee is a slave. Somewhere in between blaming the right brain, and blaming the hand that smacked you - that's where blaming the worker bee goes.
Suppose Kip is about to MadoffWithMe and then torture the resulting person. I would definitely be scared for the torture victim. Would this be selfish fear or empathic fear? I don't think there's really any difference other than the label one puts ("me" or "them") on the future person. (John Perry and Derek Parfit on personal identity are instructive here.) In this case I would label the person "not me, except, for many legal purposes, me", which doesn't help much.
Posted by: Paul Torek | July 02, 2009 at 05:12 PM
Paul,
I haven't posted in this thread yet. Perhaps you think that Brian has been manipulated into someone just like me.
Still. I have to defend my usual ally in our battles against compatibilism. I'll try to post some more thoughts later.
Posted by: Kip | July 02, 2009 at 06:51 PM
Bob,
Long post coming here; it's divided into three parts: (1) we assume that I do replace all your memories; (2) we assume that I do not replace all your memories; (3) a brief question about the possibility of moral responsibility under your Cogito Model.
You say: “I need to know whether you have replaced all my memories and past experiences.”
The answer can be yes or no; it doesn’t matter. Problems will emerge for moral responsibility regardless.
Let’s divide the experiment into two possibilities: first, that I replace all of your memories with those of Madoff, and second, that I leave certain core memories of yours intact.
First Possibility: All Memories Replaced
In this possibility, I put your brain in the exact physical state that Madoff’s brain was in immediately prior to an immoral choice that he made. In psychological terms, I make all of your mental goings-on—including your memories—match his. I then steer you into the exact circumstances that he was in at the time, and I let go so that you can make the choice for yourself.
Surely, if I run this same scenario on you over and over again, you will eventually choose to do what Madoff did. The question will then be: are you responsible for that choice? No matter how you answer, you will undermine your position. If you answer no, then you will have to admit that Madoff is not responsible either. If you answer yes, then you will have to embrace the conclusion that technology can theoretically afford me the ability to guarantee that you will become morally responsible for anything that I choose: a theft, a rape, a murder, a holocaust, whatever. That is not an intuitively acceptable conclusion.
Now, you made the comment: “If these [memories] are all changed, that is what I meant by changing me into (reducing me to) not the physical Madoff now in jail, but shall we say a Madoff clone?”
If you want, you can describe the scenario in that way, you can say that the manipulation has reduced you to a Madoff clone. The point is that you have been reduced to the Madoff clone, not someone else. If I torture the being that emerges from the manipulation, you—not some random stranger—will be the one to feel the pain.
The conception of self at work in my analysis is that of a center of subjectivity. The phenomenal content that this center experiences can change in radical ways, as when I transfer you from a torture chamber into a warm, cozy bed. But the underlying identity of the center remains the same. It is the same center having the different experiences.
Surely, you would agree that if moral responsibility is possible, the existence of a self as an enduring center of subjectivity would not preclude that possibility. Why would the fact that I am a center of subjectivity that can continue to exist despite radical neurological and phenomenological changes mean that I cannot be morally responsible for my choices? What does one thing have to do with the other?
So, if your conception of moral responsibility is tight, it should not pose a problem for you to assume, for the sake of argument, that you would remain the same underlying self despite the aforementioned manipulation. Thus I ask: if, on that assumption, given repeat manipulations, you eventually make the same immoral choice that Madoff made, would you be morally responsible for making that choice?
If you insist on claiming that possessing certain PFs—in this case, certain memories—is a necessary condition for your existence, such that the absence of those PFs would cause you to die and would cause an entirely distinct individual to emerge, my challenge to you will be as follows: specify the PFs! Specify the mental content that, if manipulated, would cause you to die and give rise to a numerically distinct being. You won’t be able to, at least not without generating a plethora of absurd conclusions in the process.
Second Possibility: Core Memories Left Intact
In this possibility, I manipulate your feelings, emotions, desires, beliefs, and so on to match those that Madoff possessed immediately prior to one of his immoral choices, and I steer you into the circumstances that he was in at that time. However, I leave certain core memories of yours intact.
I take it that you will agree that you will be the numerically same individual—the numerically same Bob Doyle—that you were before the manipulation. So we can put the issue of personal identity aside.
A relevant comment you made: “Since my Cogito Model for the generation of alternative possibilities uses random combinations and variations on my past experiences, it is possible that I/Madoff will come up with alternatives that Madoff might never have produced.”
I understand this comment and I grant its underlying claim: given the Cogito Model, and the fact that there are different memories encoded in your neural network, your choice might be open to possibilities that Madoff’s choice was never open to.
Let’s turn our attention to the moment that I complete the manipulation and release you to make the choice. Surely, you will agree that it is possible that you will choose as Madoff did. That is, at the moment that I let go, your choice will be open to the immoral possibility, though it may not yet be determined to that possibility.
Let’s assume that you do not make the immoral choice. You choose one of the other possibilities. Here is what I am going to do. I am going to remanipulate your PFs and steer you back into the same situation that you were just in. I am then going to let go and allow you to choose again.
The question is, if I do this an infinite number of times, will there eventually be a time where you make the immoral choice? If so, the next question is: will you be morally responsible for making that choice?
If the answer to the second question is no, then you can no longer claim that Madoff is responsible for the choices he made. If the answer is yes, then you have a deeper problem: your conception of moral responsibility leads to an absurdity. It implies that technology can in principle allow me to seal your moral destiny, that is, guarantee that you will become morally responsible for a theft, a rape, a murder, a holocaust, whatever I choose. Contemplate that conclusion realistically, as if someone were actually going to manipulate you as soon as you finish reading this post. You will see that it is not an intuitively acceptable conclusion.
Now, back to the first question. You might claim that if you did not choose the immoral option on the first run-through, there will never be a subsequent run-through in which you will choose the immoral option. In other words, “adequate determinism” would ensure that all subsequent run-throughs turn out the same way.
All this would mean for the scenario, however, is that I didn’t go far enough with the manipulation. It would mean that I need to tweak more of your memories, make them more congruent with Madoff’s, so that “adequate determinism” leads you to choose as he did. Surely, you will agree that such a tweak would be possible without my having to replace all of your memories. Once the tweak is complete, and you make the immoral choice, the same problematic concerns discussed above will apply: you will either have to embrace a conception of responsibility that yields an absurdity, or you will have to deny Madoff’s responsibility.
Cogito Model
You say: “Please see my Cogito Model of free will to understand how free generation of alternative possibilities does not reduce our agential control and responsibility that is the result of evaluation and selection from those partially random options.”
I’ve read your Cogito Model and I agree that it is a more potent model than many of its prior alternatives. However, it is a model for "free will", and I am not concerned with "free will." The debate about "free will" is a semantic debate: whether the referent of the term exists depends on how one precisely defines the term.
What I am concerned with is moral responsibility. In my view, Galen Strawson has shown rather convincingly that moral responsibility is impossible. Consider,
(1) To make a choice, a person must be guided in her choice by certain principles of choice.
(2) To be responsible for a choice, a person must be responsible for the fact that she is guided by those principles of choice.
(3) To be responsible for the fact that she is guided by those principles of choice, a person must complete an infinite regress of choices.
(4) A person cannot complete an infinite regress of choices, therefore a person cannot be responsible for a choice.
How does your cogito model facilitate an answer to this argument?
Posted by: Brian Parks | July 02, 2009 at 10:28 PM
Paul,
I’ll interject and answer your questions, but I expect you to answer the ones I posed in the original post.
You say: “In the future, humans meet an intelligent space-going race who live in hives with one queen and many non-reproducing offspring of the queen. Humans debate whether each organism should be considered a "person", or only each hive, collectively. Humans taking the latter view point to the extraordinary degree of cooperation and groupthink among the members of a hive. After a while, humans discover that the queen in fact controls the psychological forces (PFs) of her daughters, just as you have described above. Her control is seamless and perfect. Doesn't this clinch the argument in favor of one person per hive?”
No. On the “one person per hive” ontology there is no way to account for the power of choice that each of the members has. What they choose remains “up to them”, remember? All the queen can do is manipulate their feelings, desires, beliefs, motivations and so on. She cannot actually pull the trigger on any of the specific choices that they make. They have to do that.
Now, be very careful how you respond here. If you try to say that their choices cannot really be “up to them” given that the outcomes of those choices are entirely a function of prior mental states that they do not control, you know exactly what I am going to do with that ;-) So don’t even try it.
You say: “In your scenario, you have created a tidy little hive with one queen and one worker bee. Blaming the worker bee for stinging you makes about as much sense(*) as blaming the right half of a normal human's brain when his left hand smacks you. Yes, the right brain did it, voluntarily, and even has enough brainpower to do it all on its own. But it's not all on its own, and that's important. There's a more apropos, more well-delineated unitary system, a larger whole agent to blame.”
This response assumes that we can either blame the queen bee or the worker bee, but not both. That assumption is wrong: if we want, we can blame both. We can blame the queen bee for inserting the bad PFs, and we can blame the worker bee for making the bad choice given the bad PFs.
Here's the punch: what we would be blaming the worker bee for is exactly what you blame Madoff for: making bad choices given a bad set of PFs.
Now, to my question. Assume that we live in a deterministic universe. I put your brain in the exact state that Madoff’s brain was in immediately prior to one of his immoral choices. I then put you in the exact circumstances he was in at the time and I let you make the choice for yourself. Assume that the nature of the self is such that the present "you" continues to exist through the manipulation. Trivially, you make the same immoral choice that he made. Are you morally responsible for making that choice? If not, how can Madoff be morally responsible for making it? Why does the fact that your PFs emerged artificially, in response to a personal force (a manipulator), absolve you of a guilt that would unquestionably apply to you if your PFs had emerged naturally, in response to a non-personal force (such as genetics and culture)? Please tell me why.
Posted by: Brian Parks | July 02, 2009 at 11:41 PM
It matters how the "psychological forces" are brought about. (I don't think it matters whether an agent or non-agential thing causes them.) Your description of the case provides little detail about this, though by saying that the agent is "manipulated" you suggest that the PF's are brought about in a responsibility-undermining way. Both compatibilist and incompatibilist accounts can include a condition that precludes responsibility when PF's are brought about in certain ways.
Posted by: R. Clarke | July 03, 2009 at 06:48 AM
"Why does the fact that your PFs emerged artificially, in response to a personal force (a manipulator), absolve you of a guilt that would unquestionably apply to you if your PFs had emerged naturally, in response to a non-personal force (such as genetics and culture)? Please tell me why."
Most folks would intuit that the manipulator bears responsibility for the choice since he installed the PFs, such that the choice is really more up to him than me. We are naturally inclined to assign responsibility to agents, not impersonal factors and situations, so we'll pick the closest (non-manipulated) agent to the choice to fix blame on. Absent a manipulator, this would be Madoff. This is a description of what I suspect is going on in terms of human psychology; I don't endorse our natural tendency to ignore impersonal factors and situations when assigning causal responsibility and addressing wrong-doing.
Posted by: Tom Clark | July 03, 2009 at 09:51 AM
Randy,
You say: “It matters how the psychological forces are brought about.”
Why does it matter?
You say: “Your description of the case provides little detail about this, though by saying that the agent is "manipulated" you suggest that the PF's are brought about in a responsibility-undermining way.”
I put your brain in Madoff’s state, I put you in his circumstances, and I let you choose. Depending on your choice, I repeat over and over again. Eventually you make the same choice that Madoff made. What more do you need to know?
Madoff chose to steal. His choice followed from his PFs.
You chose to steal. Your choice followed from your PFs, which were qualitatively identical to his PFs.
If he is responsible for his choice, why are you not responsible for yours?
To say “because my PFs were brought about by manipulation” doesn’t answer the question. Why does the fact that your PFs were brought about by manipulation prevent you from being morally responsible for your choice? Did you not make the choice?
What is it about “being morally responsible for a choice” that renders it compatible with “the PFs that guided the choice arose naturally” and yet incompatible with “the PFs that guided the choice arose through manipulation”?
You say: “Both compatibilist and incompatibilist accounts can include a condition that precludes responsibility when PF's are brought about in certain ways.”
What would be the basis for such a condition? Until you give a basis, you do not have an elegant theory of moral responsibility; what you have is a set of unrelated post-hoc claims pieced together to render the result you want.
Suppose that I propose that “gravity is repulsive.” You then come along and say “If gravity were repulsive, then things would not fall to the ground, they would rise to the sky.” I respond, “Oh, wait, I forgot to mention an important exception to my previous proposition: gravity is repulsive except on this planet, where it is attractive.”
Simply adding an exception like that after the fact isn’t going to satisfy anyone. You need to make sense of the exception, you need to explain why gravity is attractive here and yet repulsive everywhere else in the universe. Surely there is going to be a reason.
The exact same thing has happened here. You propose that “You are morally responsible for choices that you yourself make.” I come along and say “Suppose that I give you the same PFs as Madoff and I put you in his circumstances. Naturally, you choose what he chose. If what you say about moral responsibility were true, then you would be morally responsible for your choice. That's unacceptable.” You respond, “Oh, wait, I forgot to mention an important exception to my previous proposition: you are morally responsible for choices that you yourself make except in cases where the PFs that guide those choices arise through manipulation.”
Surely, you can understand why I would find such an account--with its mysterious "out of the blue" exception--to be unsatisfying. You need to provide a basis for the exception, you need to explain why the fact that your PFs were inserted by an outside agent prevents you from being responsible for choices that you yourself make in their presence.
Posted by: Brian Parks | July 03, 2009 at 05:51 PM
Tom,
You say: “This is a description of what I suspect is going on in terms of human psychology; I don't endorse our natural tendency to ignore impersonal factors and situations when assigning causal responsibility and addressing wrong-doing.”
Point taken. I am trying to get libertarians and compatibilists who embrace this natural tendency to justify it in terms of a coherent theory of moral responsibility. In other words, I want an answer to the question: if “being morally responsible for my bad choice” is compatible with “my bad choice was guided by a bad set of PFs that emerged naturally, through no fault of my own”, why is it not also compatible with “my bad choice was guided by a bad set of PFs that emerged artificially, through no fault of my own”?
I don't think I'm going to get an answer ;-)
Posted by: Brian Parks | July 03, 2009 at 06:16 PM
Brian (and Kip),
Sorry about the mix-up. With all these identity thieves populating our scenarios, sometimes it's hard to keep track of who's who ;)
I do not assume that we can't blame both MadoffWithMe and his manipulator. In fact, I do blame "both" of "them", I just do it collectively, as a single agent. Similarly, if I were to encounter a hive-mind I would attribute responsibility to the hive rather than individual organisms.
You say that on the "one person per hive" ontology there is no way to account for the power of choice that each of the members has. But it is not the job of such an ontology to account for all powers of choice, only to account for reasonably independent powers of choice. The notion of reasonable independence is a rough-and-ready one. Two randomly selected humans are more independent of each other than two typical siblings, who in turn are more independent of each other than two Siamese twins. But there is a radical difference between all those cases, on the one hand, and the two halves of a typical human's brain, on the other. The interdependence between Siamese twins may be impressive, but the interdependence between the two halves of one brain is in a whole 'nother league. Because this huge gap exists between cases of "separate minds" versus "same mind", we can use those terms very meaningfully in everyday life.
Your MadoffWithMe case may pose some challenges to our rough-and-ready understanding of sameness of agent. But when we look at the activity that you are manipulating, it falls closer to the "same mind" end of the spectrum. Both you and MadoffWithMe participated in an intention that the money should be deposited in the Ponzi scheme. You adjusted your manipulations as needed to set this outcome up; MadoffWithMe "pulled the trigger" on it. Your relationship to MadoffWithMe is less like the interaction between two normal minds cooperating in a conspiracy, and more like - not perfectly like, but more like - the interaction between the bulk of a normal brain and the motor cortex. At least when it comes to Ponzi schemes. Perhaps you take no interest in MadoffWithMe when it comes to, say, his music listening choices. In that case, those music choices may be more like the actions of a whole person rather than a sub-person (although they will of course be influenced by his mood, which is affected by the Ponzi schemes).
The hive queen (or the manipulator in your scenario) may not be able to pull the trigger on the worker-bee's actions. But by the same token, your left brain is powerless to pull the trigger on smacking someone with your left hand. Only your right brain can do that. (Let's ignore a few real-neurology complications and pretend that control is completely contralateral.) All your left brain can do is try to manipulate your right brain into doing so. Should we then reject the "one person per brain" ontology since it does not account for the powers of choice that each half-brain has?
Interestingly, if you suffer a severe stroke right after delivering the smack, such that the left half of your brain dies, the remaining right-brain-only person is still a person, and for all practical purposes the same person. This demonstrates that considering a larger unitary system to be a single person, does not necessarily prevent us from blaming a smaller remnant of the person, should the rest of the original person be removed from the picture.
I cannot answer your final question as asked, because I can't buy one of the stipulations. I can't assume that the nature of the self is such that the present "you" continues to exist through the manipulation. The problem is that the essence of self inheres in whatever underlies psychological continuity and connectedness. In the real world as we know it, that means certain fine structures and patterns of evolution of the brain. And in this manipulation scenario, those structures and patterns were radically disrupted. Such disruption is absolutely necessary to achieve the stipulated changes in psychological forces. The radical disruption destroys the earlier self.
But suppose there is an immaterial soul - wouldn't that make it possible that you survive Madoff-ization? No. Either the immaterial soul underlies psychological continuity or not. If we stipulate that it does, then your manipulations will be insufficient to Madoff-ize me; my same soul will carry my same personality. If it does not, then having the same soul is insufficient to make me the same person. You might as well point to the fact that MadoffWithMe has the same skin, undisrupted, on the bottom of his foot. Same sole or same soul, it makes no difference.
To say that "you" are a center of subjectivity that experiences phenomena is a bit one-sided, but already commits us to the importance of neural structures and processes. By severely disrupting those in the thought experiment, you undermine the survival of the "self". I would add that "you" are also a center of activity. In Parfit's language: the relationship between an earlier intention and a later action that executes it, is one of the elements of psychological continuity; continuity of character is another. And of course, disrupting those relationships is what your thought experiment is all about.
Posted by: Paul Torek | July 04, 2009 at 05:25 AM
One way to change someone's brain is to present her with a cogent argument. Another way, which I suppose would be available to the manipulator in your scenario, would be to skip the argument and just rearrange the neurons and their interactivity. Seems a morally relevant difference to me, one that a theory of moral responsibility should recognize.
Posted by: R. Clarke | July 06, 2009 at 06:25 AM
R.,
We human beings are hard-wired to act on cogent arguments, or, more precisely, on whichever argument among however many appears most appealing or convincing to us. Simply presenting us with a cogent argument is not enough to cause the act; however, presenting us with an irresistibly compelling argument will compel our act as effectively and completely as re-arranging our neurons.
In other words, the same argument within the same context will yield the same act. Such a manipulation, which very much amounts to an externally induced neural re-arranging, leaves us no real choice, and, therefore, no real justification for ascribing moral responsibility to that choice.
Posted by: George Ortega | July 06, 2009 at 08:29 AM
I'd be interested in hearing an "irresistibly compelling argument" about anything of consequence.
Posted by: brandon w. | July 07, 2009 at 01:52 PM
B.W.,
You'd better get your dissertation written and quit wasting time on blogs, or else! How's that?
Posted by: R. Clarke | July 07, 2009 at 02:20 PM
Ahh, but you see, I'm collecting irresistibly compelling arguments in order to put them in my dissertation. Just thought I might try to siphon off a few here. I figure it'll save you some red ink in the long run.
Posted by: brandon w. | July 07, 2009 at 02:35 PM
Brandon,
Regarding your statement, "I'd be interested in hearing an 'irresistibly compelling argument' about anything of consequence," I'd like to respond, but need you to say exactly what you find inconsequential. I'm guessing it's not the question of justifiable moral responsibility.
Unless, of course, you're jesting; in which case my too punny retort is that maybe you're jest running from consequences.
R.,
From your pass, I assume we agree that my July 6th assertion correctly describes neural rearrangement and irresistibly compelling arguments as equally failing to justify moral accountability.
Posted by: George Ortega | July 07, 2009 at 07:41 PM
Hi Brian, I'm a little puzzled by your treatment of 'psychological inputs'. Since this is somewhat tangential to your main concerns here, I've responded in a new post, here. (Comments welcome!)
Posted by: Richard | July 12, 2009 at 03:56 PM
Richard,
Whether or not the device controls one's ABILITY to choose is inconsequential to the question, and if the person otherwise lacked an ABILITY to choose, the question would be incoherent.
That the device controls one's decision-making procedures seems a matter quite distinct from the superfluous condition of the device's not controlling one's ability to choose.
Posted by: George Ortega | July 12, 2009 at 05:24 PM
Richard, the term I used to construct the scenario was 'psychological forces' (PFs), not 'psychological inputs.' I defined PFs to include what you call input states (feelings) as well as what you call dispositional states (inclinations, traits, etc.)
For those who would naturally object to my folk-psychological terminology, I offered a more precise 'neuroscientific' description of the scenario.
Here is that description: Free will advocates claim that human beings make choices. OK, suppose Bernie Madoff made an immoral choice C at time t. Let dt be an arbitrarily small positive number. At t-dt, Bernie Madoff had not yet made choice C.
The question: if I put your organism, including your brain, in the exact state that Madoff's organism and brain were in at t-dt, and I put you in the same external circumstances that he was in, would you make choice C at time t?
If so, would you be responsible for making that choice?
Now, if you don't make choice C, I'm going to repeat the above scenario over and over again until you do. Suppose that after 999 run-throughs, you finally make choice C. Are you responsible for making choice C on the 999th run-through? If not, why not?
What is your objection to this scenario? If you have no objection, how do you answer it?
Posted by: Brian Parks | July 12, 2009 at 06:36 PM
Brian:
How can you put someone's brain into the exact same state as someone else's, without killing that person and creating a copy of the other person?
Are you suggesting doing this slowly enough (molecule by molecule) to preserve continuity of consciousness?
Even in that case, there is a strong argument to be made that the old person is gone and a new person has been created, however gradually. Mother Theresa is just nothing like Charles Manson, even if you gradually change one into the other, molecule by molecule.
Posted by: Kip | July 12, 2009 at 06:57 PM
Paul,
You say: "I do not assume that we can't blame both MadoffWithMe and his manipulator. In fact, I do blame "both" of "them", I just do it collectively, as a single agent."
Are you claiming that they *really* are a single agent ontologically? Or are you just arbitrarily deciding to conceptualize them that way?
Some interesting questions for you ;-)
1) Suppose that MadoffWithMe is feeling pleasure. The manipulated being is feeling pain. What is the single agent you speak of feeling? Both? Neither?
Is this single agent conscious? If not, how can it be an agent? If so, what is the content of its consciousness at a given moment? Is it the content of MadoffWithMe's consciousness, or the content of the manipulated being's consciousness? Or both?
2) How does this single agent maintain its identity through time? Suppose that MadoffWithMe dies, so that all we now have is the manipulated being. Does the single agent you were referring to still exist?
3) Suppose that the universe is deterministic and that there is an agential God who set the initial conditions. Does it therefore follow that there is only one agent in this universe, God + the rest of us? Are none of us agents in our own right?
4) How do you recommend we punish this single agent?
You say: "I cannot answer your final question as asked, because I can't buy one of the stipulations. I can't assume that the nature of the self is such that the present "you" continues to exist through the manipulation. The problem is that the essence of self inheres in whatever underlies psychological continuity and connectedness."
What is that? Maybe it would help if you explain what you consider a self to be.
Posted by: Brian Parks | July 12, 2009 at 07:17 PM
Brian,
Answer to first July 12th question: In a deterministic universe, exact state + exact circumstance = same choice.
Answer to second July 12th question: No, because you made the choice for Richard.
Answer to third July 12th question:
Repeating the scenario over and over would yield a different result only in an indeterministic universe, which doesn't exist. However, even if it did, Richard could not be fairly held responsible for a choice that was made randomly.
Kip,
You could just as easily ask how anyone could put anyone's brain in any state. Are you implying that an impossible scenario should not serve as the basis of discussion for a determined will/free will debate?
If so, I think I agree with you. I have always thought it curiously unreasonable that some arguments for free will start with the premise "let us assume human beings have a free will."
Posted by: George Ortega | July 12, 2009 at 07:44 PM
Kip,
You say: "How can you put someone else's brain into the exact same state as someone else, without killing that person and creating a copy of the other person?"
There are conceptualizations of 'self' in which numerical identity is possible through the described change. I am asking the reader to assume one of those conceptualizations for the sake of argument. The only reason this request would be problematic would be if the conceptualizations were somehow incompatible with moral responsibility. I see no reason to think they would be. Do you?
You say: "Are you suggesting do this slowly enough (molecule by molecule) to preserve continuity of consciousness?"
Sure. That's one way.
If I changed Mother Theresa neuron by neuron, at what point would she cease to exist?
Suppose I give her a pill that alters her neurochemistry. The pill slowly takes effect, such that over a period of twelve hours, she gradually turns into a completely unrecognizable person, a violent animal--just like Manson.
Does the entity previously described as 'Mother Theresa' die in this process? If so, at what point does she die?
What if the change is more benign? She becomes different--more antisocial, more difficult--but nothing like Manson. Would she still die in this process?
How much change would be necessary for her to die?
You say: "Even in that case, there is a strong argument to be made that the old person is gone and a new person has been created, however gradually."
Really? What's the strong argument? "C'mon, it's obviously not her anymore!" is not a strong argument ;-)
The truth of the matter is that numerical identity is a mental construct, not a privileged feature of any reality. That is why these kinds of discussions tend to generate so much confusion.
Is the ship of Theseus, with all its wood fully replaced, the same ship that set sail from the harbor? Was the Michael Jackson that just died the same underlying person that sang "I Want You Back" in the 1970s? Are your Nike basketball shoes the same shoes they were before you popped the air pockets and changed the shoelaces? Good luck with those questions. Reality has no answer for them. Likewise, it has no answer for the question "Would I remain the same underlying person if this or that about me were changed?"
A meaningful conception of moral responsibility, however, requires that numerical identity be more than a mental construct. And so my example works inside that assumption: specifically the assumption that agents exist, that they endure through time, and that they can maintain their identity as substances despite drastic changes in their properties. These assumptions do not preclude or undermine the possibility of moral responsibility, so there is no reason (other than evasion) for MR advocates to quibble over them in responding to the scenario.
Posted by: Brian Parks | July 12, 2009 at 08:46 PM
Hi Brian - you used both terms. Here's a direct quote from your original post: "Though I determine your psychological inputs, you determine the choices that follow from those inputs." I just found this way of talking curious, and it suggested to me some interesting (albeit tangential) issues which I followed up on in my linked post.
[On topic: I share the view that your manipulator has simply replaced the target with a new person; but yes, that person is blameworthy insofar as he is vicious.]
Posted by: Richard | July 12, 2009 at 08:54 PM
Randy,
You say: "One way to change someone's brain is to present her with a cogent argument. Another way, which I suppose would be available to the manipulator in your scenario, would be to skip the argument and just rearrange the neurons and their interactivity. Seems a morally relevant difference to me, one that a theory of moral responsibility should recognize."
In what sense would it be a morally relevant difference?
Suppose that A and B have brains in some identical state S and that they make an identical choice C. Assuming that neither A nor B is morally responsible for having a brain in state S, why would the specific details of how their brains got to that state matter as far as their moral responsibility for making choice C?
Can you explain the difference in terms of your integrated agent-causal theory of free will? When an agent makes a decision, you say that "the decision is caused by her, and it is nondeterministically caused, in an appropriate way, by [her reasons]". Assuming that the agent is not in any way responsible for having the reasons that she has, why would their causal origins matter as far as her responsibility for the decision itself?
Posted by: Brian Parks | July 12, 2009 at 09:03 PM
Richard,
You say: "I share the view that your manipulator has simply replaced the target with a new person."
What is the most that I could change without your claiming that I have killed the original person and created a new one?
If, for example, I just alter the person's personality (well within the capabilities of modern psychiatry), and I leave the person's memories and external appearance intact, will I have killed the original person and created a new one?
Posted by: Brian Parks | July 12, 2009 at 10:30 PM
Richard,
You write;
"I share the view that your manipulator has simply replaced the target with a new person; but yes, that person is blameworthy insofar as he is vicious."
An obvious corollary to this manner of moral attribution is that if you were injected with an agent like PCP against your will, and you became vicious as a result, society should hold you responsible for the results of that viciousness.
I would be interested in hearing your rationale for such an unfair perspective.
Posted by: George Ortega | July 13, 2009 at 01:52 AM
I think that your manipulation scenario is only problematic for compatibilists.
If we accept libertarian free will, then, as the scenario goes, by creating a Madoff brain state in me, there is some non-zero probability that I will choose C. A libertarian can consistently bite the bullet that I am morally blameworthy and still reject the conclusion that technological advances can make me guilty of something. Because, even if it is the case that technology can put me in the exact same situation over and over again, if or when I actually make the choice, according to libertarian free will, I am the one freely making the decision. The technology did not make me do anything even though it gave me plenty of opportunities to do so.
Posted by: Murali | July 13, 2009 at 04:51 AM
Murali,
In a deterministic universe, the technology created the preconditions that prohibit your having a free will; in other words, it made you choose as you chose.
Posted by: George Ortega | July 13, 2009 at 05:46 AM
But the question isn't whether determinism is true or not. The question is whether manipulation poses a problem to moral responsibility. What we could do is tease out how various mental states etc. affect decision outcomes under a libertarian schema, i.e., how free is the will if it is susceptible to influence. The example may very well serve to illustrate inconsistencies in various naive forms of libertarianism. However, even if we grant all of Mr Parks' initial conditions, a consistent libertarian need not concede that technology seals one's moral destiny, for two reasons:
1. It may be highly probable that under many iterations, a manipulated Madoff clone (MMC) would pull the Ponzi scheme at least once, even though in any one iteration, the individual probability would actually be quite low.
However, it is not necessarily the case that our MMC will eventually push the button, only highly probable. In that case, technology can never seal any moral destiny, only make certain destinies highly improbable. But that, in itself, is not really controversial or problematic at all.
2. Even if and when MMC fails, it is not the technology that is doing the sealing (if any happens) but the free will of MMC.
That said, it is still a potent criticism of compatibilist notions of moral responsibility. More should be said on how to account for moral responsibility.
For example, in order for a compatibilist to argue for the distinction between naturally and artificially induced PFs, a compatibilist could argue that even though the agent is not the ultimate cause of his own behaviour, a necessary condition for moral responsibility would have to be that the agent has to own his motivating factors. Preferences and moods which form as a result of real events may be more 'owned' than those that are not. This seems to cover not only the manipulation scenario, but also situations where the agent is being deceived, or is mistaken. (So, it doesn't seem ad hoc.)
Of course, the theory has to be refined in order to fit with many other common intuitions about moral responsibility. But the rough sketch I gave above isn't too shabby for an amateur.
Posted by: Murali | July 13, 2009 at 08:12 AM
Brian:
"Does the entity previously described as 'Mother Theresa' die in this process? If so, at what point does she die?"
Your argument seems to be of the form:
1. You can't point to the moment when MT died.
2. Therefore she didn't die.
That's a fallacy. (Eddy Nahmias has made a similar point: just because compatibilists have difficulty distinguishing free and unfree agents in extreme cases doesn't mean that there is no distinction.)
Here's an analogy (cited by Mele in his review of Double's work, I believe): as you pluck hairs off someone's head, eventually the person becomes bald. When exactly does the person become bald? It's hard to say. But that doesn't mean baldness doesn't exist.
Similarly, if you change my brain slowly into Mother Theresa's, there's a very strong argument that you've killed me and made a new person. We might not be able to pinpoint when that happened, any more than we can pinpoint the time of death of any mysterious murder, but that doesn't mean the murder didn't happen.
Remember that I'm extremely sympathetic to your view. In fact, we share the same empathy-based, skeptical view on free will. But it's important to recognize the strength of the compatibilist argument based on personal identity. As I've said before, I think it's the best argument the compatibilist has.
Posted by: Kip | July 13, 2009 at 03:16 PM
Murali,
You say: "A libertarian can consistenly bite the bullet that I am morally blameworthy and still reject the conclusion that technological advances can make me guilty of something."
The problem is that technological advances would be able to guarantee that you will become guilty of something. That is a highly problematic conclusion.
You say: "It may be highly proable that under many iterations, a manipulated madoff clone (MMC) would pull the ponzi scheme at least once, even though in any one iteration, the individual probability would actually be quite low. However, it is not necessarily the case that our MMC will eventually push the button, only highly probable. In that case, technology can never seal any moral destiny, only make certain destinies highly improbable."
I'm going to rerun the scenario until you make the bad choice. I will wait as long as that takes. If there is a non-zero probability that you will make the bad choice, then I will win. There will never be a point where you will be able to say that you won, that you completed the exercise without making the choice, because the exercise will not be over yet. It will keep going until you do make the choice ;-)
You say: "But that, in itself is not really controversial or problematic at all."
No, it's still a problem. My being able to guarantee, to a probability of 99.99999...999%, that you will become guilty of something is most definitely a problem.
If the only advantage that the libertarian position offers over the compatibilist position is that .0000...0001% chance, then it does not offer any meaningful advantages.
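For anyone who wants the arithmetic behind that claim, here is a minimal Python sketch; the per-rerun probability is a made-up illustrative number, not a figure anyone in this thread has defended:
    # "Rerun until the bad choice occurs": with any fixed per-rerun probability
    # p > 0, the chance of at least one bad choice over n reruns is 1-(1-p)^n.
    p = 1e-6  # hypothetical per-rerun probability of the bad choice

    for n in (10**3, 10**6, 10**8):
        at_least_once = 1 - (1 - p) ** n
        print(n, at_least_once)

    # As n grows without bound, 1-(1-p)**n tends to 1 for any p > 0, which is
    # the sense in which the manipulator eventually "wins".
The sketch only shows how a fixed non-zero chance compounds across reruns; it takes no side on whether any single rerun is likely.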
Posted by: Brian Parks | July 13, 2009 at 03:58 PM
Kip,
You say: "Similarly, if you change my brain slowly into Mother Theresa's, there's a very strong argument that you've killed me and made a new person."
OK, then give the argument. If I change MT's brain slowly to match the state that Manson's brain was in prior to a crime, why *must* I conclude that I've killed her and created a numerically different person?
What is the problem with assuming, for the sake of argument, that Mother Theresa is a self, a subjective center of experience, and that she can continue to exist despite drastic changes in the content of what she experiences? That's all I'm asking us to assume.
If the assumption were somehow incompatible with the existence of moral responsibility, then the compatibilist objection would have merit. But that's not the case. There is absolutely no incompatibility whatsoever between the concept of a self as a subjective center of experience and the concept of moral responsibility, and therefore there is no reason for a compatibilist to refuse to accept, for the sake of argument, the assumptions behind the scenario.
You say: “Remember that I'm extremely sympathetic to your view. In fact, we share the same empathy-based, skeptical view on free will. But it's important to recognize the strength of the compatibilist argument based on personal identity. As I've said before, I think it's the best argument the compatibilist has.”
But it’s not an argument, it’s an evasion.
In proposing the scenario, I’m not claiming that the self retains its numerical identity through the manipulation. None of us knows what a self is, or if such a thing even exists.
To facilitate the scenario, I’m asking the reader to make an assumption about the self. Specifically, I’m asking the reader to assume that the self is a substance, a subjective center of experience, and that it can retain its numerical identity as a substance despite drastic changes in its experiential content. Not all of us would agree with this assumption, but that doesn’t make it absurd or untenable.
It’s certainly possible that the assumption is true, that the self is a substantial center of experience that can endure despite drastic changes in its thoughts, feelings, beliefs, inclinations, dispositions, and so on. If that assumption does in fact turn out to be true, are moral responsibility advocates going to immediately abandon their belief in moral responsibility? Are they going to throw in the towel and say, “Oh well, a self can theoretically survive Parks’ manipulation scenario, so I guess moral responsibility is therefore impossible”? Of course not! So I find it rather disingenuous that they would refuse to grant the truth of the assumption for the sake of argument.
Posted by: Brian Parks | July 13, 2009 at 05:50 PM
Brian,
In order to answer your questions, let me fill in a few details.
The Clockwork God "sets it and forgets it" when He builds a universe. Although He carefully plans each human's actions before anyone's birth, it is an important part of His scheme that your actions should flow directly from your character. This universe contains many agents.
The Hovering God sets it, but She doesn't forget it. Although She knows for a certainty exactly what each human will do, She constantly wills that each action should follow the plan and would intervene were it necessary. In this universe, all explanations of the form "human H did X because H wanted Y and believed that X would bring about Y" are incomplete. They stand in need of "and God saw that it would be so, and approved". This universe contains only one Agent: God alone (and we are extensions of Her).
By the way, MadoffWithMe is the victim, so dubbed after the manipulation. I'll continue to use the name this way. Now, do I claim that MadoffWithMe and the manipulator are really the same agent, or that I arbitrarily conceptualize them this way? Neither: I non-arbitrarily conceptualize them this way, much as I might non-arbitrarily conceptualize a "basically bald" man as bald. It might have a fuzzy truth level of about 0.9, but it will do in a pinch. If someone asks me, "how many bald guys in the room" and that's the only guy, I'll say one. Unless I have a lot of time to kill.
If the manipulator feels pleasure and the manipulated human feels pain, the single agent feels both. Of course, it might not care about some of those feelings. But then, a warrior on a battlefield might be feeling both pride and pain, but might have trained himself not to care about the pain.
An agent need not be phenomenally conscious. An agent need only be propositionally conscious; i.e. agency requires beliefs and goals, not sensations and feelings. We're already familiar with many such agents. They're called corporations. They maintain their identity by creating a stable basis for the continuation of their goals and of the epistemic enterprise that generates their beliefs. E.g., they have mission statements, profit reports, research departments, etc.
If the manipulator dies, MadoffWithMe now constitutes a complete story of desires and beliefs producing action. Hence, MadoffWithMe is now an agent. It's not important to say whether he's the same agent. Although many psychological traits have been lost (assuming that the deceased manipulator cared about more than just this one scheme) we know that he willed the Ponzi scheme, after making a reflective choice based on a desire and belief network and brain states that persist. So he's responsible. Unless, that is, we brainwash him back to being me, as I was before all this manipulation. Then the post-reverse-brainwash person is not responsible for the MadoffWithMe phase. (And probably not the earlier, pre-any-brainwash phase either. This noble effort probably fails to resurrect the original person.)
More later...
Posted by: Paul Torek | July 14, 2009 at 06:59 PM
Murali,
You write; "But the question isnt whether determinism is true or not. The question is whether manipulation poses a problem to moral responsibility."
The manipulation poses a problem to moral responsibility BECAUSE the manipulation is a deterministic process that prohibits free will. There would otherwise be no point to the manipulation.
and; "1. it may be highly proable that under many iterations, a manipulated madoff clone (MMC) would pull the ponzi scheme at least once, even though in any one iteration, theindividual probability would actually be quite low."
You apparently made that assertion under the assumption that Brian's first premise "In physical terms, I put your brain in the exact same neurological state that his brain was in, and I let things go from there to see what happens." permits his second premise "Say I run the experiment 1,000,000 times, and on the first 999,999 times, you make a different choice, a moral choice. But on the last time, you make Madoff’s choice." is possible. It is not. The deterministic nature of Brian's first premise completely and forever prohibits all but one possible outcome, even if you were to run the experiment an infinite number of times.
Your second objection fails for the same reason; determinism prohibits multiple outcomes.
Kip,
You write; "Here's an analogy (cited by Mele in his review of Double's work, I believe): as you pluck hairs off someone's head, eventually the person becomes bald. When exactly does the person become bald? It's hard to say. But that doesn't mean baldness doesn't exist."
That question depends entirely on your definition of baldness. If you define baldness most precisely and literally, the person becomes bald immediately after you have plucked his very last hair.
and; "Similarly, if you change my brain slowly into Mother Theresa's, there's a very strong argument that you've killed me and made a new person." You are ascribing a theoretical prospect to an admittedly supernatural premise - that the change would, in fact, leave you alive and the same person (body).
Brian, July 13 (1),
You write; " There will never be a point where you will be able to say that you won, that you completed the exercise without making the choice, because the exercise will not be over yet. It will keep going until you do make the choice ;-)"
As I explained to Murali, under the deterministic manipulation, you will never make the choice.
and; "If the only advantage that the libertarian position offers over the compatibilist position is that .0000...0001% chance, then it does not offer any meaningful advantages."
You are being too generous. In a deterministic universe, the libertarian position offers a 0.0% chance.
Paul,
You write; "The Clockwork God "sets it and forgets it" when He builds a universe. Although He carefully plans each human's actions before anyone's birth, it is an important part of His scheme that your actions should flow directly from your character. This universe contains many agents.
The Hovering God sets it, but She doesn't forget it."
By bringing God into the picture, you are introducing an element fraught with problems. For example, if your God is omnipotent, then S/he can do whatever S/he pleases, regardless of any and all laws of nature. That prospect would render any and all philosophical and scientific inquiry into the determined will/free will-moral responsibility question moot. If, as all the evidence indicates, God created a deterministic universe, S/he did, in fact, set it and forget it.
Posted by: George Ortega | July 15, 2009 at 01:14 AM
George, I didn't bring God into the picture, Brian did. I'm just answering his question as best I can.
Brian,
How do we punish the single agent? By breaking it back into two agents and punishing all resulting agents who have continuity with the crime. If we find that MadoffWithMe continues to think and act like Madoff even without further manipulation, then we punish him along with the manipulator. On the other hand, if MadoffWithMe reverts back to his original personality as soon as the device is removed from the manipulator's hand, then only the manipulator is a continuant of the agent who committed the crime.
Finally, on my views of personal identity. I cribbed most of what I know from John Perry. If you want details and reasons, you're better off going to the source. But, in a nutshell, what ordinarily underlies psychological continuity is a set of neural structures and processes; thus it is the survival of the latter that is key to the survival of my self. This set is a "heap" and is therefore potentially subject to Sorites problems.
Also on personal identity, in your last post you bring back Self as a Substance. However, you haven't attempted to answer my "same sole" argument. If the soul does not underlie any psychological or physical properties of the person we know and love, why on earth should we think that "same person" goes with "same soul"? The skin on the bottom of your foot is no less ludicrous a candidate for the bearer of your identity.
Posted by: Paul Torek | July 15, 2009 at 04:26 AM
Paul,
Sorry about that; I had been away for a couple of days and was trying to catch up. The point, of course, still holds. Whenever God and His supernatural powers are brought into the discussion, it digresses to matters of belief that are no longer amenable to rational debate.
Posted by: George Ortega | July 15, 2009 at 11:54 AM
George, you said:
The manipulation poses a problem to moral responsibility BECAUSE the manipulation is a deterministic process that prohibits free will. There would otherwise be no point to the manipulation.
Sorry I looked back here only now. A few things:
1. Determinism would automatically invalidate libertarian free will. Unless one is a compatibilist, it would mean that there was no such thing as moral responsibility, whether or not the agent was manipulated.
2. The manipulation scenario was not exactly trying to test the validity of determinism, since the scenario explicitly tested the existence of moral responsibility under manipulation given that libertarianism or compatibilism were true. To actually go on and say that determinism is really true (contra the libertarian case) merely ignores half the problem.
3. The situation so described ostensibly (under libertarian conditions) states that MadoffWithMe is still capable of free will. In the libertarian case, he still actually has a choice in the matter (technically at least). Yet the question is to distinguish why the real Madoff has moral responsibility and the manipulated version does not.
a) One way of justifying this is by saying that the manipulation bears a family resemblance to coercion. A person who is coerced still possesses free will; they are simply not morally responsible for actions performed while coerced. Any theory of moral responsibility that is able to exculpate the coerced should exculpate the manipulated.
b) Or even if manipulation is not relevantly similar to coercion, there is still no moral responsibility, as he cannot reasonably be expected not to push the button.
c) There is something wrong with Brian's description of libertarianism. Does free will genuinely mean that there is some probability that a person does otherwise?
Posted by: Murali | July 20, 2009 at 03:17 AM
Paul,
(1) Clockwork God v. Hovering God – You stated that the universe governed by the Hovering God has only one agent. You did not specify whether the same is true of the universe governed by a Clockwork God. I will assume that in such a universe your view is that there can be more than one agent. If that assumption is not correct, let me know.
We can easily construct the manipulation scenario so that the manipulator becomes like the Clockwork God. Suppose that the universe is relevantly deterministic and that I put you in the exact state that Madoff was in one day prior to his first Ponzi choice. I then go to sleep. Maybe I die in my sleep, maybe not. We don’t know. All that we know is that you are necessarily going to make the choice that he made come the next day.
Clearly, I am like the Clockwork God, and therefore we can say that you are an agent in your own right. Assuming that you remain the same self that you were prior to the manipulation, then the result is that I was able to seal your moral fate. I was able to ensure, through technological tinkering, that you would become morally responsible for something. That is an unacceptable conclusion. It represents a reductio of the pro-moral-responsibility position.
(2) Punishing the Single Agent – If we can justifiably punish each part of the single agent after we break them up, then the implication is that, after the break-up, each of them is morally responsible for something.
Returning to the manipulation scenario, suppose I manipulate you in a deterministic universe so that you perform Madoff's actions. The community then breaks us up to punish us, as I was able to predict it would. If the punishment is justified at the time that it is administered, then the implication is that you—the separated, single agent—are morally responsible for something at that time. It follows that I was able to achieve my goal: I was able to ensure that you would become morally responsible for something.
We see, then, that the “single agent” hypothesis accomplishes nothing. The reductio holds even if we grant that hypothesis as an assumption.
(3) The Manipulation Argument Itself – I made the point to Kip, and I’ll make it again in the form of a question. I am not claiming that the self is the numerically same self before the manipulation as after. The “self” is not well-defined on anyone’s analysis—yours or mine—so that would be a silly claim for me to make.
What I am doing is having the reader assume certain things about the self that make it possible for the self to retain its numerical sameness despite the manipulation. Specifically, I am having the reader assume that the self is a substantial center of subjectivity that experiences mental content, and that this content can change in drastic ways with the self remaining the numerically same self that it was before the change.
Let’s call this assumption AssumptionS.
Here is a simple question for you:
If AssumptionS turns out to be true, would you abandon your conception of moral responsibility? Would you concede that individuals cannot be morally responsible for their behaviors?
If not, then you need to attack the manipulation argument in some other way than by challenging AssumptionS. For, by your own admission, you would hold to the same views on moral responsibility even if AssumptionS were true.
As I said to Kip, challenging AssumptionS is an evasion, not a legitimate challenge to the argument.
Posted by: Brian Parks | July 25, 2009 at 04:18 PM
Murali,
You say: “Any theory of moral responsibility that is able to exculpate the coerced should exculpate the manipulated.”
True, and any theory of moral responsibility that can exculpate the manipulated should exculpate Madoff, because their choices are identical in every conceivable respect.
You say: “Or even if manipulation is not relevantly similar to coercion, there is still no moral responsibility, as he cannot reasonably be expected not to push the button.”
If he cannot reasonably be expected not to push the button, then how can Madoff reasonably be expected not to push the button?
That is the problem, and your response doesn't address it.
You say: “There is something wrong with Brian's description of libertarianism. Does free will genuinely mean that there is some probability that a person does otherwise?”
If libertarian free will were compatible with there being a zero probability of your ever choosing otherwise, then why would it be incompatible with determinism? That doesn’t make any sense.
Also, as a libertarian, you have to remember where your desired indeterminism will have to come from. Ultimately, it will have to come from QM. According to QM, if you have two particles in the exact same quantum state and you measure them, the probability of getting a specific result is necessarily the same for each of them. So, if I put every particle in your brain, body, and environment in the exact same state that the particles in Madoff’s brain, body, and environment were in immediately prior to a bad choice that he made, then the probability that you will make the same bad choice will necessarily be non-zero. We can say that much with certainty because if the probability of the bad choice occurring from that state were zero, then Madoff himself would not have been able to make it, as his choice occurred from the same state.
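To spell out the quantum point with a toy example, here is a small Python sketch; the two-outcome state vector is purely hypothetical, and nothing hangs on the particular numbers:
    # Born rule illustration: two systems prepared in the exact same quantum
    # state assign the exact same probability to each measurement outcome.
    import numpy as np

    madoff_state = np.array([0.6, 0.8j])     # hypothetical normalized state
    your_state = madoff_state.copy()         # an exact duplicate of that state

    def outcome_probabilities(psi):
        return np.abs(psi) ** 2              # |amplitude|^2 for each outcome

    print(outcome_probabilities(madoff_state))  # [0.36 0.64]
    print(outcome_probabilities(your_state))    # identical, by construction

    # If the second outcome (the "bad choice") actually occurred for the first
    # system, its probability was non-zero there, hence non-zero for the copy.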
Posted by: Brian Parks | July 25, 2009 at 04:59 PM
Brian,
You are able to ensure, through technological tinkering, that the-closest-remaining-thing-to-I would become morally responsible for something. That is an acceptable conclusion. To (ab?)use Robert Nozick's terms, my closest continuer is morally responsible, but whether this "closest" is "close enough" is, as you admit, at best indeterminate.
But supposing a Substantial Self, what changes? I see at least two ways to have "the reader assume that the self is a substantial center of subjectivity that experiences mental content, and that this content can change in drastic ways with the self remaining the numerically same self that it was before the change."
S1. The self is a substantial center of experience, but not, or only coincidentally, a center of activity.
S2. The self is a substantial center of both experience and activity.
Now, I don't think S1 could conceivably turn out to be true, because it is not faithful to the ordinary conception of "self". (Suppose experience turns out to take place in a Cartesian Ego, while when it comes to action, all the action is in the physical brain. Then the self is simply the composite of both Ego and brain, it seems to me.) However, I can still answer the question about moral responsibility under assumption S1: there would be no moral responsibility. Tautologically, S1 rules out moral responsibility by separating the Self from having any fundamental connection to activity.
Assumption S2 could conceivably be true. But I don't see how manipulation is supposed to work on S2.
Posted by: Paul Torek | July 26, 2009 at 06:31 AM
I address some of these issues in this essay:
Van Inwagen versus Van Inwagen on Freedom and Randomness
Abstract:
In _Free Will Remains a Mystery_, PvI proposes a thought experiment in which God rewinds the universe back to a certain point and lets events proceed again from there. He says that free agents are free to choose differently when the universe is restarted in this way, and that this shows that their actions are essentially random. I argue that his argument is unsound.
Key points:
--Determinism versus randomness is a false dichotomy. Free choices are neither.
--The frequency model of probability does not apply to free choices. The probability that a free agent will choose X rather than Y is undefined.
--Supposing that the universe could be replayed as in PvI's thought experiment, a free agent would choose the same way every time.
Posted by: Mike Robertson | July 26, 2009 at 08:00 AM
Paul,
You say: “S2 = The self is a substantial center of both experience and activity … Assumption S2 could conceivably be true. But I don't see how manipulation is supposed to work on S2.”
You agree that S2 is compatible with determinism, right?
In that case, there shouldn’t be any problem seeing how manipulation would work on S2. We gradually change the state of the manipulated person's brain, body, and whatever else, and then we let things unfold from there.
So, consider the following two conceivably true statements:
(1) The universe is deterministic
(2) S2
My question for you: If (1) and (2) turn out to be true, will you abandon your conception of moral responsibility? Will you admit that individuals are not morally responsible for their behaviors?
Posted by: Brian Parks | July 26, 2009 at 09:02 AM
Mike,
In your paper you say: “Let me begin with the point about probability. Probabilities cannot be assigned to free choices.”
If that is the case, then free choices cannot be manifestations of quantum effects.
So the question becomes: as a libertarian, where do you plan on getting your indeterminism from?
It seems to me that in order to maintain your view, you will have to either deny our best model of physics and introduce one that is even more peculiar, or posit a supernatural soul that can act independently of events in the physical brain. Because if our best model of physics is true, and if our choices are tied to events in the physical brain, then they most definitely have probabilities associated with them.
Posted by: Brian Parks | July 26, 2009 at 09:25 AM
Brian,
I'm assuming an agent-causation version of libertarianism here. Start with the self as you have defined it and add that such an entity has the power to initiate new causal chains.
On this view, free choices don't originate from quantum events; they originate from *agents* (i.e., selves). The significance of QM for this model is simply that it denies the Newtonian principles that dictate only one possible outcome from the state of the universe at any given time. QM allows enough slack in the universe for free choices to occur, but free choices come from agents, not quanta.
I don't see how any consequences for QM follow from my claim that the probability that an agent will choose one course rather than another cannot be defined, but if they do follow, I have no problem with either weird new physics or a supernatural soul. I would much rather accept that there is more in heaven and earth than is dreamed of in our philosophy than throw out moral responsibility or meaningful freedom.
Posted by: Mike Robertson | July 26, 2009 at 01:03 PM
Brian,
I agree that S2 is compatible with determinism. But I'm not sure that it follows from the combination of the two that you can manipulate the agent. On a traditional understanding of S2, the Substantial Self just is the source of activity, so it's not clear how you can make it change its tune, short of replacing it. We need more than just S2 + determinism.
To that end, suppose traditional soul-beliefs are wrong, and instead of a simple substance, the Self is a complex one, with various properties that undergird action and interact with the environment. Then it may be open to manipulation. But by the same token, the Self loses its immunity to Sorites problems. For the Self is that which underlies the manifest continuity of experience and action, whether that be made of neurons or soul-stuff. But if the soul-stuff that underlies the continuity is complex, and you seriously disrupt the normal mechanisms (or "soulanisms") that serve that continuity, then you push identity of the Self into the indeterminate zone or beyond.
Posted by: Paul Torek | July 27, 2009 at 03:51 PM
Mike,
You say: “On this view, free choices don't originate from quantum events; they originate from *agents* (i.e., selves). The significance of QM for this model is simply that it denies the Newtonian principles that dictate only one possible outcome from the state of the universe at any given time.”
But QM puts a definite probability on those possible outcomes. Thus, if QM is true, and if free choices map to or control possible outcomes in the physical universe, then they too must be probabilistic.
Your options are to either deny that free choices map to or control possible outcomes in the physical universe (in which case they would be irrelevant), or to deny QM (good luck).
You say: “I don't see how any consequences for QM follow from my claim that the probability that an agent will choose one course rather than another cannot be defined, but if they do follow, I have no problem with either weird new physics or a supernatural soul. I would much rather accept that there is more in heaven and earth than is dreamed of in our philosophy than throw out moral responsibility or meaningful freedom.”
We can explain our intuitions about free will and moral responsibility quite well by reference to evolutionary psychology. There is no need to reject well-tested science or to introduce supernaturalisms.
Now, we may not find the explanations to be particularly cozy or consistent with our naïve views of ourselves, but that’s a different matter. The truth doesn’t care about our emotional reactions to it.
Posted by: Brian Parks | August 07, 2009 at 09:12 PM
Paul,
You say: “I agree that S2 is compatible with determinism. But I'm not sure that it follows from the combination of them, that you can manipulate the agent. On a traditional understanding of S2, the Substantial Self just is the source of activity, so it's not clear how you can make it change its tune, short of replacing it. We need more than just S2 + determinism.”
I can make it change its tune because it is a part of the universe and the universe is deterministic.
If determinism is true, then what the self is going to choose later is set by the state of the universe now. By manipulating the state of the universe now, I can manipulate what the self is going to choose later.
Why would my manipulating the state of the universe now necessarily imply my killing the self that exists now and creating an entirely new one? I don’t see how that follows.
Now, one could argue that if determinism is true, i.e., if it is true that what the self is going to choose later is set by the state of the universe now, then the self can’t really be said to be choosing anything. But that’s a totally different point, one that is in gross conflict with your compatibilist position.
You say: “To that end, suppose traditional soul-beliefs are wrong, and instead of a simple substance, the Self is a complex one, with various properties that undergird action and interact with the environment. Then it may be open to manipulation. But by the same token, the Self loses its immunity to Sorites problems. For the Self is that which underlies the manifest continuity of experience and action, whether that be made of neurons or soul-stuff. But if the soul-stuff that underlies the continuity is complex, and you seriously disrupt the normal mechanisms (or "soulanisms") that serve that continuity, then you push identity of the Self into the indeterminate zone or beyond.”
There is no need to make those suppositions. They just introduce new complications.
Suppose that self S is a simple substance. It is a rule in our universe that if X at t, then S chooses X1 at t1. If Y at t, then S chooses Y1 at t1. If Z at t, then S chooses Z1 at t1.
I want S to choose Z1, i.e., to pull the Madoff trigger. So I put the universe in condition Z. Per the rule, S makes the Madoff choice. Why does my putting the universe in condition Z necessarily imply that I have killed S and created a new entity?
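A toy Python sketch of the rule just described, with the condition names and choices as pure placeholders:
    # The condition of the universe at t fixes what S chooses at t1.
    rule = {"X": "X1", "Y": "Y1", "Z": "Z1"}   # Z1 = the Madoff choice

    def s_chooses(condition_at_t):
        return rule[condition_at_t]

    # The manipulator sets the input; the rule then fixes the output. Nothing
    # in this model replaces S with a different entity along the way.
    print(s_chooses("Z"))   # -> Z1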
Posted by: Brian Parks | August 07, 2009 at 09:50 PM