Here's our first-ever GFP Online Reading Group Event. Make it awesome.
Initial comments (below) by Al Mele on Doris, Knobe, and Woolfolk's paper "Variantism about Responsibility."
For this to work, you need to start commenting. Al's given us some great reflections below. Now you join in. Thanks in advance.
From Al Mele:
DKW’s paper was a pleasure to read. I’ll get the discussion started with some scattered comments.
1. Frankfurt-style and Woolfolk-style stories
Many readers will have noticed a difference between the Woolfolk et al. stories about Bill and Frankfurt-style stories. In F-style stories, the potential controller doesn’t cause the agent to decide to A and doesn’t cause the agent to A. He is prepared to do this if necessary; but the agent does it all “on his own” and the potential controller stays on the sidelines. In Woolfolk’s story, the “drug makes [Bill] unable to resist the demands of powerful authorities,” and they order Bill to shoot Frank in the head. So it certainly seems that they cause Bill to shoot Frank. Even so, “subjects judged the high identification actor more responsible . . . than the low identification actor” (p. 13).
Why might that be? (I’m assuming that “more responsible” here is short for “more responsible for killing Frank” and that the kind of responsibility at issue is moral responsibility.) Consider the following suggestions about part of what the subjects might be thinking about Bill in the high-identification condition:
A. (A1) the bad guys caused Bill to kill Frank, and (A2) Bill’s desire to kill Frank (or Bill himself, as in agent causation) also caused his killing Frank. (This is consistent with causal overdetermination and with joint causation.)
B. A1 is true, A2 is false, and Bill has some responsibility for killing Frank because he would have killed him even if the bad guys hadn’t made him do that.
(These suggestions aren’t meant to be exhaustive, of course.)
Possibly, suggestion B has led some of you to wonder whether some subjects would say that Bill has some responsibility for killing Frank in a strange story in which it is clear that Bill does not kill Frank. Wait! Am I nuts? Well, consider a story along the following lines. Bill makes his discovery about Frank and is extremely upset. Bad guys paralyze his arms and hands, put a gun in his right hand, raise his right arm (by electronically stimulating his arm muscles) so that the gun is pointed at Frank’s head, and (by further electronic stimulation) cause Bill’s paralyzed right index finger to depress the trigger. “Bill was certain about his feelings. He wanted to kill Frank.” What’s more, Bill thought that he was acting: he did not realize that his arm and hand were paralyzed. And he felt no reluctance about blowing “his friend’s brains out.”
Suppose that “subjects judged” this Bill “more responsible [for killing Frank] . . . than the low identification” Bill. What should we make of that? Well, given that these rational subjects know that Bill did not kill Frank (even though Frank was killed), they cannot be expressing the belief that he has some moral responsibility *for killing* him. They are doing something else with their words. This certainly is possible, and it makes one wonder whether some of the respondents to Woolfolk’s own high-identification story are doing something similar with their words. Running the story I sketched would produce some evidence about this. (Suppose it turned out that the responsibility rating is not significantly higher in Woolfolk’s high identification case than in my high identification case.)
Lots more below the fold.
2. The “determinism” studies by Nahmias et al. (in *Phil Psych* 2005) and Nichols & Knobe (*Noûs*, forthcoming)
The idea for the supercomputer story was hatched here in Tallahassee at the Pitaria – a restaurant some of you know from conference lunches. Eddy Nahmias told the lunch group what he wanted to test. I suggested a supercomputer story that would entail that the world in which it is set is deterministic but would not use the word “determinism.” Eddy seemed a little reluctant to try such a story, and Joshua Knobe (who was here visiting for a few days and had a problem with his voice) quietly persuaded him that it was a good idea. My thought about not using the word “determinism” was that, when people learn this word outside of a philosophy class, they “learn” that part of its *meaning* is “something that precludes free will and moral responsibility.” And even if one defines it for them in a standard philosophical way and encourages them to ignore what they used to think about its meaning, old linguistic habits die hard.
Now, Nichols & Knobe (NK) don’t use the word “determinism” in the text they give their subjects. But they do something related. They say that in universe A, “if everything . . . was exactly the same up until [etc.] it *had to happen* that . . .” and that “given the past, each decision *has to happen* the way that it does.” How is this related? Well, traditional compatibilists contend that agents sometimes “could have done otherwise” in deterministic worlds (in a sense of the quoted expression that they say is relevant to moral responsibility) while granting that the entailment from laws + past to future events holds; and NK’s description of universe A steers subjects away from a traditional compatibilist view of things. (The assertion that it had to happen that S X-ed is naturally read as entailing that S could not have done otherwise than X.)
I’m not suggesting that the abstract/concrete difference makes no difference in subjects’ responses. (Indeed, I would reject this suggestion. On concrete vs. abstract questions about intentional action see my 2001 paper in the Malle, Moses & Baldwin MIT volume [pp. 27-35 passim].) But the *had to happen* element in the description of universe A may be making part of the difference, and I think it would be useful to run a test of the abstract/concrete effect that dispenses with that element and focuses on capturing the entailment business that defines determinism. This can be done in non-technical terms with a description of universe A that features a supercomputer that predicts all future events with absolute certainty based entirely on its complete information about the past. (This will trouble Humeans about laws of nature, but the story should be kept pretty simple.)
Here’s another point. As DKW say, “The results were dramatic.” But might a big part of the effect be explained by context switching? Many subjects may understand “fully morally responsible” in the question about the concrete story in an everyday sense corresponding, in the case of bad actions, roughly to “guilty and fully punishable according to law,” and they may understand the same words in the abstract question in a much more metaphysical or theoretical way. (For relevant discussion, see DKW, pp. 19-20. Some readers may find pp. 71-72 of my *Free Will and Luck* interesting in this connection.)
3. Knobe’s chairman studies
I was very impressed by these studies when I saw them several years ago, and I still am. More recently, I was impressed by a finding reported in a draft by Shaun Nichols and Joseph Ulatowski (NU). In one study, they asked subjects why they answered the question about the chairman as they did, and in another, all subjects were asked about both chairmen (to test for an order effect). Subjects who gave the majority response in the “Harm” scenario typically appealed to what the chairman *knew* in explaining their answers, and those giving the majority response in the “Help” scenario typically appealed to the chairman’s lacking an *intention* or *motive* to help the environment. When other subjects were asked about both scenarios, a very interesting pattern emerged. Of the 44 subjects, 16 said that both the harming and the helping were intentional, 14 said that neither was intentional, and 14 said that the harming but not the helping was intentional. (This is the finding that impressed me.)
The combination of these two new findings suggests a hypothesis: There are at least three different concepts (or conceptions) of intentional action in our communities, and in two of them knowledge plays some roles that it doesn’t play in the third. (NU suggest a related hypothesis.) The hypothesis can be made more precise and then tested. (Fiery Cushman and I are thinking now about tests.) In any case, partly in light of NU’s finding, I do wonder whether Knobe’s studies get at something that we should think of as “the folk concept” of intentional action or whether there are several different concepts of intentional action out there that collectively account for the data. (BTW, there is also evidence of at least two concepts of *intention* in our communities, in one of which knowledge plays a role that it doesn’t play in the other.)
4. “The overwhelming emotion asymmetry”
I wonder whether part of what is at work here is an “in-character” / “out-of-character” asymmetry. Perhaps quite a few people are tacitly thinking that good actions done “because of . . . overwhelming and uncontrollable sympathy” are done by good, sympathetic people whereas bad actions done “because of . . . overwhelming and uncontrollable anger” typically are out of character for the agent. One way to try to get evidence about this is to tweak the first scenario along the following lines:
Jack had never had even a streak of kindness in him. He had always been a selfish and entirely unsympathetic man. But one day, overwhelming and uncontrollable sympathy suddenly came over him, and because of that Jack impulsively gave a homeless man his only jacket even though it was freezing outside. Jack never did anything remotely like that again for the rest of his life.
Of course, there’s no reason for DKW to be unhappy with an “in-character” / “out-of-character” asymmetry, but it would be good to have more evidence about what lies behind “the overwhelming emotion asymmetry.”
5. “The negligence asymmetry”
DKW write: “responsibility attributions are often associated with determinations of restitution; since more severe accidents may call for greater restitution, it arguably makes good sense for people to be less stringent in their standards for responsibility in cases where the harm is severe than they would be in cases where the harm is mild.” Wouldn’t it make more sense to have the same standards for responsibility and to make restitution a function of a combination of degree of responsibility and degree of harm?
6. Causal judgments
In section 3, I suggested that there may be more than one concept of intentional action (and intention) in our communities. The same may be true of causation. This is testable.
The pen story is great. But what were subjects asked? Did they have the option of saying that both caused the problem and the option of saying that neither caused it? If so, how large were the minorities: in particular, those who said that both caused it and those who said that neither caused it? (Someone trying to screw up the experiment might have given a third minority response: that only the assistant caused the problem.)
I have raised a lot of questions – more than I would have raised in settings in which I had the burden of offering some answers! But my loose approach seemed to me appropriate for starting the discussion. I hope you found “Variantism about Responsibility” as stimulating as I did.