I thought Gardeners might have fun grappling with a recent paper by Christopher Suhler and Patricia Churchland, "Control: conscious and otherwise," published in Trends in Cognitive Sciences, 13(8), 341-347, available here (by permission). They argue against what they call the "Frail Control" hypothesis advanced by philosophers such as John Doris, which has it that people are far less in control than they suppose, given the influence of unconscious situational factors (lots of experimental data on this). Instead, Suhler and Churchland say that we should expand our notion of responsibility-conferring control to include unconscious and automatic processes, which they point out are robust, ubiquitous, "smart," and essential for effective behavior. Conscious control is all well and good, but not the sine qua non of responsible agency. In which case, they say, people can't appeal to unconscious influences as a new class of excuses, as the Frail Control hypothesis might suggest they could.
They say "most of the patterns of behavior described in the social psychology literature [on the effects of unconscious influences] do not fall outside the realm of control." (p. 346) Might this demotion of consciousness as the criterion of control open the door to a sort of strict liability policy, in which agents can be held responsible for behavior which had significant unconscious precursors, behavior that perhaps they wouldn't consciously endorse? Eddy Nahmias has often suggested here and in his papers that it's "bypassing" of conscious processes and the threat of mechanistic reductionism, not determinism, which pose the real threats to control and responsibility. But such bypassing seems not to worry S&C, who suggest that there are neurobiological criteria for being in control that cut across the conscious/unconscious distinction. So long as the neural mechanisms are in good working order, they say, the agent is presumptively in control, so reductionism is no worry for them either. Are they going too far in demoting consciousness, and elevating the role of unconscious mechanisms, in their conception of responsible agency, and what considerations would count against their proposal? Or for it? Enjoy!
Thanks Tom! I'm happy to receive the pointer. Good hook, too. I'm really looking forward to reading the paper and thinking about the issues you raise.
Posted by: Dan Speak | August 27, 2009 at 04:15 PM
Tom,
Thanks for the paper - it looks interesting, though I have only read 3 pages of it.
Let me begin by making a distinction that's inspired by a similar one Al Mele makes when he considers libertarian theories in his book.
I think we need to distinguish between non-derivatively unconscious control (NDUC) and its opposite, derivatively unconscious control (DUC).
By derivatively unconscious control, I mean control that is derived from a prior conscious practice. When I first started typing, I would think about and search for each key on my keyboard. However, now that I'm an expert typist, I no longer have to think about the keys when I type - my control of the keyboard is handled unconsciously. So the prior conscious practice over time becomes automatic and unconscious.
And by NDUC, I mean the sorts of practices that aren't DUC.
Of course, these terms are a bit controversial, but I believe they are still helpful in mapping out the conceptual landscape. For instance, consider the automatic stereotype associations that we make and that easily show up, say, in Implicit Association Tests. Should these be considered NDUC or DUC? Even though the core of these associations could be conscious (i.e. the imagery we see in movies and commercials, for instance), they still aren't DUC the way I see it.
My point in making this distinction is the following. If there ever could be a property like moral responsibility (hehe...), I think that NDUC acts wouldn't have that property and DUC acts would have that property. Or, NDUC acts should be excused but DUC acts shouldn't be excused.
Since I don't think holding people responsible for their IAT scores really makes sense, you can see why I'm uneasy about classifying IATs as some sort of DUC.
Of course, the big question is, how much of our unconscious acts/thoughts originate in/are caused by previous conscious thoughts/acts.
Posted by: Cihan | August 27, 2009 at 05:20 PM
There is a species of DUC that the criminal law in the US has been mulling over. People can learn to do things by reflex faster than they can exert conscious control. At the keyboard the consequences of an error are usually not egregious, but when firearms are the instruments of action, errors are less reversible. An expert Western-style shooter can, for example, draw and shoot in under 250 msec. Imagine such a man, already highly stressed by crime in his neighborhood, startled in his backyard at night by an (inadvertent) trespass. By reflex he draws & shoots the intruder in a quarter second, killing him. The shooter truthfully pleads that he had formed no intention to shoot anyone. He was lawfully armed in his backyard because of serious crime in his neighborhood. A threatening presence suddenly loomed before him in the dark and his reflexes took over. Focusing just on a moral evaluation of his actions, do we seem to have a culpable or an excusable homicide here? What factors decide our verdict?
Posted by: Philoponus | August 28, 2009 at 08:51 AM
So, Philoponus, we meet again! It's good to see you here at the Garden. Hopefully, we will have some future exchanges back over at the LANP blog. For now, I just wanted to suggest the following with respect to your Western-style shooter case. First, as stated, the Western-style shooter has honed his shooting skills to such an extent that he can unholster and discharge his pistol in under 250 msec. Second, you suggest that his ability on this front can be cashed out in terms of its being a DUC--i.e., based on conscious training, distal intentions to be able to shoot as quickly as possible, etc., the shooter now has the ability to discharge his weapon with lightning-fast speed. However, in normal cases when he exercises this ability, he nevertheless has both a distal and a proximal intention to discharge the weapon. For instance, he plans to win a shooting competition by being the fastest person to discharge after a "green light" flashes on the screen before him. Under these circumstances, we would certainly say that he discharges his weapon both knowingly and intentionally. Indeed, we would say the same thing if he found himself in an old-fashioned duel with another gunman. In your case, however, he seemingly lacked a proximal intention to "fire now." Instead, he was startled by the inadvertent trespasser, which in turn led him to reflexively discharge his weapon. Your question is whether the fact that he lacked a conscious proximal intention to "fire now" is either mitigating or exculpating, morally speaking. The corollary legal question is where he would fit in terms of the MPC's PKRN distinction.
For now, I wanted to make a couple of observations just to spark discussion. First, given that he is walking around in his backyard armed as the result of the recent crime spree in his neighborhood, it is presumably safe to assume that he had the following distal intention: "if someone attacks me, I will shoot them as quickly as possible." Moreover, given that he is aware of his own well-honed speed-shooting abilities, when he formed this distal intention, he should have been aware that he was running the risk that his reflexes/instincts might outstrip his conscious control in the event that someone merely startled him rather than attacking him. If he failed to take this possibility into consideration, he was being negligent. If, on the other hand, he thought about this possibility but ignored it, then he ran the risk either recklessly or knowingly--depending on how certain he was that he could mistakenly shoot an innocent person if he happened to be startled. So, without hearing more about what was going on "in his head" at the time he decided to arm himself before entering his backyard in the dark, it is unclear whether he was being negligent or reckless. By my lights, this case is analogous to the professional boxer who walks around with "lethal weapons" for hands. It's clearly legal for him to walk around, but he carries an extra burden to exercise due diligence given the lethal nature of his particular skill at harming others. I think the same burden applies in the case of your modern-day Quick Draw McGraw. But before we could judge his action either morally or legally, we would need more information about his mental states at the time he formed the distal intention to shoot anyone who threatened to harm him in his backyard at night.
Posted by: tnadelhoffer | August 28, 2009 at 02:15 PM
I haven't yet read Suhler and Churchland, but I know in advance that I'll agree with their main thesis. Kudos to them: it's high time someone said it.
To me, the fundamental point about control is rationality, and I'm antecedently convinced that human unconscious processes have plenty of rationality. Or, as they put it, they are "smart." Perhaps the conscious mind has greater rationality on average, but I'm not quite sure about that.
Now if there are isolated subsystems that are impermeable to reason, or have hostile goals that conflict with the goals of the main personality, that's a different matter. But those are rather special cases. I assume that's not the kind of unconscious process we're talking about.
Posted by: Paul Torek | August 29, 2009 at 01:50 PM
I forgot to mention in my previous comment how strongly I agree with the importance of investigating unconscious executive control (UEC). I agree that the Neo-Kantian/Aristotelian paradigm of control by conscious reflection & deliberation ignores the complex ways our brains are able to direct our behaviour. It is crucial to investigate the structures and chemistry that underlie these alternative paths of control.
My personal preference is to search for UEC in situations such as I referred to in the previous comment. Some will say that this is like going to the Moon to do botany: the prospects are unpromising. Libet and his colleagues investigated reactions that play out over a 500 msec interval, and he suggested that some sort of pre-frontal veto could interrupt these reactions. I raise the bar by offering even faster reactions (drawing and shooting a threatening figure that appears suddenly). These can transpire in 250 msec, when an expert shooter’s reactions have become fully “automatic” and reflexive. To the neuroscientist I would put this question: are there any unconscious paths of control available in this circumstance, remembering that the shooter’s brain is probably awash in stress hormones?
Trained people do actually shoot others in this sort of situation. Interestingly, the presumptions regarding control and responsibility seem to be different in US military & civilian jurisdictions. Highly stressed sentries and personnel on patrol (in combat zones) do shoot by reaction figures that suddenly appear before them. Sometimes these turn out to be friendlies or non-combatants. Provided the soldier was in full compliance with his orders and the general governing ROEs, there is usually no liability associated with such an “accidental” shooting. The same man, investigating a trespass in his backyard at night in a high-crime neighborhood, faces a very different review of his actions if he shoots someone by reaction. In brief, the Castle Doctrine does not usually cover property outside one’s residence (or conveyance), so the shooter must mount a positive defense of Self-Defense, including the test of whether a reasonable man would deem the use of lethal force necessary in this situation. An absurd test—which juries, thankfully, sometimes ignore! To be sure, the shooter has diminished responsibility in mitigation, but the deeper question is whether he is to be exonerated. I think the level of control that someone can possibly exert in this situation should be a very important consideration.
Thomas—thank you for the excellent comments—correctly observes that our shooter has no conscious proximal intention to shoot anyone. On the contrary, he believes that going armed into his backyard with his gun skills will give him a better chance of interrupting and dealing with a criminal trespass (and planned burglary?) without having to actually use deadly force. This strategy is reasonable, I believe, and lawful in many states in the Western US, where I assume our shooter lives. What happens is that a figure suddenly pops up in front of him, our shooter is startled and frightened, and his reflexes take over. He is genuinely surprised by the shooting. People with no experience of combat or extremely stressful situations have no idea that their trained reflexes will take over and execute complex tasks well before any conscious thought has time to intrude into the process. Even veterans like our shooter routinely misjudge their ability to exert conscious control in very quick, high-risk situations.
Thomas suspects negligence in our shooter, and that is probably the legal issue on which the verdict would turn. But where is the culpable error in the shooter’s performance? He thought, as almost everyone does, that he could exert conscious control in the face of any threat. He was wrong, but culpably wrong? Perhaps there was no way he could have checked his deadly reaction in this situation. This is where we need to be instructed by neuroscience.
Posted by: Philoponus | August 30, 2009 at 04:39 PM
Philoponus,
We agree that negligence would likely be the legal issue at stake in a case such as this. As such, it would hinge on some assessment of whether a "reasonable person" in the sharp-shooter's situation would realize that he would need to exercise extra caution lest he be startled into shooting an inadvertent trespasser, neighbor's child, etc. Given that the "reasonable person" standard is not based on what the "average person" would think--since average people can clearly be unreasonable--we need not be concerned with whether "everyone else" would think the same thing, no? Instead, we first assume that a reasonable person would weigh the following:
1. the foreseeable risk of harm that is created by walking around armed in the dark vs. the foreseeable benefits that are created;
2. the extent and magnitude of the risk (in this case, accidentally killing an innocent person);
3. the likelihood that the risk could actually cause harm to others (which, in this case, is pretty high);
4. alternative steps that could be taken which might involve lesser risk.
Now, my question for Philoponus and others is whether it would be fair to assume that an idealized reasonable person in the case of the shooter should also take into consideration the best available scientific evidence concerning reflexive, automatic, and habituated "actions"? If so, then we should clearly deem the shooter to have acted negligently. If not, why not?
I, for one, happen to think that before a well-trained sharp-shooter goes armed into the dangerous night, the onus is on him to take the time to carefully think through the risks he is creating and whether these risks are outweighed by the benefits of his feelings of security. I don't profess to be a consistently reasonable person--as many Gardeners can attest!--but I think that if I could unholster and discharge a firearm in 250ms, I would think twice before deciding to walk armed at night and ready to kill. Of course, I also happen to think that we are morally obligated to pay some attention to scientific advancements when these advancements have a direct bearing on the moral consequences of our actions. But that is a story for another day...
Posted by: tnadelhoffer | August 30, 2009 at 06:07 PM
Sharp-shooters and trigger-fingers aside, what most caught my attention in this paper was the claim that "most of the patterns of behavior described in the social psychology literature [on the effects of unconscious influences] do *not* fall outside the realm of control" (emphasis added). That is, S&C seem to be saying that there's really little or no manipulation going on in most of these cases since the targets of manipulation are still in control by virtue of unconscious processes. If this is their claim, it strikes me as unwarranted, since having unconscious control capacities, even if smart and robust, does not necessarily render one invulnerable to unconscious manipulation. For instance one could be susceptible to unconscious cues to overeat transmitted by purveyors of junk food while having the standard complement of non-conscious skills and capacities that S&C describe in their paper.
We don't, it seems to me, have a general-purpose built-in non-conscious capacity -- non-conscious “deciding and weighing” -- to resist situational manipulations, in which case they might cause us to behave in ways that conflict with our endorsed values. Being unconsciously responsive to situational contingencies and nonconscious processes doesn’t necessarily confer control. For instance, it helps to be consciously trained to notice and resist the influences of advertising - food, political, whatever, after which I suppose we *might* at some point become automatically resistant to those attempted manipulations (Cihan’s derivatively unconscious control). But absent such training I don’t see how we can be held solely responsible for our unconscious responses: since only conscious processes confer the control imparted by simultaneous access to memory, values, reasoning and anticipation, we weren’t fully in control because conscious processes weren’t engaged with respect to the unconscious influences. So it seems to me that Doris’ frail control hypothesis stands at least to some extent, and along with it a class of excuses for having done things we wouldn’t have consciously endorsed in advance, such as overeating under the influence of unconscious situational factors. As S&C put it “if your choice is strongly affected by situational factors in ways that you are unaware of, then you plausibly have an excuse for your actions.”
S&C offer the caveat that “consciousness – for instance of goals and what the neo-Kantian would call ‘reasons’ – does sometimes have an important role in control,” so perhaps my worries are unfounded, and indeed in personal correspondence Chris Suhler agrees that when it comes to such things as food and campaign ads, “shifting processing to consciousness may lead to better outcomes for many - probably the majority of – people.” But if so, then prior to that shift advertisers might bear a good deal of responsibility for how their targets behave.
Posted by: Tom Clark | August 31, 2009 at 08:20 PM
Thomas,
I agree with much of what you say, but I am still a little hesitant to pronounce a verdict of negligence. Let me quickly fill in a little more of our scenario and see what you think. The first thing I would say regarding being “ready to kill” is that our shooter—let’s call him Bob—goes into the backyard with his weapon holstered. He could lawfully go into his backyard with his pistol in hand, pointed in front of him, and his finger on the trigger. That would have made him able to shoot even faster, but in the dark he decides that would be too dangerous. So he holsters his pistol and goes out to investigate what he is sure is at least a criminal trespass and quite possibly a burglary or vandalism or something worse. We’ve stipulated that Bob lives in a bad neighborhood where violent crime is common. Bob has a legitimate need to be armed, a legitimate right to protect himself and his property, and carrying a pistol holstered seems a pretty reasonable and unaggressive assertion of these rights. (I concede: if he had been walking around with pistol pointed in front of him, finger on the trigger, I would call the shooting negligent.)
We stipulate that Bob has no desire or intention to kill anyone. The pistol is there for self-defense in case he is attacked. His backyard is well-posted against trespass, and mostly, though not fully, fenced in. No one could enter the yard without seeing the bold NO TRESPASS signs. What happened on the night of the shooting is that a neighbor entered Bob’s backyard searching for his cat. The neighbor is a large and cantankerous fellow who disdains to ask Bob’s permission to search the yard. The neighbor is carrying a large flashlight that isn’t working at the moment. The neighbor is looking under a large shrub in Bob’s backyard as Bob walks into the yard. The neighbor pops up just as Bob passes, large metallic object in his right hand. Bob fires by reflex at the large figure popping up less than 10 feet away.
What I think has happened here is that some part of Bob’s brain interpreted the large man popping up with metallic object in hand as a deadly armed ambush and so triggered an (unstoppable?) shoot-to-save-your-life reflex. None of this “thinking” was conscious and under cortical executive control. But--the key issue for neuroscience--was there a possible path of unconscious control that Bob could and should have availed himself of to be able to veto his 250 msec fatal reflex? More simply, was it possible for Bob not to have shot in a situation like this where he believed he was in mortal danger? If not, I can find no culpable error in his behaviour on which to hang a verdict of negligent homicide.
Engaging in certain inherently dangerous activities—like keeping wild animals as pets—triggers a standard of strict liability: no matter how careful you are, if something goes wrong and other people are injured, you are civilly and sometimes criminally liable. Perhaps we should consider making carrying a firearm a strict liability issue, but it currently isn’t in any jurisdiction I know of. And so Bob also has no liability under a strict liability statute, and we are back to hunting for a culpable error.
Posted by: Philoponus | September 01, 2009 at 08:46 AM
I am not a philosopher and I am probably not playing this game right, but from Philoponus' description of Bob, I consider him a violent person:
- Bob considers killing a person in defense of his property "legitimate";
- Bob did not, nor does he prefer to, ask questions before shooting;
- Bob did not go into his backyard with his own flashlight;
- Bob did not flood the backyard with light from the flood lamps he initially installed instead of buying a hand gun;
- Bob either turned off or did not repair the motion-sensing device(s) connected to the flood lamps that would have illuminated the backyard (not to mention this whole situation);
- Bob did not hunker down behind his back door and begin yelling things like: "I have no intention to kill anyone, but I do so love my gun!", "It's either my grass or your life punk!" or "Rambo is my favorite movie!";
- Bob only practiced executions .. er sorry .. executing standing draws -- against various pop-up "bad people" targets -- of his holstered gun for the kill shot, blindfolded and with ear plugs, over and over and over until he was faster than the Waco Kid.
Good God I love Bob and the last I heard he was marching around inner cities of large metros with his "pistols" screaming about his humanistic intentions, but that he is pretty sure the various other "races" are inferior and that the "white" race is in danger of "Mongoloidization". And who in what jurisdiction can not sympathize?
Posted by: czrpb | September 02, 2009 at 08:02 AM
I'm a social psychologist who has been following this blog for some time. There is a lot of interesting work being done right now in social psychology to supplement both the Suhler and Churchland article as well as the previous comments.
Notably, there is a lot of work on what has been called the "weapon bias" or "shooter bias": namely, that people are more likely to misperceive a harmless object as a weapon after being primed by a Black rather than White face. See for instance work by Keith Payne (2001; 2006) and Joshua Correll (2002; 2007). These effects occur despite proximal conscious intentions, and are thought to reflect passively received cultural stereotypes (that were not necessarily consciously endorsed in the past). Some of Payne's work with mathematically modeling the weapons bias suggests that (lack of) conscious control is the critical factor for whether weapons bias emerges: automatic biases influence performance only in the absence of control (e.g. under time pressure, under ego depletion, etc.)
But perhaps more relevant in light of the Suhler and Churchland article is the work on chronic goal accessibility. For instance, the chronic (e.g. automatically activated) goal to be egalitarian has been shown to block automatic stereotypes from influencing judgment. And Jack Glaser has a recent paper showing that implicit motivation to control prejudice -- which I believe he casts as a kind of non-consciously exercised control -- moderates the shooter bias. In any case, the aforementioned work seems to touch on the role of both conscious and non-conscious control in split-second decisions.
As for philosophical issues of responsibility, I agree with Thomas that people might be negligent if they fail to pay attention to psychological research on consciousness, control, and automatic cognition. For instance, Stewart and Payne (2009) recently showed that people can "use" automaticity to protect against future bias. You can set an implementation intention ahead of time, linking a simple if-then action plan (e.g. "If I see a Black face, think 'safe'") with an environmental cue; when the environmental cue then shows up later on, it automatically activates the planned cognition and behavior, preventing unwanted responses. Stewart and Payne showed that setting implementation intentions in this way blocked weapons bias; this more frugally achieves the same outcome as repeated training, an intervention that has been successful in other studies of weapon bias. As psychological and neurological science show more ways to exercise proactive control like this, those who would ignore such strategies risk lapsing into negligence. Of course, it is hard to convince people that they are biased and need to take these extra, counter-intuitive steps (see Dan Wegner's work on "the myth of the ideal agent," as well as Paul Davies' recent book, "Subjects of the World"). And eventually, this motivated resistance might itself become blameworthy as well.
Finally, I just want to mention a pair of studies that I ran with Keith Payne and Joshua Knobe. We had people morally appraise a variety of racial discrimination cases. In some of these cases, the discrimination was described as being due to automatic biases--that is, biases that were conscious but uncontrollable. In other cases, the discrimination was described as being due to unconscious biases. And for control cases, we provided no mental state explanation for the discrimination, to make it appear like conscious hypocrisy. For all the cases, we also posited that the discriminating agent explicitly endorsed egalitarian values, yet nonetheless discriminated. We found that people reduced blame for unconscious bias, but not for automatic bias. I bring this up because it appears that even if there is a form of unconscious control, the conscious/unconscious on a simpler level *does* appear to make a difference for college students' moral intuitions about salient cases.
Anyway, thanks for the interesting post, and the commentaries have been quite thought-provoking.
Posted by: Daryl Cameron | September 02, 2009 at 06:24 PM
In case anyone is interested, I have a chapter that discusses the potential threat of situationist social psychology to autonomy and responsibility (link below). I agree with Suhler and Churchland that we can be responsible for actions produced by non-conscious processes. But the threat from situationism that I focus on is the fact that the research suggests that our actions are often produced by factors or processes we are unaware of, *and* were we aware of their influence, we would not accept them as good reasons for action (we would not want to be so influenced).
This paper also discusses the idea that we possess and exercise autonomy (or free will) to varying degrees, so it links up to the other active post here at the Garden on Anders Kaye's paper. Indeed, Anders has a relevant paper: "Does Situationist Psychology Have Radical Implications For Criminal Responsibility?" 59 Ala. L. Rev. 611 (2008). I can't find it online.
Here's my paper:
http://www2.gsu.edu/~phlean/papers/Autonomous_Agency_and_Social_Psychology_prepublication.pdf
Posted by: Eddy Nahmias | September 02, 2009 at 07:13 PM
I have a copy of Kaye's paper if anyone wants it.
Posted by: Jonathan Jong | September 03, 2009 at 08:42 PM
I found the link from The Situationist, which also linked here, and I read the article. I appreciate Daryl's grasp (above) of the article and its application. However, I was hoping to see some discussion of the different positions Suhler and Churchland's article at least implicitly maps out. In part, I come from an analytic philosophy background, but I have found the idea of the sophistication of the unconscious mind and the less robust simplicity of consciousness to be very important to all philosophical topics, whether it be metaphor comprehension or philosophy of law - and a view I have found to be an important motivation of my embrace of situationism as applied to law. Yet this is precisely the background for Suhler and Churchland's argument, and explicitly noted to be so. I wonder then how this article can be construed as a criticism of situationism as applied to law broadly construed. I am not sure that it is, since the article focuses explicitly on John Doris' model, which is labeled the 'Frail Control' hypothesis, a hypothesis defined in terms of not accepting the importance of the sophistication of the unconscious mind. I therefore see something along the lines of three distinct positions here: Situationism, Frail Control, and Traditional Theory. Any ideas about this would be appreciated.
Posted by: Michael Metzler | September 04, 2009 at 09:04 PM
Earlier in this comment thread, I was simply responding to Philoponus' interesting example/question rather than addressing the arguments developed by Suhler and Churchland. Now that I have finally had the chance to read their piece, I just wanted to point out that it seems to me to entirely miss the mark. For starters, they define the so-called “Frail Control Hypothesis” (FCH) as the thesis that “even in unexceptional conditions, humans have little control over their behavior.” Then, they go on to argue that because non-conscious control is commonplace, FCH is uncompelling. But I think they only arrive at this conclusion because they did not accurately capture the worry expressed by FCH. By my lights, the issue was not control per se, but *conscious* control.
In short, the worry expressed by Doris and others is that (a) if we associate the agent with the conscious self, and (b) if it turns out that the conscious self has far less control over human behavior than we traditionally assumed, then it is unclear why moral responsibility shouldn’t shrink along with the diminishing powers of the conscious self. Let’s call this the “Shrinking Agency Hypothesis.” After all, while we can clearly exercise some reflective control over the beliefs, desires, and intentions that are accessible to consciousness, it is clear that we cannot control the non-conscious control processes that operate entirely below the veil of consciousness. As such, I am not sure what sense can be made of the claim that I am equally responsible for the behavior that is the result of my non-conscious control as I am for the behavior that is the result of my conscious control. In the latter cases, I am *consciously* choosing, deliberating, acting, etc. In the former cases, I—here defined as my conscious self—am not aware of the springs of my own action. In some important sense, the behavioral products of non-conscious control are alien to my conscious self. I take it that’s why people are so surprised by the data on situationism. No one thinks that whether they decide to help an old lady in need could be greatly influenced by finding a dime in a phone booth. Indeed, that dime-finding (or the lack thereof) influences our behavior provides us reasons for second-guessing the domain of conscious thought and deliberation.
Of course, Suhler and Churchland could simply extend the traditional notion of agency to include both conscious and non-conscious control. But they would need an argument that showed how the thread of responsibility carries over from the former to the latter. Until then, all they have shown is that human beings can exercise two kinds of control. But it does not follow from this alone that we are equally responsible for both.
Posted by: tnadelhoffer | September 08, 2009 at 01:28 PM
Thomas, your latest comment reminds me of Dennett's point that "if you make yourself really small, you can externalize just about anything". "Shrinking Agency" indeed. I don't accept the premise that the products of non-conscious control are alien to my conscious self. I'd say it depends - are we playing for the same team? Most of the time, I find my unconscious goals to be highly congruent with the ones I reflectively endorse.
Of course, most of the time isn't the same as all of the time. Dime-finding might be a case in point. Although I'm fine with the idea that my mood should be an important determinant of how helpful I am, I also want the other person's degree of need to be even more important in determining my helpful behavior. If the research shows that it's probably not so*, then that's a problem.
* I don't know if the research does show this. I'm working strictly with second-hand info here, and just hypothesizing a potential research finding, in order to explore the larger issue.
Posted by: Paul Torek | September 12, 2009 at 04:04 PM
"In some important sense, the behavioral products of non-conscious control are alien to my conscious self."
Well, I think this misses an important aspect of Suhler and Churchland's thesis, which I mentioned above: the special sophistication of the unconscious mind. The powers of 'conscious control' are limited, less sophisticated, and derivative from the powers of the unconscious mind. The words "my" and "self" now reference contested concepts, and the relationship between the unconscious and conscious mind on this view suggests the antonym of "alien." The unconscious control Suhler and Churchland reference constitutes a large part of S's overall character: perseverance, loyalty, determination, faithfulness, reliability, honor, work ethic, and so on.
Posted by: Michael Metzler | September 26, 2009 at 07:30 PM
Michael,
Acting without conscious awareness of her motives, the agent doesn't have (as Thomas N. pointed out) *reflective* control over her behavior, however sophisticated her unconscious control might be. Also, conscious *intent* is normally central to responsibility. In the law, a wrongful act has to be accompanied by mens rea (done intentionally and knowingly) to carry culpability. To say that it need not would indeed count as a revolution in our responsibility practices, but it isn't clear to me that such strict liability would be justifiable.
I agree with you (and Paul Torek) that an agent's unconsciously governed actions might well be congruent with her character and conscious values, but that doesn't make her responsible for them to the same extent as her conscious acts, because she has less control over them. Adding to Thomas's point: bringing things into consciousness makes available very significant *additional* capacities for control (e.g., conforming behavior to one's conscious values, asking advice, considering the consequences of one's acts) that make the agent far more sensitive to the prospect of rewards and sanctions (more response-able), which makes it legitimate to hold her fully responsible. Of course, we usually *are* conscious of our consequential actions and the motives behind them, so it would be interesting to hear real-world examples of how Suhler and Churchland suppose their reconfiguration of responsibility would play out. How much and in what ways would things change, I wonder?
Posted by: Tom Clark | September 27, 2009 at 04:11 PM
Tom,
Some of us—fools that we are—occasionally engage in what the law calls inherently dangerous activities. We handle explosives and firearms, we operate cars and boats and planes at high speed at race venues, we keep exotic dangerous animals as pets, etc. We are, if we are responsible people, very confident that we have reliable control over our “toys” in these situations. Indeed, we take extraordinary precautions to ensure public safety, but still, and due to no recklessness or negligence, serious accidents occur and people are injured or killed. It seems to me a reasonable choice at law and in personal conscience to hold people who cause such injury responsible for their actions. Granted there was no conscious intent to harm—quite the contrary—and granted there was no provable recklessness or negligence, yet none of these facts seems to me to relieve the doer of responsibility for the harm he has caused. This is, as you know, the essence of strict liability, and I know you are skeptical. The consideration that for me is conclusive in applying a standard of strict liability to inherently dangerous activities is that the doer freely chooses to engage in unnecessary activities that he knows have caused and can cause grievous injury despite his best efforts to act safely. Put more simply, if you don’t want to be held responsible for race accidents or accidents involving explosives and firearms, don’t touch the bloody things, or don’t touch them in a public venue where others can be harmed. Police and the military need to handle weapons in pursuit of their duties, so strict liability should not apply to them, but when you and I choose to engage knowingly in dangerous recreations, I think we should be held to a standard of responsibility that goes beyond the avoidance of reckless and negligent conduct.
I expect that neuroscience, as it investigates the paths of unconscious control, is going to expand the pale of negligence, so that more injury-causing accidents will be prosecuted as negligence (because it becomes clear that the doer could have done more to avoid or minimize the accident). In many cases I imagine it will remain controversial what ultimate control the doer could have exerted. The just solution to a blurry line of ultimate control seems to me a clear standard of strict responsibility: if you don’t want to be held responsible for the consequences of your accidents, then don’t engage in unnecessary inherently dangerous activities.
Posted by: Philoponus | September 28, 2009 at 08:29 AM
Philoponus,
I agree about strict liability for negligence, which is settled law, and perhaps it should be expanded in the way you suggest for accidents (but to do so might be onerous in ways you haven't anticipated). But my impression is that the paper is arguing for a good deal more than that since it basically lets manipulators off the hook. They are *denying* the claim that “if your choice is strongly affected by situational factors in ways that you are unaware of, then you plausibly have an excuse for your actions.”
Posted by: Tom Clark | September 28, 2009 at 10:29 AM
Tom,
Thanks for the note. As I take it, Suhler and Churchland are providing counterbalancing reasons to reject what they take to be Doris' strong thesis, which appears to mitigate responsibility too much. I agree with your point about the importance of reflective, conscious control, which plays an important part in our overall morality and responsibility. However, the 'sophistication' of the unconscious mind, when understood correctly on my view, changes what we take to be the nature and extent of consciousness and conscious control. For example, intent, reflection, knowing action, motivation, 'me', and a large part of what we simplistically take to be 'our conscious' control just is the sophisticated working of our unconscious mind—mechanisms we have very little knowledge about. When I go to trial, my unconscious mind goes to trial. And when you praise me for my fortitude and character, you largely praise my unconscious mechanisms of control. I would not call this a 'revolution', but Situationism, which I think is compatible with my view, is a strong claim. From Harvard Law's Situationist (http://thesituationist.wordpress.com/about-situationism/): "Situationism is premised on the social scientific insight that the naïve psychology—that is, the highly simplified, affirming, and widely held model for understanding human thinking and behavior—on which our laws and institutions are based is largely wrong."
Posted by: Michael Metzler | October 03, 2009 at 05:58 PM
"Situationism is premised on the social scientific insight that the naïve psychology—that is, the highly simplified, affirming, and widely held model for understanding human thinking and behavior—on which our laws and institutions are based is largely wrong."
I think what situationism most centrally denies is dispositionalism, that behavior is largely the result of stable internal dispositions such as character traits, as opposed to situational influences. Saying that people should be held responsible for unconsciously controlled acts that result from their dispositions (acts which may or may not be consonant with their consciously held values) sounds to me like more dispositionalism, not less. Situationists seem bent more on *distributing* responsibility for actions to situations than narrowing it onto just the agent. Of course behavior results from both dispositions and situations, so it isn't as if the agent disappears from the causal analysis of action. It's a matter of seeing the causality correctly so that our responsibility practices reflect *all* the causes, on the assumption that we want to improve things, not just impose just deserts. Situationists might argue that situations have to be "held responsible" just like people.
The causal role of consciousness per se, I take it, is a matter of considerable controversy. David Rosenthal mounts what seems to me a pretty withering attack on the functional necessity of consciousness in his talk Consciousness and Its Function. Still, even if consciousness per se isn’t functionally necessary, the processes that (somehow) entail it very likely *are* necessary for various sorts of higher-level control. In which case agents who remain unconscious of significant chunks of their dispositions (through no fault of their own, let's stipulate) aren't getting access to processes that help make them far more responsive to social norms, and thus aren’t fair targets of responsibility practices that normally engage such processes.
Posted by: Tom Clark | October 04, 2009 at 07:58 AM