My concern here is with a specific type of “torn decision,” in which an agent’s decision cannot follow directly from the endorsed contents of her self (such as moral convictions, rational deliberations, self-governing policies, volitional structures, and so forth). In statistical terms, the probability of this agent deciding in favor of any one of the options before her is .5. It is important to note that the agent is not indecisive because she is confronted with prospects she takes to be abhorrent or unthinkable, nor is she torn because her decision is, to her, trivial or insignificant. She is “torn” because the constituents of her agential self either do not specify what she should do or they conflict with one another, resulting in a .5 probability of her deciding one way or the other.
Perhaps an example would help: Agent Z has, for some time, endorsed only Republican candidates. Simultaneously, he has come to care deeply about the reduction of pollutants released into the atmosphere. These commitments, surprisingly, do not conflict until the election of 2010, wherein the Republican candidate openly opposes pollution controls and the Democratic candidate supports them. For the sake of simplicity, suppose that, to Z, all other factors regarding the election and the two candidates cancel out. He is in the voting booth and must vote one way or the other, having previously decided that not voting would be an unthinkably unpatriotic thing for him to do.
Libertarians, such as Kane and Balaguer, seem to view such torn decisions as the paramount opportunity for freedom (or L-freedom). Here, alternative possibilities are generated and the agent indeterminately “just decides,” perhaps “in light of” prior deliberation and desires, and perhaps as a self-forming decision, but always by selecting from two or more genuinely available courses of action. Compatibilists, on the other hand, would seemingly tend to view such moments as examples of diminished autonomy, wherein such notions as identification, ownership, and self-governance are inapplicable. The decision is not adequately controlled by the agent’s reasons-responsive mechanism and/or volitional structures (given that, should the decision-moment occur 1000 times, one result would happen roughly 500 times, and the other result roughly 500 times), and so is not one that is made fully autonomously.
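To make the statistical gloss concrete, here is a minimal sketch (purely illustrative, and my own framing rather than anything either camp endorses) of the frequency reading: the decision-moment is modeled as a fair Bernoulli trial, so any finite run of 1000 replays only approximates the 500/500 split.

```python
import random

# Purely illustrative: model the frequency gloss on a torn decision
# as a fair Bernoulli trial (p = .5 for each option).
def torn_decision():
    return "Republican" if random.random() < 0.5 else "Democrat"

tally = {"Republican": 0, "Democrat": 0}
for _ in range(1000):
    tally[torn_decision()] += 1

print(tally)  # roughly, but only roughly, a 500/500 split
```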
What is irksome to me is that I am not comfortable with either of these horns, so to speak. The so-termed “libertarian” view seems to ride on the presumption that, when the agent “just decides,” it is she who is making the decision and so it is she who makes the leap beyond indeterminacy. But exactly who or what is this “she” that/who is deciding? If her present beliefs, commitments, and desires cannot guide her one way or the other, it seems as though there is no “she” that makes a decision. A volitional spasm, perhaps, but not the personality of an agent. At the same time, at least some of our torn decisions do seem to constitute important moments in our agential history, defining, in a way, who we become as persons and agents. And so for compatibilists to pooh-pooh such decisions as paradigm examples of non-autonomous (or at least, less-than-fully autonomous) decision-making seems mistaken, as well.
How might we move beyond this gridlock? That is, what is the sort of agency proper to non-randomly made torn decisions (of the kind discussed here)?
A minor point first: Kane specifically denies that the probabilities in torn decisions must be around .5. Whether he is entitled to the denial is a different matter.
Why should a compatibilist think that torn decisions are lacking in autonomy? Perhaps you think they should because you think that the 'self' doesn't make the decision. But the agent might be deeply committed to the values at stake (something like deep commitment is needed; otherwise the agent might just arbitrarily choose, like when we choose from a menu). Given the deep commitment, there seems no reason not to say that the self is deeply involved. Of course, the self can't make the decision simply by seeing what its antecedent commitments incline toward, but who ever said that compatibilism is committed to the view that only easy choices are autonomous?
Posted by: Neil | July 09, 2009 at 08:15 PM
Neil,
Thanks for the clarification regarding Kane's theory. Hopefully the larger point remains intact.
I don't think that a compatibilist necessarily has to think that torn decisions (of Z's sort) are made in the midst of weak autonomy. My suggestion is simply that, given the structure of current prominent compatibilist theories of autonomous agency, this seems to be the result that follows. Personally, I think that a perfectly good compatibilist account of strongly autonomous torn decision-making is possible (and even beneficial).
A note about your argument concerning deep commitments. I'm not sure this view avoids the difficulties raised by Z's case. It is precisely because Z is deeply committed to both Republican loyalism and environmentalism that he cannot make his decision, no? The issue for what I perhaps have ill-termed the compatibilist view is _not_ whether or not Z's self is involved (for surely it is, unless he ceases to care about the decision); the problem is that the endorsed components of Z's self, as they stand, cannot guide Z to a decision, even though they are involved in Z's appraisal of his options. This is what I take to necessitate the conclusion that compatibilists (again, as I categorize them) must admit that Z is not fully autonomous.
And so it seems that a compatibilist account of torn decision-making, in this case, must somehow make sense of how Z creates new agential commitments, markedly different from (though perhaps related to) the ones he currently holds. For without such commitment-creation, Z (to compatibilists) either remains frozen in indecision or else makes an arbitrary and/or random (and so non-autonomous) decision.
Posted by: Grant Rozeboom | July 09, 2009 at 09:18 PM
Hi Grant,
I'm wondering why you think compatibilists should want their theories to say that torn decisions are autonomous. Compatibilists would want torn decisions to be things a person is morally responsible for, and I don't see any reason a compatibilist couldn't hold that people are morally responsible for their torn decisions. For example, in your case Z seems responsive to the relevant reasons. He's unable to come up with new considerations that would resolve his dilemma, so he remains torn. (Maybe he should abstain from voting if he's really torn, but set that aside.) Wouldn't it be weird for him to suddenly find himself fully behind his decision, once he votes?
I've thought that one puzzling bit of libertarianism is the insistence by (some) libertarians that torn decisions are paradigms of autonomous agency. That's not my intuition at all.
This seems like a case (perhaps one of many) where we face deep and serious decisions without enough information or enough time to get it and find we have to choose anyway. When we do, it's regrettable, but where's the problem for compatibilism?
Best,
Zac
Posted by: Zac Cogley | July 09, 2009 at 09:56 PM
Grant,
The gridlock you perceive is due to the faulty nature of your premise. While the probability of her deciding one way or the other may be .5, the reality of her situation is that she will ultimately decide based on her strongest, or most compelling, motive.
There is absolutely no evidence to justify your premise that "an agent’s decision cannot follow directly from the endorsed contents of her self (such as moral convictions, rational deliberations, self-governing policies, volitional structures, and so forth)."
To the contrary, a deterministic universe demands that the agent's decision MUST follow directly from the endorsed content of its self.
Your conclusion regarding the illogic of the indeterminist prospect is, however, quite correct, as cited below:
"The so-termed “libertarian” view seems to ride on the presumption that, when the agent “just decides,”it is she who is making the decision and so it is she who makes the leap beyond indeterminacy. But exactly who or what is this “she” that/who is deciding? If her present beliefs, commitments, and desires cannot guide her one way or the other, it seems as though there is no “she” that makes a decision."
Posted by: George Ortega | July 09, 2009 at 10:37 PM
Grant,
My view is that "torn decisions," of the statistically ambiguous type you mention, exist only due to a lack of self-knowledge. On this view, an agent with maximal self-knowledge would never experience a torn decision, except of a most mundane and non-threatening kind.
In light of this, cases of torn decisions are moments of self-discovery: if we don't know enough about ourselves to know that we ought to prefer A over B, then regardless of which option we choose, the immediate experience derived from choosing A or choosing B will often reveal our innermost feelings about which option we'll prefer to pick in the future.
Why is that? Well, we are structured in such a way as to already have the knowledge needed to yield a decision about which option we prefer; the only problem is that we lack a certain kind of knowledge.
There are two options here: either we are lacking knowledge about ourselves, or we are lacking knowledge about one of the options. The latter kind is not relevant to free will criticisms, and the former kind is resolved using the strategy above.
Lack of knowledge about an option might look like this: suppose an agent highly prefers vanilla ice cream, and when faced with a choice between vanilla or chocolate, vanilla or strawberry, etc., the agent does not experience even a moment of being torn about which option to select. Now, one day the agent's favorite ice cream joint puts up a sign about a new flavor by the register, and the agent notices this sign right before putting in his usual order for vanilla. The agent may then experience a moment of being torn between whether to choose vanilla or take a risk and try the new flavor.
In the context of an ice cream parlor, such types of knowledge-gaps are easily filled: the agent can just ask for a sample and see whether he prefers it over vanilla on that occasion. The knowledge he gains in the act of sampling the new flavor reveals two things: 1) knowledge about the option, and 2) knowledge about the agent's self (e.g. whether the agent is the type that prefers the new flavor).
At this point it is important to interject that an agent with a maximal sense of himself would not experience the kind of doubt we described above, since he knows himself completely: he knows things like how much risk he is willing to take, whether on that particular occasion he has had a lot of vanilla ice cream lately or very little, whether his mood craves vanilla regardless of the potential for something better, etc. The only time an agent with maximal self-knowledge would experience being "torn" is when there is NO information within himself to assist with choosing: imagine, possibly, a game show where the contestant has to choose the prize behind door A or door B.
Yet, we can still imagine the agent having strategies to resolve these kinds of situations: "when faced with a 50/50 or other statistically even situation, I will always choose the first option." If the agent has put such a strategy in place, there may be a moment when the agent recognizes he is facing a statistically split choice, and yet he never feels "torn" for a moment, thanks to that strategy.
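Here is a minimal sketch of such a pre-endorsed tie-breaking strategy (the function names and the toy scoring are my own invention, purely for illustration):

```python
# Illustrative sketch: a standing "always choose the first option" policy
# that fires only when the agent's own scoring of the options comes out even.
def choose(options, score, tie_break=lambda tied: tied[0]):
    best = max(score(o) for o in options)
    tied = [o for o in options if score(o) == best]
    return tied[0] if len(tied) == 1 else tie_break(tied)

prizes = ["door A", "door B"]
# The agent has no information favoring either door (both score 0),
# so the endorsed policy settles the choice with no felt "tornness".
print(choose(prizes, score=lambda option: 0))  # -> door A
```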
Sadly, in life not everything is as cut and dried as in the ice cream parlor. We do not always have the luxury of being able to easily resolve our sources of doubt and make informed decisions. Often we have to act blindly, and we don't learn the desired lessons until some time after the choice has been made. Regardless, the fundamental mechanics remain the same.
I have discussed this view a number of times on the Garden across a variety of threads.
Posted by: Mark Smeltzer | July 09, 2009 at 11:04 PM
Grant, we can say, if we like, that torn decisions are paradigms of autonomy, or that they are somehow lacking in autonomy. Let's leave that issue, and how to decide it, aside. How does this issue relate to the question of compatibilism at all? If the decision is non-autonomous, because the agent's motives do not settle it, then it is non-autonomous for compatibilist and libertarian alike. Of course the libertarian might think these non-autonomous decisions are necessary for the existence of free will, but that doesn't give them the right to dub them autonomous.
Posted by: Neil Levy | July 10, 2009 at 03:54 AM
Thanks, all, for your incisive responses. Hopefully my brief responses below (a day of customer service work beckons!) are adequate to keep the ball rolling.
Zac,
That’s an interesting question. I think you might be right that in cases where we are “without enough information or enough time to get it and find we have to choose anyway,” no one, compatibilist or not, would want to admit that we find there an example of full autonomy.
But I’m not so sure that this describes Z’s case. To be sure, Z is, as you point out, morally responsible for his eventual decision, autonomous or not. However, a unique feature of cases such as Z’s is, I think, that they hold long-term significance for the shape and content of agents’ identities; i.e., significantly different characterizations and expectations follow from voting for the Republican vs. the Democrat, given Z’s history. Should Z realize this, it seems as though he would want to make this decision in a manner such that he could really commit to it or own it, which itself would seem to entail some measure of strong autonomy.
Whether or not compatibilists are motivated to flesh out this point is, I suppose, a different issue. So perhaps I shouldn’t invoke the name without their permission :)
George,
It may well be true, as you point out, that Z eventually decides on the basis of his “strongest, or most compelling, motive.” It may also be true that Z blindfolds himself, makes a mark on the ballot for one of the candidates, and settles the issue that way. My concern (that, to my shame, I did not state clearly enough in the original post) is that Z’s decision cannot follow from, or be sufficiently determined by, the endorsed components of his self. That is, theories of autonomous agency tend to qualify some desires, policies, and deliberations as ones the agent endorses (think of Frankfurt’s internality/externality distinction). Given that, for Z, the pertinent endorsed features of his self conflict entirely as he attempts to make the voting decision, even if he finds some other motive for making his decision (“Uggghh – let’s just get this over with” or “The word ‘Republican’ looks nicer on the ballot”), he still won’t have made a decision that many philosophers would feel comfortable terming autonomous. I guess I don’t see why it is necessarily true that, eventually, either Z’s Republican loyalism or his environmentalism wins out. They are equally weighted, and so Z must find some other (non-endorsed or not-yet-endorsed) motive or volition in order to make his decision, not knowing how to order these two long-held commitments.
Mark,
My apologies for not having read your previous posts; I’ve been following the Garden only since January of last year.
Based upon what I can glean from your response, then, my thoughts are this: It seems as though Z’s decision is torn precisely because he does know how much each of his relevant commitments means to him. And what he comes to know about them is that they are equally significant; they carry equal weight in his practical deliberations and thus, when pitted against one another (as they are in this case), they are impotent to guide Z to a decision. And so I’m not sure how any further self-knowledge would help, nor am I entirely sure why we still must insist that Z, once he knew more about himself, would be able to overcome his indecision.
Now, you may be right that, once Z “just makes” a decision, he will reflect upon it and incorporate its implications for his selfhood. But my concern is whether or not and how Z can make this decision autonomously. And it seems as though one of the main threats to Z’s (strong) autonomy with regards to this voting decision is his inability to side himself with one option or the other. So, my question, re-worded, is this: Given that Z cannot identify himself with or fully own his decision, as identification and ownership have traditionally been conceived, how can he still make this decision fully and strongly autonomously?
Neil,
Perhaps I assumed an inference I should have made more explicit. For compatibilists (as I’ve characterized them here), one is a fully free agent if and only if one is fully autonomous i.e. one’s decisions are appropriately controlled by volitional structures, critically-reflective mechanisms, etc. It doesn’t seem as though libertarians would subscribe to this biconditional statement; and this is why, to them, Z could be a fully free agent (but for reasons that are mysterious to me), but for many compatibilists, as I perceive them, Z is not a fully free agent since he cannot be fully autonomous in how he makes his decision.
Additionally, Z could be “weakly” autonomous insofar as his decision is made “in light of,” but is not “determined by,” the relevant features of his self (i.e. his commitment to the Republican party and environmentalism). And so a libertarian might be entitled to say that Z’s decision is freely made because it is undetermined, and autonomously made because it is made “in light of” prior deliberations and commitments. But this is not the sort of autonomy that I’m after.
With regards to your question about how this relates to the question of compatibilism, I didn’t mean to suppose that Z’s case is relevant to the compatibilist/incompatibilist debate in its traditional form. What I was attempting to point out was that, for many compatibilists, free agency consists in the determination of actions by the endorsed components of oneself, and that, for libertarians, this doesn’t hold true (at least by itself). Z’s case is puzzling because his decision doesn’t seem strictly determined by the endorsed components of his self (and so doesn’t satisfy standard compatibilist criteria), but yet libertarians don’t seem to have an adequate answer to the question of how Z might still retain some measure of strong autonomy in his decision.
Posted by: Grant Rozeboom | July 10, 2009 at 06:28 AM
Grant, you raise interesting questions. I address some of these issues in this paper (in case you or others are interested):
http://www2.gsu.edu/~phlean/papers/Close_Calls.pdf
Posted by: Eddy Nahmias | July 10, 2009 at 06:42 AM
Grant,
There are two potential situations:
- While the choice between A and B (and C, ...) might be statistically even for the agent, the agent has self-endorsed policies for resolving such situations. In this case, the decision is completely autonomous. With your voting example, the agent may have a policy that when two candidates seem equally well suited, he will pick the _______ candidate. (Where the blank is filled in with a predicate that will determine his decision.)
- The agent is facing two unprecedented options and has no self-endorsed policies to make a decision. In this situation, it is imaginable that the agent would take some time to educate himself on the options if possible, try to perform some non-committal experiments if possible, or, at bottom, flip some sort of coin.
If the agent has to resort to some sort of pure coin flip, it will be a deliberate, autonomous act to use that resolution policy. Moreover, the only reason the agent would invoke that policy is that he is in a situation where he lacks the requisite knowledge to pick and lacks the ability to obtain that knowledge (or some degree of it) before picking. (The voting example hardly seems relevant to this type unless the agent in the example is *really* lazy and is choosing a coin flip over asking questions about the candidates.) I guess I'm hard pressed to see any pressure this places on the concept of autonomous acting.
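To make the two situations above concrete, here is a minimal sketch of the fallback structure I have in mind (the names and the sample policy are invented for illustration, not a claim about how any real agent deliberates):

```python
import random

def resolve(options, policies):
    # First tier: self-endorsed resolution policies, tried in order.
    for policy in policies:
        pick = policy(options)
        if pick is not None:
            return pick
    # Second tier: the deliberately adopted pure coin flip, as a last resort.
    return random.choice(options)

# Hypothetical policy filling in the blank: prefer the incumbent candidate.
def prefer_incumbent(options):
    return next((o for o in options if o.get("incumbent")), None)

candidates = [{"name": "Republican"}, {"name": "Democrat"}]
print(resolve(candidates, [prefer_incumbent])["name"])  # no incumbent, so the flip decides
```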
Posted by: Mark Smeltzer | July 10, 2009 at 12:33 PM
questions of agency, coin flips and otherwise: thanks for laying out the problematics so clearly. i am working on the possibility of ethics without foundations at http://prosthetics.wordpress.com
approaching it as a move almost parallel to rousseau's social contract formation: we daily decide to act as if something were grounded, founded and otherwise... i call this the move from the is/ought derivation to something that looks more like is/want.
of course it runs up against this decisionist problematic and probably does not incorporate actor-network-theory to the extent that it should, source objects and factors that move outside the deciding agent and in the world.
still, i would enjoy your feedback on this train of thought...
take care
Posted by: nikki | July 10, 2009 at 09:35 PM
Grant,
You write:
"It may well be true, as you point out, that Z eventually decides on the basis of his “strongest, or most compelling, motive.” It may also be true that Z blindfolds himself, makes a mark on the ballot for one of the candidates, and settles the issue that way."
I think we are in agreement that, because of the above, Z is acting according to a free will in neither case.
"Given that, for Z, the pertinent endorsed features of his self conflict entirely as he attempts to make the voting decision, even if he finds some other motive for making his decision (“Uggghh – let’s just get this over with” or “The word ‘Republican’ looks nicer on the ballot”), he still won’t have made a decision that many philosophers would feel comfortable terming autonomous."
True, but my point, which addresses the salient matter here, is that those philosophers are basing their conclusion on an illogical argument or a belief like, for example, the belief that there is a Hell and that if I'm to avoid its eternal torment I had better believe what the Church teaches about free will.
Posted by: George Ortega | July 11, 2009 at 06:36 AM
Eddy,
Thanks for pointing me in the direction of your paper -- I'll have to take a look.
Mark,
First, I think it is important to distinguish "weak" and "strong" forms of autonomy. I take it as uncontroversial that Z is weakly autonomous -- that is, he meets the standard requirements for being morally responsible for his decision, however it occurs. But strong autonomy, which traditionally requires that agents own their decision by having it be determined by the endorsed components of their agential self, seems necessarily absent from Z's case.
The reason why hinges on my belief that there are more than the two options you present in a dichotomous fashion above. For sake of argument, let's say that Z does not have a pre-established self-governing policy about what to do in the voting-booth scenario. But the two prospects with which he is presented are not wholly unprecedented to Z, either; it's the fact that they result in an irresolvable conflict that is novel to Z.
And so the "pressure" that is placed on autonomy is this: This decision is very important to Z. He wants to take it seriously, and he wants to make it in a way to which he can be committed and with which he can identify himself (i.e. strong autonomy). However, this seems impossible, given that current theories of strong autonomy require something unavailable to Z -- namely, that the relevant features of his endorsed self be sufficient to determine his decision.
This is why I suggested above that Z, if he is to avoid a coin-flip or some other trivial strategy (like blindfolding), must seemingly create a new commitment or policy on the spot, and he must do so in a way that is both different from those found in his agential past and somehow in line with that history as well. But I am not quite sure what this would look like, or if it is possible. Kant's notion of reflective judgment (3rd Critique) might help us, though.
Again, if Z "resolutely" resorts to a coin-flip, he will be autonomous insofar as he will be responsible for this act and its accompanying policy. But I don't think this sense of "autonomous" entails _strong_ autonomy, and it is the strong forms of autonomy that I wish (but am confounded in trying) to impute to Z.
George,
I'm not quite sure I understand the analogy you use to explain the "illogical argument or belief" you attribute to philosophers who would claim that Z is not fully or strongly autonomous. Perhaps you could elaborate, for my sake?
Posted by: Grant Rozeboom | July 11, 2009 at 09:34 AM
Grant,
I disagree that the solution I presented to this problem entails weak autonomy. You say, "However, this seems impossible, given that current theories of strong autonomy require something unavailable to Z -- namely, that the relevant features of his endorsed self be sufficient to determine his decision."
If his endorsed self contains a policy that allows him to resolve the decision (even if that policy is a coin flip), does that not entail that "the relevant features of his endorsed self [are] sufficient to determine his decision"? I guess we could quibble over the word "relevant"...
Regardless, I have two additional criticisms:
There is an additional, as yet unstated, element at work in accepting that agents can endorse "coin flip" resolution policies and thereby endorse the resulting decision as the agent's own decision: the coin flip does not determine the decision; it is the agent who chooses to accept that additional data as sufficient to tip the scale. This has to be strong autonomy: the agent endorses the outcome, and can offer an explanation as to why the outcome was chosen.
Moreover, the agent does not have to obey the coin flip: if the agent says, "Heads and I will vote Republican" and the coin comes up tails, the agent has to decide what to do with that information -- the agent may at this moment feel very mad at the coin, and this strong emotion allows him to uncover his true feelings and resolve to vote Republican. The converse could just as easily happen.
If the agent has no feeling at all about the flip then the agent can conclude that the agent has no deep feelings about either option, and this ought to be an indicator to the agent that additional information should be sought in the future.
When I am torn over what to order at a restaurant, I often flip a coin. I then take my emotional response to the coin flip to be the determiner of what I order. For example, when torn between steak or fish, if the coin lands steak and I am very pleased, then I know that I really wanted steak all the while, yet if it lands fish and I feel upset, then this also lets me know that all the while I really wanted the steak. The coin does not function as the arbiter of truth here; rather, it is like a divining rod.
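A minimal sketch of this divining-rod use of the flip (the reaction function is a stand-in I have invented; nothing here pretends to model real emotion):

```python
import random

def divining_rod(a, b, reaction):
    proposal = random.choice([a, b])     # the coin proposes an option...
    if reaction(proposal) == "pleased":  # ...but the felt response decides.
        return proposal
    return b if proposal == a else a     # upset by the proposal: take the other option

# Hypothetical reaction: the agent, unbeknownst to himself, wanted steak all along.
reaction = lambda option: "pleased" if option == "steak" else "upset"
print(divining_rod("steak", "fish", reaction))  # -> steak, whichever way the coin lands
```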
Posted by: Mark Smeltzer | July 11, 2009 at 01:35 PM
Mark (and Grant),
Could we do away with the coin flip and still have strong autonomy? - Z thinks, 'It's a shame I can't both vote republican and vote for pollution controls. But that's life. At least I'm voting for something I care about.' Then Z votes democrat, based on no other information, still feeling as though there is no reason to choose pollution control over a republican vote, regretting the situation, and yet telling himself that at least he is supporting something he cares deeply about.
Here Z does something which he fully endorses, and takes it as a small tragedy that life sometimes pits the things he values against one another. I am tempted to call this strongly autonomous: Z isn't required to make the world conform to the things he endorses. But maybe I'm misunderstanding the nature of the decision.
Grant (unrelated to Mark’s issue),
This is very interesting stuff. What do you think about these:
1] The agent cannot have strong autonomy in genuine torn decision cases. In other words, why do you wish to impute strong autonomy to Z in this case? (this is not meant to be a challenge, as it might read, but a real question: I'm curious about the underlying import for a theory of autonomous agency)
2] You say - 'at least some of our torn decisions do seem to constitute important moments in our agential history, defining, in a way, who we become as persons and agents.' I agree. But can't we distinguish between the decisions which come to constitute who we are, and who we are? Can't at least some of those things we endorse as part of our autonomous agency arise from previous random decisions?
Thanks,
Josh
p.s. if this posts twice, apologies, I had problems earlier.
Posted by: Josh Shepherd | July 11, 2009 at 03:35 PM
Adding to Josh's point in his opening paragraph, why can't we just say that since Z's decision is maximally consistent with his endorsed self, it's fully autonomous? The fact that the decision isn't uniquely maximally consistent with his endorsed self is neither here nor there, on this view.
Posted by: Paul Torek | July 11, 2009 at 06:08 PM
Josh,
That sounds something like the first prong of the dichotomy I presented above. Grant responded to that by saying that the agent does not have any policies that will help him resolve the dispute and make a decision. My later post is a response to that.
Generally speaking, I think many times when we feel torn, it is for reasons like what you describe: we see the good in A and the good in B, and we wish there were a synthetic option C that had the best of both, yet we are forced to choose between A and B. Our decisions may be the result of policies that we endorse, but that doesn't mean we always get what we want.
Posted by: Mark Smeltzer | July 11, 2009 at 06:56 PM
Grant,
I did not present an analogy. I presented a possible explanation for why a philosopher might not be able to reach a logical conclusion about the free will question.
Posted by: George Ortega | July 11, 2009 at 09:16 PM
Gentlemen,
Grant's premise reads as follows:
"She is “torn” because the constituents of her agential self either do not specify what she should do or they conflict with one another, resulting in a .5 probability of her deciding one way or the other."
If you accept this premise, rather than the reality that she WILL decide based on some reason, you miss the main points: her decision will be deterministic, not autonomous, and not indeterministic.
Posted by: George Ortega | July 11, 2009 at 09:27 PM
Grant,
Thanks very much for your excellent post. I hope it is not excessively obnoxious for me to mention that in the introductory essay in my new OUP book, OUR STORIES: ESSAYS ON LIFE, DEATH, and FREE WILL, I have some ruminations on related issues. More specifically, I explore in a preliminary way the puzzling fact that some philosophers find the locus of true freedom in "close calls" (torn decisions), whereas others find it in "clear cases". I'd also be interested in others' views about why philosophers disagree on this point; it has always struck me and puzzled me.
Posted by: John Fischer | July 12, 2009 at 03:13 AM
John,
You raise an important question in the debate. Whether we accept the conclusions of philosophers that the only decisions we make freely are "close calls" or "clear cases," either perspective leaves some of our decisions as having come from a determined will.
The question is: is there such a thing as a partially free will? One way to explore this question is to consider the role of our unconscious in decision making. Since, unlike our conscious will, our unconscious never sleeps, it is always potentially determining our decisions. In fact, John Bargh has shown very strong empirical evidence supporting this conclusion. If we conclude that our unconscious determines some of our decisions, how can we ever be sure that any decision we make has derived from a free will rather than from an unconsciously determined will? And if we can never be sure, what does this say about our perception of having a free will?
Posted by: George Ortega | July 12, 2009 at 04:12 AM
Mark,
I will certainly check out your previous posts (and Eddy’s paper) very soon. Since I have not done so (sadly) as of yet, I will respond only to those points that seem explicitly laid out in this post and series of comments, setting aside, for the moment, discussion of the strategy you forward for how agents might resolve their torn decisions.
First, it was probably a mistake on my part to use the term “irresolvable.” What I meant was not that the agent was without any autonomous means of making a decision, but rather that the content of the agent’s long-held commitments relevant (see my note below about content-relevance) to this decision could not spell out what the agent should do (and so could not result in strong autonomy). You suggest that, if the agent has a coin-flip resolution strategy in store, or if Z comes to adopt one on the spot, the decision will be sufficiently determined by an endorsed component of his self. And so you suggest, further, that such action on the part of Z entails strong autonomy.
My qualm is this: The coin-flip strategy does not seem to take into account the import this decision has for Z’s life (and that Z understands this decision to have for his life). This is not a matter of ordering chicken or fish. This is a matter of having to favor one set of deeply-held commitments at the expense of forsaking another cluster of commitments, with the result that neither set of commitments will guide Z in the same manner afterwards (supposing that Z takes this decision seriously). That is, if Z decides to vote for the Republican, he will no longer be someone whose self-governing policy of environmentalism is adequate to determine his decision-making in all pertinent instances.
And so I don’t think that a coin-flip strategy, either previously endorsed or adopted on the spot, addresses this forward-looking component of Z’s (unavailable) strong autonomy. It seems rather to miss the point, as Z would see it. That is, it doesn’t address (or resolve) the issue concerning the content of Z’s commitments to Republican loyalism and environmentalism, and the changed influence these commitments will have on Z’s future agential efforts. And so the way it “resolves” the conflict between them is not adequate to identify Z fully with the result and its implications for his future as an agent. It simply allows Z to make a decision and move on, leaving the clash between the contents of these two features of his self unresolved. I take this to imply that Z’s decision qua the result of a coin-flip, perhaps endorsed insofar as he makes it so that he can leave behind the voting booth and its bothersome difficulties, is still not one that is strongly autonomous with regards to its relation to his longstanding commitments and their fate as components of his self.
Josh,
I’m not quite sure what to do with the possibility of Z becoming resigned to his difficult decision and using it to claim that Z can make this decision in a manner that is strongly autonomous. It still seems that the difficulty lies only in the fact that Z’s “relevant” commitments (to Republicanism and environmentalism) cannot, by themselves, determine his decision. And so, supposing that Z becomes resigned or defeated by the irksome character of his world, what still can we say about how Z decides one way or the other? If he feels defeated, and so “just decides,” it seems as though we run into the problem I posed to the so-called “libertarian” view – how is it that Z decides in a strongly autonomous fashion, if it is just as well to Z that he decide for the Republican or the Democrat, given that the world is unfair or impossibly difficult? Just who or what is doing the deciding here?
You ask fair and stimulating questions.
With regards to 1], my underdeveloped argument is this: It is important to Z that he make a decision to which he is committed and with which he can live. In other words, even though he is completely torn between his two options, he still wants to pick between them in such a way as to own his decision. This, by my lights, motivates imputing to Z at least the possibility of strong autonomy.
The short answer to 2] is yes. But, given how I’ve constructed Z’s case, I don’t think his decision, even if he is able to incorporate it as the result of autonomous agency, should be labeled as randomly made (that is, given the import Z attributes to this decision). Unless you think that Z’s voting decision will result from previous randomly made decisions?
Paul,
What you describe would be a theory of strongly autonomous agency that could avoid the trouble I think follows from Z’s case. But it seems troubling to admit that since (1) Z’s decision is very important to him, and (2) There are two “maximally” consistent options available to him, and so (3) It is equally likely that, if Z just decides or does a coin flip, he will choose either candidate, then (4) Z can be strongly autonomous in making a decision about which he cares very much but whose outcome comes not as the result of employing his content-relevant* commitments but rather some other, decision-producing strategy (coin-flipping).
But maybe this is a bullet that’s ok to bite.
*Content-relevant, meaning that Z’s choice between the Republican and the Democrat relates directly to his commitments to the Republican party and environmentalism, and only indirectly to decision-producing strategies such as coin-flipping.
George,
Thanks for the concise clarification of my (controversial) premise. Appreciated!
John,
Thanks for the reference to your new book. It further motivates me to find it and read it (but, of course, no further incentive was needed!).
The puzzle you describe is just that – very puzzling. And it is very related, in my mind, to the problem I attempt to draw from Z’s case. I look forward to reading your discussion of it.
Posted by: Grant Rozeboom | July 12, 2009 at 07:40 PM
Grant,
You write:
"But it seems troubling to admit that since (1) Z’s decision is very important to him, and (2) There are two “maximally” consistent options available to him, and so (3) It is equally likely that, if Z just decides or does a coin flip, he will choose either candidate, then (4) Z can be strongly autonomous in making a decision about which he cares very much but whose outcome comes not as the result of employing his content-relevant* commitments but rather some other, decision-producing strategy (coin-flipping).
But maybe this is a bullet that’s ok to bite."
You can spare your teeth the taste of lead by realizing that Z's "just deciding" and deciding to flip a coin are both causally determined actions, and that the result of the coin flip is equally out of Z's control.
Posted by: George Ortega | July 12, 2009 at 08:14 PM
Grant,
Interesting comments. I wonder, do you think that Z's voting decision should resolve the conflict? Or rather, do you think it is necessary for Z's decision to resolve the conflict to count the decision as strongly autonomous?
I, for one, see genuine conflict as spread out over a long span: we make gradual progress toward achieving an internal resolution to the external conflict (reality forces us to choose one, not both) through a process of experimentation and reflection. On that view, I don't put too much stock into what decision Z makes regarding his vote.
In this case, the decision could go either way for all I care, and I would still count it as strongly autonomous so long as it is the agent's endorsed policies that determine the outcome. Z will need to reflect on his decision afterward and see how it meshes with his endorsed policies. Based on his reflections, policies may be revised, ejected or birthed.
As regards voting, I see this as happening over an extended period because the very feedback mechanisms involved take a long time to distill (e.g. government requires much time and effort to get something done well, and very little time and effort to get something done poorly). It may be years before Z has fully synthesized the ramifications of his voting decision, and it will presumably come after many conversations with others who have experienced a similar conflict of beliefs and who help him better understand his own commitments.
In short, I'm definitely not in the camp that sees torn decisions as paradigm cases of agency. If there is a disagreement between us on this point, the position sketched here may taste bitter by default.
Posted by: Mark Smeltzer | July 13, 2009 at 03:42 PM
Grant,
One other quick point (well, when I started I meant it to be quick, but this point sort of grew on me): from your last post, I clearly see a parallel between your case (and, by extension, the kinds of cases you are interested in) and cases that involve trolley problems. In other words, we put an agent into a position where the choice seems intractable, at least to the agent, and ask the agent to decide regardless.
As in trolley problems, some agents will appeal to their preferred ethical theory in an attempt to (rapidly) deduce what they ought to do, and others will throw their hands up in despair and let fate take over.
However, I think there also is third option, which is purposeful experimentation. This option endorses the position that it is unknown which option is better, and that the agent will choose an option just so that more information can be obtained to assist with future decisions.
Regardless of the range of options, when an agent achieves a resolution to a traditional trolley problem, it may turn out that the agent grows to view whichever decision they made as the wrong one: if they chose to run over the child to save the group, it is normal to feel the pangs of guilt, and this causes the agent to believe that it would have been better to have run down the group. The reverse could equally occur.
I think what we're dealing with here is that sometimes we have to make decisions that seem inescapably terrible: both of the options are bad. We can try our best to pick the least bad option, but we recognize that we are picking a bad option regardless.
I see the voting case you present as this type of scenario: a torn decision involving two bad options. If instead both candidates maximized the agent's ideals then the agent would have a fairly trivial decision before him: should I choose this awesome candidate or that awesome candidate when both seem to have the potential to make me equally happy? Poor, poor agent. However, the situation we are considering implies sacrifice and disappointment regardless of what the agent chooses.
If that is the kind of torn decision you are most concerned with, why would we want that to be the paradigm of strongly autonomous decision making? Would it not be better to have an account where strong autonomy barely squeaks by in these cases?
On the one hand, there are torn decisions an agent may face that would surely carry great significance in the agent's life: the agent may choose to sacrifice his love of self to donate time and money to a noble cause, the agent may choose to sacrifice his own safety to ensure the safety of someone else, the agent may choose to sacrifice security and well-being to achieve something great, or the agent may choose to sacrifice his desire for vanity and gratification in order to love a normal, flawed person.
These kinds of decisions could qualify as torn decisions which do seem very significant, but they seem to involve choosing between something safe, secure, bland, or selfish and something bold, risky, exciting or selfless. These may well qualify as paradigm cases of autonomy, but there is a clear asymmetry at work here between these and the kinds of torn decisions that you initially suggest.
Torn decisions that pit virtue against vice surely put the agent's character to the test, and even if the agent fails the test, the agent can repent and seek redemption. It may seem that I've gone off topic, but I bring this up to stress my own view that the resolution of the torn decision need not uniquely paint a picture of where the agent is heading (in terms of character development) in order to qualify as strongly autonomous.
Posted by: Mark Smeltzer | July 13, 2009 at 04:16 PM
Mark, July 13 (1),
You write: "In this case, the decision could go either way for all I care, and I would still count it as strongly autonomous so long as it is the agent's endorsed policies that determine the outcome."
The point you still seem to be missing is that the decision would have been made by a deterministic process that prohibits free will, or autonomy, regardless of which way it went. In other words, the "agent's endorsed policies" have a causal past that extends back even beyond the agent's birth.
Posted by: George Ortega | July 15, 2009 at 01:58 AM
The issue of freedom and randomness has come up a few times in the past month. Here's a short essay (~1800 words) in which I examine Peter van Inwagen's essay "Free Will Remains a Mystery" and argue that determination and randomness are a false dichotomy.
Van Inwagen versus Van Inwagen on Freedom and Randomness
Posted by: Mike in MI | July 22, 2009 at 09:16 AM
Here's an abstract of my argument:
In _Free Will Remains a Mystery_, PvI proposes a thought experiment in which God rewinds the universe back to a certain point and lets events proceed again from there. He says that free agents are free to choose differently when the universe is restarted in this way, and that this shows that their actions are essentially random.
His argument is unsound. I argue that:
--Determinism or randomness is a false dichotomy. Free choices are neither.
--The frequency model of probability does not apply to free choices. The probability that a free agent will choose X rather than Y is undefined.
--Supposing that the universe could be replayed as in PvI's thought experiment, a free agent would choose the same way every time.
[Note to Garden admins: Have you considered the advantages of a forum over a blog for discussions of this kind?]
Posted by: Mike Robertson | July 26, 2009 at 07:31 AM