Here at the Garden, I have mentioned before my enthusiasm for "transhumanism". According to this philosophy, the enhancement of human abilities and the surpassing of human limitations are ethically desirable. Most transhumanists also believe that accelerating progress in technology will allow significant enhancement sooner rather than later.
Transhumanism is still a small movement. Nevertheless, Oxford's Nick Bostrom defends these ideas in academia--especially against critics on the President's Council on Bioethics (PCB). Of course, I am also interested in the "determinism problem," and so I naturally explored the intersection of these two subjects.
I was glad to discover that the two complement each other. One appreciates the relationship between transhumanism and the determinism (mechanism?) problem when one focuses upon the superhuman--or posthuman--figures that appear in both literatures. Posthumans are prominent in transhumanist literature because they are its direct subject matter. But posthumans also frequently appear in discussions of "free will," and why this should be so is less clear. I feel that their relevance to "Agency Theory" deserves more attention.
One begins to appreciate the importance of posthumans for the free will debate when one considers the state of the art of compatibilism. According to Michael McKenna:
Before discussing any particular view, it is worth focusing on a troubling question over which contemporary compatibilists are in disagreement. Frankfurt's (Section 5.3), Wolf's (Section 5.4), and Fischer's (Section 5.5) views each face problems with manipulation cases. Recall, in a manipulation case, an incompatibilist opponent puts before the compatibilist a case involving an agent who satisfies all of the compatibilist-friendly requirements—hierarchies, meshes, or mechanisms—but who comes to satisfy those requirements through a process of manipulation that intuitively suggests that the agent is not free and morally responsible. The incompatibilist uses the examples 1) to force the compatibilist to acknowledge that it matters how it is that an agent comes to have the relevant compatibilist-friendly characteristics, and 2) to challenge the compatibilist to show what relevant difference there is between an agent who is manipulated into the relevant state, as opposed to an agent who has come to be in that state by a typical deterministic history.
The construction of such manipulation devices requires posthumans: people whose abilities, especially mental abilities, surpass our own. Indeed, upon reflection, one can see that the literature is filled with references to such science-fiction scenarios. Consider: Frankfurt controllers, Martians with radios, Watson's "super-powerful designers", Kane's Covert Non-Constraining controllers, and so on. Watson, in particular, refers to these designers in his excellent "Soft Libertarianism and Hard Compatibilism." In that article, he criticizes soft libertarian accounts (such as Kane's) while claiming that a hard compatibilist "is the only kind of compatibilist to be." The article concludes with the statement: "The philosophical alternatives for those who take freedom seriously (as I think we all must, in practice) are hard." Watson does an admirable job of framing the free will debate. This is progress.
In the spirit of Watson's article, I want to bring another article to other Gardeners' attention. The transhumanist web-zine Betterhumans published a fascinating report titled "Vaccinating Against Vice." This article explores the degree to which these science fiction manipulation scenarios already exist. The report expresses concern about the threat such manipulation poses for political freedom--but the threat to metaphysical freedom is apparent too. "To concede that the government, or any authoritative body, has the right to hardwire an individual's brain against possible behaviours or thoughts would be fatal to the core of our notion that we are a free people," says one critic.
I appreciate this critic's concern, but should it stop us from using technology to prevent crime? In the limiting case, consider a world where I am a posthuman. Because I am superintelligent, I can press a button and create a human whose every choice I already know to be good. (It is a scientific question whether this is even possible, but is this question relevant? This example also suggests the ethical dilemma of Nozick's "experience machine.") If I do not press the button, a person will be born naturally--with the inherent risk of becoming a criminal. Indeed, I have the ability to create a society in which no crime ever happens, yet the citizens satisfy the most stringent requirements for (hard) compatibilist freedom.
I do not think concerns about freedom should prevent me from pressing that button. My feeling is that, if we must choose between paradise and freedom, so much the worse for freedom. I feel confident in this position not just because I would be reducing evil in the world, but also because the alternative does not seem to offer any more freedom. That nature randomly or blindly created a person does not seem to enhance the freedom that we already grant them. Indeed, the only advantage of resisting such manipulation would seem to be that it disguises the degree to which we are already un-free.
One might summarize Watson's thesis, which these considerations emphasize, thus: agents in this universe can have no more freedom than agents whose entire lives are created through design. In particular, we do not have the demanding type of freedom that concerns skeptics (Galen Strawson, Derk Pereboom) or hard libertarians (Clarke, Chisholm). It is conceivable that we have the freedom that soft libertarians describe, but their accounts cannot improve upon those offered by compatibilists. Gardeners, is Watson's thesis correct? Will technologies such as those mentioned in "Vaccinating Against Vice" force us to confront Watson's challenge? How should society respond to these technologies?
One final note: the relevance of transhumanism is not limited to deconstructive or deflationary accounts of free will. Transhumanism is also relevant to constructive accounts. Posthumans suggest how people might become super-free. Although any freedom agents have cannot be greater than if their lives were created by design, we might have more freedom than we do now. Some manipulated agents enjoy more freedom than other manipulated agents. We can imagine how posthumans would be able to satisfy even stricter requirements for free will (more elaborate hierarchies, more sophisticated mechanisms, etc.) than we do today. Their superhuman mental abilities would give them greater freedom of choice and more opportunities than we have now. Perhaps these benefits of future technologies would compensate, somewhat, for any adverse impact they have upon our conventional sense of freedom. But we will never be able to deny Watson's thesis, and as we grow wiser, we might also better appreciate the hardness of this limitation.
"Transhumanism is still a small movement."
Thank goodness.
Posted by: Mark Smeltzer | November 07, 2004 at 10:15 PM
I suspect transhumanism is a small movement only because of the technology. If one could augment oneself as easily as getting a breast implant, then it would be much more common. Indeed, I think the phenomenon of plastic surgery going mainstream suggests a lot about how people would view augmentation.
For instance, there are recent studies on drugs that could significantly improve memory. If they weren't tied simply to treating diseases of old age, I know I'd take them immediately.
Posted by: Clark Goble | November 08, 2004 at 10:26 AM
Kip,
Having lived an exemplary life, the product of the supersmart transhumanist discovers on her death bed that she was created to be good – that her life options, good though they were, were limited by virtue of her design. You, her designer, knew all along that despite what seemed to her to be moral struggles, she would always triumph in the end. Discovering this, she feels, justifiably, that she’s simply lived out your vision of the good. Her life seems trivialized – not really her life, just the living out of a plan.
An equally virtuous creature of natural origins has no guarantee of goodness, her exemplary life ultimately attributable to the deterministic concatenation of circumstances (people, places, and things) that happened to result in good character and behavior. No one knew how her life would turn out. On her death bed, she looks back on her life with some gratitude and satisfaction, knowing that she’s simply been very lucky (she’s a determinist, let’s assume) to have had good genes and parenting, just the right level of resistible temptation, etc., etc.
But somehow her life doesn’t seem trivial, as in scenario #1. Rather, it seems to have been rich with moral struggle and triumph – in short, it was *her* life, not someone else’s plan, that she lived. Her triumph, her life, is owed not to a single designing, foreknowing intelligence, but to the give and take of millions of contingencies, none of which exerted ultimate intentional control. In the absence of such a controller, she properly thinks of *herself* as the intentional agent in most proximate control, and therefore (naturalistically) deserving of credit for a life well lived. Assigning (or taking) such credit, of course, doesn’t imply or require self-causation. It’s just what we naturally express when we see something exemplary, and it has the effect of reinforcing goodness. Satisfaction with one’s life reinforces the determination to keep living a good life.
Manipulation cases differ from natural determinism only by virtue of the existence of a designer who has foreknowledge of the behavior of his creation. The credit for your creation’s goodness lies with you, not her, since her intentions are trumped by yours – you created her good intentions, after all. Absent such a designer, credit for behavior accrues to the most proximate intentional agent, namely the natural, undesigned person herself. The natural person is of course just as determined as the designed person, so the issue, as you say, isn’t a difference in freedom, it’s about where credit and blame work to best guide goodness.
In playing god, you’ve usurped your creation’s liability for naturalistic credit and blame, since this depends on not having an uber-agent trump her own intentions by having designed her to be good (or bad). Not being thus liable would make life pretty thin, it seems. Would you really want to author such a creature? Only, perhaps, if she could never learn of your existence.
Posted by: Tom Clark | November 09, 2004 at 06:21 PM
Tom,
I am glad to see my usual allies disagreeing with me!
You defend a distinction between (what I will call) the artificial and the natural saint. According to you, the life of the natural saint does not seem "trivial", whereas the life of the artificial saint does. You elaborate upon this distinction by claiming that the natural saint has more ownership of her life than the artificial saint does. Furthermore, because a single controller is responsible for the artificial saint's behavior, whereas responsibility seems diffused over a million antecedent causes in the case of the natural saint, the natural saint "properly thinks of herself as the intentional agent in most proximate control"; the artificial saint does not. This allows the natural saint to take credit for her actions.
I think there are many problems with these claims. For one, it remains an open question (despite the inclinations of anyone towards atheism or theism) whether or not human behavior--or even the known universe--is the result of a designer. Such a designer need not be "worthy of worship". Rather, the designer might be of the more deistic variety. On your view, whether or not we should take credit for our actions hinges upon the question of this designer's existence. This strikes me as an awkward consequence of your argument which I doubt you would embrace.
Another problem is this. You claim that only the natural saint can regard herself as the agent in most proximate control. But the artificial saint has the exact same level of *proximate* control--that is why we qualify the word "control" with "proximate". The natural saint has, in some vague sense, more *ultimate* control. Even this is unclear, however, because ultimate control--the final decision-making authority--would seem to reside with whatever determined the initial conditions and rules of the universe. Nothing determines them, and so ultimately everything is out of control (this is Galen Strawson's view). Perhaps we should call the sort of control you describe "First Agent" control, to distinguish it from proximate as well as ultimate control.
You suggest another problematic distinction: "The natural person is of course just as determined as the designed person, so the issue, as you say, isn’t a difference in freedom, it’s about where credit and blame work to best guide goodness." I sympathize with this idea. *If* the designer exists, then one might apply the system of responsibility and punishment strictly to the designer, and bypass the agent entirely. However, even if the designer exists, one could also bypass the designer and work strictly with the agent. You suggest that bypassing the agent is better than bypassing the designer, but why should this be the case? Bypassing the agent is not the "best" option, because both methods work just as well. So, according to our understanding of "functional" responsibility, both would be equally responsible.
Consider this analogy. Humans have the same relationship with thermostats that our hypothetical designers have with humans. Suppose that, in two different worlds, the temperature is too cold in some room. In one world, the thermostat evolved naturally over billions of years. In the other world--our world--human beings created thermostats to maintain temperatures in rooms. In the latter world, one has two alternative methods for increasing the temperature: one can deal with the thermostat itself, or one can deal with the thermostat's creator. (One subtlety is that the control designers have over their creations--whether they are thermostats or people--usually ends after the moment of creation. The designers then only have control over their future creations.) In the former world, one can only deal with the thermostat. But I suggest, and I think you would agree, that the natural thermostat, even though it satisfies the requirements for "First Agent" control, has no more responsibility or control than the artificial one. In the same way, a natural saint has no more control over, or responsibility for, her actions than an artificial saint does. That an agent is "First" is irrelevant.
Kip
Posted by: Kip Werking | November 10, 2004 at 01:11 PM
I mistook your original thought experiment to involve some sort of constant monitoring and manipulation of the artificial saint, whereas it’s clear now (and I should have understood this) that after creation she’s left to her own devices. She behaves well due to an excellent design that takes into account virtually all temptations, pitfalls, etc. that might compromise sainthood – she is virtually incorruptible by virtue of being given a highly sophisticated moral guidance system. So yes, on this reading of the scenario, she has as much proximate control as the natural saint, since it is only having such control that keeps her saintly (she isn’t being directly manipulated to be good). If, contrary to her designer’s expectations, she starts to waver due to an unforeseen contingency, then, as you say, we can justly hold her responsible, since by hypothesis she’s capable of responding to such guidance. We need not go back to the designer.
I’d say that ascriptions of control are relative, in that if an agent isn’t being manipulated by another agent, she has more control over her behavior than if she were being manipulated. For me to control my actions is simply for *my* intentions to be the most proximate source of behavioral control, not someone else’s. (And of course if my intentions are being manipulated by another agent, they don’t count as mine.) There is, as you say, no ultimate control, but we can still distinguish between levels of proximate control that might be ascribed to an agent, and thus levels of justifiable practical accountability that work to guide goodness.
Posted by: Tom Clark | November 11, 2004 at 03:03 PM