Neuroethics

First published Wed Feb 10, 2016

Neuroethics is an interdisciplinary research area that focuses on ethical issues raised by our increased and constantly improving understanding of the brain and our ability to monitor and influence it, as well as on ethical issues that emerge from our concomitant deepening understanding of the biological bases of agency and ethical decision-making.

1. The rise and scope of neuroethics

Neuroethics focuses on ethical issues raised by our continually improving understanding of the brain, and by consequent improvements in our ability to monitor and influence brain function. Significant attention to neuroethics can be traced to 2002, when the Dana Foundation organized a meeting of neuroscientists, ethicists, and other thinkers, entitled Neuroethics: Mapping the Field. A participant at that meeting, columnist and wordsmith William Safire, is often credited with introducing and establishing the meaning of the term “neuroethics”, defining it as

the examination of what is right and wrong, good and bad about the treatment of, perfection of, or unwelcome invasion of and worrisome manipulation of the human brain. (Marcus 2002: 5)

Others contend that the word “neuroethics” was in use prior to this (Illes 2003; Racine 2010), although all agree that these earlier uses did not employ it in a disciplinary sense, or to refer to the entirety of the ethical issues raised by neuroscience.

Another attendee at that initial meeting, Adina Roskies, in response to a perceived lack of recognition of the potential novelty of neuroethics, penned “Neuroethics for the new millennium” (Roskies 2002), an article in which she proposed a bipartite division of neuroethics into the “ethics of neuroscience”, which encompasses the kinds of ethical issues raised by Safire, and “the neuroscience of ethics”, thus suggesting an extension of the scope of neuroethics to encompass understanding the biological basis of ethical thought and behavior and the ways in which this could itself influence and inform our ethical thinking. This broadening of the scope of neuroethics highlights the obvious and not-so-obvious ways that understanding our own moral thinking might affect our moral views; it is one aspect of neuroethics that distinguishes it from traditional bioethics. Another way of characterizing the field is as a study of ethical issues arising from what we can do to the brain (e.g., with neurotechnologies) and from what we know about it (including, for example, understanding the basis of ethical behavior).

Although Roskies’ definition remains influential, it has been challenged in various ways. Some have argued that neuroethics should not be limited to the neuroscience of ethics, but rather be broadened to the cognitive science of ethics (Levy, personal communication), since so much work that enables us to understand the brain takes place in disciplines outside of neuroscience, strictly defined. This is in fact in the spirit of the original proposal, since it has been widely recognized that the brain sciences encompass a wide array of disciplines, methods, and questions.

However, the most persistent criticisms have come from those who question whether the neuroscience of ethics should be considered a part of neuroethics at all: they argue that understanding our ethical faculties is a scientific and not an ethical issue, and thus should not be part of neuroethics (Conrad and De Vries 2011). This argument is usually followed by a denial that neuroethics is sufficiently distinct from traditional bioethics to warrant its being called a discipline in its own right.

There are several responses to these critics: Whether or not these various branches of inquiry form a natural kind or are themselves a focus of ethical analysis is quite beside the point. Neuroethics is porous: one cannot successfully engage with many of the ethical issues without also understanding the science. In addition, academic or intellectual disciplines are at least in part (if not entirely) social constructs. And in this case the horse is out of the barn: it is clear that interesting and significant work is being pursued regarding the brain bases of ethical thought and behavior and that this theoretical understanding has influenced, and has the potential to influence, our own thinking about ethics and our ethical practices. That neuroethics exists is undeniable: neuroethical lines of research have borne interesting fruit over the last 10–15 years; neuroethics is now recognized internationally as an area of study; neuroethics courses are taught at many universities; and training programs, professional societies, and research centers for neuroethics have already been established. Neuroethics is a discipline in its own right in part because we already structure our practices in a way that recognizes it as such. What is most significant is not whether both the ethics of neuroscience and the neuroscience of ethics are given the same overarching disciplinary name, but that there are people working on both endeavors and that they are in dialogue (and sometimes the very same people are doing both).

Of course, to the extent that neuroethicists ask questions about disease, treatment, and so on, the questions will look familiar, and for answers they can and should look to extant work in traditional bioethics so as not to reinvent the wheel. But, ultimately, Farah is correct in saying that

New ethical issues are arising as neuroscience gives us unprecedented ways to understand the human mind and to predict, influence, and even control it. These issues lead us beyond the boundaries of bioethics into the philosophy of mind, psychology, theology, law and neuroscience itself. It is this larger set of issues that has…earned it a name of its own. (Farah 2010: 2)

Neuroethics is driven by neurotechnologies: it is concerned with the ethical questions that attend the development and effects of novel neurotechnologies, as well as other ethical and philosophical issues that arise from our growing understanding of how brains give rise to the people that we are and the social structures that we inhabit and create. These questions are intimately bound up with scientific questions about what kinds of knowledge can be acquired with particular techniques: what are the scope and limits of what a technique can tell us? With many new techniques, answers to these questions are obscure not only to the lay public, but even to the scientists themselves. The uncertainty about the reach of these technologies adds to the challenge of grappling with the ethical issues raised.

Many new neurotechnologies enable us to monitor brain processes and increasingly, to understand how the brain gives rise to certain behaviors; others enable us to intervene in these processes, to change and perhaps to control behaviors, traits, or abilities. Although it will be impossible to fully canvass the range of questions neuroethics has thus far contemplated, discussion of the issues raised by a few neurotechnologies will illustrate the range of questions neuroethics entertains. Sections 2–5 below discuss a non-exhaustive list of topics that fall under the general rubric of ethics of neuroscience. Section 6 discusses the neuroscience of ethics and Section 7 looks towards new neurotechnologies.

2. The ethics of enhancement

While medicine’s traditional goal of treating illness is pursued by the development of drugs and other treatments that counteract the detrimental effects of disease or insult, the same kinds of compounds and methods that are being developed to treat disease may also enhance normal cognitive functioning. We already possess the ability to improve some aspects of cognition above baseline, and will certainly develop other ways of doing so. Thus, a prominent topic in neuroethics is the ethics of neuroenhancement: What are the arguments for and against the use of neurotechnologies to enhance one’s brain’s capacities and functioning?

Extreme proponents of enhancement are sometimes called “transhumanists”, and opponents are identified as “bioconservatives”. These value-laden appellations may unnecessarily polarize a debate that need not pit extreme viewpoints against each other. Neuroethics offers many nuanced intermediate positions that recognize shared values (Parens 2005) and may make room for embracing the benefits of enhancement while recognizing the need for some type of regulation (e.g., Lin and Allhoff 2008). The relevance of this debate itself depends to some extent upon a philosophical issue familiar to traditional bioethicists: the notorious difficulty of identifying the line between disease and normal function and the corresponding difference between treatment and enhancement. However, despite the difficulty attending the principled drawing of this line, there are already clear instances in which a technology such as a drug is used with the aim of improving a capacity or behavior that is by no means clinically dysfunctional, or with the goal of improving a capacity beyond the range of normal functioning. One common example is the use of methylphenidate, a stimulant typically prescribed for the treatment of ADHD. Known by the brand name Ritalin, methylphenidate has been shown to improve performance on working memory, episodic memory, and inhibitory control tasks. Many students use it as a study aid, and the ethical standing of such off-label use is a focus of debate among neuroethicists (Sahakian and Morein-Zamir 2007; Greely et al. 2008). The prevalence of such use among college students is disputed: while some claim it is widespread, other surveys report minimal usage of 2–5% (McCabe et al. 2005; Teter et al. 2006; Wilens et al. 2008; Singh et al. 2014).

As in the example above, the enhancements neuroethicists most often discuss are cognitive enhancements: technologies that allow normal people to function cognitively at a higher level than they might without use of the technology. One standing theoretical issue for neuroethics is a careful and precise articulation of whether, how, and why cognitive enhancement has a philosophical status different from that of any other kind of enhancement, such as enhancement of physical capacities by the use of steroids. Often overlooked are other interesting potential neuroenhancements. These are less frequently discussed than cognitive enhancements, but just as worthy of consideration. They include social/moral enhancements, such as the use of oxytocin to enhance pro-social behavior, and other noncognitive but biological enhancements, such as potential physical performance enhancers controlled by brain-computer interfaces (BCIs) (see, e.g., Savulescu and Persson 2012; Douglas 2008; Roco and Montemagno 2004). Whether discussions regarding these kinds of enhancement effectively recapitulate the cognitive enhancement debate or raise different concerns and arguments remains to be seen.

2.1 Arguments for enhancement

Naturalness: Humans naturally engage in many forms of enhancement, including cognitive enhancement. Indeed, we typically applaud and value these efforts. After all, the aim of education is to cognitively enhance students (by, we now understand, changing their brains), and we look askance at those who devalue this particular enhancement, rather than at those who embrace it. So some kinds of cognitive enhancement are routine and unremarkable. Proponents of neuroenhancement argue that there is no principled difference between the enhancements we routinely engage in and enhancement by use of drugs or other neurotechnologies (Greely 2010; Dees 2007). Many in fact argue that we are a species whose nature it is to develop and use technology for augmenting our capacities and that continual pursuit of enhancement is a mark of the human. For example, Greely and colleagues claim,

The drugs just reviewed, along with newer technologies such as brain stimulation and prosthetic brain chips, should be viewed in the same general category as education, good health habits, and information technology—ways that our uniquely innovative species tries to improve itself. (Greely et al. 2008: 702)

Cognitive liberty: Those who believe that “cognitive liberty” (see section 3 below) is a fundamental right argue that an important element of the autonomy at stake in cognitive liberty is the liberty to determine for ourselves what to do with and to our minds, including cognitive enhancements, if we so choose (Boire 2001; Bostrom and Roache 2010). Although many who champion “cognitive liberty” do so in the context of a strident political libertarianism, one can recognize the value of cognitive liberty without swallowing an entire political agenda. So, for example, even if we think that there is a prima facie right to determine our own cognitive states, there may be justifiable limits to that right. More work needs to be done to establish the boundaries of the cognitive liberties we ought to safeguard.

Utilitarian arguments: Many proponents of cognitive enhancement point to the positive effects of enhancement and argue that the benefits outweigh the costs. In these utilitarian arguments it is important to consider the positive and negative effects not only on individuals but also on society more broadly (see, e.g., Selgelid 2007).

Practical arguments: These arguments, utilitarian in spirit, point to the difficulty of enforcing regulations on extant technologies or to the detrimental effects of trying to do so. They tend not to be arguments in favor of enhancement so much as reasons not to oppose its use through regulation or legal strictures (Sandberg and Savulescu 2011; Bostrom and Roache 2010; Heinz et al. 2012; Maslen et al. 2014b).

2.2 Arguments against enhancement

Opposition to enhancement can take the form of moral condemnation and/or legal prohibition or restriction, or other regulation. It is possible for these forms of opposition to come apart—for instance, to condemn cognitive enhancement on moral grounds, while permitting it legally (for example, for some of the reasons mentioned above). Below I discuss varieties of moral arguments offered against enhancement.

Harms: The simplest and most powerful argument against enhancement is the claim that brain interventions carry with them the risk of harm, risks that make the use of these interventions unacceptable. The low bar for acceptable risk is an effect of the context of enhancement: risks deemed reasonable to incur when treating a deficiency or disease with the potential benefit of restoring normal function may be deemed unreasonable when the payoff is simply augmenting performance above a normal baseline. Some suggest that no risk is justified for enhancement purposes (Heinz et al. 2012; Kass 2003a; Sandel 2004). In evaluating the strength of a harm-based argument against enhancement, several points should be considered: 1) What are the actual and potential harms and benefits (medical and social) of a given enhancement? 2) Who should make the judgments about appropriate tradeoffs? Individuals may differ about where the risk/benefit threshold lies, and their judgments may depend upon the precise natures of the risks and benefits. The distinction between moral condemnation and legal prohibition is relevant here as well, since legal strictures presuppose an answer to this latter question. Notice, too, that the harm argument is toothless against enhancements that pose no risks.

Unnaturalness: A number of thinkers argue, in one form or another, that the use of drugs or technologies to enhance our capacities is unnatural, with the implication that what is unnatural is immoral (Kass 2003b; Maslen, Faulmüller, and Savulescu 2014a; DeGrazia 2005). Of course, to be a good argument, more reason has to be given both for why it is unnatural (see the argument from naturalness, above) and for why naturalness and morality align. Some arguments suggest that manipulating our cognitive machinery amounts to tinkering with “God-given” capacities, and usurping the role of God as creator can be easily understood as transgressive in a religious-moral framework. Despite the appeal of this framework to religious conservatives, a neuroethicist may want to offer a more ecumenical or naturalistic argument to support the link between unnatural and immoral, and will have to counter the claim, above, that it is natural for humans to enhance themselves.

Diminishing human agency: Another argument suggests that the effect of enhancement will be to diminish human agency by undermining the need for real effort and allowing for success with morally meaningless shortcuts. Human life will lose the value achieved by the process of striving for a goal and will be belittled as a result (see, e.g., Schermer 2008; Kass 2003b). Although this is a promising form of argument, more needs to be done to undergird the claim that effort is intrinsically valuable. After all, few think that we ought to abandon transportation by car for horses, walking, or bicycling, simply because these require more effort and thus would have more moral value.

The hubris objection: This interesting argument holds that the type of attitude that seems to underlie pursuit of such interventions is morally defective in some way, or is indicative of a morally defective character trait. So, for example, Michael Sandel suggests that the attitude underlying the attempt to enhance ourselves is a “Promethean” attitude of mastery that overlooks or underappreciates the “giftedness of human life”. It is the expression and indulgence of a problematic attitude of dominion toward life to which Sandel primarily objects:

The moral problem with enhancement lies less in the perfection it seeks than in the human disposition it expresses and promotes. (Sandel 2004)

Others have pushed back against this tack, arguing that the hubris objection against enhancement fundamentally misunderstands the concepts it relies upon (Kahane 2011).

Equality and Distributive Justice: One question that routinely arises with new technological advances is “who gets to benefit from them?” As with other technologies, neuroenhancements are not free. However, worries about access are compounded in the case of neuroenhancements (as they may also be with other learning technologies). As enhancements increase capacities of those who use them, they are likely to further widen the already unconscionable gap between the haves and have-nots: we can foresee that those already well-off enough to afford enhancements will use them to increase their competitive advantage against others, leaving further behind those who cannot afford them (Farah 2007; Greely et al. 2008; Academy of Medical Sciences 2012). One can imagine policy solutions to this, of course, such as having enhancements covered by health insurance, having the state distribute them to those who cannot afford them, etc. However, widespread availability of neuroenhancements will inevitably raise questions about coercion.

Coercion: The prospect of coercion is raised in several ways. Obviously, if the state decides to mandate an enhancement, treating its beneficial effects as a public health issue, this is effectively coercion. We see this currently in the backlash against vaccinations: they are mandated with the aim of promoting public health, but in some minds the mandate raises concerns about individual liberty. I would submit that the vaccination case demonstrates that at least on some occasions coercion is justified. The pertinent question is whether coercion could be justifiable for enhancement rather than for harm prevention. Although some coercive ideas, such as the suggestion that we put Prozac or other enhancers in the water supply, are unlikely to be taken seriously as policy recommendations (however, see Appel 2010), less blatant forms of coercion are more realistic threats. For example, if people immersed in tomorrow’s competitive environment are in the company of others who are reaping the benefits from cognitive enhancement, they may feel compelled to make use of the same techniques just to remain competitive, even though they would rather not use enhancements. The danger is that respecting the autonomy of some may put pressure on the autonomy of others (Tannenbaum 2014; Maslen, Faulmüller, and Savulescu 2014a; Farah 2007).

There is unlikely to be any categorical resolution of the ethics of enhancement debate. The details of a technology will be relevant to determining whether a technology ought to be made available for enhancement purposes: we ought to treat a highly enhancing technology that causes no harm differently from one that provides some benefit at noticeable cost. Moreover, the magnitude of some of the equality-related issues will depend upon empirical facts about the technologies. Are neurotechnologies equally effective for everyone? For example, there is evidence that some known enhancers such as the psychostimulants are more effective for those with deficiencies than for the unimpaired: studies suggest the beneficial effects of these drugs are proportional to the degree to which a capacity is impaired (Husain and Mehta 2011). Other reports claim that normal subjects’ capacities are not actually enhanced by these drugs, and some aspects of functioning may actually be impaired (Mattay et al. 2000; Ilieva et al. 2013). If this is a widespread pattern, it may alleviate some worries about distributive justice and contributions to social and economic stratification, since people with a deficit will benefit proportionately more than those using the drug for enhancement purposes. (However, biology is rarely that equitable, and it would be surprising if this leveling pattern turned out to be the norm.) Since the technologies that could provide enhancements are extremely diverse, ranging from drugs to implants to genetic manipulations, assessment of the risks and benefits and the way in which these technologies bear upon our conception of humanity will have to be empirically grounded.

3. Cognitive liberty

Freedom is a cornerstone value in liberal democracies and one of the most cherished kinds of freedom is freedom of thought. The main elements of freedom of thought, or “cognitive liberty” as it is sometimes called (Sententia 2013), include privacy and autonomy. Both of these can be challenged by the new developments in neuroscience. The value of, potential threat to, and ways to protect these aspects of freedom are a concern for neuroethics.

3.1 Privacy

As the framers of the U.S. Constitution were well aware, freedom is intimately linked with privacy: even being monitored is considered potentially “chilling” to the kinds of freedoms our society aims to protect. One type of freedom that has been championed in American jurisprudence is “the right to be let alone” (Warren and Brandeis 1890), to be free from government or other intrusion in our private lives.

In the past, mental privacy could be taken for granted: the first-person accessibility of the contents of consciousness ensured that the contents of one’s mind remained hidden from the outside world, until and unless they were voluntarily disclosed. Instead, the battles for freedom of thought were waged at the borders where thought meets the outside world—in expression—and were won with the First Amendment’s protections for those freedoms. Over the last half century, technological advances have eroded or impinged upon many traditional realms of worldly privacy. Most of the avenues for expression can be (and increasingly are) monitored by third parties. It is tempting to think that the inner sanctum of the mind remains the last bastion of real privacy.

This may still be largely true, but the privacy of the mind can no longer be taken for granted. Our neuroscientific achievements have already made significant headway in allowing others to discern some aspects of our mental content through neurotechnologies. Noninvasive methods of brain imaging have revolutionized the study of human cognition and have dramatically altered the kinds of knowledge we can acquire about people and their minds. The threat to mental privacy is not as simple as the naive claim that neuroimaging can read our thoughts, nor are the capabilities of imaging so innocuous and blunt that we needn’t worry about that possibility. A focus of neuroethics is to determine the real nature of the threat to mental privacy and to evaluate its ethical implications, many of which are relevant to legal, medical, and other social issues. Doing so effectively will require a solid understanding of the neuroscientific technologies and the neural bases of thought, as well as a sensitivity to the ethical problems raised by our growing knowledge and ever-more-powerful neurotechnologies. These dual necessities illustrate why neuroethicists must be trained both in neuroscience and in ethics. In what follows, I briefly discuss the most relevant neurotechnology and its limitations and then canvass a few ways in which privacy may be infringed by it.

3.1.1 An illustration: potential threats to privacy with functional MRI

One of the most prominent neurotechnologies poised to pose a threat to privacy is Magnetic Resonance Imaging, or MRI. MRI can provide both structural and functional information about a person’s brain with minimal risk and inconvenience. In general, MRI is a tool that allows researchers noninvasively to examine or monitor brain structure and activity and to correlate that structure or function with behavior. Structural or anatomical MRI provides high-resolution structural images of the brain. While structural imaging in the biosciences is not new, MRI provides much higher resolution and better ability to differentiate tissues than prior techniques such as X-rays or CT scans.

However, it is not structural but functional MRI (fMRI) that has revolutionized the study of human cognition. fMRI provides information about correlates of neuronal activity, from which neural activity can be inferred. Recent advances in analysis methods for neuroimaging data such as multi-voxel pattern analysis now allow relatively fine-grained “decoding” of brain activity (Haynes and Rees 2005; Norman et al. 2006). Decoding involves using a machine-learning algorithm to compare an observed pattern of brain activation with a database of brain activity patterns. The database is composed of experimentally established correlations between brain activity patterns and a functional variable of interest, such as a task, behavior, or mental content. The closest match allows one to (defeasibly) attribute the associated functional variable to the person being scanned. The kind of information provided by functional imaging promises to provide important evidence useful for three goals: Decoding mental content, diagnosis of mental dysfunction, and prediction of behavior/character/dysfunction. Neuroethical questions arise in all these areas.
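To make the pattern-matching idea behind decoding concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the voxel count, the candidate contents (“face” vs. “house”), and the simple correlation-based matching rule are assumptions, not a description of any actual MVPA pipeline, which would involve preprocessing, feature selection, cross-validation, and far more sophisticated classifiers.

    # Illustrative sketch of decoding: compare an observed activation pattern
    # against a database of labeled patterns and (defeasibly) attribute the
    # label of the closest match. All data here are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n_voxels = 200

    # Hypothetical "database": average activation patterns previously
    # associated with each candidate mental content.
    database = {
        "face": rng.normal(0.0, 1.0, n_voxels),
        "house": rng.normal(0.0, 1.0, n_voxels),
    }

    def decode(observed, database):
        """Return the label whose stored pattern best correlates with `observed`."""
        def corr(a, b):
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            return float(np.mean(a * b))
        scores = {label: corr(observed, pattern) for label, pattern in database.items()}
        return max(scores, key=scores.get), scores

    # Simulate a new scan: the "face" pattern plus measurement noise.
    observed = database["face"] + rng.normal(0.0, 0.5, n_voxels)
    label, scores = decode(observed, database)
    print(label, {k: round(v, 2) for k, v in scores.items()})

The point of the sketch is simply that decoding is a matter of statistical similarity to previously established correlations, which is why its inferences are defeasible and constrained by the quality and scope of the database.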

Before discussing these issues, it is important to remember that neuroimaging is a technology that is subject to a number of significant limitations, and these technical issues limit how precise the inferences can be. For example:

  • The correlations between the fMRI signal and neural activity are rough: the signal is delayed in time from the neuronal activity, and spatially smeared, thus limiting the spatial and temporal precision of the information that can be inferred.
  • A number of dynamic factors relate the fMRI signal to activity, and the precise underlying model is not yet well-understood.
  • There is relatively low signal-to-noise ratio, necessitating averaging across trials and often across people.
  • Individual brains differ both in structure and in function. This variability makes it difficult to determine when differences are clinically or scientifically relevant, and it leads to noisy data. Due to natural individual variability in structure and function and to brain plasticity (especially during development), even large differences in structure or deviations from the norm may not be indicative of any functional deficiency. Cognitive strategies can also affect variability in the data. These sources of variability complicate the analysis of data and provide even more leeway for differences to exist without dysfunction.
  • Activity in a brain area does not imply that the region is necessary for performance of the task.
  • fMRI is so sensitive to motion that it is virtually impossible to get usable data from a noncompliant subject. This makes reading content from an unwilling mind a remote prospect.

For more information about the limitations and capabilities of fMRI, see Jones et al. 2009 and Morse and Roskies 2013.

Without appreciating these technical issues and the resulting limits to what can legitimately be inferred from fMRI, one is likely to overestimate or mischaracterize the potential threat that it poses. In fact, much of the fear of mindreading expressed in non-scientific publications stems from a lack of understanding of the science. For example, there is no scientific basis to the worry that imaging would enable the reading of mental content without our knowing it. NIRS (Near InfraRed Spectroscopy) is the only imaging method that could in theory be used at a distance, and thus without the subject’s knowledge, but the information it provides is very crude and unsuitable for decoding mental content. Thus, fears that the government is able to remotely or covertly monitor the thoughts of citizens are unfounded.

3.1.2 Decoding of mental content

Noninvasive ways of inferring neural activity have led many to worry that mindreading is possible, not just in theory, but even now. Coupled with decoding techniques, fMRI can be used, for example, to reconstruct a visual stimulus from activity of the visual cortex while a subject is looking at a scene or to determine whether a subject is looking at a familiar face or hearing a particular sound. If mental content supervenes on the physical structure and function of our brains, as most philosophers and neuroscientists think it does, then in principle it should be possible to read minds by reading brains. Because of the potential to identify mental content, decoding raises issues about mental privacy.

Despite the remarkable advances in brain imaging technology, however, when it comes to mental content, our current abilities to “mind-read” are relatively limited (Roskies 2015a). Although some aspects of content can be decoded from neural data, these tend to be quite general and nonpropositional in character. The ability to infer semantic meaning from ideation or visual stimulation tends to work best when the realm of possible contents is quite constrained. Our current abilities allow us to infer some semantic atoms, such as representations denoting one of a prespecified set of concrete objects, but not unconstrained content or entire propositions. Of course, future advances might make worries about mindreading more pressing. For example, if we develop means for understanding how simple mental representations can be combined in order to yield complex combinations and can decode meaning from these complexes, we may one day come to be able to decode propositional thought.

Still, some worries are warranted. Even if neuroimaging is not at the stage where mindreading is possible, it can nonetheless threaten aspects of privacy in ways that should give us pause. Even now, neuroimaging provides some insights into attributes of people that they may not want known or disclosed. In some cases, subjects may not even know that these attributes are being probed, thinking they are being scanned for other purposes. A willing subject may not want certain things to be monitored. In what follows, I consider a few of these more realistic worries.

Implicit bias: Although explicitly acknowledged racial biases are declining, this may be due to a reporting bias attributable to the increased negative social valuation of racial prejudice. Much contemporary research now focuses on examining implicit racial biases, which are automatic or unconscious reflections of racial bias. With fMRI and EEG, it is possible to interrogate implicit biases, sometimes without the subject’s awareness that that is what is being measured (Chekroud et al. 2014; Richeson et al. 2003; Luo et al. 2006). (These can also be measured behaviorally, through tests like the IAT (Implicit Association Test), so the worry is not solely a neuroimaging worry.) While there is disagreement about how best to interpret implicit bias results (e.g., as a measure of perceived threat, as in-group/out-group distinctions, etc.) and what relevance they have for behavior, the possibility that implicit biases can be measured, either covertly or overtly, raises scientific and ethical questions (see implicit.harvard.edu). When ought this information to be collected? What procedures must be followed for subjects legitimately to consent to implicit measures? What significance should be attributed to evidence of biases? What kind of responsibility should be attributed to people who hold them? What predictive power might they hold? Should they be used for practical purposes? One can imagine obvious but controversial potential uses for implicit bias measures in legal situations, in employment contexts, in education, and in policing, all areas in which concerns of social justice are significant. See also the entry on implicit bias.

Lie detection: Several neurotechnologies are being used to detect deception or neural correlates of lying or concealing information in experimental situations. For example, both fMRI measures looking for neural correlates of deception and EEG analysis techniques relying on the P300 signal in versions of the GKT (or Guilty Knowledge Test) have been used in the laboratory to detect deception with varying levels of success. These methods are subject to a variety of criticisms (Farah et al. 2014; National Research Council 2003). For example, almost all experimental studies fail to study real lying or deception, but instead investigate some version of instructed misdirection. The context, tasks, and motivations differ greatly between actual instances of lying and these experimental analogs, calling into question whether these laboratory tests are relevant in real-world situations. Few studies address the tests’ efficacy in the face of countermeasures. Moreover, accuracy, though significantly higher than chance, is far from perfect, and because of the inability to determine base rates of lying, error rates cannot be effectively assessed. Thus, we cannot establish their reliability for real-world uses (Roskies 2015a). Despite these limitations, several companies have marketed neurotechnologies for this purpose (see, e.g., No Lie MRI, Brainwave Science, Cephos—though Cephos no longer markets neuroimaging techniques for lie detection).

Character traits: Neurotechnologies have shown some promise in identifying or predicting aspects of personality or character. In an interesting study aimed at determining how well neuroimaging could detect lies, Greene and colleagues gave subjects in the fMRI scanner a prediction task in a game of chance that they could easily cheat on. By using statistical analysis the researchers could identify a group of subjects who clearly cheated and others who did not (Greene and Paxton 2009). Although they could not determine with neuroimaging on which trials subjects cheated, there were overall differences in brain activation patterns between cheaters and those who played fair and were at chance in their predictions. Moreover, Greene and colleagues repeated this study at several months’ remove, and found that the character trait of honesty or dishonesty was stable over time: those who cheated the first time were likely to cheat again (indeed, they cheated even more the second time), and honest players remained honest the second time around. Also interesting was the fact that the brain patterns suggested that cheaters had to activate their executive control systems more than noncheaters, not only when they cheated, but also when deciding not to cheat. While the differential activations cannot be linked specifically to the propensity to cheat rather than to the act of cheating, the work suggests that these task-related activation patterns may reflect correlates of trustworthiness.

The prospect of using methods for detecting these sorts of traits or behaviors in real-world situations raises a host of thorny issues. What level of reliability should be required for their employment? In what circumstances ought they to be admissible as evidence in the courtroom? For other purposes? Using lie detection or decoding techniques from neuroscience in legal contexts may raise constitutional concerns in the U.S.: Is brain imaging a search or seizure within the meaning of the Fourth Amendment (Farahany 2012a)? Would its forcible use be precluded by Fifth Amendment rights (Farahany 2012b)? These questions, though troubling, might not be immediately pressing: in a recent case (United States v. Semrau 2012) the court ruled that fMRI lie detection is inadmissible, given its current state of development. However, the opinion left open the possibility that it may be admissible in the future, if methods improve. Finally, to the extent that relevant activation patterns correlate significantly with activation patterns on other tasks, or with a task-free measure such as default-network activity, information about character could be inferred merely by scanning subjects while they do something innocuous, without their knowing what kind of information is being sought. Thus, there are multiple dimensions to the threat to privacy posed by imaging techniques.

3.1.3 Diagnosis

Increasingly, neuroimaging information can bear upon diagnoses for diseases, and in some instances may provide predictive information prior to the onset of symptoms. Work on the default network is promising for improving diagnosis in certain diseases without requiring that subjects perform specific tasks in the scanner (Buckner, Andrews-Hanna, and Schacter 2008). For some diseases, such as Alzheimer’s disease, MRI promises to provide diagnostic information that previously could only be established at autopsy. fMRI signatures have also been linked to a variety of psychiatric diseases, although not yet with the reliability required for clinical diagnosis. Neuroethical issues also arise regarding ways to handle incidental findings, that is, evidence of asymptomatic tumors or potentially benign abnormalities that appear in the course of scanning research subjects for non-medical purposes (Illes et al. 2006; Illes and Sahakian 2011). Because of the popularity of fMRI for basic research in cognitive neuroscience, this common issue in medical ethics has become significant for non-medical researchers.

The ability to predict future functional deficits through neuroimaging raises a host of issues, many of which have been previously addressed by genethics (the ethics of genetics), since both provide information about future disease risk. What may be different is that the diseases for which neurotechnologies are diagnostically useful are uniformly those that affect the brain, and thus potentially mental competence, mood, personality, or sense of self. As such they may raise peculiarly neuroethical questions (see next section).

3.1.4 Prediction

As discussed above, decoding methods allow one to associate observed brain activity with previously observed brain/behavior correlations. In addition, such methods can also be used to predict future behaviors, insofar as these are correlated with observations of brain activity patterns. Some studies have already reported predictive power over upcoming decisions (Soon et al. 2008). Increasingly, we will see neuroscience or neuroimaging data that will give us some predictive power over longer-range future behaviors. For example, brain imaging may allow us to predict the onset of psychiatric symptoms such as psychotic or depressive episodes (Singh, Sinnott-Armstrong, and Savulescu 2013; Arbabshirani et al. 2013; Fryer et al. 2013). In cases in which the predicted behavior is indicative of mental dysfunction, such prediction raises questions about stigma, but it may also allow more effective interventions.

One confusion regarding neuroprediction should be dispelled immediately: when neuroimaging data are said to “predict” future behavior, this means that they provide some statistical information about its likelihood. Prediction in this sense does not imply that the predicted behavior necessarily will come to pass; it does not mean a person’s future is fated or determined. Although scientists occasionally make this mistake when discussing their results, the fact that brain function or structure may give us some information about future behaviors should not be interpreted as a strong challenge to free will. The prevalence of this mistake among both philosophers and scientists again illustrates the importance for neuroethicists of sophistication in both neuroscience and philosophy.

Perhaps the most consequential and most ethically difficult potential use of predictive information is in the criminal justice system. For example, there is evidence that structural brain differences are predictive of scores on the PCL-R, a tool developed to diagnose psychopathy (Hare 1991; Hart and Hare 1997). It is also well-established that psychopaths have high rates of recidivism for violent offenses. Thus, in principle neuroimaging could be used to provide information about an individual’s likelihood of recidivism. Indeed, a recent study has shown that brain scans did have predictive value for recidivism, controlling for other risk factors (Aharoni et al. 2013). Should data like these be admissible for determining sentences or parole decisions? Would that be equivalent to punishing someone for crimes they have not committed? Or is it just a neutral extension of current uses of actuarial information, such as age, gender, and income level? At an extreme, one could imagine using predictive information to detain people who have not yet committed a crime, arresting them before they do. This dystopian scenario, portrayed in the film Minority Report (Spielberg 2002), also illustrates how our abilities to predict can raise difficult ethical and policy questions when they collide with intuitions about and the value of free will and autonomy. More generally, work in neuroethics could be of significant practical use for the law, and indeed such work is often called by another moniker, “neurolaw”.
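To illustrate what this kind of “prediction” does and does not amount to, here is a toy sketch of an actuarial-style risk model supplemented with a brain-derived measure. The variables, coefficients, and logistic form are all invented for illustration; they do not reproduce the model of Aharoni et al. 2013 or any other published instrument. The point is simply that the output is a probability, not a verdict about what a person will in fact do.

    # Purely illustrative sketch: a brain-derived measure combined with ordinary
    # actuarial risk factors yields an estimated probability of reoffending.
    # Coefficients and variables are invented and have no empirical standing.
    import math

    def recidivism_risk(brain_measure, age, prior_offenses):
        """Toy logistic model: returns an estimated probability, not a fate."""
        # Invented coefficients: a lower (hypothetical) task-related brain
        # measure, younger age, and more prior offenses each raise the estimate.
        logit = 0.5 - 1.2 * brain_measure - 0.04 * (age - 30) + 0.3 * prior_offenses
        return 1.0 / (1.0 + math.exp(-logit))

    # Two hypothetical individuals with the same actuarial profile but different
    # brain measures receive different risk estimates; neither outcome is fated.
    print(round(recidivism_risk(brain_measure=1.0, age=25, prior_offenses=2), 2))
    print(round(recidivism_risk(brain_measure=-0.5, age=25, prior_offenses=2), 2))

Even on this toy picture, the ethical questions above remain: a probabilistic estimate may be statistically informative while still being an ethically problematic basis for sentencing, parole, or preventive detention.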

3.2 Autonomy

A second way in which cognitive liberty could be impacted is by limiting a person’s autonomy. Autonomy is the freedom to be the person one wants to be, to pursue one’s own goals without unjustifiable hindrances or interference, to be self-governing. Although definitions of autonomy differ (see, e.g., the entries on personal autonomy and autonomy in moral and political philosophy), it is widely appreciated as a valuable aspect of personhood. Autonomy of the mental can be impacted in a number of ways. Here are several:

Direct interventions: The ability to directly manipulate our brains to control our thoughts or behavior is an obvious threat to our autonomy. Some of our neurotechnologies offer that potential, although these sorts of neurotechnologies are invasive and used only in cases where they are medically justified. Other types of interventions, such as the administration of drugs to calm a psychotic person, may also impact autonomy.

We know that stimulating certain brain areas in animals will lead to repetitive and often stereotyped behaviors. Scientists have implanted rats with electrodes and have been able to control their foraging behaviors by stimulating their cortex. In theory we could control a person’s behavior by implanting electrodes in the relevant regions of cortex. In practice, we have a few methods that can do this, but only in a limited way. For example, Transcranial Magnetic Stimulation (TMS) applied to motor cortex can elicit involuntary movements in the part of the body controlled by the cortical area affected, or when repetitively administered it can inhibit activity for a period of time, acting as a temporary lesion. Effects will vary depending on what area of the brain is stimulated; higher cognitive functions can be impacted as well. Transcranial direct current stimulation (tDCS) uses direct current to stimulate cortex, and there are conflicting reports about whether it can reliably enhance cognitive function (Horvath, Forte, and Carter 2015; Bennabi et al. 2014). More invasive methods, such as Deep Brain Stimulation (DBS, discussed below) and electrocorticography (ECoG), both of which require brain surgery, demonstrate that direct interventions can affect cognition, action, and emotion, often in very particular and predictable ways.

However much of a threat to autonomy these methods pose in theory, they are rarely used with the aim of compromising autonomy. On the contrary, direct brain interventions, when used, are largely aimed at augmenting or restoring rather than bypassing or diminishing autonomy. For example, one rapidly advancing field in neuroscience is the area of neural prostheses and brain-computer interfaces. Neural prostheses are artificial systems that replace defective neural ones, usually sensory systems. Some of the more advanced and widely known are artificial cochleas. Other systems have been developed that allow vision-like information to feed touch-specific receptors, enabling blind people to navigate the visual world (Bach-y-Rita and Kercel 2003). Brain-computer interfaces (BCIs), on the other hand, are systems that read brain activity and use it to guide robotic prostheses for limbs, or to move a cursor on a video screen (Lebedev and Nicolelis 2006; Wolpaw et al. 2000). Prosthetic limbs that are guided by neural signals have restored motor agency to paraplegics and quadriplegics, and other BCIs have been used to communicate with people who are “locked in”, that is, fully conscious but unable to move their bodies (Birbaumer, Murguialday, and Cohen 2008; Naseer and Hong 2015). Thus, although in principle brain interventions could be used to control people and diminish their autonomy, in general direct interventions are being developed to restore and enhance it.

Neuroeconomics and neuromarketing: There are more subtle ways to impact autonomy than direct brain manipulation, and these are well within our grasp, for our thoughts can be manipulated indirectly: old worries prompted by propaganda and subliminal advertising have taken on renewed currency with the advent of neuroeconomics and neuromarketing. By better understanding how we process reward, how we make decisions more generally, and how we can bias or influence that process, we open the door to more effective external, indirect manipulations. Indeed, social psychology has shown how subtle alterations to our external environment can affect beliefs, moods, and behaviors. The precise threats posed by understanding the neural mechanisms of decision making have yet to be fully articulated. Is neuromarketing being used merely to design products that satisfy our desires more fully, or is it being used to manipulate us? Depending on how you see it, it could be construed as a good or an evil. Does understanding the neural substrates of choice and reward provide advertisers more effective tools than they had merely by using behavioral data, or just more costly ones? Do consumers consequently have less autonomy? How can we compensate for or counteract these measures? These questions are only beginning to be adequately addressed (Stanton et al. 2014).

Regulation: Yet another way that autonomy can be impacted is by restricting the things that a person can do with and to her own mind. For instance, banning mind-altering drugs is an externally imposed restraint on people’s ability to choose their states of consciousness. The degree to which people should be prevented from doing what they wish to their own selves, bodies, or minds is an ethical issue on which opinions differ. Some claim this kind of regulation is a problematic infringement of autonomy (Juth 2011; Bostrom and Sandberg 2009), but certain regulations of this type are already largely accepted in our society. For instance, regulation of drugs of abuse does impact individual autonomy, but it arguably averts potentially great harms, both self-inflicted harms to the individual users and associated harms to society. Allowing cognitive enhancing technologies only for treatment uses but not for enhancement purposes is another restriction of mental autonomy. Whether it is one we want to sanction is still up for debate, for here the potential harms to individuals and to society are much less obvious and foreseeable. Regardless of where this debate goes, it should be clear that complete autonomy is not practically possible in a world in which one person’s actions affect the well-being of others.

Belief in free will: Advances in neuroscience have been frequently claimed to have bearing upon the question of whether we have free will and on whether we can be truly morally responsible for our actions. Although the philosophical problem of free will is generally considered to be a metaphysical problem, demonstrable lack of freedom would have significant ethical consequences. A number of neuroscientists and psychologists have intimated or asserted that neuroscience can show or has shown that free will is or is not an illusion (Brembs 2011; Libet et al. 1983; Soon et al. 2008; Wegner 2003). Others have countered with arguments to the effect that such a demonstration is in principle impossible (Roskies 2006). Regardless of what science actually shows about the nature of free will, the fact that people believe neuroscience evidence supports or undermines free will has been shown to have practical consequences. For example, evidence merely supporting the premise that our minds are a function of our brains, as most of neuroscience does, is perceived by some people to be a challenge to free will (Nahmias, Coates, and Kvaran 2007). And in several studies, manipulating belief in free will affects the likelihood of cheating (e.g., Vohs and Schooler 2008). The debate within neuroscience about the nature and existence of free will will remain relevant to neuroethics in part because of its impact on our moral, legal and interpersonal practices of blaming and punishing people for their harmful actions (see also entries on free will and moral responsibility).

4. Identity and consciousness

4.1 Personal Identity

One of the aspects of neuroethics that makes it distinctive and importantly different from traditional bioethics is that we recognize that, in some yet-to-be-articulated sense, the brain is the seat of who we are (see, e.g., the entry on personal identity). For example, we now have techniques that alter memories by blunting them, strengthening them, or selectively editing them. We have drugs that affect sexuality, and others that affect mood. Here, neuroethics rubs up against some of the most challenging and contentious questions in philosophy: What is the self? Does neuroscience show that the concept does not refer? If there is a self, what sorts of changes can we undergo and still remain ourselves? What is it that makes us the same person over time? Of what value is this temporal persistence? What costs would changing personhood incur?

Because neuroscience intervention techniques can affect memory, desires, personality, mood, impulsivity and other things we might think of as constitutive of the person or the self, the changes they can cause (and combat) have a unique potential to affect both the meaning and quality of the most intimate aspects of our lives. Although neuroethics is quite different from traditional bioethics in this regard, it is not so different from genethics. For a long time, it was argued that “you are your genes”, and so the ability to interrogate our genomes, to change them, or to select among them was seen as both a promising and potentially problematic one, enabling us to understand and manipulate human nature to an extent far beyond any we had previously enjoyed. But as we have discovered, we are not (just) our genes. Our ability to sequence the human genome has not laid bare the causes of cancer or the genetic bases of intelligence or psychiatric illness, as many had anticipated. One reason is that our genome is a distal cause of the people we come to be: many complex and intervening factors matter along the way. Our brains, on the other hand, are a far more proximal cause of who we are and what we do. Our moment-to-moment behavior and our long-range plans are directly controlled by our brains, in a way they are not directly controlled by our genomes. If “You are your genes” seemed a plausible maxim, “You are your brain” is far more so.

Despite its plausibility, it is notoriously difficult to articulate the way in which we are our brains: What aspects of our brains make us the people that we are? What aspects of brain function shape our memories, our personality, our dispositions? What aspects are irrelevant or inessential to who we are? What makes possible a coherent sense of self? Our lack of answers to these deep questions does little to alleviate the pragmatic worries raised by neuroscience, since our ability to intervene in brains outstrips our understanding of what we are doing, and such interventions can affect all these aspects of our being.

In philosophy, work focusing on persons may address a variety of distinct issues using different constructs. Philosophers might be interested in the nature of personhood, in the nature of the self, in the kinds of traits and psychological states or processes that give an experienced life coherence, or in the ingredients for a flourishing life. Each calls for its own analysis. Outside of philosophy, many of these issues are run together, and confusion often results. Neuroethics, while in a unique position to leverage these issues and apply them in a fruitful way, often fails to make the most of the conceptual work philosophers have done in this area. For example, papers in neuroethics often conflate a number of these distinct concepts, lumping them together under the rubric of “personal identity”. This conflation further muddies already difficult waters, and diminishes the potential value of neuroethical work. Below I try to give a brief roadmap of the separate strands that neuroethicists have been concerned with.

The philosopher’s conception of personal identity refers to the issue of what makes a person at one time numerically identical to a person at another time (see the entry on personal identity). This metaphysical question has been addressed by a variety of philosophical theories. For example, some theorists argue that what it is to be numerically identical over time is to be the same human organism (Olson 1999) and that being the same organism is determined by sameness of life. If having the same life is the relevant criterion, one could argue that although life-sustaining areas of the brainstem are essential to personal identity (Olson 1999), brain changes that did not interrupt life would not be. For those who believe instead that bodily integrity is what is essential, arguably the ability of neuroscience to alter brain activity will have little effect on personal identity. Many other philosophers have identified the sameness of a person as being grounded in psychological continuity of some sort (e.g., Locke 1689). If this criterion is the correct one, then the stringency of that criterion may be crucial: radical brain manipulation may cause an abrupt enough shift in memories and other psychological states that a person after brain intervention is no longer the same person he or she was prior to it (Jotterand and Giordano 2011; Glannon 2009; Schermer 2011; see also papers in Neuroethics, 6(3), 2013). The more stringent the criterion, the greater the potential threat of neurotherapies to personal identity. On the other hand, if the standards for psychological continuity or connectedness are high enough, changes in personal identity may in fact be commonplace even without neurotherapies (Baylis 2011). Recognizing this may prompt us to question the criterion and/or the importance or value of personal identity. Parfit, for example, argues that what makes us one and the same person over time and what we value (psychological continuity and connectedness) can come apart (Parfit 1984).

For some, the question of personhood comes apart from the question of identity. Even if personal (i.e., numerical) identity is unchallenged by neurotechnologies and by brain dysfunction, important neuroethical questions may still be raised. Philosophers less concerned with metaphysical questions about numerical identity have focused more on the self, and on notions of authenticity and self-identification, emphasizing the importance of the psychological perspective of the person in question in creating a coherent self (e.g., Witt et al. 2013). In this vein, Schechtman has suggested that what is important is the ability to create a coherent narrative, or “narrative self” (Schechtman 2014). There is evidence that the ability to create and sustain a coherent narrative in which we are the protagonist and with which we identify is a measure of psychological health (Waters and Fivush 2015). On the other hand, some philosophers deny that they have a narrative self and locate selfhood in a synchronic property (Strawson 2004). Concerns about the nature and coherence of the narrative self, and about authenticity and autonomy, tend to be the ones most relevant to neuroethics, since these constructs clearly can be affected by even modest brain changes. For example, how do we evaluate the costs and ethical issues attending a dramatic change in personality? If neurointerventions threaten to produce dramatic shifts in a person’s values and commitments, whose interests should take priority if one person must be favored: those of the original person, or those of the person who results? The relevance of personhood, self, agency, identity, and identification needs further elaboration for neuroethics. In what follows we discuss how one neurotechnology can bear upon some of these questions.

4.1.1 Example: Deep Brain Stimulation

Deep Brain Stimulation (DBS) involves the stimulation of chronically implanted electrodes deep in the brain, and it is FDA approved for treating Parkinson’s Disease, a neurodegenerative disease in which the dopamine neurons that project to the striatum are progressively lost. Neuromodulation with DBS often restores motor function in these patients, permitting many to live much improved lives. It is also being explored as a treatment for treatment-resistant depression, OCD, addiction, and other neurological and psychiatric conditions. Although DBS is clearly a boon to many people suffering from neurological diseases, a number of puzzling issues arise from its adoption. First, it is a highly invasive treatment, requiring brain surgery and permanent implantation of a stimulator, thus posing a real possibility of harm and raising questions of cost/benefit tradeoffs. This is coupled with the fact that scientists have little mechanistic understanding of how the treatment works when it does, and treatment regimens and electrode placement tend to be determined more by art than by science. Occasionally DBS causes unusual side effects, such as mood changes, hypomania or mania, addictive behaviors, or hypersexual behavior. In one case a patient with wide-ranging musical tastes developed a fixation on Johnny Cash’s music, which persisted until stimulation was ceased (Mantione, Figee, and Denys 2014). Other reported cases involve changes in personality. The ethical questions in this area revolve around the ethics of intervening in ways that alter mood and/or personality, which is often discussed in terms of personal identity or “changing who the person is”, and around questions of autonomy and alienation (Klaming and Haselager 2013; Kraemer 2013).

One poignant example from the literature tells of a patient who, without intervention, was bedridden and had to be hospitalized due to severe motor dysfunction caused by Parkinson’s Disease (Leentjens et al. 2004). DBS resulted in a marked improvement in his motor symptoms but also made him untreatably manic, which required institutionalization. Thus, this unfortunate man had to choose between being bedridden by his motor symptoms or being manic and institutionalized. He made the choice (in his unstimulated state) to remain on stimulation (the literature does not mention whether his stimulated self concurred). While it did not happen in this case, one could imagine a situation in which the patient would choose, while unstimulated, to undergo chronic stimulation, but would choose otherwise while under stimulation (or vice versa). Dilemmas or paradoxes will arise when, for example, we try to determine the value of two potential outcomes that are valued differently by the people who might exist. To which person (or to the person in which state) should we give priority? Or, even more perplexing: if the “identity” (narrative or numerical) of the person is indeed shifted by the treatment, should we give one person the authority to consent to a procedure or choose an outcome that in practice affects a different person? DBS cases like this will provide fodder for neuroethicists for years to come.

Many neurotechnologies that have been developed for treating brain dysfunction (and especially psychiatric illnesses) have primary or side effects on some aspect of what we may think of as human agency. The ethical issues that arise with these neurotechnologies involve determining 1) in what ways they impact our selves or our agency; 2) what value, positive or negative, we should place on this impact (or on the ability to affect agency in this way); and 3) how to weigh the positive gains against the negatives. One issue that has been raised is whether we possess a clear enough conception of the elements of agency to perform this sort of analysis effectively (Roskies 2015b). Moreover, given the likelihood that no objective criteria exist for evaluating tradeoffs among these elements, and the fact that different people may value different aspects of themselves differently, the weighing process will likely have to be subjectively relativized.

Finally, DBS, as well as neural prostheses and BCIs, raises another neuroethical issue: our conception of humanity and our relations to machines. Some contend that these technologies effectively turn a person into a cyborg, making him or her something other than human. While some find this an ethically unproblematic, natural extension of our species’ characteristic drive to invent and to improve ourselves with technology (Clark 2004), others fear that creating a bio-cybernetic organism raises troubling questions about the nature or value of humanity, about the bounds of the self, or about Promethean impulses (Attiah and Farah 2014; Sandel 2009).

4.2 Consciousness, life, and death

The Hard Problem of consciousness (namely, how to explain the qualitative character of experience, or “what it is like” to have a particular experience; see, e.g., Chalmers 1995) has yielded little to the probings of neuroscience, and it is not clear whether it ever will. However, in the last decade impressive advances have been made in other realms of consciousness research. Most impressive have been the improvements in detecting altered levels of consciousness with brain imaging. Diagnosing behaviorally unresponsive patients has long been a problem for neurology, although neurologists have for at least 20 years recognized systematic differences between, and different prognoses for, the persistent vegetative state (PVS), the minimally conscious state (MCS), and locked-in syndrome, in which the patient has normal levels of awareness but cannot move. Functional brain imaging has fundamentally changed the problems faced by those caring for these patients. Owen and colleagues have shown that it is possible to identify some patients mischaracterized as being in PVS by demonstrating that they are able to understand commands and follow directions (Owen et al. 2006). In these studies, both normal subjects and brain-injured patients were instructed to visualize performing two different activities while in the fMRI scanner. In normal subjects these two tasks activated different parts of cortex. Owen showed that one patient diagnosed as being in PVS showed this normal pattern, unlike other PVS patients, who showed no differential activation when given these instructions. These data suggest that some patients diagnosed as in PVS can in fact process and understand the instructions, and that they have the capacity for sustained attention and voluntary mental action. These results were later replicated in other such patients. In a later study the same group used these imagination techniques to elicit answers to yes/no questions from some patients with severe brain injury (Monti et al. 2010). More recent work aims to adapt these methods for EEG, a cheaper and more portable neurotechnology (Cruse et al. 2011). Neuroimaging thus provides new tools for evaluating and diagnosing patients with disorders of consciousness.
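To make the decoding logic concrete, here is a minimal, purely illustrative sketch of how an answer might be read off from imagery-evoked activation in two regions of interest. It is not the analysis pipeline actually used by Owen, Monti, and colleagues; the region roles, signal values, and decision margin are assumptions introduced only for illustration.

```python
# A toy sketch of imagery-based yes/no decoding from fMRI data. This is NOT
# the published analysis; the imagery conventions, signal values, and
# decision margin below are illustrative assumptions only.
import numpy as np

def decode_answer(motor_signal, spatial_signal, margin=0.5):
    """Classify an answer from mean activation in two regions of interest.

    Assumed convention: the patient imagines playing tennis for "yes"
    (engaging motor-planning regions) and imagines navigating their house
    for "no" (engaging spatial regions). The answer is whichever signal
    clearly dominates; otherwise the trial is labeled uncertain.
    """
    motor, spatial = np.mean(motor_signal), np.mean(spatial_signal)
    if motor - spatial > margin:
        return "yes"
    if spatial - motor > margin:
        return "no"
    return "uncertain"

# Simulated activation time courses for a single question (invented numbers).
rng = np.random.default_rng(1)
motor = rng.normal(1.2, 0.3, size=20)    # elevated: consistent with motor imagery
spatial = rng.normal(0.1, 0.3, size=20)  # near baseline

print(decode_answer(motor, spatial))      # expected output: "yes"
```

The ethical weight of such a procedure rests on how reliable and stable this inference is for a given patient, which is precisely the concern about competence and consent raised below.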

These studies have the potential to revolutionize the way in which patients with altered states of consciousness are diagnosed and cared for, may have bearing on when life support is terminated, and raise the possibility of allowing patients to have some control over questions regarding their care and end of life decisions. This last possibility, while in some ways alleviating some worries about how to treat severely brain-damaged individuals, raises other thorny ethical problems. One of the most pressing is how to deal with questions of competence and informed consent: These are people with severe brain damage, and even when they do appear capable on occasion of understanding and answering questions, there is still uncertainty about whether their abilities are stable, how sophisticated they are, and whether they can competently make decisions about such weighty issues (Clausen 2008; Sinnott-Armstrong 2016). Nonetheless, these methods open up new possibilities for diagnosis and treatment and for restoring a measure of autonomy and self-determination to people with severe brain damage.

5. Neuroscience and society

5.1 Neuroscience and social justice

Neuroethics must also be attentive to issues of social justice. Beyond issues that affect individuals, such as autonomy, consent, and self-determination, discussed above, are ethical issues that affect the shape of society. In this regard the concerns are not substantially different from those in traditional bioethics. As neuroscience promises to offer treatments and enhancements, it must attend to issues of distributive justice and play a role in ensuring that the fruits of neuroscientific research do not go only to those who already enjoy the best our society has to offer. Moreover, a growing understanding that poverty, and socioeconomic status more generally, have long-lasting cognitive effects raises moral questions about social policy, the structure of our society, and the growing gap between rich and poor (Farah 2007; Noble and Farah 2013). These social and neuroscientific realities may reveal the American Dream to be largely hollow, and such findings may undercut some popular political ideologies. Justice may demand more involvement of neuroethicists in policy decisions (Giordano, Kulkarni, and Farwell 2014; Shook, Galvagni, and Giordano 2014).

Ethical issues also arise from neuroscientific research on nonhuman animals. As does traditional bioethics, neuroethics must address questions about the ethical use of animals for experimental purposes in neuroscience. In addition, however, it ought to consider questions regarding the use of animals as model systems for understanding the human brain and human cognition. Animal studies have given us the bulk of our understanding of neural physiology and anatomy, and have provided significant insight into the function of conserved biological capacities. However, the further we push into unknown territory concerning higher cognitive functions, the more we will have to attend to the specific similarities and differences between humans and other species, and evaluating a model system may involve considerable philosophical work (Shanks, Greek, and Greek 2009; Shelley 2010; Nestler and Hyman 2010). In some cases, the dissimilarities may not warrant animal experiments.

Finally, neuroethics stretches seamlessly into the law (see, e.g., Vincent 2013; Morse and Roskies 2013). Neuroethical issues arise in criminal law, in particular with the issue of criminal responsibility. For example, the recognition that a large percentage of prison inmates have some history of head trauma or other brain abnormality raises the question of where to draw the line between the bad and the mad (Center for Disease Control 2007; Maibom 2008). Neuroethics also has bearing on issues of addiction: some have characterized addiction as a brain disease or species of dysfunction, and question whether it is appropriate to hold addicts responsible for their behavior (Hyman 2007; Carter, Hall, and Illes 2011). Research has demonstrated that human brains are not fully developed until the mid-twenties, and that the areas last to develop are prefrontal regions involved in executive control and inhibition. In light of this, many have argued that juveniles should not be held fully responsible for criminal behavior. Indeed, a Supreme Court ruling (Roper v. Simmons, 2005) barred the death penalty for juvenile murderers; although an amicus brief in favor of the ruling referenced brain immaturity, the opinion itself does not rely upon it. A later case, Miller v. Alabama (2012), ruled mandatory life without parole for juveniles unconstitutional, and mentions neuroscience and social science in a footnote. Other areas of law, such as tort law, employment law, and health care law, also overlap with neuroethical concerns, and may well be influenced by neuroscientific discoveries (Clausen and Levy 2015; Freeman 2011; Jones, Schall, and Shen 2014).

5.2 Public perception of neuroscience

The advances of neuroscience have become a common topic in the popular media, with colorful brain images becoming a pervasive illustrative trope in news stories about neuroscience. While few doubt that popularizing neuroscience is a positive good, neuroethicists have been legitimately worried about the possibilities for misinformation. These include worries about “the seductive allure” of neuroscience and about misleading and oversimplified media coverage of complex scientific questions.

5.2.1 The seductive allure

There is a documented tendency for laypeople to think that information that makes reference to the brain, to neuroscience, or to neurology is more privileged, more objective, or more trustworthy than information that makes reference to the mind or to psychology. For example, Weisberg and colleagues report that subjects with little or no neuroscience training rated bad explanations as better when they made reference to the brain or incorporated neuroscientific terminology (Weisberg et al. 2008). This “seductive allure of neuroscience” is akin to an unwarranted epistemic deference to authority. The differential appraisal extends into real-world settings, with testimony from a neuroscientist or neurologist judged to be more credible than that of a psychologist. The tendency is to view neuroscience as a hard science in contrast to “soft” methods of inquiry that focus on function or behavior. In the case of neuroimaging, this betrays a deep misunderstanding of the genesis and significance of the neuroscientific information: neuroimaging data are classified and interpreted by their ties to function, so (barring unusual circumstances) they cannot be more reliable or “harder” than the psychology they rely upon.

Brain images in particular have prompted worries that the colorful images of brains with “hotspots” that accompany media coverage could themselves be misleading. If people intuitively treat brain images as akin to photographs of the brain in action, they may be misled into thinking of these images as objective representations of reality, overlooking the many inferential steps and nondemonstrative decisions that underlie the creation of the image they see (Roskies 2007). The worry is that the powerful pull of the brain image will lend a study more epistemic weight than is justified and discourage people from asking the many complicated questions that one must ask in order to understand what the image signifies and what can be inferred from the data. Further work, however, has suggested that once one takes into account the privilege accorded to neuroscience over psychology, the images themselves do not further mislead (Schweitzer et al. 2011).
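To illustrate why an activation map is a statistical construct rather than a photograph, here is a minimal toy sketch, not any actual neuroimaging pipeline, in which simulated voxel data are turned into counts of “active” voxels. The voxel counts, effect size, and thresholds are all invented; the point is only that the visible “hotspots” depend on modeling choices and on an essentially arbitrary threshold.

```python
# A toy illustration (not a real neuroimaging pipeline) of why an "activation
# map" is a statistical construct rather than a photograph: the visible
# "hotspots" depend on modeling and thresholding decisions. All numbers here
# (voxel counts, effect size, thresholds) are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 1000, 40

# Simulated per-trial signal for each voxel under rest and task conditions.
rest = rng.normal(0.0, 1.0, size=(n_voxels, n_trials))
task = rng.normal(0.0, 1.0, size=(n_voxels, n_trials))
task[:50] += 0.6  # assume only the first 50 voxels genuinely respond to the task

# One inferential step among many: a voxelwise two-sample t-statistic.
diff = task.mean(axis=1) - rest.mean(axis=1)
se = np.sqrt(task.var(axis=1, ddof=1) / n_trials + rest.var(axis=1, ddof=1) / n_trials)
t_stats = diff / se

# A nondemonstrative decision: where to set the threshold that defines a "hotspot".
for threshold in (2.0, 3.0, 4.0):
    print(f"threshold {threshold}: {np.sum(t_stats > threshold)} voxels shown as 'active'")
```

Raising or lowering the threshold changes how much of the brain appears “active”, which is one of the many analytic decisions the finished image does not display.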

5.2.2 Media hype

In this era of indubitably exciting progress in brain research, there is a “brain-mania” that is partially warranted but holds its own dangers. The culture of science is such that it is not uncommon for scientists to describe their work in the most dramatic terms possible in order to secure funding and/or fame. Although the hyperbole can be discounted by knowledgeable readers, those less sophisticated about the science may take it at face value. Studies have shown that the media are rarely critical of the scientific findings they report and tend not to present alternative interpretations (Racine et al. 2006). The result is that the popular media convey sometimes wildly inaccurate pictures of legitimate scientific discoveries, which can fuel both overly optimistic enthusiasm and fear (Racine et al. 2010). One of the clear pragmatic goals of neuroethics, whether it concerns basic research or clinical treatments, is to exhort and educate scientists and the media to better convey both the promise and the complexities of scientific research. It is the job of both these groups to teach people enough about science in general, and brain science in particular, that they see it as worthy of respect, and also as deserving the same critical assessment to which scientists themselves subject their own work.

It is admittedly difficult to accurately translate complicated scientific findings for the lay public, but it is essential. Overstatement of the significance of results can instill unwarranted hope in some cases, fear in others, and jadedness and suspicion going forward. None of these are healthy for the future status and funding of the basic sciences, and providing fodder for scientific naysayers has policy implications that go far beyond the reach of neuroscience.

5.3 Practical neuroethics

Medical practice and neuroscientific research raise a number of neuroethical issues, many of which are common to bioethics. For example, issues of consent, of incidental findings, of competence, and of privacy of information arise here. In addition, practicing neurologists, psychologists, and psychiatrists may routinely encounter brain diseases or psychological dysfunctions that raise neuroethical issues they must address in their practices (Farah 2005). Because of the overlap with traditional bioethics, these issues will not be discussed further here (articles on many of these topics can be found elsewhere in this encyclopedia). For a more detailed discussion of these more applied neuroethical issues, approached from a pragmatic point of view, see, for example, Racine (2010).

6. The neuroscience of ethics

Neuroscience, or more broadly the cognitive and neural sciences, has made significant inroads into understanding the neural basis of ethical thought and social behavior. In recent decades, these fields have begun to flesh out the neural machinery underlying human capacities for moral judgment, altruistic action, and the moral emotions. The field of social neuroscience, nonexistent two decades ago, is thriving, and our understanding of the circuitry, the neurochemistry, and the modulatory influences underlying some of our most complex and nuanced interpersonal behaviors is growing rapidly. Neuroethics recognizes that this heightened understanding of the biological bases of social and moral behavior can itself affect how we conceptualize ourselves as social and moral agents, and it foresees the importance of the interplay between our scientific conception of ourselves and our ethical views and theories. This interplay and its effects provide reason to view the neuroscience of ethics (or, more broadly, of sociality) as part of the domain of neuroethics.

Perhaps the best-known and most controversial example of such an interplay marks the beginning of this kind of exploration. In 2001, Joshua Greene and colleagues scanned people while they made a series of moral and nonmoral decisions in different scenarios, including dilemmas modeled on the philosophical “Trolley Problem” (Greene et al. 2001; Thomson 1985). The trolley problem is an example of a moral dilemma: In one scenario, a trolley is careening down a track, headed for five people. If it hits them they will all be killed. You, an onlooker, could throw a switch to divert the trolley from the main track onto a side track, where there is only one person, who will be killed if you throw the switch. Should you do nothing and let the five be killed, or switch the trolley to the side track and kill the one to save the five? In a supposedly parallel scenario, the “footbridge” case, the trolley is again headed for the five, but you are on a footbridge above the track with a heavy man, heavy enough to stop the trolley. Should you push the man off the footbridge into the path of the trolley, saving the five at the expense of the one? The puzzle posed by the Trolley Problem is why we seem to have different intuitions in these cases, since both involve saving five at the expense of one. When Greene scanned subjects faced with a series of such scenarios, he found systematic differences in the engagement of brain regions associated with emotional processing in “personal” (e.g., pushing) as opposed to “impersonal” (e.g., flipping a switch) moral dilemmas. He hypothesized that emotional interference explained the longer reaction times observed when subjects judged personal violations, like that in the footbridge case, to be permissible. In later work, Greene proposed a dual-process model of moral judgment, in which relatively automatic emotion-based reactions and high-level cognitive control jointly determine responses to moral dilemmas, and he related his findings to philosophical moral theories (Greene et al. 2004, 2008). Most controversially, he suggested that there are reasons to be suspicious of our deontological judgments and interpreted his work as lending credence to utilitarian theories (Greene 2013). Greene’s work is thus a clear example of how neuroscience might affect our ethical theorizing. Claims regarding the import of neuroscience studies for philosophical questions have sparked heated debate in philosophy and beyond, and have prompted critiques and replies from scholars both within and outside of philosophy (see, e.g., Berker 2009; Kahane et al. 2011; Christensen et al. 2014). One effect of these exchanges is to highlight a problematic tendency for scientists and some philosophers to think they can draw normative conclusions from purely descriptive data; another is to illuminate the ways in which descriptive data might itself masquerade as normative.

Greene’s early studies demonstrated that neuroscience can be used in the service of examining extremely high-level behaviors and capacities, and they have served as an inspiration for numerous other experiments investigating the neural basis of social and moral behavior and competences. Neuroscience has already turned its attention to phenomena such as altruism, empathy, well-being, and theory of mind, as well as to disorders such as autism and psychopathy (Sinnott-Armstrong 2007; Churchland 2012; Decety and Wheatley 2015; Zak 2013). The relevant work ranges from imaging studies using a variety of techniques, to manipulations of hormones and neurochemicals, to purely behavioral studies. In addition, interest in moral and social neuroscience has converged synergistically with the growth of neuroeconomics, which has flourished largely independently (Glimcher and Fehr 2013). A recent bibliography has collected almost 400 references to works in the neuroscience of ethics since 2002 (Darragh, Buniak, and Giordano 2015). We can safely assume that many more advances will be made in the years to come, and that neuroethicists will be called upon to advance, evaluate, expound upon, or deflate claims about the purported ethical implications of our new knowledge.

7. Looking forward: new neurotechnologies

The examples discussed above included pharmaceuticals that are already approved for use, existing brain imaging techniques, and invasive neurotherapies. But practical neuroethical concerns, and some theoretical concerns, depend heavily upon the details of the technologies involved. Several technologies already on the horizon are bound to raise new neuroethical questions, or old questions in new guises. One of the most powerful new tools in the research neuroscientist’s arsenal is “optogenetics”, a method of genetically modifying brain cells so that they express engineered proteins that make the cells responsive to light of specific wavelengths. The cells can then be activated or silenced by shining light upon them, allowing for cell-type-specific external control (Deisseroth 2011; Yizhar et al. 2011). Optogenetics has been successfully used in many model organisms, including rats, and work is underway to use it in monkeys. One may presume it is only a matter of time before it is developed for use in humans. The method promises precise control of specific neural populations and relatively noninvasive, targeted treatments for diseases. It also promises to raise the kinds of neuroethical issues raised by many interventions in brain function: questions of harm, of authenticity, and the prospect of brain cells being controlled by someone other than the agent himself. A second technique, CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats), allows powerful targeted gene editing (Cong et al. 2013). Although not strictly a neuroscientific technique, it can be used on neural cells to effect brain changes at the genetic level. Genetic engineering might make possible neural gene therapies and designer babies, making real consequences of the genetic revolution thus far only imagined.

These and other technologies were not even imagined a few decades ago, and it is likely that other future technologies will emerge which we cannot currently conceive of. If many neuroethical issues are closely tied to the capabilities of neurotechnologies, as I have argued, then we are unlikely to anticipate future technologies in enough detail to predict the constellation of neuroethical issues that they may give rise to. Neuroethics will have to grow as neuroscience does, adapting to novel ethical and technological challenges.

Bibliography

  • Academy of Medical Sciences, 2012, “Human Enhancement and the Future of Work: Report from a joint workshop hosted by the Academy of Medical Sciences, the British Academy, the Royal Academy of Engineering and the Royal Society”, London: Academy of Medical Sciences. [Academy 2012 available online]
  • Aharoni, Eyal, Gina M. Vincent, Carla L. Harenski, Vince D. Calhoun, Walter Sinnott-Armstrong, Michael S. Gazzaniga, and Kent A. Kiehl, 2013, “Neuroprediction of Future Rearrest”, Proceedings of the National Academy of Sciences, 110(15): 6223–28. doi:10.1073/pnas.1219302110
  • Appel, Jacob M., 2010, “Beyond Fluoride: Pharmaceuticals, Drinking Water and the Public Health”, The Huffington Post, written March 18, 2010 and last updated May 25, 2011. [Appel 2010 available online]
  • Arbabshirani, Mohammad R., Kent A. Kiehl, Godfrey D. Pearlson, and Vince D. Calhoun, 2013, “Classification of Schizophrenia Patients Based on Resting-State Functional Network Connectivity”, Frontiers in Neuroscience, 7: 133. doi:10.3389/fnins.2013.00133
  • Attiah, Mark A., and Martha J. Farah, 2014, “Minds, Motherboards, and Money: Futurism and Realism in the Neuroethics of BCI Technologies”, Frontiers in Systems Neuroscience, 8(May): 86. doi:10.3389/fnsys.2014.00086
  • Bach-y-Rita, Paul, and Stephen W. Kercel, 2003, “Sensory Substitution and the Human–machine Interface”, Trends in Cognitive Sciences, 7(12): 541–46. doi:10.1016/j.tics.2003.10.013
  • Baylis, Françoise, 2011, “‘I Am Who I Am’: On the Perceived Threats to Personal Identity from Deep Brain Stimulation”, Neuroethics, 6(3): 513–26. doi:10.1007/s12152-011-9137-1
  • Bennabi, Djamila, Solène Pedron, Emmanuel Haffen, Julie Monnin, Yvan Peterschmitt, and Vincent Van Waes, 2014, “Transcranial Direct Current Stimulation for Memory Enhancement: From Clinical Research to Animal Models”, Frontiers in Systems Neuroscience, 8(September): 159. doi:10.3389/fnsys.2014.00159
  • Berker, Selim, 2009, “The Normative Insignificance of Neuroscience”, Philosophy & Public Affairs, 37(4): 293–329.
  • Birbaumer, Niels, Ander Ramos Murguialday, and Leonardo Cohen, 2008, “Brain–computer Interface in Paralysis”, Current Opinion in Neurology, 21(6): 634–38. doi:10.1097/WCO.0b013e328315ee2d
  • Boire, Richard G., 2001, “On Cognitive Liberty”, The Journal of Cognitive Liberties, 2(1): 7–22.
  • Bostrom, Nick and Anders Sandberg, 2009, “Cognitive Enhancement: Methods, Ethics, Regulatory Challenges”, Science and Engineering Ethics, 15(3): 311–41. doi:10.1007/s11948-009-9142-5
  • Bostrom, Nick and Rebecca Roache, 2010, “Smart Policy: Cognitive Enhancement and the Public Interest”, Contemporary Readings in Law and Social Justice, 2(1): 68–84.
  • Brembs, Björn, 2011, “Towards a Scientific Concept of Free Will as a Biological Trait: Spontaneous Actions and Decision—Making in Invertebrates”, Proceedings of the Royal Society of London B: Biological Sciences, 278(1707): 930–939. doi:10.1098/rspb.2010.2325
  • Buckner, Randy L., Jessica R. Andrews-Hanna, and Daniel L. Schacter, 2008, “The Brain’s Default Network”, Annals of the New York Academy of Sciences, 1124(1): 1–38. doi:10.1196/annals.1440.011
  • Carter, Adrian, Wayne D. Hall, and Judy Illes (eds), 2011, Addiction Neuroethics: The Ethics of Addiction Neuroscience Research and Treatment, 1st edition, London: Academic Press.
  • Center for Disease Control (CDC), 2007, “Traumatic Brain Injury in Prisons and Jails”, [CDC 2007 available online (pdf)].
  • Chalmers, David J., 1995, “Facing up to the Problem of Consciousness”, Journal of Consciousness Studies, 2(3): 200–219.
  • Chekroud, Adam Mourad, Jim A.C. Everett, Holly Bridge, and Miles Hewstone, 2014, “A Review of Neuroimaging Studies of Race-Related Prejudice: Does Amygdala Response Reflect Threat?” Frontiers in Human Neuroscience, 8: 179. doi:10.3389/fnhum.2014.00179
  • Christensen, Julia F., Albert Flexas, Margareta Calabrese, Nadine K. Gut, and Antoni Gomila, 2014, “Moral Judgment Reloaded: A Moral Dilemma Validation Study”, Emotion Science, 5: 607. doi:10.3389/fpsyg.2014.00607
  • Churchland, Patricia S., 2012, Braintrust: What Neuroscience Tells Us about Morality, reprint edition, Princeton, NJ: Princeton University Press.
  • Clark, Andy, 2004, Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, 1st edition, New York: Oxford University Press.
  • Clausen, Jens, 2008, “Moving Minds: Ethical Aspects of Neural Motor Prostheses”, Biotechnology Journal, 3(12): 1493–1501. doi:10.1002/biot.200800244
  • Clausen, Jens and Neil Levy (eds.), 2015, Handbook of Neuroethics, Netherlands: Springer. doi:10.1007/978-94-007-4707-4_123
  • Cong, Le, F. Ann Ran, David Cox, Shuailiang Lin, Robert Barretto, Naomi Habib, Patrick D. Hsu, et al., 2013, “Multiplex Genome Engineering Using CRISPR/Cas Systems”, Science, 339(6121): 819–23. doi:10.1126/science.1231143
  • Conrad, Erin and Raymond De Vries, 2011, “Field of Dreams: A Social History of Neuroethics”, In Sociological Reflections on the Neurosciences, 13: 299–324. Emerald Group Publishing Limited. doi:10.1108/S1057-6290(2011)0000013017
  • Cruse, Damian, Srivas Chennu, Camille Chatelle, Tristan A. Bekinschtein, Davinia Fernández-Espejo, John D. Pickard, Steven Laureys, and Adrian M. Owen, 2011, “Bedside Detection of Awareness in the Vegetative State: A Cohort Study”, Lancet (London, England), 378(9809): 2088–94. doi:10.1016/S0140-6736(11)61224-5
  • Darragh, Martina, Liana Buniak, and James Giordano, 2015, “A Four-Part Working Bibliography of Neuroethics: Part 2—Neuroscientific Studies of Morality and Ethics”, Philosophy, Ethics, and Humanities in Medicine: PEHM, 10(2). doi:10.1186/s13010-015-0022-0
  • Decety, Jean and Thalia Wheatley (eds), 2015, The Moral Brain: A Multidisciplinary Perspective, Cambridge, MA: The MIT Press.
  • Dees, Richard H., 2007, “Better Brains, Better Selves? The Ethics of Neuroenhancements”, Kennedy Institute of Ethics Journal, 17(4): 371–95.
  • DeGrazia, David, 2005, Human Identity and Bioethics, Cambridge: Cambridge University Press.
  • Deisseroth, Karl, 2011, “Optogenetics”, Nature Methods, 8(1): 26–29. doi:10.1038/nmeth.f.324
  • Douglas, Thomas, 2008, “Moral Enhancement”, Journal of Applied Philosophy, 25(3): 228–45. doi:10.1111/j.1468-5930.2008.00412.x
  • Farah, Martha J., 2005, “Neuroethics: The Practical and the Philosophical”, Trends in Cognitive Sciences, 9(1): 34–40. doi:10.1016/j.tics.2004.12.001
  • –––, 2007, “Social, Legal, and Ethical Implications of Cognitive Neuroscience: ‘Neuroethics’ for Short”, Journal of Cognitive Neuroscience, 19(3): 363–64. doi:10.1162/jocn.2007.19.3.363
  • Farah, Martha J., 2010, Neuroethics: An Introduction with Readings, 1st edition, Cambridge, Mass: The MIT Press.
  • Farah, Martha J., J. Benjamin Hutchinson, Elizabeth A. Phelps, and Anthony D. Wagner, 2014, “Functional MRI-Based Lie Detection: Scientific and Societal Challenges”, Nature Reviews Neuroscience, 15(2): 123–31. doi:10.1038/nrn3665
  • Farahany, Nita, 2012a, “Searching Secrets”, University of Pennsylvania Law Review, 160(5): 1239–1308.
  • –––, 2012b, “Incriminating Thoughts”, Stanford Law Review, January, 351–408.
  • Freeman, M. (ed.), 2011, Law and Neuroscience: Current Legal Issues Volume 13, 1st edition, Oxford, New York: Oxford University Press.
  • Fryer, Susanna L., Scott W. Woods, Kent A. Kiehl, Vince D. Calhoun, Godfrey Pearlson, Brian J. Roach, Judith M. Ford, Vinod H. Srihari, Thomas H. McGlashan, and Daniel H. Mathalon, 2013, “Deficient Suppression of Default Mode Regions during Working Memory in Individuals with Early Psychosis and at Clinical High-Risk for Psychosis”, Schizophrenia, 4: 92. doi:10.3389/fpsyt.2013.00092
  • Giordano, James, Anvita Kulkarni, and James Farwell, 2014, “Deliver Us from Evil? The Temptation, Realities, and Neuroethico-Legal Issues of Employing Assessment Neurotechnologies in Public Safety Initiatives”, Theoretical Medicine and Bioethics, 35(1): 73–89. doi:10.1007/s11017-014-9278-4
  • Glannon, W., 2009, “Stimulating Brains, Altering Minds”, Journal of Medical Ethics, 35(5): 289–92. doi:10.1136/jme.2008.027789
  • Glimcher, Paul W. and Ernst Fehr (eds), 2013, Neuroeconomics, Second Edition: Decision Making and the Brain, 2nd edition, Amsterdam: Boston: Academic Press.
  • Greely, Henry T., 2010, “Enhancing Brains: What Are We Afraid Of?” Cerebrum: The Dana Forum on Brain Science, 2010(July). [Greely 2010 available online]
  • Greely, Henry, Barbara Sahakian, John Harris, Ronald C. Kessler, Michael Gazzaniga, Philip Campbell, and Martha J. Farah, 2008, “Towards Responsible Use of Cognitive-Enhancing Drugs by the Healthy”, Nature, 456(7223): 702–5. doi:10.1038/456702a
  • Greene, Joshua, 2013, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, New York: Penguin Press.
  • Greene, Joshua D. and Joseph M. Paxton, 2009, “Patterns of Neural Activity Associated with Honest and Dishonest Moral Decisions”, Proceedings of the National Academy of Sciences, 106(30): 12506–11. doi:10.1073/pnas.0900152106
  • Greene, Joshua D., Leigh E. Nystrom, Andrew D. Engell, John M. Darley, and Jonathan D. Cohen, 2004, “The Neural Bases of Cognitive Conflict and Control in Moral Judgment”, Neuron, 44(2): 389–400. doi:10.1016/j.neuron.2004.09.027
  • Greene, Joshua D., R. Brian Sommerville, Leigh E. Nystrom, John M. Darley, and Jonathan D. Cohen, 2001, “An fMRI Investigation of Emotional Engagement in Moral Judgment”, Science, 293(5537): 2105–8. doi:10.1126/science.1062872
  • Greene, Joshua D., Sylvia A. Morelli, Kelly Lowenberg, Leigh E. Nystrom, and Jonathan D. Cohen, 2008, “Cognitive Load Selectively Interferes with Utilitarian Moral Judgment”, Cognition, 107(3): 1144–54. doi:10.1016/j.cognition.2007.11.004
  • Hare, R.D., 1991, The Hare Psychopathy Checklist-Revised, New York, USA: Multi-Health Systems.
  • Hart, Stephen D. and Robert D. Hare, 1997, “Psychopathy: Assessment and Association with Criminal Conduct”, In Handbook of Antisocial Behavior, edited by D. M. Stoff, J. Breiling, and J. D. Maser, 22–35. Hoboken, NJ, US: John Wiley & Sons Inc.
  • Haynes, John-Dylan and Geraint Rees, 2005, “Predicting the Stream of Consciousness from Activity in Human Visual Cortex”, Current Biology, 15(14): 1301–7. doi:10.1016/j.cub.2005.06.026
  • Heinz, Andreas, Roland Kipke, Hannah Heimann, and Urban Wiesing, 2012, “Cognitive Neuroenhancement: False Assumptions in the Ethical Debate”, Journal of Medical Ethics, 38(6): 372–375. doi:10.1136/medethics-2011-100041
  • Horvath, Jared C., Jason D. Forte, and Olivia Carter, 2015, “Evidence That Transcranial Direct Current Stimulation (tDCS) Generates Little-to-No Reliable Neurophysiologic Effect beyond MEP Amplitude Modulation in Healthy Human Subjects: A Systematic Review”, Neuropsychologia, 66(January): 213–36. doi:10.1016/j.neuropsychologia.2014.11.021
  • Husain, Masud and Mitul A. Mehta, 2011, “Cognitive Enhancement by Drugs in Health and Disease”, Trends in Cognitive Sciences, 15(1): 28–36. doi:10.1016/j.tics.2010.11.002
  • Hyman, Steven E., 2007, “The Neurobiology of Addiction: Implications for Voluntary Control of Behavior”, The American Journal of Bioethics: AJOB, 7(1): 8–11. doi:10.1080/15265160601063969
  • Ilieva, Irena, Joseph Boland, and Martha J. Farah, 2013, “Objective and Subjective Cognitive Enhancing Effects of Mixed Amphetamine Salts in Healthy People”, Neuropharmacology, Cognitive Enhancers: molecules, mechanisms and minds 22nd Neuropharmacology Conference: Cognitive Enhancers, 64(January): 496–505. doi:10.1016/j.neuropharm.2012.07.021
  • Illes, Judy, 2003, “Neuroethics in a New Era of Neuroimaging”, American Journal of Neuroradiology, 24: 1739–1741
  • Illes, Judy, 2006, Neuroethics: Defining the Issues in Theory, Practice, and Policy, Oxford University Press.
  • Illes, Judy and Barbara J. Sahakian, 2011, Oxford Handbook of Neuroethics, Oxford University Press.
  • Illes, Judy, Matthew P. Kirschen, and John D. E. Gabrieli, 2003, “From Neuroimaging to Neuroethics”, Nature Neuroscience, 6(3): 205–205. doi:10.1038/nn0303-205
  • Illes, Judy, Matthew P. Kirschen, Emmeline Edwards, L R. Stanford, Peter Bandettini, Mildred K. Cho, Paul J. Ford, et al., 2006, “Incidental Findings in Brain Imaging Research”, Science (New York, N.Y.), 311(5762): 783–84. doi:10.1126/science.1124665
  • Jones, Owen D., Joshua Buckholtz, Jeffrey D. Schall, and Rene Marois, 2009, Brain Imaging for Legal Thinkers: A Guide for the Perplexed, SSRN Scholarly Paper ID 1563612. Rochester, NY: Social Science Research Network. [Jones et al. 2009 available online]
  • Jones, Owen D., Jeffrey D. Schall, and Francis X. Shen, 2014, Law and Neuroscience, Aspen Casebooks, 1st edition, Alphen aan den Rijn, Netherlands: Wolters Kluwer Law & Business.
  • Jotterand, Fabrice and James Giordano, 2011, “Transcranial Magnetic Stimulation, Deep Brain Stimulation and Personal Identity: Ethical Questions, and Neuroethical Approaches for Medical Practice”, International Review of Psychiatry, 23(5): 476–85. doi:10.3109/09540261.2011.616189
  • Juth, Niklas, 2011, “Enhancement, Autonomy, and Authenticity”, in Savulescu, et al. 2011: 34–48. doi:10.1002/9781444393552.ch3
  • Kahane, Guy, 2011, “Mastery Without Mystery: Why There Is No Promethean Sin in Enhancement”, Journal of Applied Philosophy, 28(4): 355–68. doi:10.1111/j.1468-5930.2011.00543.x
  • Kahane, Guy, Katja Wiech, Nicholas Shackel, Miguel Farias, Julian Savulescu, and Irene Tracey, 2011, “The Neural Basis of Intuitive and Counterintuitive Moral Judgment”, Social Cognitive and Affective Neuroscience, March 18, 2011, nsr005. doi:10.1093/scan/nsr005
  • Kass, Leon R., 2003a, “Ageless Bodies, Happy Souls: Biotechnology and the Pursuit of Perfection”, New Atlantis: A Journal of Technology & Society, 1(Spring): 9–28.
  • –––, 2003b, “Beyond Therapy: Biotechnology and the Pursuit of Human Improvement”, President’s Council on Bioethics, Washington, DC, 16. Kass 2003b available online (pdf)
  • Klaming, Larry and Pim Haselager, 2013, “Did My Brain Implant Make Me Do It? Questions Raised by DBS Regarding Psychological Continuity, Responsibility for Action and Mental Competence”, Neuroethics, 6: 527–39. doi:10.1007/s12152-010-9093-1
  • Kraemer F., 2013, “Me, Myself and My Brain Implant: Deep Brain Stimulation Raises Questions of Personal Authenticity and Alienation”, Neuroethics, 6: 483–97.
  • Lebedev, Mikhail A. and Miguel A.L. Nicolelis, 2006, “Brain–machine Interfaces: Past, Present and Future”, Trends in Neurosciences, 29(9): 536–46. doi:10.1016/j.tins.2006.07.004
  • Leentjens, A.F., V. Visser-Vandewalle, Y. Temel, F.R. Verhey, 2004, “[Manipulation of mental competence: An ethical problem in case of electrical stimulation of the subthalamic nucleus for severe Parkinson’s disease]” (article in Dutch), Nederlands Tijdschrift voor Geneeskunde, 148(28): 1394–98. [Leentjens et al. 2004 abstract in English available]
  • Libet, Benjamin, Curtis A. Gleason, Elwood W. Wright, and Dennis K. Pearl, 1983, “Time of Conscious Intention to Act in Relation to Onset of Cerebral Activity (readiness-Potential)”, Brain, 106(3): 623–42. doi:10.1093/brain/106.3.623
  • Lin, Patrick and Fritz Allhoff, 2008, “Against Unrestricted Human Enhancement”, Journal of Evolution & Technology, 18(1): 35–41.
  • Locke, John, 1689, “Chapter XXVII”, in An Essay Concerning Human Understanding.
  • Luo, Qian, Marina Nakic, Thalia Wheatley, Rebecca Richell, Alex Martin, and R. James R. Blair, 2006, “The Neural Basis of Implicit Moral attitude—An IAT Study Using Event-Related fMRI”, NeuroImage, 30(4): 1449–57. doi:10.1016/j.neuroimage.2005.11.005
  • Maibom, Heidi L., 2008, “The Mad, the Bad, and the Psychopath”, Neuroethics, 1(3): 167–84. doi:10.1007/s12152-008-9013-9
  • Mantione, Mariska, Martijn Figee, and Damiaan Denys, 2014, “A case of musical preference for Johnny Cash following deep brain stimulation of the nucleus accumbens”, Frontiers in Behavioral Neuroscience, 8: 152. doi:10.3389/fnbeh.2014.00152
  • Marcus, Steven J. (ed.), 2002, Neuroethics: Mapping the Field, 1st edition. New York: Dana Press.
  • Maslen, Hannah, Nadira Faulmüller, and Julian Savulescu, 2014a, “Pharmacological Cognitive Enhancement—how Neuroscientific Research Could Advance Ethical Debate”, Frontiers in Systems Neuroscience, 8(June 11): 107. doi:10.3389/fnsys.2014.00107
  • Maslen, Hannah, Thomas Douglas, Roi Cohen Kadosh, Neil Levy, and Julian Savulescu, 2014b, “The Regulation of Cognitive Enhancement Devices: Extending the Medical Model”, Journal of Law and the Biosciences, 1(1): 68–93. doi:10.1093/jlb/lst003
  • Mattay, Venkata S., Joseph H. Callicott, Alessandro Bertolino, Ian Heaton, Joseph A. Frank, Richard Coppola, Karen F. Berman, Terry E. Goldberg, and Daniel R. Weinberger, 2000, “Effects of Dextroamphetamine on Cognitive Performance and Cortical Activation”, NeuroImage, 12(3): 268–75. doi:10.1006/nimg.2000.0610
  • McCabe, Sean Esteban, John R. Knight, Christian J. Teter, and Henry Wechsler, 2005, “Non-Medical Use of Prescription Stimulants among US College Students: Prevalence and Correlates from a National Survey”, Addiction, 100(1): 96–106. doi:10.1111/j.1360–0443.2005.00944.x
  • Miller v. Alabama, 567 U.S. ___ (2012).
  • Monti, Martin M., Audrey Vanhaudenhuyse, Martin R. Coleman, Melanie Boly, John D. Pickard, Luaba Tshibanda, Adrian M. Owen, and Steven Laureys, 2010, “Willful Modulation of Brain Activity in Disorders of Consciousness”, New England Journal of Medicine, 362(7): 579–89. doi:10.1056/NEJMoa0905370
  • Morse, Stephen J. and Adina L. Roskies (eds), 2013, A Primer on Criminal Law and Neuroscience, Oxford; New York: Oxford University Press.
  • Nahmias, Eddy, D. Justin Coates, and Trevor Kvaran, 2007, “Free Will, Moral Responsibility, and Mechanism: Experiments on Folk Intuitions”, Midwest Studies in Philosophy, 31(1): 214–242.
  • Naseer, Noman and Keum-Shik Hong, 2015, “fNIRS-Based Brain-Computer Interfaces: A Review”, Frontiers in Human Neuroscience, 9(January 28): 3. doi:10.3389/fnhum.2015.00003
  • National Research Council, 2003, The Polygraph and Lie Detection, [NRC 2003 available online]
  • Nestler, Eric J. and Steven E. Hyman, 2010, “Animal Models of Neuropsychiatric Disorders”, Nature Neuroscience, 13(10): 1161–69. doi:10.1038/nn.2647
  • Noble, Kimberly G. and Martha J. Farah, 2013, “Neurocognitive Consequences of Socioeconomic Disparities: The Intersection of Cognitive Neuroscience and Public Health”, Developmental Science, 16(5): 639–40. doi:10.1111/desc.12076
  • Norman, Kenneth A., Sean M. Polyn, Greg J. Detre, and James V. Haxby, 2006, “Beyond Mind-Reading: Multi-Voxel Pattern Analysis of fMRI Data”, Trends in Cognitive Sciences, 10(9): 424–30. doi:10.1016/j.tics.2006.07.005
  • Olson, Eric T., 1999, The Human Animal: Personal Identity without Psychology, New York: Oxford University Press.
  • Owen, Adrian M., Martin R. Coleman, Melanie Boly, Matthew H. Davis, Steven Laureys, and John D. Pickard, 2006, “Detecting Awareness in the Vegetative State”, Science, 313(5792): 1402–1402. doi:10.1126/science.1130197
  • Parens, Erik, 2005, “Authenticity and Ambivalence: Toward Understanding the Enhancement Debate”, The Hastings Center Report, 35(3): 34–41. doi:10.2307/3528804
  • Parfit, Derek, 1984, Reasons and Persons, Oxford: Oxford University Press.
  • Racine, Eric, 2010, Pragmatic Neuroethics: Improving Treatment and Understanding of the Mind-Brain, Cambridge, MA: The MIT Press.
  • Racine, Eric, Ofek Bar-Ilan, and Judy Illes, 2006, “Brain Imaging: A decade of coverage in the print media”, Science Communication, 28(1): 122–42. doi:10.1177/1075547006291990
  • Racine, Eric, Sarah Waldman, Jarett Rosenberg, and Judy Illes, 2010, “Contemporary Neuroscience in the Media”, Social Science & Medicine, 71(4): 725–33. doi:10.1016/j.socscimed.2010.05.017
  • Richeson, Jennifer A., Abigail A. Baird, Heather L. Gordon, Todd F. Heatherton, Carrie L. Wyland, Sophie Trawalter, and J. Nicole Shelton, 2003, “An fMRI Investigation of the Impact of Interracial Contact on Executive Function”, Nature Neuroscience, 6(12): 1323–28. doi:10.1038/nn1156
  • Roco, Mihail C. and Carlo D. Montemagno (editors), 2004, “The Coevolution of Human Potential and Converging Technologies”, Annals of the New York Academy of Sciences, Volume 1013.
  • Roper v. Simmons, 543 U.S. 551(2005).
  • Roskies, Adina L., 2002, “Neuroethics for the New Millenium”, Neuron, 35(1): 21–23. doi:10.1016/S0896-6273(02)00763-8
  • –––, 2006, “Neuroscientific Challenges to Free Will and Responsibility”, Trends in Cognitive Sciences, 10(9): 419–23. doi:10.1016/j.tics.2006.07.011
  • –––, 2007, “Are Neuroimages like Photographs of the Brain?” Philosophy of Science, 74: 860–72.
  • –––, 2015a, “Mind Reading, Lie Detection, and Privacy”, in Clausen and Levy 2015: 679–95.
  • –––, 2015b, “Agency and Intervention”, Philosophical Transactions of the Royal Society of London B: Biological Sciences, 370(1677). doi:10.1098/rstb.2014.0215
  • Sahakian, Barbara, and Sharon Morein-Zamir, 2007, “Professor’s Little Helper”, Nature, 450(7173): 1157–59. doi:10.1038/4501157a
  • Sandberg, Anders and Julian Savulescu, 2011, “The Social and Economic Impacts of Cognitive Enhancement”, in Savulescu et al. 2011: 92–112. doi:10.1002/9781444393552.ch6
  • Sandel, Michael, 2002, “What’s Wrong with Enhancement”, President’s Council on Bioethics, Washington, DC, 12. [Sandel 2002 available online]
  • –––, 2004, “The Case Against Perfection”, The Atlantic, April. [Sandel 2004 available online].
  • –––, 2009, The Case against Perfection: Ethics in the Age of Genetic Engineering, 1st edition. Cambridge, MA: Belknap Press.
  • Savulescu, Julian, Ruud ter Meulen, and Guy Kahane (eds), 2011, Enhancing Human Capacities, Blackwell Publishing Ltd.
  • Savulescu, Julian and Ingmar Persson, 2012, “Moral Enhancement, Freedom and the God Machine”, The Monist, 95(3): 399–421.
  • Schechtman, Marya, 2014, Staying Alive: Personal Identity, Practical Concerns, and the Unity of a Life, Oxford University Press.
  • Schermer, Maartje, 2008, “Enhancements, Easy Shortcuts, and the Richness of Human Activities”, Bioethics, 22(7): 355–63. doi:10.1111/j.1467-8519.2008.00657.x
  • –––, 2011, “Ethical Issues in Deep Brain Stimulation”, Frontiers in Integrative Neuroscience, 5(May 9): 17. doi:10.3389/fnint.2011.00017
  • Schweitzer, N. J., Michael J. Saks, Emily R. Murphy, Adina L. Roskies, Walter Sinnott-Armstrong, and Lyn M. Gaudet, 2011, “Neuroimages as Evidence in a Mens Rea Defense: No Impact”, Psychology, Public Policy, and Law, 17(3): 357–93. doi:10.1037/a0023581
  • Selgelid, Michael J., 2007, “An Argument Against Arguments for Enhancement”, Studies in Ethics, Law, and Technology, 1(1).
  • Sententia, Wrye, 2013, “Freedom by Design”, in The Transhumanist Reader, edited by Max More and Natasha Vita-More, 355–60. John Wiley & Sons. doi:10.1002/9781118555927.ch34
  • Shanks, Niall, Ray Greek, and Jean Greek, 2009, “Are Animal Models Predictive for Humans?” Philosophy, Ethics, and Humanities in Medicine, 4(2): 1–20.
  • Shelley, Cameron, 2010, “Why Test Animals to Treat Humans? On the Validity of Animal Models”, Studies in History and Philosophy of Biological and Biomedical Sciences, 41C (3): 292–99.
  • Shook, John R., Lucia Galvagni, and James Giordano, 2014, “Cognitive Enhancement Kept within Contexts: Neuroethics and Informed Public Policy”, Frontiers in Systems Neuroscience, 8(December 5): 228. doi:10.3389/fnsys.2014.00228
  • Singh, Ilina, Imre Bard, and Jonathan Jackson, 2014, “Robust Resilience and Substantial Interest: A Survey of Pharmacological Cognitive Enhancement among University Students in the UK and Ireland”, PLoS ONE, 9(10): e105969, 12 pages. doi:10.1371/journal.pone.0105969
  • Singh, Ilina, Walter P. Sinnott-Armstrong, and Julian Savulescu, 2013, Bioprediction, Biomarkers, and Bad Behavior: Scientific, Legal, and Ethical Challenges, Oxford: Oxford University Press.
  • Sinnott-Armstrong, Walter, 2007, Moral Psychology Vol. 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development, Cambridge, MA: MIT Press.
  • ––– (ed.), 2016, Finding Consciousness: The Neuroscience, Ethics, and Law of Severe Brain Damage, Oxford: Oxford University Press.
  • Soon, Chun Siong, Marcel Brass, Hans-Jochen Heinze, and John-Dylan Haynes, 2008, “Unconscious Determinants of Free Decisions in the Human Brain”, Nature Neuroscience, 11(5): 543–45. doi:10.1038/nn.2112
  • Spielberg, Steven (director), 2002, Minority Report.
  • Stanton, Steven J., Crystal Reeck, Scott A. Huettel, and Kevin S. LaBar, 2014, “Effects of Induced Moods on Economic Choices”, Judgment and Decision Making, 9(2): 167–75.
  • Strawson, Galen, 2004, “Against Narrativity”, Ratio, 17(4): 428–52. doi:10.1111/j.1467–9329.2004.00264.x
  • Tannenbaum, Julie, 2014, “The Promise and Peril of the Pharmacological Enhancer Modafinil”, Bioethics, 28(8): 436–45. doi:10.1111/bioe.12008
  • Teter, Christian J., Sean Esteban McCabe, Kristy LaGrange, James A. Cranford, and Carol J. Boyd, 2006, “Illicit Use of Specific Prescription Stimulants Among College Students: Prevalence, Motives, and Routes of Administration”, Pharmacotherapy, 26(10): 1501–10. doi:10.1592/phco.26.10.1501
  • Thomson, Judith Jarvis, 1985, “The Trolley Problem”, The Yale Law Journal, 94(6): 1395–1415. doi:10.2307/796133
  • United States v. Semrau, No. 11–5396 (6th Cir. 2012).
  • Urban, Kimberly R. and Wen-Jun Gao, 2014, “Performance Enhancement at the Cost of Potential Brain Plasticity: Neural Ramifications of Nootropic Drugs in the Healthy Developing Brain”, Frontiers in Systems Neuroscience, 8(May 13): 38. doi:10.3389/fnsys.2014.00038
  • Vincent, Nicole A. (ed.), 2013, Neuroscience and Legal Responsibility, 1st edition, New York: Oxford University Press.
  • Vohs, Kathleen D. and Jonathan W. Schooler, 2008, “The Value of Believing in Free Will Encouraging a Belief in Determinism Increases Cheating”, Psychological Science, 19(1): 49–54. doi:10.1111/j.1467-9280.2008.02045.x
  • Warren, Samuel D. and Louis D. Brandeis, 1890, “Right to Privacy”, Harvard Law Review, 4: 193.
  • Waters, Theodore E.A. and Robyn Fivush, 2015, “Relations Between Narrative Coherence, Identity, and Psychological Well-Being in Emerging Adulthood”, Journal of Personality, 83(4): 441–451. doi:10.1111/jopy.12120
  • Wegner, Daniel, 2003, The Illusion of Conscious Will, 1st edition. A Bradford Book.
  • Weisberg, Deena Skolnick, Frank C. Keil, Joshua Goodstein, Elizabeth Rawson, and Jeremy R. Gray, 2008, “The Seductive Allure of Neuroscience Explanations”, Journal of Cognitive Neuroscience, 20(3): 470–77. doi:10.1162/jocn.2008.20040
  • Wilens, Timothy E., Lenard A. Adler, Jill Adams, Stephanie Sgambati, John Rotrosen, Robert Sawtelle, Linsey Utzinger, and Steven Fusillo, 2008, “Misuse and Diversion of Stimulants Prescribed for ADHD: A Systematic Review of the Literature”, Journal of the American Academy of Child and Adolescent Psychiatry, 47(1): 21–31. doi:10.1097/chi.0b013e31815a56f1
  • Witt, Karsten, Jens Kuhn, Lars Timmermann, Mateusz Zurowski, and Christiane Woopen, 2013, “Deep Brain Stimulation and the Search for Identity”, Neuroethics, 6(3): 499–511. doi:10.1007/s12152-011-9100-1
  • Wolpaw, J.R., N. Birbaumer, W.J. Heetderks, D.J. McFarland, P.H. Peckham, G. Schalk, E. Donchin, L.A. Quatrano, C.J. Robinson, and T.M. Vaughan, 2000, “Brain-Computer Interface Technology: A Review of the First International Meeting”, IEEE Transactions on Rehabilitation Engineering: A Publication of the IEEE Engineering in Medicine and Biology Society, 8(2): 164–73.
  • Yizhar, Ofer, Lief E. Fenno, Thomas J. Davidson, Murtaza Mogri, and Karl Deisseroth, 2011, “Optogenetics in Neural Systems”, Neuron, 71(1): 9–34. doi:10.1016/j.neuron.2011.06.004
  • Zak, Paul, 2013, The Moral Molecule: How Trust Works, reprint edition, New York, NY: Plume.

Acknowledgments

The author would like to acknowledge the research assistance of Yaning Chen for this project.

Copyright © 2016 by
Adina Roskies <adina.roskies@dartmouth.edu>

This is a file in the archives of the Stanford Encyclopedia of Philosophy.
Please note that some links may no longer be functional.