Scientists Call Out Ethical Concerns for the Future of Neuro-technology

Nov 27, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Edd Gent

Curated by Helena M. Herrero Lamuedra

For some die-hard tech evangelists, using neural interfaces to merge with AI is the inevitable next step in humankind’s evolution. But a group of 27 neuroscientists, ethicists, and machine learning experts have highlighted the myriad ethical pitfalls that could be waiting.

To be clear, it’s not just futurologists banking on the convergence of these emerging technologies. The Morningside Group estimates that private spending on neurotechnology is in the region of $100 million a year and growing fast, while in the US alone public funding since 2013 has passed the $500 million mark.

The group is made up of representatives from international brain research projects, tech companies like Google and neural interface startup Kernel, and academics from the US, Canada, Europe, Israel, China, Japan, and Australia. They met in May to discuss the ethics of neuro-technology and AI, and have now published their conclusions in the journal Nature.

While the authors concede it’s likely to be years or even decades before neural interfaces are used outside of limited medical contexts, they say we are headed towards a future where we can decode and manipulate people’s mental processes, communicate telepathically, and technologically augment human mental and physical capabilities.

“Such advances could revolutionize the treatment of many conditions…and transform human experience for the better,” they write. “But the technology could also exacerbate social inequalities and offer corporations, hackers, governments, or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics: private mental life, individual agency, and an understanding of individuals as entities bound by their bodies.”

The researchers identify four key areas of concern: privacy and consent, agency and identity, augmentation, and bias. The first and last topics are already mainstays of warnings about the dangers of unregulated and unconscientious use of machine learning, and the problems and solutions the authors highlight are well-worn.

On privacy, the concerns are much the same as those raised about the reams of personal data corporations and governments are already hoovering up. The added sensitivity of neural data makes suggestions such as an automatic opt-out from sharing neural data, and bans on individuals selling their data, more feasible.

But other suggestions, such as using technological approaches like "differential privacy," "federated learning," and blockchain to better protect data, are equally applicable to non-neural data. Similarly, the ability of machine learning algorithms to pick up bias inherent in training data is already a well-documented problem, and one with ramifications that go beyond just neuro-technology.
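The article names these techniques without unpacking them. As a purely illustrative sketch of the first one, the snippet below applies the Laplace mechanism at the heart of differential privacy: calibrated random noise is added to an aggregate query (here, a count over a hypothetical neural-data cohort) so that no single person's record can be confidently inferred from the released number. The cohort size and epsilon values are invented for this example.

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float, rng=None) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person's
    record changes the count by at most 1), so noise is drawn from
    Laplace(scale = 1 / epsilon).
    """
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Illustrative only: 42 people in a hypothetical neural-data cohort share
# some attribute; smaller epsilon means stronger privacy and more noise.
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps:>4}: noisy count = {laplace_count(42, eps):.1f}")
```

The trade-off the parameter epsilon encodes, privacy versus accuracy of the released statistic, is exactly the kind of choice the authors argue should not be left to data holders alone.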

When it comes to identity, agency, and augmentation, though, the authors show how the convergence of AI and neuro-technology could result in entirely novel challenges that could test our assumptions about the nature of the self, personal responsibility, and what ties humans together as a species.

They ask the reader to imagine machine learning algorithms combined with neural interfaces enabling a form of 'auto-complete' function that could fill the gap between intention and action, or letting you telepathically control devices at a great distance or in collaboration with other minds. These are all realistic possibilities that could blur our understanding of who we are and which actions we can attribute as our own.
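To make the 'auto-complete' scenario concrete, here is a minimal, hypothetical sketch of the idea: a decoder compares a (simulated) neural feature vector against learned intention templates and completes the action only when the match is decisive. The action names, the template model, and the confidence threshold are all assumptions for illustration; the authors describe no specific mechanism.

```python
import numpy as np

# Toy 'auto-complete' decoder: map a simulated neural feature vector to the
# nearest learned intention template, and act only when the match is decisive.
# All data here is synthetic; a real interface would use a trained decoder.
ACTIONS = ["move_cursor_left", "move_cursor_right", "select_item"]
rng = np.random.default_rng(0)
templates = rng.normal(size=(len(ACTIONS), 16))  # one template per intention

def autocomplete(features: np.ndarray, threshold: float = 0.5):
    """Return the predicted action, or None if no template is a clear match."""
    # Cosine similarity between the observed features and each template.
    sims = templates @ features / (
        np.linalg.norm(templates, axis=1) * np.linalg.norm(features))
    best = int(np.argmax(sims))
    return ACTIONS[best] if sims[best] >= threshold else None

# Simulate a noisy reading that mostly resembles the 'select_item' template.
reading = templates[2] + 0.3 * rng.normal(size=16)
print(autocomplete(reading))  # likely 'select_item'; None if too ambiguous
```

Even in this toy form, the attribution problem is visible: once the threshold fires, the completed action is partly the decoder's inference rather than a fully formed human decision.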

The authors suggest adding “neurorights” that protect identity and agency to international treaties like the Universal Declaration of Human Rights, or possibly the creation of a new international convention on the technology. This isn’t an entirely new idea; in May, I reported on a proposal for four new human rights to protect people against neural implants being used to monitor their thoughts or interfere with or hijack their mental processes.

But these rights were designed primarily to protect against coercive exploitation of neuro-technology or the data it produces. The concerns around identity and agency are more philosophical, and it’s less clear that new rights would be an effective way to deal with them. While the examples the authors highlight could be forced upon someone, they sound more like something a person would willingly adopt, potentially waiving rights in return for enhanced capabilities.

The authors suggest these rights could enshrine a requirement to educate people about the possible cognitive and emotional side effects of neuro-technologies, not just the purely medical impacts. That's a sensible suggestion, but ultimately people may have to make up their own minds about what they are willing to give up in return for new abilities.

This leads to the authors’ final area of concern—augmentation. As neuro-technology makes it possible for people to enhance their mental, physical, and sensory capacities, it is likely to raise concerns about equitable access, pressure to keep up, and the potential for discrimination against those who don’t. There’s also the danger that military applications could lead to an arms race.

The authors suggest that guidelines should be drawn up at both the national and international levels to set limits on augmentation, much like those being drawn up to control gene editing in humans, though they admit that "any lines drawn will inevitably be blurry." That's because it's hard to predict the impact these technologies will have, and building international consensus will be difficult because different cultures lend different weight to things like privacy and individuality.

The temptation could be to simply ban the technology altogether, but the researchers warn that this could simply push it underground. In the end, they conclude that it may come down to the developers of the technology to ensure it does more good than harm. Individual engineers can’t be expected to shoulder this burden alone, though.

“History indicates that profit hunting will often trump social responsibility in the corporate world,” the authors write. “And even if, at an individual level, most technologists set out to benefit humanity, they can come up against complex ethical dilemmas for which they aren’t prepared.”

For this reason, they say, industry and academia need to devise a code of conduct similar to the Hippocratic Oath doctors are required to take, and rigorous ethical training needs to become standard when joining a company or laboratory.

How to ensure future brain technologies will help and not harm society

May 9, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By P. Murali Doraiswamy, Professor, Duke University; Hermann Garden, Organisation for Economic Co-operation and Development; and David Winickoff, Organisation for Economic Co-operation and Development

Curated by Helena M. Herrero Lamuedra

Thomas Edison, one of the great minds of the second industrial revolution, once said that “the chief function of the body is to carry the brain around.” Understanding the human brain – how it works, and how it is afflicted by diseases and disorders – is an important frontier in science and society today.

Advances in neuroscience and technology increasingly impact intellectual wellbeing, education, business, and social norms. Recent findings confirm the plasticity of the brain over the individual’s life. Imaging technologies and brain stimulation technologies are opening up totally new approaches in treating disease and potentially augmenting cognitive capacity. Unravelling the brain’s many secrets will have profound societal implications that require a closer “contract” between science and society.

Convergence across physical science, engineering, biological science, social science, and the humanities has boosted innovation in brain science and technology. It offers large potential for a systems-biology approach that unifies heterogeneous data from "omics" tools, imaging technologies such as fMRI, and behavioural science.

Citizen science, the convergence between science and society, has already proved successful in EyeWire, where players competed to map the 1,000-neuron connectome of the mouse retina. The use of nanoparticles as coatings for implanted abiotic devices also offers great potential to improve the immunological acceptance of invasive diagnostics. Brain-inspired neuromorphic engineering aims to develop novel computer systems with brain-like characteristics, including low energy consumption, adequate fault tolerance, self-learning capabilities, and some degree of intelligence. Here, the convergence of nanotechnology with neuroscience could help build neuro-inspired computer chips, brain-machine interfaces, and robots with artificial intelligence systems.
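As a hedged illustration of what "brain-like" computation means at the smallest scale, the sketch below simulates a leaky integrate-and-fire neuron, a standard building block in neuromorphic models. The parameter values are invented for the example and are not taken from the article.

```python
# A minimal sketch of the leaky integrate-and-fire (LIF) neuron, a common
# building block in neuromorphic models. All parameters are illustrative.
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Integrate input current over time; emit a spike when the membrane
    potential crosses threshold, then reset."""
    v = v_rest
    spikes = []
    for i, current in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating the input.
        v += dt / tau * (v_rest - v + current)
        if v >= v_threshold:
            spikes.append(round(i * dt, 3))  # record spike time in seconds
            v = v_reset                      # reset after firing
    return spikes

# Drive the neuron with a constant supra-threshold current for 100 ms.
spike_times = simulate_lif([1.5] * 100)
print(f"{len(spike_times)} spikes at t = {spike_times}")
```

Unlike a conventional processor, such a unit only "computes" (spikes) when its input drives it over threshold, which is one reason neuromorphic designs promise low energy consumption.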

Future opportunities for cognitive enhancement (improved attentiveness, memory, decision making, and control through, for example, non-invasive brain stimulation and neural implants) have raised, and will continue to raise, profound ethical, legal, and social questions. What is societally acceptable and desirable, both now and in the future?

At a recent OECD workshop, we identified five possible systemic changes that could help speed up neurotechnology developments to meet pressing health challenges and societal needs.

1. Responsible research

There is growing interest in discussing and unpacking the ethical and societal aspects of brain science as the technologies and applications are developed. Much can be learned from other experiences in disruptive innovation. The international Human Genome Project (1990-2003), for example, was one of the earlier large-scale initiatives in which social scientists worked in parallel with the natural sciences in order to consider the ethical, legal and social issues (ELSI) of their work.

The deliberation of ELSI and Responsible Research and Innovation (RRI) in nanotechnologies is another example of how societies, in some jurisdictions, have approached R&D activities, and the role of the public in shaping, or at least informing, their trajectory. RRI knits together activities that previously seemed sporadic. According to Jack Stilgoe, Senior Lecturer in the Department of Science and Technology Studies, University College London, the aim of responsible innovation is to connect the practice of research and innovation in the present to the futures that it promises.

Frameworks such as ELSI and RRI should more actively engage patients and patient organisations early in the development cycle, and in a meaningful way. This could be achieved through continuous public platforms and policy discussion rather than traditional one-off public engagement, and through the deliberation of scientific advances and ELSI through culture and art.

Research funders – public agencies, private investors, foundations, as well as universities themselves – are particularly well positioned to shape trajectories of technology and society. Through their funding power, they have unique capacity to help place scientific work within social, ethical, and regulatory contexts.

It is an opportune time for funders to: 1) strengthen the array of approaches and mechanisms for building a robust neurotechnology landscape that meaningfully engages human values and is informed by them; 2) discuss options to foster open and responsible innovation; and 3) better understand the opportunities and challenges of building joint initiatives in research and product development.

2. Anticipatory governance

Society and industry would benefit from earlier, and more inclusive, discussions about the ethical, legal, and social implications of how neurotechnologies are being developed and their entry onto the market. For example, the impact on human dignity, privacy, and equitable access of neuromodulatory devices that promise to enhance cognition, alter mood, or improve physical performance could be considered earlier in the research and development process.

3. Open innovation

Given the significant investment risks and high failure rates of clinical trials in central nervous system disorders, companies could adopt more open innovation approaches in which public and private stakeholders actively collaborate, share assets including intellectual property, and invest together.

4. Avoiding neuro-hype

Popular media is full of colorful brain images used to illustrate stories about neuroscience. Unproven health claims abound, giving rise to so-called 'neuro-hype' and 'neuro-myths.' Misinformation is a strong possibility where scientific work potentially carries major social implications (for example, work on mental illness, competency, or intelligence).

Such misinformation has the potential to result in public mistrust and to undermine the formation of markets. There is a need for evidence-based policies and guidelines to support the responsible development and use of neurotechnology in medical practice and in over-the-counter products. Policymakers and regulators could lead the development of a clear path to translate neurotechnology discoveries into human health advantages that are commercially viable and sustainable.

5. Access and equity

Policymakers should discuss the socio-economic questions raised by neurotechnology. Rising disparities in access to often high-priced medical innovation require tailored solutions for poorer countries. The development of public-private partnerships and the simplification of technology could help improve access to innovation in resource-limited countries.

In addition to helping people with neurological and psychiatric disorders, the biggest cause of disability worldwide, neurotechnologies will shape every aspect of society in the future. A roadmap for guiding responsible research and innovation in neurotechnology may be transformative.