2023 Villars High Level Anticipation Workshop on Neuro-augmentation

At the inaugural Villars High-Level Anticipation Workshop, a carefully chosen group of leaders from across several relevant disciplines met to discuss issues pertinent to the future of “neuro-augmentation”. The meeting was organised by the GESDA Academic Forum with the financial support of the Defitech Foundation. A full list of attendees is available at the end of this report.

The uptake of devices that measure and influence the activity of the brain and nervous system is growing rapidly as they spread from university research into clinical application. According to one estimate, the neurotechnology marketplace is worth US$14.3 billion, with that number due to climb beyond $20 billion within four years.1 Brain-monitoring devices, brain-machine interfaces, and other ways of reading and writing the signals of the nervous system are on the cusp of wide adoption across society. Their ubiquity could change everything from workplace rights to what it means to be human.

The development of neural augmentation science and technologies will pose significant challenges to society and individuals, raising questions about what it means to be human, how our society functions and how science is conducted. While such technologies are still under development (and some face significant roadblocks), anticipation is an ethical imperative. The issues that arise will include: whether neuro-augmentations will be optional in the workplace; their potential effects on other aspects of our biology and on self-determination; the optimal trade-offs between intrusiveness and usefulness; who will benefit from the push to neuro-augmentation; and what ethical and legal consequences may emerge. Technological improvements are happening quickly enough that we may only get one chance to ask these questions.

The gathered experts were uniquely positioned to provide informed insight into the technological and scientific progress that will drive the direction and speed of neuro-augmentation. They outlined the current status of the science, identified areas of immediate, mid-term and far-term impact and concern, and considered how these should drive the focus of diplomatic and ethical intervention. This report is structured as follows:

  1. Reading from and writing to the human brain: state of the art; predictions and possible applications; current obstacles to progress
  2. Hybrid brain development: state of the art; predictions and possible applications; current obstacles to progress
  3. Artificial cognition: software simulation; biomimetic hardware; AI as external cognition; predictions and possible applications; current obstacles to progress
  4. Ethical and governance dimensions: considerations for the individual; considerations for research animals and other entities; responsible anticipation.

1. Reading from and writing to the human brain

Inputs by

Jocelyne BlochProfessor, Department of Neurosurgery, Lausanne University Hospital and University of Lausanne
Grégoire CourtineProfessor, Center for Neuroprosthetics, Brain Mind Institute, EPFL
Ilka DiesterProfessor, Head of Optophysiology Lab, University of Freiburg
Itzhak FriedProfessor, Department of Neurosurgery, University of California Los Angeles School of Medicine
Jaimie HendersonDirector, Stereotactic and Functional Neurosurgery, Stanford University School of Medicine; Co-Director, Stanford Neural Prosthetics Translational Laboratory (NPTL), Stanford University
Tamar MakinProfessor of Cognitive Neuroscience, University of Cambridge

State of the art

A current standard, FDA-approved implant for reading from the brain is a “Utah” array with around 100 electrodes. These can pick up cortical signals and analyse them in order to decode motor and language functions, such as language production. Patients who are no longer able to speak – whether due to traumatic injury or neurodegenerative disease such as ALS – have participated in several trials of such implanted devices. The technology’s capabilities have accelerated thanks to machine learning. In 2017, text entry using this technology ran at 8 words per minute, using a keyboard-based manual letter-selection process. When researchers developed machine learning algorithms that could pick out patterns in local field potential data, it became possible to detect which letter a user was thinking about writing, raising the rate to 15 words per minute. Now it is possible to detect language-formation intent in the relevant areas of cortex at a rate of 65 words per minute.2 In 2021, the FDA approved trials of the “Stentrode”, an endovascular implant that can translate brain activity from inside a vein adjacent to the motor cortex. In a 1-year safety trial of four people with paralysis,3 the device enabled thought-mediated control of computer activity, including shopping and sending emails. One patient was able to chat on Twitter.4 The company behind the Stentrode is still recruiting for further safety trials.
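
At its core, the decoding step in these systems is a pattern classifier trained on multi-channel neural features. The sketch below shows the shape of such a pipeline using synthetic stand-in data and scikit-learn’s LogisticRegression; the feature construction, channel count and resulting accuracy are illustrative assumptions, not those of the published studies.

```python
# Minimal sketch of an intended-letter decoder, in the spirit of the
# implanted-BCI studies described above. All data here are synthetic
# stand-ins for real spike-band / field-potential features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_channels, n_trials, n_letters = 96, 2600, 26  # ~Utah-array channel count

# Hypothetical per-trial feature vectors: mean activity per channel while
# the participant attempts to write one letter; each letter gets its own
# (synthetic) mean pattern across channels, plus trial-to-trial noise.
letter_patterns = rng.normal(0, 1, (n_letters, n_channels))
labels = rng.integers(0, n_letters, n_trials)
features = letter_patterns[labels] + rng.normal(0, 2.0, (n_trials, n_channels))

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"decoded-letter accuracy: {decoder.score(X_test, y_test):.2f}")
```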

Decoded neural signals have long been used to actuate external robotic limbs and tools, as demonstrated in experiments like BrainGate.5 More recently, it has become possible to use them to actuate parts of the human body whose control has been lost to spinal injury. Several labs are working on a “neural bypass” that routes the brain’s signals around areas of spinal cord damage, activating the remaining undamaged nerve fibres that actuate the limbs. Signals obtained from invasive electrode arrays (or from less penetrating electrodes, such as electrocorticography implants beneath the skull) are used to stimulate undamaged but non-functional neurons or nerves beyond the site of damage, relaying the motor commands; this has restored the ability to walk and even some proprioception – that is, the perception of limb position and movement.6,7

Substantial recordings from single neurons in the brain have also been achieved in clinical settings using platinum-iridium microwires. Most commonly this approach has been employed in epilepsy monitoring units, where these microelectrodes can record single-neuron activity in patients who can declare their memories and intentions. These studies suggest that neuronal signals arise prior to conscious memory recollection or intention, in principle presenting the possibility of decoding cognitive faculties such as memory and will before subjects are consciously aware of them. This means not only reading your mind, but reading your mind before it is made up.

Writing to the brain is a much less developed field. With invasive Utah arrays, it has been possible to generate patterns of electrical pulses that stimulate a targeted collection of neurons, with the effect of implanting “false” sensations.8 Other experiments have ostensibly implanted false memories in animal models, or removed specific memories, though whether these experiments will have useful outcomes in humans has been questioned. In humans, the representation of conceptual knowledge in memory and will is encoded in single neurons.9 Although understanding these codes could help predict and modulate memory and behaviour, the tools needed for deliberate intervention may have to penetrate deeper than the cortex, which seems to lie beyond the practical capabilities of the current generation of implants. Many potentially suitable tools, most famously Neuralink, are in development but not yet approved for this purpose.

The neuroprosthetic devices of the future will need to combine reading from and writing to the brain; that is, they will need to be closed-loop systems. Such systems are entering clinical use, such as the Responsive Neurostimulation system for epilepsy or adaptive deep brain stimulation for Parkinson’s Disease, but they still lack specificity and single-neuron resolution.
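
The logic of such a closed loop can be stated compactly: monitor a neural signal, detect a target state, stimulate in response. The sketch below illustrates that loop with a synthetic signal and an invented band-power detector and threshold; it is not the detection algorithm of any approved device.

```python
# Illustrative closed-loop neurostimulation logic: read a one-second window
# of signal, detect a target state (here, elevated power in a frequency
# band), and trigger stimulation. Signal, band and threshold are invented.
import numpy as np

FS = 250                     # sample rate (Hz)
BAND = (13.0, 30.0)          # hypothetical detection band (Hz)
THRESHOLD = 2.0              # stimulate when mean band power exceeds this

def band_power(window: np.ndarray) -> float:
    """Mean spectral power of the window inside BAND, via an FFT."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    power = np.abs(np.fft.rfft(window)) ** 2 / len(window)
    mask = (freqs >= BAND[0]) & (freqs <= BAND[1])
    return float(power[mask].mean())

def stimulate() -> None:
    print("stimulation pulse delivered")  # stand-in for a device command

rng = np.random.default_rng(1)
for second in range(5):                   # five one-second windows
    window = rng.normal(0, 1, FS)         # background activity
    if second >= 3:                       # inject a synthetic 20 Hz burst
        window += 3 * np.sin(2 * np.pi * 20 * np.arange(FS) / FS)
    if band_power(window) > THRESHOLD:
        stimulate()
```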

The most technologically mature alternative is optogenetics, which allows the activity of genetically modified neurons to be switched on and off with light. Over the past two decades, it has become an increasingly important tool for neuroscience, allowing researchers to probe how the brain works, how specific networks or neurons might be controlled to treat disease, and even how to control behaviour in animal models. Multiple clinical trials are now ongoing to investigate the treatment of retinal degenerative diseases using optogenetic methods.10 However, because the retina is manipulable with non-invasive light sources in a way that other neural tissue is not, and is uniquely accessible to gene delivery, it is less clear what the next targets within the brain could be. That said, as the cochlea lies in the periphery, optical cochlear implants offer the possibility of optogenetic approaches to hearing restoration.11

Another alternative route towards writing to the brain is to exploit its own plasticity and ability to adjust to new information in its environment. For example, experiments with a “third thumb” – a second, robotically articulated thumb attached to one hand – have revealed that the brain quickly adapts to controlling the new digit and integrates it into its body plan, rewiring circuits to accommodate the addition.12 Controlling this digit does not require invasive implants, and can be done intuitively and quickly by recruiting a different or redundant part of the body into service – for example, a toe whose movements drive the thumb. This proof of principle – that the brain will quickly adopt a new tool and adapt its body plan accordingly, augmenting its ability to manage its environment – has attracted interest from several corporations in further development and adoption by manual workforces to increase productivity, and from clinicians as an alternative to traditional assistive technologies.
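
The control scheme itself is simple enough to sketch: a signal from the recruited body part is mapped onto a command for the extra digit. The toy function below assumes a hypothetical pressure sensor under the toe and an invented mapping curve; both are placeholders for illustration, not the actual device’s design.

```python
# Hypothetical mapping from a toe-mounted pressure sensor to the flexion
# angle of a robotic extra thumb, illustrating how a redundant effector
# can drive a new digit without any implant. Values are simulated.
def toe_pressure_to_flexion(pressure: float,
                            dead_zone: float = 0.05,
                            max_flexion_deg: float = 90.0) -> float:
    """Map normalised toe pressure (0..1) to a thumb flexion angle.

    A small dead zone ignores incidental pressure from normal walking.
    """
    pressure = min(max(pressure, 0.0), 1.0)
    if pressure < dead_zone:
        return 0.0
    scaled = (pressure - dead_zone) / (1.0 - dead_zone)
    return scaled * max_flexion_deg

for p in (0.0, 0.04, 0.3, 0.75, 1.0):
    print(f"pressure {p:.2f} -> flexion {toe_pressure_to_flexion(p):5.1f} deg")
```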

There is strong early evidence for a fourth route to brain modulation. Mouse and observational human studies indicate that epigenetic variations initiated by environmental factors are a powerful modulator of both behaviour and function in the brain.

In some ways this is expected: the brain is constantly changed by its interaction with a changing environment, and brain cells, like all other cells in the body, have a dynamic epigenome influenced by these changes and by experience. There is simply more to the contribution of the epigenome to the brain than is classically thought. While we have long known that epigenetic processes control gene expression during development and determine cell fate, recent research in neuroepigenetics has shown that they also control the genome in the adult brain including in postmitotic cells. The power of the epigenome goes beyond altered gene expression. Epigenetic factors and mechanisms control genome activity and its very architecture. A form of structural memory can be instantiated by epigenetic changes, for example, providing a way to encode specific information into cells.

In some cases, this information is heritable and can be passed across generations when it is present in germ cells. Specific experiences such as stress, poor diet and exposure to endocrine disruptors can alter epigenetic factors across the body, including in the brain and reproductive cells, and can affect brain plasticity, cognitive functions and behaviour, leading to psychiatric disorders and physiological dysfunctions in exposed parents and their offspring.

Research is underway to tease apart the relevant factors and their control over the genes or genomic loci that influence brain plasticity and behaviour, and learning to exert some measure of control over this could be important. Taking command of specific epigenetic control mechanisms could help attenuate or prevent cellular dysfunctions, and possibly stop some aspects of disorders from arising, but might also favour desirable behavioural traits. Epigenome editing has the added attraction that, unlike gene editing, it is reversible.

Predictions and possible applications

In the near term (5 years), technology for reading and writing the signals of the brain is predicted to become wireless, integrate a higher channel count and become biocompatible. That said, the most mature and impactful near-term technologies will remain non- or minimally invasive (an example being wearable augmentation). One especially promising area for near-term neuro-augmentation is enhancing the depth and quality of sleep with a closed-loop implanted or wearable EEG device that could provide even non-invasive stimulation to push a waking brain back into sleep when needed, or push a sleepy brain into alertness. Sleep may well be a major port of intervention, and changes in the architecture of sleep may prove important in enhancing cognitive functions such as memory.13

In the medium and far term (10 to 25 years), following this trajectory, brain-computer interfaces will provide better readouts of brain states. Combined with closed-loop neurostimulation, they could lead to the restoration of sensation and bowel control. In the cognitive domain, they may provide another tool for combating the devastating effects of neurodegenerative conditions such as Alzheimer’s and Parkinson’s Diseases.

For optogenetics, the first medical use case will be restoration of vision, with the treatment of retinal degenerative diseases among the most promising applications; multiple clinical trials are ongoing.

For epigenetics, the promise is great, although still extremely speculative. Preliminary studies suggest it might be possible to “rejuvenate” the epigenome – erasing the signals and modifications accumulated over a lifetime – to restore youthful vision.14 Such therapeutic use cases are still far off. However, within five years, it may be possible to profile the epigenome in individual brain cells in healthy adults and in disease.15 Within 10 years we could have functional and causal links between the epigenome and some brain functions. In 25 years, epigenome editing of the brain and the germ line may be possible, providing resilience against disruptive factors in the environment. Editing tools could involve CRISPR-based systems that, instead of modifying the DNA code, change the epigenome and the way a piece of DNA is regulated. An attractive first target may be post-traumatic stress disorder, which has been associated with epigenetic alterations in the brain, and whose manipulation has a favourable risk/reward ratio.

Current obstacles to progress

If we are to develop better electrical implants, the materials used will have to become more biocompatible in order to listen in on more brain data. But existing electronic brain-machine interfaces need to evolve in other ways too. Wireless data transfer methods will need to be developed – yet there is a fundamental limit to how fast data can be processed in situ, or transferred out of the brain onto external processors. That limit is thermal: beyond a certain rate of data transfer, the heat generated begins to cook the brain tissue. This presents a major hindrance for wireless brain implants, as both wireless transmission and on-chip processing require tens of watts, generating enough heat to burn the surrounding tissues. This reality sits in tension with the fact that, for electrical implants to move outside the lab, devices will need to be wireless.

For the moment, these capabilities will require surgically implanted devices, because it is currently impossible to replicate them with non-invasive surface technologies (though progress is being made in this area). The physics of how signals attenuate within brain tissue means that we simply cannot pick up the details of signals emitted deep within the brain from the surface. Edge computing, where data processing occurs on distributed devices close to the user, may help.

For optogenetic approaches, the perennial sticking points are gene delivery and light delivery. New methods are constantly being proposed.16 The technique’s first success, for retinal degenerative disease, worked in part because of the accessibility of the light source: at the surface of the eye, there was no need to import light, as it was naturally available. Translation into subsequent successes will depend on new ways to get light to deeper targets in the body, which currently necessitates invasive procedures.

For epigenetics, the roadblock is the lack of clarity as to whether epigenetic profiles from animal models translate to humans. Then there is the question of whether these edits are stable,17 and the usual questions about gene editing: is the delivery sufficiently targeted, are the editors precise enough?

2. Hybrid brain development

Inputs by

Denis JabaudonProfessor, Department of Basic Neurosciences, Faculty of Medicine, University of Geneva

Isabelle MansuyProfessor, Laboratory of Neuroepigenetics, University of Zurich & ETH Zurich
Muming PooProfessor, Institute of Neuroscience, Chinese Academy of Sciences
Giuseppe TestaProfessor of Molecular Biology, University of Milan; Head of the Neurogenomics Research Centre, Human Technopole; Group Leader, European Institute of Oncology, University of Milan
Silvia VelascoAssociate Professor, Stem Cell Biology Department, Murdoch Children’s Research Institute; Group Leader, The Novo Nordisk Foundation Center for Stem Cell Medicine (reNEW)

State of the art

To extend the capability of implanted devices, or to translate their capabilities from the university lab into the clinic, requires a deeper understanding of the brain – one that can answer longstanding open questions in neuroscience, such as how neuronal diversity emerges during early brain development and how neurons assemble into the complex networks and circuits that enable high-order human brain functions, including cognition, sensory perception and motor control. For obvious ethical reasons, the developing human brain is largely inaccessible for study. And although animal models have been very useful in providing insights into the basic principles regulating the development of the mammalian brain, important interspecies differences make them unsuitable for understanding aspects of human brain development and function that are distinctively human. To accelerate new insights in neuroscience, therefore, researchers are increasingly turning to organoids, chimeras and transgenic animals.

Organoids

Organoids – miniature and very simplified models of real organs – are three-dimensional aggregates derived from stem cells, able to reproduce biological structures similar in architecture and function to the endogenous tissues.

Brain organoids in particular show great promise for the study of the developing human brain: they have been shown to recapitulate processes of human brain development with high fidelity and reproducibility,18,19 and they exhibit basic features of functional activity resembling those observed in the developing human brain.20

Brain organoids provide a unique opportunity to investigate the mechanisms underlying neurological diseases. Indeed, organoids have been used to investigate the neurobiological basis of many conditions, including lissencephaly, autism spectrum disorder, epilepsy, tuberous sclerosis and Zika virus infection.

Despite the remarkable progress that has been made in the brain organoid field, many challenges remain. Current brain organoid models fail to recapitulate precise anatomical structures, and well-defined circuits are also not formed. They lack experience-dependent stimulation, which is important for in vivo circuit maturation. Finally, most organoid models still lack a complete representation of the cell types found in the developing human brain. Achieving cellular and functional maturation in vitro is one of the major challenges for the organoid field; this is particularly true for organoids modelling the human brain, which in vivo takes decades to develop. Therefore, despite being invaluable models of the developing human brain, current brain organoids cannot fully mimic the complexity of the adult human brain.

Chimeras & chimeric organoids

Building more complex and physiologically relevant organoids will require solving problems like the lack of anatomical organisation and vasculature, and the limited organoid maturation. Current approaches to overcome these limitations include the generation of assembloid organoid models, co-culture with missing cell types (such as microglia), and the integration of exogenous vasculature.

Another potential way to address these limitations is to transplant human brain organoids into rodent host brains to provide a more physiological environment, including vascularisation, access to appropriate developmental cues, and cell-cell interactions.21 Doing this with cells from several different individuals could eventually enable full organoid maturation for transplantation.

Transgenic animals

Although human brain organoids can be used to model and understand the neurobiology behind disorders, they are unable to replicate complex functions such as emotion and behaviour. But animal models have their limits too: neither non-human primates nor mice can recapitulate all aspects of human phenotypes, especially those involving complex genetics and higher-order brain functions, as with neuropsychiatric disorders. However, proof of principle for the power of introducing human genes into monkey models has recently emerged.22 Research into the evolution of human intelligence found that monkeys carrying the human gene MCPH1, which is associated with DNA repair, performed better on short-term memory tasks. Inactivation of another gene, BMAL1, associated with circadian rhythms and the brain’s internal “clock”, resulted in monkeys exhibiting traits of human-like psychiatric disorders.23 Such research opens the door to transgenic animal studies exploring specific genes in more human diseases, including distinctively human brain diseases.

Predictions and potential applications

In the near term (5-10 years), more sophisticated brain organoid models will be available. They will include a larger diversity of cell types, vascularisation and more mature circuits. The generation of brain organoids will become faster and enhanced maturation will be achieved, allowing better modelling of late-onset neurodegenerative diseases such as Alzheimer’s Disease. Importantly, the increased speed of brain organoid generation will offer the opportunity to develop and test drugs on patient-derived organoids to identify customised treatments that consider the specific characteristics of each individual (that is, personalised medicine). One especially promising area will be the possibility of combining cells from multiple donors within single organoids to study genetic and environmental perturbations in the context of inter-individual variability.

Within 10 years, we can expect a gradual move from the use of brain organoids for disease modelling to drug discovery and development. Following the FDA Modernization Act 2.0’s incentive for alternative, non-animal testing approaches, brain organoids will be increasingly implemented in preclinical studies.24 It will become possible to treat devastating conditions that affect many brain functions, including cognition, resulting in “neuro-augmentation” of patients.

In the medium and far term (10 to 25 years), biological, brain-directed computing (biocomputing) using 2D or 3D cell cultures might be used to overcome the limits of current silicon-based computing, including speed and energy and data efficiency. Furthermore, brain organoids might be able to replicate basic molecular and cellular aspects of cognition in vitro, such as learning and memory, and help understand disorders associated with cognitive impairment.

For chimeras, the near to medium term (5-10 years) will see advances in throughput and standards, and extended longevity of implanted organoids – both for monkey organ development in pigs and, eventually, for human organ development in pigs.

For transgenic animals, it will be possible to demonstrate that this line of research is useful for neuromodulation or therapy within five years. For example, probing the mechanisms of schizophrenia in a macaque model could result in better drugs or other treatments.

The ability of organoids to manifest “learning in a dish” could provide the basis of future biocomputing.25 This vision of biocomputing would encompass a complex, networked interface connecting brain organoids to real-world sensors and peripherals. The system could take advantage of biological memory and learning, which remain beyond today’s computing systems. Bigger and more complex organoids could also increase the efficiency of computing more generally. Linking brain organoids up to sensory organoids to allow them to perceive the world – a multi-tissue structure called an assembloid – could improve machines’ ability to interact with the environment. In the medium to far term (10-25 years), organoids will recapitulate complete neural circuits, not simply cells that touch each other. This research will begin to underpin a mechanistic understanding of human mental traits. In the far term, cells from several different individuals implanted into animals could enable full organoid maturation for transplants.

Current obstacles to progress

Organoid technology is steadily improving, and research is achieving better replication of processes of human brain development and function. However, many challenges remain, including attaining increased cellular complexity.26 Current brain organoid models still lack a complete representation of the cell types found in the developing human brain. They fail to recapitulate precise anatomical structures, and well-defined circuits are also not formed. Finally, brain organoids lack experience-dependent stimulation, which is important for in vivo circuit maturation. Achieving cellular and functional maturation in vitro is one of the major challenges for the broad organoid field and is particularly arduous to achieve in organoids modelling the human brain, which in vivo takes decades to develop. Despite being invaluable models of the developing human brain, current brain organoids cannot fully mimic the complexity of the adult human brain.

Biocomputing harnessing brain organoids requires more standardised and sophisticated models with increased cell density. The number of neurons generated in organoids is limited by culture conditions. The lack of vascularisation also contributes to limiting brain organoid growth and neuronal survival over extended times; however, microfluidics work has supported the growth of complex intestine organoids,27 leading to hope that this approach may work for brain organoids. Instrumental for biocomputing will be to generate brain organoids with reproducible and defined cell types that interact in precise neuronal networks and circuits, like those in vivo.

Organoid production needs to become more scalable. Thousands of identical organoids are needed to allow the application of brain organoids for large-scale drug and toxicology screening. Analysis methods that fit high-throughput screening requirements are also needed.

3. Artificial Cognition

Inputs by

Thomas BroxProfessor for Pattern Recognition and Image Processing; Head, Computer Vision Group, Department of Computer Science, University of Freiburg
Chris EliasmithCanada Research Chair in Theoretical Neuroscience; Director, Centre for Theoretical Neuroscience, University of Waterloo
Giacomo IndiveriProfessor, Institute of Neuroinformatics, University of Zurich and ETH Zurich

Henry MarkramProfessor, Laboratory of Neural Microcircuitry, Brain Mind Institute, EPFL

State of the art

Organoids, chimeras and transgenic animals will generate valuable insight, but in order to develop fine-grained models of brain processes, in silico representations may be more useful. Such “artificial cognition” – an umbrella term that encompasses simulation of human brains, as well as AI or hardware that recapitulates biological brain functions – is already generating new insight in neuroscience. Further development could eventually provide high-resolution, global-scale models, including digital twins of brains, enabling research that is not currently possible to conduct on biological substrates.

Conceptual advances in computer science and neuroscience also form a feedback loop — the more we understand about the brain, the more we can use these insights to improve artificial intelligence,28 and the more, in turn, these increasingly powerful computers can unlock new insights about the functioning of the human brain and its potential augmentation. Three broad areas emerged from the discussions: software simulations of the brain, biomimetic hardware, and external cognition.

Software simulations of the brain

The Blue Brain project, the BRAIN Initiative and the China Brain Project have yielded an abundance of data. Blue Brain has created a biologically realistic rendering after collecting brain data from a wide variety of sources, with fidelity high enough to allow investigation down to the ion channels in individual neurons.

A biological-scale simulation of the brain could lead to “digital twin” sandboxes of real people’s brains in which hypotheses about disease progression, as well as simpler general insights, could be tested. In silico perturbation of gene expression could allow researchers to predict, for example, which types of cells will be created.
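
As a toy illustration of the idea, the sketch below treats a wholly invented three-gene Boolean regulatory network as a miniature digital twin: knocking out one gene changes the fixed point the network settles into, and with it the predicted cell fate. Real models would be data-driven and vastly richer.

```python
# Toy "in silico perturbation": a tiny Boolean gene-regulatory network whose
# fixed point stands in for a cell-fate decision. Genes, rules and the fate
# mapping are all invented for illustration.
def step(state: dict) -> dict:
    """One synchronous update of the (fictional) regulatory rules."""
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": a,               # A is an external input, held fixed
        "B": a and not c,     # B is activated by A and repressed by C
        "C": c or (a and b),  # C self-sustains once switched on by A and B
    }

def fate(state: dict, max_steps: int = 20) -> str:
    """Iterate to a fixed point and report the resulting 'cell type'."""
    for _ in range(max_steps):
        nxt = step(state)
        if nxt == state:
            break
        state = nxt
    return "neuron-like" if state["C"] else "progenitor-like"

print("wild type     :", fate({"A": True, "B": False, "C": False}))
print("A knocked out :", fate({"A": False, "B": False, "C": False}))
```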

An alternative approach, which could provide insights without the computational load of a biological-scale model, is to scale down the resolution to a fraction of the brain’s 86 billion neurons but retain crucial principles of structure and organisation that reproduce specific functions and behaviour. With just 6 million neurons, one such model has been able to test hypotheses, perform diverse tasks including classifying handwritten digits, and imitate the way brain cells collect and process information.29
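
The elementary unit of such spiking models is typically a “leaky integrate-and-fire” neuron. The sketch below simulates a single one with generic textbook-style parameters (not those of any specific project) to show how continuous input becomes discrete spikes.

```python
# Minimal leaky integrate-and-fire (LIF) "point neuron": the standard
# building block of large-scale spiking simulations. Parameters are
# generic textbook-style values.
dt, t_max = 1e-4, 0.5           # time step and duration (s)
tau_m, v_rest = 0.02, -0.070    # membrane time constant (s), rest (V)
v_thresh, v_reset = -0.050, -0.065
r_m = 1e7                       # membrane resistance (ohm)
i_in = 2.5e-9                   # constant input current (A)

v, spikes = v_rest, []
for step in range(int(t_max / dt)):
    # Euler update of tau_m * dV/dt = -(V - v_rest) + R * I
    v += dt / tau_m * (-(v - v_rest) + r_m * i_in)
    if v >= v_thresh:           # threshold crossing emits a spike
        spikes.append(step * dt)
        v = v_reset
print(f"{len(spikes)} spikes, mean rate {len(spikes) / t_max:.0f} Hz")
```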

Biomimetic hardware

A major theme that emerged at Villars was an increased appreciation, in the computational neuroscience and machine learning communities, of the true heterogeneity of neuron structure, shape and behaviour. There are 86 billion neurons in the brain, and they are not all the same. Hundreds of thousands of brain cell types have been catalogued through single-cell sequencing, and deeper sequencing is identifying more classes still. Eventually we may find that each neuron exists in a class of its own. These are not cosmetic differences: they have major impacts on the way the cells compute.

The significance of neuron morphology has long been understood in the neuroscience community. Santiago Ramón y Cajal, the father of neuroscience, understood at the turn of the 20th century that dendrites are unique from neuron to neuron. More recently, explorations of cell morphology have taken into account the heterogeneity of axons, the elongated, information-transmitting portion of the neuron. In the computer science and machine learning communities, however, this knowledge has not been integrated – for functional purposes, neurons are represented as identical to one another. Artificial neural networks tend to use “point-neuron” models that ignore these morphological differences.

Changing this – integrating considerations of morphology and heterogeneity – will radically improve computational neuroscience and machine learning. Chip designs and spiking neural network learning theories are starting to implement so-called “multi-compartmental” models, which include the neuron’s multiple functional parts. This growing appreciation of the role of morphology in computation is already leading to insights. For example, researchers have shown that a single realistic neuron model that takes into account the morphology of real pyramidal neurons is as powerful as a multi-layer deep network,30 and that dendrites play computational roles of their own31 – they are even being investigated for a role in consciousness.
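
To make the contrast concrete, the sketch below extends the point neuron to the simplest “multi-compartmental” form: a soma and a dendrite joined by a coupling term, with invented parameters. Because input injected into the dendrite is attenuated on its way to the soma, where a signal arrives now changes what the cell computes.

```python
# Minimal two-compartment neuron (soma + dendrite): the simplest step from
# a point neuron towards the multi-compartmental models discussed above.
# All parameters are invented for illustration.
dt, t_max = 1e-4, 0.5          # time step and duration (s)
tau, v_rest = 0.02, -0.070     # time constant (s), resting potential (V)
v_thresh, v_reset = -0.050, -0.065
r_m = 1e7                      # membrane resistance (ohm)
g_c = 2.0                      # soma-dendrite coupling, relative to leak
i_dend = 6.0e-9                # current injected into the dendrite only (A)

v_s, v_d, spikes = v_rest, v_rest, []
for step in range(int(t_max / dt)):
    # Each compartment leaks towards rest; the coupling term pulls the two
    # towards each other, so dendritic input drives the soma only indirectly.
    dv_s = dt / tau * (-(v_s - v_rest) + g_c * (v_d - v_s))
    dv_d = dt / tau * (-(v_d - v_rest) + g_c * (v_s - v_d) + r_m * i_dend)
    v_s, v_d = v_s + dv_s, v_d + dv_d
    if v_s >= v_thresh:        # only the soma fires spikes
        spikes.append(step * dt)
        v_s = v_reset
print(f"{len(spikes)} somatic spikes from dendritic input alone")
```

Injecting the same current directly into the soma would produce a higher firing rate; in a point-neuron model the two cases would be indistinguishable, which is precisely the kind of location dependence point-neuron networks ignore.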

This shift will have impacts beyond better modelling of real brain dynamics on a traditional computer architecture; it could change the architecture itself, and the efficiency of artificial cognition.

The current practice of relying on traditional graphics processing units to replicate complicated layers of cognition is wildly inefficient. The advent of GPUs for deep neural networks quadrupled Google’s energy bills; the electricity bill to train GPT-3 was $4.6 million, according to a conservative estimate. By 2025, information and communication technology is on track to consume 20 percent of the world’s electricity.

In the far more efficient biological brain, structure is function. In other words, the substrate is not generic hardware on which an algorithm runs; the substrate is the algorithm. The project of replicating this is widely known as “neuromorphic computing”. First defined in the 1980s by Carver Mead, this field is beginning to mature.

External cognition

Though a relatively banal insight, it is worth noting that better AI, even if not bio-inspired, may also qualify as neuro-augmentation; deep learning is already artificial cognition in its own right. From a near- to medium-term (5-10 years) perspective, the most effective interface between the brain and external cognition will continue to be human vision and the digits, which are highly connected to the brain and have a large bandwidth. The more intelligent external devices become, the higher-level such communications can be, giving the human brain the capacity to perform more complex tasks, or multiple tasks simultaneously.

Predictions and possible applications

Within 5 years, brain simulations will be capable of actuating real-world objects and it will become possible to study and interact with digital twins of simple biological organisms.

In the medium term (10 years), digital twins could let doctors interrogate specific brain functions32 or disorders, and could begin to enter clinical practice. We will be able to engineer specific neural circuits.

Within 25 years, it will become possible to engineer a brain with more neurons than exist in the human brain. Blue Brain will be annotated with enough granular biological detail – down to the level of genes – for researchers to test specific hypotheses by making targeted changes to the model at the level of genes or ion channels and observing the phenotypic outcomes.

Current obstacles to progress

For any of this to happen, significant hardware and software limitations need to be addressed. This may require a paradigm change for computing. Furthermore, artificial cognition will need to be capable of building a model of the world from which to reason. We know that this is how human intelligence operates, but neuroscientists still understand little about how humans build their mental models of the world, or how such a model could be instantiated in artificial cognition. Embodiment was agreed to be a crucial factor: it could create a sensory interface with the outside world through which an AI could build its model of the world. This embodiment could take the form of robotics, an organoid-type biological sensorium, or a high-fidelity simulation of the world for the AI to explore.

On the biomimetic hardware front, neuroscience still has a long way to go before we understand the links between genes and relevant functions and behaviours; between genes and cell development; and how cells organise themselves into different anatomies. These are major challenges that will hopefully be addressed by high-resolution digital simulation of brains.

4. Ethical and Governance Dimensions

Inputs by

Andrea BoggioProfessor of Legal Studies, Department of History and Social Sciences; Faculty Fellow, Center for Health and Behavioral Sciences, Bryant University
Karen RommelfangerNeurotech Ethicist and Strategist, Institute of Neuroethics Think and Do Tank; CEO, Ningen Neuroethics Co-Lab; Professor

Broadly, three major categories of ethical concern emerged from the Villars discussions: ethical considerations for people who will benefit from neuro-augmentation; ethical considerations for the research animals and other entities whose study will inform the expected scientific and technological advances; and questions of responsible anticipation that emerge from the development of these technologies and insights. These are further subdivided where relevant.

Considerations for the individual

Here, workshop participants identified three subcategories of concern: the blurry line between restoration and augmentation, the evolving boundaries of medicalisation, and whether augmentation technology, once unleashed, can remain a choice.

There are three overlapping subgroups of neural augmentation, each of which may require tailored ethical adjudication. One type of neuro-augmentation substitutes lost function, returning cognitive capacity to “normal” levels after loss to trauma or disease. Here, an example would be brain-computer interfaces that can decode intended speech and help, for instance, those who have lost verbal communication after a stroke or through ALS. The second type of augmentation supplements normal function, exceeding the capacity we associate with traditional human cognition. An example of this would be cognitive enhancers – whether drugs, surgical procedures or wearable technologies – that allow an individual to improve their memory or concentration, or overcome fatigue in the face of sleep deprivation. The third category of augmentation goes further still, providing capabilities that have not been available to previous humans. Such beyond-human function would include optogenetically added electroception or the ability to “see” in infrared; living life guided by a high-fidelity digital twin; or the addition of extra body parts such as the Third Thumb.

There are some areas of profound disagreement over what constitutes a disorder. Many people with hearing loss or autism, for example, push back on the idea that their experience of the world is a disease to be cured, and on the assumption that a society without neuro-divergence would be implicitly better.

Where the boundaries between these three subgroups lie is open to debate – a debate that will be essential for shaping future policy and ethical considerations.

Some ethical viewpoints seem to depend on the end to which the technology is being deployed, but the one boundary that people seem to accept intuitively is between restoration of a lost function and increasing a capacity that is already considered within the normal range. Here, the intuition is that restoration is not augmentation per se, and that there should be a different set of ethical, regulatory, social, financial (and possibly other) thresholds for acceptance of restoration compared to augmentation.

One early approach to determining the difference between restoration and augmentation is being undertaken at the Chinese Academy of Sciences. In the beginnings of a cohort study, individuals labelled “healthy” after undergoing routine physical examinations are being invited to take a small battery of cognitive tests, providing data to establish average markers of brain health benchmarked against age. In the future, these data could be used to benchmark an individual’s cognitive capacity against the average established by their age cohort. Having determined “normal” functioning, it then becomes possible to ask under what circumstances restoration or even early intervention – prior to the onset or diagnosis of a “disease” – should be offered.
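
Mechanically, such benchmarking is straightforward: compare an individual’s test score with the mean and spread of their age band. The sketch below does this with wholly invented norms; the age bands, scores and cut-off are placeholders for illustration only.

```python
# Hypothetical benchmarking of one person's cognitive test score against
# age-cohort norms, of the kind the cohort study described above could
# eventually provide. All numbers below are invented.
COHORT_NORMS = {               # age band: (mean score, standard deviation)
    (20, 39): (100.0, 12.0),
    (40, 59): (95.0, 13.0),
    (60, 79): (88.0, 15.0),
}

def cognitive_z_score(score: float, age: int) -> float:
    """Standardised score relative to the person's age band."""
    for (lo, hi), (mean, sd) in COHORT_NORMS.items():
        if lo <= age <= hi:
            return (score - mean) / sd
    raise ValueError(f"no norms for age {age}")

z = cognitive_z_score(score=62.0, age=67)
flag = "below the age-typical range" if z < -1.5 else "within the age-typical range"
print(f"z = {z:+.2f} ({flag})")
```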

Advances in organoid and silicon-based modelling and simulation suggest that in the future, it will become possible to intervene when a pre-disease state is detected. For example, early biomarkers or hints in the projectome could warn of impending Alzheimer’s before any symptoms emerge, allowing doctors to intervene accordingly. Is intervention at this pre-disease stage classed as restoration or augmentation of function? The distinction may be important for financial matters such as insurance coverage. There are also lessons to be learned from overdiagnosis in breast cancer screening, in which false positives were a problem. For neuro-augmentation, diagnosis before physical symptoms manifest would require a highly trusted model: in the case of brain disorders, a prediction of risk suggests not only what disease a person might have, but who they might become.

More straightforwardly, we know that a number of mental functions change with age, and we accept this as part of “normal” ageing. But some may question whether ageing itself is a disease; would intervening in age-related decline constitute restoration or augmentation? This, too, has implications for insurance coverage and the provision of care resources for an ageing population, as well as for defining a societally acceptable way of ageing.

Such questions raise the issue of “medicalisation”, whereby behaviours and conditions come to be considered issues warranting medical treatment. Brain technologies have the potential to redefine what we consider appropriate treatment. While most participants in the discussion agreed that neuro-augmentation technologies provide an appropriate response to brain injuries and neurodegenerative diseases, they might define such intervention as simply restorative treatment rather than augmentation per se.

With such questions in mind, the choice to augment becomes a loaded issue. It is possible that the availability of augmentation will make sticking with normal brain function seem old-fashioned or even self-destructive. There may be external or even financial pressure to augment: if age-related cognitive decline is seen as optional, those eschewing augmentation may become a financial liability or be considered selfish by family members with a duty of care. Such options may have to be addressed in workplace legislation; some employers may consider it necessary, in a competitive sector, to have augmented employees, or to base employment contracts on predictions of potential decline in brain function, much as professional athletes’ contracts are based on fitness assessments and include injury clauses. University students without access to neuro-enhancement may find themselves at a disadvantage in academic assessment and in job-seeking situations. The military has a long history of insisting that its personnel use available augmentations, whether technological or pharmaceutical. It would seem naive to think that this will not be extended to medical neuro-augmentations once they become routinely available and widely accessible.

A complicating factor is that neuro-augmentation may have unexpected outcomes. For example, the augmentation of one mental capacity might come at the cost of another, given the brain’s constraints on plasticity and cognitive load. Research on the Third Thumb, specifically, has raised new questions about this “neural resource allocation problem”. A person who controlled the tool for a full workday with their toe found their brain had a hard time quickly readjusting to “normal”, including while driving home. It is crucial that, before we adopt such technologies, we consider precisely under what circumstances we want to add more cognitive load, and what the potential consequences might be.

Considerations for research animals and other entities

Robust debate around establishing the line between augmentation and restoration, as discussed above, could establish red lines around which types of animal research to consider ethically acceptable. However, another ethical tension arises with respect to the augmentation of nonhuman primates with human elements. For example, a transgenic nonhuman primate model of Parkinson’s disease for a specific intervention seems less objectionable, under this rubric, than a transgenic nonhuman primate model for open-ended research, or augmentation research that might enhance non-human animal function to be closer to the capacities of humans (i.e. a monkey carrying human genetic material).

Such research on transgenic non-human primates, expressing human genes in nonhuman primate brains, may recreate human suffering such as mental illness. Furthermore, it might be possible for an AI construct with a sufficiently complex model of the world to develop emergent properties that resemble consciousness. What do we owe these entities in our care? Will they remain tools for science to use, or will they become collaborators whose rights (including the right not to be made to suffer) must be taken into account? How, and under what circumstances, would this distinction change research protocols?

Tests may be developed to explore the answer but they may be subject to shifting goalposts. For example, the tendency to see non-human entities as a class worthy of protection hinges on how human-like they are, but those traits tend to be rationalised away if a greater need presents itself. In these debates, one further, crucial question requires attention: even if agreement can be reached over these issues, how can it be enforced?

Responsible anticipation

Science is a human right — everyone has the right to freely share in the advances of science and its benefits. Article 15 of the International Covenant on Economic, Social and Cultural Rights places obligations on governments to encourage, facilitate and not to interfere with an individual’s work as a scientist, except as demanded by ethical and legal standards.33

Dispute resolution mechanisms

The accelerating pace and democratisation of technological capabilities in neuro-technology and gene editing are set to bring a similar uptick in international disputes, as ethical priors in different countries may not be aligned.

Participants in Villars gave a positive response to a proposal for an “international court of science”. Its nonbinding agreements would not be able to enforce ethical red lines. However, such “soft law” would help to establish norms and facilitate the development of a consensus- and rules-based international order in this area.34

Annex: List of Participants

Patrick AebischerFormer President, EPFL; Vice-Chairman, Board of Directors, GESDA
Jocelyne BlochProfessor, Department of Neurosurgery, Lausanne University Hospital and University of Lausanne
Andrea BoggioProfessor of Legal Studies, Department of History and Social Sciences; Faculty Fellow, Center for Health and Behavioral Sciences, Bryant University
Azad BonniSenior Vice President, Global Head of Neuroscience & Rare Diseases, Roche; Professor and Head of the Department of Neuroscience, Washington University
Thomas BroxProfessor for Pattern Recognition and Image Processing; Head, Computer Vision Group, Department of Computer Science, University of Freiburg
Grégoire CourtineProfessor, Center for Neuroprosthetics, Brain Mind Institute, EPFL
Ilka DiesterProfessor, Head of Optophysiology Lab, University of Freiburg
Chris EliasmithCanada Research Chair in Theoretical Neuroscience; Director, Centre for Theoretical Neuroscience, University of Waterloo
Alexandre FaselSpecial Representative for Science Diplomacy in Geneva, Swiss Confederation
Itzhak FriedProfessor, Department of Neurosurgery, University of California Los Angeles School of Medicine
Jaimie HendersonDirector, Stereotactic and Functional Neurosurgery, Stanford University School of Medicine; Co-Director, Stanford Neural Prosthetics Translational Laboratory (NPTL), Stanford University
Michael HengartnerPresident, ETH-Board; Chairman, GESDA Academic Forum
Giacomo IndiveriProfessor, Institute of Neuroinformatics, University of Zurich and ETH Zurich
Denis JabaudonProfessor, Department of Basic Neurosciences, Faculty of Medicine, University of Geneva

Tamar MakinProfessor of Cognitive Neuroscience, University of Cambridge
Isabelle MansuyProfessor, Laboratory of Neuroepigenetics, University of Zurich & ETH Zurich
Henry MarkramProfessor, Laboratory of Neural Microcircuitry, Brain Mind Institute, EPFL
Muming PooProfessor, Institute of Neuroscience, Chinese Academy of Sciences
Jean-Marc RickliHead of Global and Emerging Risks, Geneva Centre for Security Policy
Karen RommelfangerNeurotech Ethicist and Strategist, Institute of Neuroethics Think and Do Tank; CEO, Ningen Neuroethics Co-Lab; Professor
Giuseppe TestaProfessor of Molecular Biology, University of Milan; Head of the Neurogenomics Research Centre, Human Technopole; Group Leader, European Institute of Oncology, University of Milan
Silvia VelascoAssociate Professor, Murdoch Children’s Research Institute