Fast and Slow: Learning How the Brain Controls Movement

Posted on May 3rd, 2016

What if you couldn’t move faster even when you wanted to? Researchers thought that the part of the brain that determines how fast we perform voluntary movements, such as walking across a room or playing a melody on the piano, was a bit like a car: it had an accelerator to make movements faster and a brake to slow them down. Now, scientists at the Howard Hughes Medical Institute’s Janelia Research Campus have shown that, contrary to what was thought, the “brake” in this part of the brain can actually accelerate movements in mice, and the gas can rein them in. By clarifying how the brain controls movement, this discovery helps to explain the systematic slowing of movement in patients with Parkinson’s disease and could pave the way for interventions that allow patients to learn to perform everyday actions more fluidly.

Walking a little faster is no problem for most people, but patients with Parkinson’s disease struggle to accelerate voluntary movements. We have assumed for some time that “it’s almost as if only the brake works and the gas pedal doesn’t work,” says Janelia group leader Joshua Dudman. To better understand this effect, he and his colleague, research scientist Eric Yttri, wanted to find out more about the normal role of the basal ganglia, a brain region that is affected in Parkinson’s disease, in controlling voluntary movement. Within the basal ganglia, there are two main types of neurons known to promote (gas) or suppress (brake) movement.

In experiments described in an advance online publication May 2, 2016 in the journal Nature, Yttri and Dudman used a technique known as optogenetics to activate neurons in the basal ganglia during movements at specific speeds. By shining a laser through fine optical fibers that extend into the animals’ brains, the researchers could selectively stimulate either the gas or the brake neurons to ask how each group influenced future movement.

Yttri trained mice to move a small joystick with their front paws in order to get a sweet drink. The joystick was rigged so that a mouse had to make a tradeoff to satisfy its thirst: it had to push the joystick fast enough to obtain a drink of water, but pushing too rapidly wasted energy and ultimately limited the total water it could consume. Every day, people make similar, albeit implicit, decisions about how rapidly they must act – deciding, say, how fast to walk to the neighborhood restaurant on a lunch break. However, in Parkinsonian patients (and, as Dudman and colleagues showed previously, in Parkinsonian mice), all movements are slowed.

To gauge how forcefully a mouse was pushing, the researchers measured the speed of the joystick. On average, a mouse’s joystick movements take about half a second to complete. Dudman and Yttri first tested the effect of adding extra activity in either group of neurons during specific movements. If the push was predicted to be a swift one based upon its initial speed, the device rapidly activated one or the other group of neurons in the basal ganglia. With this procedure, the researchers could spur the mice to push the joystick systematically faster or slower on future movements, depending on which population of neurons the researchers activated.
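To make the closed-loop procedure concrete, here is a minimal sketch of how such a trigger might be implemented, assuming a hypothetical joystick-speed reader and laser-control function; the threshold, sampling window, and function names are illustrative, not the experiment’s actual parameters.

```python
import random
import time

SPEED_THRESHOLD = 1.0   # hypothetical cutoff (arbitrary units) separating swift from slow pushes
EARLY_WINDOW_S = 0.05   # hypothetical slice of the movement's start used for the prediction


def predict_swift_push(early_speeds):
    """Classify a push as swift or slow from its earliest speed samples."""
    return max(early_speeds) >= SPEED_THRESHOLD


def closed_loop_trial(read_speed, stimulate, target="gas"):
    """Watch the start of one push and, if it looks swift, trigger stimulation of
    the chosen neuron population ("gas" or "brake") for that movement only.
    read_speed and stimulate stand in for the real hardware interfaces."""
    t0 = time.time()
    early_speeds = []
    while time.time() - t0 < EARLY_WINDOW_S:
        early_speeds.append(read_speed())
        time.sleep(0.001)            # sample at roughly 1 kHz
    if predict_swift_push(early_speeds):
        stimulate(target)            # laser on for this movement
        return True
    return False


# Toy demonstration with a fake joystick and a logging "laser".
fake_speed = lambda: random.uniform(0.0, 2.0)
log_laser = lambda population: print("stimulating", population, "neurons")
closed_loop_trial(fake_speed, log_laser, target="brake")
```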

Those results are consistent with the long-standing idea that separate populations of neurons in the basal ganglia serve as brake and gas pedal for movement. To determine whether these neurons always had the same effect on movement, the researchers asked what would happen if they activated the neurons when a mouse made a slow movement of the joystick. In this case, switching on the “gas pedal” neurons didn’t accelerate the animals’ movements; instead, stimulation systematically slowed future movements. Dudman and Yttri saw a similarly reversed outcome when they triggered the “brake” neurons at the beginning of a slow push: surprisingly, the rodents started to move the joystick systematically faster.

Dudman explains, “either one can speed you up or slow you down.” In other words, by showing that releasing the brake can speed movements and releasing the gas pedal can slow movements, the study suggests that the brain uses both pathways in combination to regulate movement speed. To visualize how this system adjusts how we move, Dudman says, think of a racecar driver zipping around a track. Rather than simply speeding up or slowing down, a driver uses the gas and brake together to make controlled but fast turns.

The researchers asked whether this control system could be what is disrupted in Parkinson’s disease. In patients with Parkinson’s, the cells that make a chemical messenger called dopamine die off. To simulate the loss of these cells in the mice, the researchers injected the animals with a compound that blocks dopamine receptors on neurons – mimicking an absence of dopamine. The stimulation that was previously sufficient to change the speed of movement now had no effect.

In addition to clarifying how the basal ganglia control movement, these results have significant implications for the treatment of Parkinson’s disease. Many patients already have implantable devices (deep brain stimulators) that provide electrical stimulation to the brain to improve movement. By selectively activating stimulation during specific movements, similar to what the mice received, such devices might allow patients to access a normal range of movement speeds.

Scientists Map Brain’s ‘Thesaurus’ to Help Decode Inner Thoughts

Posted on April 27th, 2016

What if a map of the brain could help us decode people’s inner thoughts?

Scientists at the University of California, Berkeley, have taken a step in that direction by building a “semantic atlas” that shows in vivid colors and multiple dimensions how the human brain organizes language. The atlas identifies brain areas that respond to words that have similar meanings.

Credit: Alex Huth, UC Berkeley

Scientists map how the brain responds to different words.

The findings, published in the journal Nature and based on research funded by the National Science Foundation (NSF), come from a brain imaging study that recorded neural activity while study volunteers listened to stories from “The Moth Radio Hour.” They show that at least one-third of the brain’s cerebral cortex — including areas dedicated to high-level cognition — is involved in language processing.

Notably, the study found that different people share similar language maps.

“The similarity in semantic topography across different subjects is really surprising,” said study lead author Alex Huth, a postdoctoral researcher in neuroscience at UC Berkeley.

When spoken words fail

Detailed maps showing how the brain organizes different words by their meanings could eventually help give voice to those who cannot speak, such as people who have had a stroke, brain damage or motor neuron diseases such as ALS. While mind-reading technology remains far off on the horizon, charting language organization in the brain brings decoding inner dialogue a step closer to reality, the researchers said.

“This discovery paves the way for brain-machine interfaces that can interpret the meaning of what people want to express,” Huth said. “Imagine a brain-machine interface that doesn’t just figure out what sounds you want to make, but what you want to say.”

For example, clinicians could track the brain activity of patients who have difficulty communicating and then match that data to semantic language maps to determine what their patients are trying to express. Another potential application is a decoder that translates what you say into another language as you speak.

“To be able to map out semantic representations at this level of detail is a stunning accomplishment,” said Kenneth Whang, a program director in the NSF Information and Intelligent Systems division. “In addition, they are showing how data-driven computational methods can help us understand the brain at the level of richness and complexity that we associate with human cognitive processes.”

Huth and six other native English speakers participated in the experiment, which required volunteers to remain still inside a functional magnetic resonance imaging (fMRI) scanner for hours at a time.

Each study participant’s brain blood flow was measured as they listened, with eyes closed and headphones on, to more than two hours of stories from The Moth Radio Hour, a public radio show in which people recount humorous and poignant autobiographical experiences.

The participants’ brain imaging data were then matched against time-coded, phonemic transcriptions of the stories. Phonemes are units of sound that distinguish one word from another.

The researchers then fed that information into a word-embedding algorithm that scored words according to how closely they are related semantically.
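One way to picture that modeling step is as a voxel-wise regularized regression of brain responses onto word-embedding features. The sketch below uses synthetic data and scikit-learn’s Ridge regression as stand-ins for the study’s actual features and fitting procedure; the array sizes and regularization value are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic stand-ins: word-embedding features for each fMRI time point, and
# voxel responses generated from hidden semantic weights plus noise.
rng = np.random.default_rng(0)
n_time, n_features, n_voxels = 3000, 300, 2000
X = rng.standard_normal((n_time, n_features))               # semantic features per scan
true_weights = rng.standard_normal((n_features, n_voxels))  # each voxel's semantic tuning
Y = X @ true_weights + 5.0 * rng.standard_normal((n_time, n_voxels))

# Fit one regularized linear model per voxel; the fitted weights describe which
# semantic dimensions each voxel responds to, which is the kind of information
# that gets projected onto the cortical surface as a semantic map.
train, test = slice(0, 2500), slice(2500, None)
model = Ridge(alpha=100.0).fit(X[train], Y[train])
pred = model.predict(X[test])

# Voxels whose held-out responses are well predicted count as "semantic" voxels.
corr = [np.corrcoef(pred[:, v], Y[test, v])[0, 1] for v in range(n_voxels)]
print("median held-out prediction correlation:", round(float(np.median(corr)), 3))
```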

Charting language across the brain

The results were converted into a thesaurus-like map that arranged words on images of the flattened cortices of the left and right hemispheres of the brain. Words were grouped under various headings: visual, tactile, numeric, locational, abstract, temporal, professional, violent, communal, mental, emotional and social.

Not surprisingly, the maps show that many areas of the human brain represent language that describes people and social relations, rather than abstract concepts.

“Our semantic models are good at predicting responses to language in several big swaths of cortex,” Huth said. “But we also get the fine-grained information that tells us what kind of information is represented in each brain area. That’s why these maps are so exciting and hold so much potential.”

Senior author Jack Gallant, a UC Berkeley neuroscientist, said that although the maps are broadly consistent across individuals, “There are also substantial individual differences. We will need to conduct further studies across a larger, more diverse sample of people before we will be able to map these individual differences in detail.”

In addition to Huth and Gallant, co-authors of the paper are Wendy de Heer, Frederic Theunissen and Thomas Griffiths, all at UC Berkeley.

This NSF-funded project is an example of how NSF invests in the frontiers of brain research.

Allen Institute Releases Powerful New Data on the Aging Brain and Traumatic Brain Injury

Posted on April 26th, 2016

The Allen Institute for Brain Science has announced major updates to its online resources available at brain-map.org, including a new resource on Aging, Dementia and Traumatic Brain Injury (TBI) created in collaboration with UW Medicine researchers at the University of Washington and with Group Health. The resource is the first of its kind to collect and share a wide variety of data modalities on a large sample of aged brains, complete with mental health histories and clinical diagnoses.

“The power of this resource is its ability to look across such a large number of brains, as well as a large number of data types,” says Ed Lein, Ph.D., Investigator at the Allen Institute for Brain Science. “The resource combines traditional neuropathology with modern ‘omics’ approaches to enable researchers to understand the process of aging, look for molecular signatures of disease and identify hallmarks of brain injury.”

The study samples come from the Adult Changes in Thought (ACT) study, a longitudinal research effort led by Dr. Eric B. Larson and Dr. Paul K. Crane of the Group Health Research Institute and the University of Washington to collect data on thousands of aging adults, including detailed information on their health histories and cognitive abilities. UW Medicine led efforts to collect post-mortem samples from 107 brains aged 79 to 102, with tissue collected from the parietal cortex, temporal cortex, hippocampus and cortical white matter.

“This collaborative research project aims to answer one of the most perplexing problems in clinical neuroscience,” says Dr. Richard G. Ellenbogen, UW Chair and Professor, Department of Neurological Surgery. “If a person suffers a traumatic brain injury during his or her lifetime, what is the risk of developing dementia? We simply don’t know the answer at this time, but some of the answers might be found in this comprehensive dataset by people asking the right kind of questions. This issue is important because of the inherent risk for everyone who plays sports, exercises or, in general, participates in the activities of daily life.”

“This study was made possible by the amazing generosity of the ACT participants and their families, incredible collaboration among our partners, and the generosity and vision of the Paul G. Allen Family Foundation,” says Dr. Dirk Keene, co-principal investigator and Director, UW Neuropathology. “For the first time, scientists and clinicians from around the world will have access to this unique dataset, which will advance the study of brain aging and hopefully contribute to development of novel diagnostic and therapeutic strategies for neurodegenerative disease.”

The final online resource includes quantitative image data to show the disease state of each sample, protein data related to those disease states, gene expression data and de-identified clinical data for each case. Because the data is so complex, the online resource also includes a series of animated “snapshots,” giving users a dynamic sampling of the ways they can interrogate the data.

“There are many fascinating conclusions to be drawn by diving into these data,” says Jane Roskams, Ph.D., Executive Director, Strategy and Alliances at the Allen Institute. “This is the first resource of its kind to combine a variety of data types and a large sample size, making it a remarkably holistic view of the aged brain in all its complexity.”

Researchers focused on examining the impact of mild to moderate TBI on the aged brain, comparing samples from patients with self-reported loss of consciousness incidents against meticulously matched controls.  “Interestingly, while we see many other trends in these data, we did not uncover a distinctive genetic signature or pathologic biomarker in patients with TBI and loss of consciousness in this population study,” says Lein.

“This new resource is an exciting addition to our suite of open science resources,” says Christof Koch, Ph.D., President and Chief Scientific Officer of the Allen Institute for Brain Science. “Researchers around the globe will be able to mine the data and explore many facets of the aged brain, which we hope will accelerate discoveries about health and disease in aging.”

Research to create this resource was funded with a $2.37 million grant from the Paul G. Allen Family Foundation to the University of Washington.

Two other resources have received significant updates in the latest data release. The Allen Cell Types Database now includes gene expression data on individual cells, in addition to shape, electrical activity and location in the brain. The number of cells in the database has also increased, and, in collaboration with the Blue Brain Project, a subset of cells are accompanied by a new robust biophysical model.

The Allen Mouse Brain Connectivity Atlas now includes its first public release of layer-specific connectivity in the visual cortex, including more specific targeting of cells using newly developed tracing methods.

 

Neural Roots of Curiosity Explored

Posted on April 25th, 2016

Credit: Allan Zepeda

Simons Society Junior Fellow Jennifer Bussell is designing experiments to study the neural roots of curiosity.

Jennifer Bussell is curious about curiosity. A basic desire to learn about the environment confers an evolutionary advantage on many species, but we humans also seek out information for its own sake. What is it that drives us to know something, just for the satisfaction of knowing it? “That’s a fundamental question that we know very little about,” says Bussell, a postdoctoral research scientist at Columbia University and a Simons Society Junior Fellow. Bussell is just beginning experiments in mice to investigate the neural underpinnings of our desire to find things out. Recently, I spoke with her about her work. An edited version of the interview follows.

Anyone who has had dogs or cats knows that they will investigate a new toy or other object in the room, as a way to make sense of the environment. But how do you scientifically test whether an animal is curious?

In most animals, the drive to explore and seek information evolved because it is usually useful in the environment. But to study this drive — call it curiosity — we have to isolate it from other rewards and artificially make the information gained useless.

My collaborator, Columbia professor Ethan Bromberg-Martin, has come up with a wonderful way to measure monkeys’ desire to gain information independent of other rewards. In one experiment, thirsty monkeys can get a drink of water by moving their eyes to choose a symbolic target on the left or right of a computer screen. The monkey has a 50-50 chance of getting a large water reward whatever its choice. On the right, it sees one of two symbols: One symbol is associated with getting a lot of water, and the other is associated with getting less. On the left, the monkey sees one of two other symbols, neither of which means anything — the symbols on the left are not correlated with the amount of water. So the monkey can choose whether to have information in advance, but its choice has no effect on its water reward.

Once the monkey learns that it can look to the right and know ahead of time whether it will get the larger water amount, it almost always chooses to know. What’s even more amazing is that the same reward-encoding neurons that fire when the monkey gets water also fire when it sees the symbol that gives it information. We’re trying to design a similar experiment in mice.
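To make the logic of that task concrete, here is a toy simulation in which the choice of advance information never changes the expected reward; the symbol names, water amounts, and probabilities are illustrative stand-ins rather than the actual experimental parameters.

```python
import random

def run_trial(choose_informative_side):
    """One trial of the task: the reward is large or small with 50-50 odds no
    matter what the monkey chooses. Picking the informative side only reveals
    a predictive symbol before the water arrives."""
    large_reward = random.random() < 0.5
    if choose_informative_side:
        cue = "symbol_predicting_large" if large_reward else "symbol_predicting_small"
    else:
        cue = random.choice(["uninformative_symbol_1", "uninformative_symbol_2"])
    return cue, (10 if large_reward else 2)   # arbitrary water amounts

# Expected water is identical for both choices, so a consistent preference for
# the informative side reflects the value of the information itself.
random.seed(1)
for informative in (True, False):
    total = sum(run_trial(informative)[1] for _ in range(10000))
    print("informative side" if informative else "uninformative side", "-> total water:", total)
```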

How can you test information seeking in mice?

Smell is at the center of a mouse’s world. The researchers in the lab I’m in [run by Nobel laureate Richard Axel] know a lot about how the identity of a smell is represented in the brain. In the experiments I am setting up, we will test thirsty mice in a box with holes, or ports, where they can receive water. We will offer the mice information in the form of odors rather than visual cues, and they can indicate their choices by entering different ports. We expect that curious mice will choose the ports with informative odors, and we are interested in how those odors are represented in the brain.

How do you determine whether neurons involved in recognizing a smell are also involved in, or correlated with, curiosity about that smell and whether it holds information?

We can identify which neurons are activated by an odor using microscope images of a particular fluorescent protein inserted into neurons. Because we can see which neurons fire in response to a smell, we can ask whether different cells are activated in the mouse’s cerebral cortex when the smell signals the possibility of information versus when it does not.

We can also silence or activate the particular brain pathways we think might be involved in driving this curiosity and ask if those pathways play a causal role in the choice of information.

How did your own curiosity lead you to neuroscience?

As an undergraduate, I worked in a lab where we wanted to understand, genetically, what makes humans unique; we looked for brain-specific genetic differences between humans and other primates. I’ve always been fascinated by the question of how a physical object embodies a mind and consciousness and a drive to understand the world.

In graduate school, I set out to study molecular biology, but then I learned about all of the discoveries being made in neuroscience. It seemed like such an exciting field, and I wanted to be a part of that. That was the first time that I took formal neuroscience classes, and I switched my training rotations to neuroscience. I remember watching fruit flies’ courtship under a microscope for the first time and thinking about how we knew and could, in a way, control the 2,000 neurons that make the insects do that complicated behavior. That was really amazing.

Before going to graduate school, you worked as a management consultant in the biotechnology industry. How has that experience shaped your career?

My graduate school adviser likes to say that, unlike most scientists who spend their entire lives in the captivity of academia, I’ve been out in the wild. Working in the ‘real world’ was really helpful in terms of learning how to work in a team. But mostly it made me appreciate how lucky I am to be an academic scientist. I have so much gratitude toward the taxpayers and other funders for allowing me to think deeply about the brain and how it works, and I really want to be able to do something meaningful with the opportunity.

What questions about the brain do you hope to see answered during your career?

So much about the brain is still mysterious, but one of the big questions is how information is transformed within a neural circuit. We know there are electrical and molecular signals, but what’s the code? Knowing that would be the first step toward understanding the brain in the same way that we understand an electrical circuit or a computer, where we actually know how information processing is accomplished.

Personally, I would also love to know more about the extent to which seeking information is the motivation for animals to do things. Curiosity is starting to seem like an important motivation for learning. If we can understand more about how it works, maybe we can encourage it.

International Brain Projects Considered

Posted on April 22nd, 2016

Since the launch of the US BRAIN Initiative and the EU Human Brain Project, the idea of global participation in large-scale neuroscience projects has gained considerable momentum. Australia, Canada, and Denmark have all joined the US BRAIN Initiative as formal partners. In addition, Japan has launched a nationwide initiative focused on marmoset brain research and China is preparing to announce its own national brain project.

In an attempt to channel some of this excitement about brain research into a single international collaboration to tackle a major neuroscience project, more than 60 scientists from 12 countries met at Johns Hopkins University in Baltimore, Md., earlier this month. Science magazine’s Emily Underwood wrote a story about the meeting, which was sponsored by the Kavli Foundation and the National Science Foundation. Underwood reported that the goal of the meeting was to discuss big projects worthy of worldwide participation. Some of the diverse ideas centered on curing a single disease such as depression or Alzheimer’s, while others focused on creating highly detailed maps of neural connections within the human brain or describing the detailed neural circuitry involved in the production of a single complex behavior in a mammal.

According to Underwood, three basic research questions emerged as topics of interest: what makes individual brains unique; how the brain’s many components orchestrate learning and task performance; and how to leverage the brain’s plasticity towards protecting and restoring brain function.

In addition, a central point of discussion at the meeting was figuring out a better method for vetting, sharing, and storing neuroscience data. The attendees’ proposals for such a method converged on an online resource tentatively called the International Brain Station that would serve up enormous neuroscience datasets to researchers and the general public.

The scientists will meet again in September to finalize their proposal, which will then be presented a couple of weeks later to global leaders at the United Nations General Assembly to gather support and funding for the proposed project.

Neuroscience Research into Dyslexia Leads to ‘Brainprints’

Posted on April 15th, 2016

A wonderful thing about basic research is its tendency to produce advances researchers hadn’t anticipated. Cognitive neuroscientist Sarah Laszlo, for instance, found her early childhood learning studies took an unexpected jump into the worlds of security and identity verification.

Credit: Sarah Laszlo, Binghamton University

Cognitive neuroscientist Sarah Laszlo of Binghamton University, State University of New York, prepares a subject to measure brain activity using electroencephalography (EEG).

Laszlo’s research at Binghamton University, State University of New York, uses electroencephalography (EEG) to measure children’s brain activity as they learn to read. Through collaboration with colleagues, however, she found the work also offered a potential breakthrough in biometrics — physical attributes, like fingerprints, that can be used to verify people’s identities.

Advancements over the past decade have revolutionized what EEG can tell researchers. Improvements in underlying technologies (e.g., size, comfort, and portability of sensors, the ability to measure the signal, and the ability to analyze large amounts of data) allow Laszlo and her colleagues to follow individual children’s development over time. Those new advantages have created opportunities to study an important area of learning development: reading.

“Previous research in this area predominantly focused on comparing groups of people, but when we are following an individual, we can begin to predict, on a child-by-child basis, who will develop problems reading in the future, at least two years in advance,” she said. “That gives us a lot of extra time to help that child before problems become noticeable.”

The ability to predict future problems with reading would provide an important tool for researching and preventing development of these issues. Research has shown that intervention can effectively help children with dyslexia and other reading disabilities — but that intervention must take place early, usually before second grade. Even attentive caregivers can have difficulty recognizing when a child has reading troubles before first or second grade.

“We are working toward developing a type of reading ability screening test that could be used the way a hearing test is used now,” Laszlo said. “Having a predictive test would double or triple the time period for an effective intervention.”

Her lab studies children ages 4 to 14 who fall across the spectrum of reading ability, from gifted to dyslexic readers. She takes repeated recordings of brain activity while a child reads. Her lab is now beginning to understand characteristics that, when taken together, represent red flags discernible early enough in brain development (say, in a four-year-old who has just barely started to understand letters) to allow for a predictive test and early intervention.

“If we can identify kids who will be dyslexic and help them before they even have a problem, it is really a big deal,” Laszlo said. “I am excited by the promise this research has to prevent problems for kids and protect them from experiencing negative life-long effects of reading problems.”

A different direction: biometrics

When her individualized brain readings over different time points caught the attention of bioengineer Zhanpeng Jin, a colleague across SUNY Binghamton’s campus, Laszlo’s research jumped in a new direction. Jin studies biometrics and thought Laszlo’s brain activity readings could be used as a brain-based biometric.

Using brain readings as a security measure to prevent identity theft has several advantages over other biometrics like fingerprints or retinal scans. For example, they cannot be copied surreptitiously or taken from someone who is deceased. Brain readings could prove a game-changer for the security industry. But brain readings can only be useful if they are extremely reliable; a measure that only recognizes an individual most of the time would not work as a security device because people would get locked out of their own devices and offices.

Laszlo, Jin, and their colleague, Maria V. Ruiz-Blondet, decided to explore whether they could perfect this approach by measuring brain activity in adults who were either focusing on a recurring, easily remembered thought or looking at specific images of different foods, words, 3-D designs and celebrity faces.

By analyzing the brain’s responses to visual and thought stimuli, Laszlo said, “We can identify the individual with 100 percent accuracy. When Zhanpeng first came to me with the idea, I honestly didn’t think it would work. It’s amazing.”
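As a rough illustration of the general idea (not the team’s actual classifier), a brain-based biometric can be sketched as template matching on averaged brain responses: enroll each person as an averaged EEG template, then identify a new recording by correlation. The data below are synthetic, and every name, array size, and noise level is an assumption.

```python
import numpy as np

def enroll(trials_by_person):
    """Build one template per person by averaging their EEG responses to the
    same stimuli. trials_by_person maps a name to an array (n_trials, n_samples)."""
    return {name: trials.mean(axis=0) for name, trials in trials_by_person.items()}

def identify(templates, new_trials):
    """Average a fresh set of trials, correlate it with every enrolled template,
    and return the best-matching identity."""
    probe = new_trials.mean(axis=0)
    scores = {name: np.corrcoef(probe, template)[0, 1] for name, template in templates.items()}
    return max(scores, key=scores.get)

# Toy demonstration with synthetic "brainprints": each person has a distinct
# underlying response shape plus trial-to-trial noise.
rng = np.random.default_rng(0)
signatures = {name: rng.standard_normal(256) for name in ("alice", "bob", "carol")}
make_trials = lambda sig: sig + 0.5 * rng.standard_normal((20, 256))

templates = enroll({name: make_trials(sig) for name, sig in signatures.items()})
print(identify(templates, make_trials(signatures["bob"])))   # expected output: bob
```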

Learning How the Brain Recovers from Disruptions

Posted on April 14th, 2016


Group Leaders Karel Svoboda (left) and Shaul Druckmann (right) discuss their collaborative research at HHMI’s Janelia Research Campus.

New research from scientists at the Howard Hughes Medical Institute’s Janelia Research Campus suggests that the brain is organized into modules that work together to maintain critical functions, even in the face of disturbances.

This structural organization may explain how neurons that store short-term memories can recover from significant disruptions—for example, enabling a quarterback to remember a planned play despite the distractions he encounters before he throws the football. According to the Janelia group leaders who led the study, experimental neuroscientist Karel Svoboda and theoretical neuroscientist Shaul Druckmann, this motif is likely to underlie other essential circuitry within the brain as well.

“This is how an engineer would build a mission-critical system,” says Svoboda. “You distribute the critical systems over multiple modules, and then the modules talk to each other so they can sense when one of them isn’t doing well and can correct for each other.”

The study, reported April 13, 2016 in an advance online publication in Nature, began with a surprising observation by postdoctoral fellow Nuo Li in Svoboda’s lab. The team had been studying neurons in mice that are involved in planning motor activity. After an animal is instructed by researchers to move in a particular way, groups of these neurons become active, signaling for several seconds until the animal completes the movement—a form of memory that outlasts the milliseconds that any single neuron can signal on its own. Scientists knew that an animal’s motor plan can persist even if this signaling is temporarily disrupted. Svoboda and his colleagues wanted to find out just how robust the underlying neural activity was.

To find out, they used a laser to switch off motor-planning neurons briefly before the animals in their experiments were allowed to complete a task. Monitoring the subsequent activity, they found that once the neurons were allowed to resume normal signaling, they quickly adjusted their activity to make up for the lost time. The mice carried on as if undisturbed, remembering which way they had been instructed to move and successfully completing their task. “We quenched [neural] activity to zero and saw that it came back to exactly the levels where it should have been,” Svoboda says. “It was a perfect—almost eerily perfect—example of robustness.”

Theoretical neuroscientists have modeled several ways in which neural circuits can establish robustness, so Svoboda shared his data with Druckmann and postdoctoral fellow Kayvon Daie, seeking an explanation for how the motor-planning neurons were able to recover so completely. “There is a rich history of models that have been suggested for such systems,” Druckmann says. “But when we tried to compare experimental results to the models, we found that none of them show such strong robustness.”

“The fact that it recovers to exactly where it would have been had you not shut down the system is where all the models go wrong,” Druckmann says. Existing models showed neural activity picking up where it had left off after a disruption, so that the pause introduces a persistent delay in the normal activity pattern and activity remains slightly displaced from where it should be. “But what you see in the experiment is the exact opposite,” Druckmann says. “Somehow the activity catches up to where it needs to be.”

Something was missing. So, Svoboda says, “we went back to the biology to give us hints about how to construct the next level of models.”

Knowing that no group of neurons works in isolation, the researchers began to wonder if another brain area might have compensated for the disruption Svoboda’s team had introduced in their experiments. “The simplest thing that could be is that maybe this brain area is just looking at what another brain area is doing and copying it,” Druckmann says.

His modeling suggested the situation was slightly more complex, and that neuronal activity might return to where it should be after a disruption if the cells were in communication with another brain area carrying out the same function. “It’s like two roller coasters running in parallel and connected with a big rubber band,” Druckmann explains: If one of the coasters falls off the track, the other keeps going, and the rubber band eventually snaps the wayward coaster back where it should be.

Such an organization would explain the striking robustness observed in the experiments. “Once we had the right architectural principles, all of the preexisting models could be rescued,” Druckmann says. “We realized that we need two modules, they need to be redundant in the sense that each of them can independently generate the right dynamics, and they need to be connected. Once we rewrote them according to these principles, all of the models worked.”
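A toy version of that architecture can be written down directly: two bistable modules that each store the same motor plan and are weakly pulled toward one another. The dynamics and parameters below are illustrative assumptions, not the published model, but they reproduce the qualitative behavior described in the experiments that follow.

```python
import numpy as np

def simulate(coupling, silence, steps=600, dt=0.02):
    """Two redundant bistable modules store the same motor plan (+1 is one
    movement direction, -1 the other). Each module alone can hold the plan;
    the coupling term pulls a perturbed module back toward its partner.
    `silence` says which modules are clamped to zero during steps 100-200,
    mimicking the optogenetic disruption."""
    x = np.array([1.0, 1.0])                     # both modules start out holding the plan "+1"
    for t in range(steps):
        attractor_drive = x - x ** 3             # bistable dynamics with stable states at +1 and -1
        partner_pull = coupling * (x[::-1] - x)  # the "rubber band" toward the other module
        x = x + dt * (attractor_drive + partner_pull)
        if 100 <= t < 200:
            x = np.where(silence, 0.0, x)        # disruption window
    return x

print("silence one, connected:   ", simulate(0.3, (True, False)))   # perturbed module recovers
print("silence one, disconnected:", simulate(0.0, (True, False)))   # perturbed module stays lost
print("silence both, connected:  ", simulate(0.3, (True, True)))    # neither module recovers
```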

The scientists devised a series of experiments in which they tested their model by disrupting motor-planning circuits on opposite sides of the brain, both separately and together. As expected, when they interrupted signaling in one region at a time, neurons recovered well. But when they disrupted both motor-planning regions at the same time, recovery was impaired and animals performed their task poorly. It looked as if maintaining the motor plan did indeed depend on at least one of the modules operating undisturbed.

In a final round of experiments, the researchers disconnected the two motor-planning regions from one another, then blocked signaling on one side. Although one module remained undisturbed, the disrupted module was unable to recover, supporting the idea that two modules must be in communication to correct for lapses in activity.

The scientists suspect their new model may explain the robustness of neural circuits beyond those they tested in the current study. “We think that this modularity probably happens in many incarnations,” Svoboda says. “From circuit analysis, we know that the right kind of circuit elements are there.”

Improved Brain Mapping Tool 20 Times More Powerful Than Previous Version

Posted on April 14th, 2016


A Salk team builds upon their rabies virus technology to better map neurons across large swaths of the nervous system. In a mouse brain section (thalamus), neurons providing monosynaptic inputs to cortical inhibitory neurons are traced via rabies (blue). Purple counterstaining shows surrounding cellular architecture. (Credit: The Salk Institute)

LA JOLLA—Salk Institute scientists have developed a new reagent that maps the brain’s complex network of connections 20 times more efficiently than their previous version. This tool improves upon a technique called rabies virus tracing, which was originally developed in the Callaway lab at Salk and is commonly used to map neural connections.

Rabies viral tracing uses a modified version of the rabies virus that jumps between neurons, lighting up connections along the way. The illuminated map allows researchers to precisely trace which neurons connect to each other. Visualizing this neural circuitry can help scientists learn more about conditions ranging from motor diseases to neurodevelopmental disorders.

“To truly understand brain function, we have to understand how different types of neurons are connected to each other across many distant brain areas. The rabies tracing methods we have developed made that possible, but we were only labeling a fraction of all of the connections,” says Edward Callaway, a Salk professor and senior author of the new paper, published April 14, 2016 in the journal Cell Reports. Callaway is also an affiliate member of the Kavli Institute for Brain and Mind at UC San Diego.

He adds that such a dramatic improvement in a critical tool for neuroscience will help researchers illuminate aspects of brain disorders where connectivity and global processing go awry, such as in autism and schizophrenia.


From left: Euiseok Kim, Edward Callaway, Tony Ito-Cole and Matthew Jacobs (Credit: The Salk Institute)

Long distance connections between neurons are key to what is called global processing in the brain. Imagine a ball sailing toward a catcher. The catcher’s visual circuits will process the information about the ball and send that information over to the brain’s motor circuits. The motor circuits then direct nerves in the catcher’s arm and hand to grab the ball. That global processing relies on long-distance neural circuits forming precise connections to specific neuron types; these circuits can be revealed with rabies viral tracers.

“With this new rabies tracer, we can visualize connectivity neuron by neuron, and across long distance input neurons better than with previous rabies tracers,” says Euiseok Kim, a Salk research associate and first author of the paper.

There are billions of neurons in the brain, and only a handful of technologies that can map the communication going on between them. Some imaging techniques, such as functional MRI, can visualize broad-scale communication across the brain but do not resolve activity at the cellular level. Electrophysiology and electron microscopy can track cell-to-cell connectivity, but aren’t suited to mapping neural circuits across the whole brain.

Tracing methods using neurotropic viruses, like rabies, have long been utilized to trace connections across neural pathways. But these viruses spread widely throughout the brain across multiple circuits, making it difficult to determine which neurons are directly connected. In 2007, Callaway’s lab pioneered a new approach based on genetically modified rabies virus. This approach allowed the viral infection to be targeted to specific types of neurons and also allowed the spread of the virus to be controlled. The result is that this system illuminates neurons across the entire brain, but labels only those that are directly connected to neurons of interest.

To control how far the virus travels, scientists ensure the rabies virus can only infect a select group of neurons. First, scientists remove and replace the virus’s crucial outer coat of glycoproteins. The virus needs this glycoprotein coat to enter and infect cells, but the replacement glycoprotein prevents the virus from infecting normal neurons. Scientists then alter a group of neurons in mice to become so-called “starter cells” that are uniquely susceptible to infection with the modified glycoprotein. Starter cells are also programmed to provide the rabies glycoproteins so that once a starter cell is infected, new copies of the rabies tracer can spread across the starter cell’s synapses into connected neurons. However, once the rabies viral tracer is in the next set of neurons, it won’t find the glycoprotein it needs to continue to spread, and so the trail of infection across neural circuits ends.

Although the original rabies viral tracer accurately traces circuits, it crossed only a fraction of the starter cell’s synapses. The Salk research team went about engineering a more efficient rabies viral tracer. First, the researchers took pieces from various rabies strains to create new chimeric glycoproteins and then tested the versions by counting labeled cells in known circuits.

The winning chimeric glycoprotein was further genetically modified with a technique called codon optimization to increase levels of the glycoprotein produced in starter cells. Compared to the original rabies tracer, the new codon-optimized tracer increased the tracing efficiency for long-distance input neurons by up to 20-fold.

“Although this improved version is much better, there are still opportunities to improve the rabies tracer further as we continue to examine other rabies strains,” says Kim.

 

(Originally published by The Salk Institute)

Derailed Train of Thought? Brain’s Stopping System May Be at Fault

Posted on April 14th, 2016

Have you had the experience of being just on the verge of saying something when the phone rang? Did you then forget what it is you were going to say? A study of the brain’s electrical activity offers a new explanation of how that happens.

Published in Nature Communications, the study comes from the lab of neuroscientist Adam Aron at the University of California San Diego, together with collaborators at Oxford University in the UK, and was led by first author Jan Wessel, then a postdoctoral scholar in the Aron lab. The researchers suggest that the same brain system that is involved in interrupting, or stopping, movement in our bodies also interrupts cognition – which, in the example of the phone ringing, derails your train of thought.

The findings may give insights into Parkinson’s disease, said Aron, a professor of psychology in the UC San Diego Division of Social Sciences and a member of the Kavli Institute for Brain and Mind, and Wessel, now an assistant professor of psychology and neurology at the University of Iowa. The disease can cause muscle tremors as well as slowed-down movement and facial expression. Parkinson’s patients may also present as the “opposite of distractible,” often with a thought stream so stable that it can seem hard to interrupt. The same brain system that is implicated in “over-stopping” motor activity in these patients, Aron said, might also be keeping them over-focused.

The current study focuses particularly on one part of the brain’s stopping system – the subthalamic nucleus (STN). This is a small lens-shaped cluster of densely packed neurons in the midbrain and is part of the basal ganglia system.


Adam Aron, professor of psychology in the UC San Diego Division of Social Sciences (Credit: Nathalie Belanger)

Earlier research by Aron and colleagues had shown that the STN is engaged when action stopping is required. Specifically, it may be important, Aron said, for a “broad stop.” A broad stop is the sort of whole-body jolt we experience when, for example, we’re just about to exit an elevator and suddenly see that there’s another person standing right there on the other side of the doors.

The study analyzes signals from the scalp in 20 healthy subjects as well as signals from electrode implants in the STN of seven people with Parkinson’s disease. (The STN is the main target for therapeutic deep brain stimulation in Parkinson’s disease.)

All the volunteers were given a working memory task. On each trial, they were asked to hold in mind a string of letters, and then tested for recall. Most of the time, while they were maintaining the letters in mind, and before the recall test, they were played a simple, single-frequency tone. On a minority of trials, this sound was replaced by a birdsong segment – which is not startling like a “bang!” but is unexpected and surprising, like a cell phone chirping suddenly. The volunteers’ brain activity was recorded, as well as their accuracy in recalling the letters they’d been shown.


Jan Wessel, now at the University of Iowa. (Credit: Jan Wessel)

The results show, the researchers write, that unexpected events manifest the same brain signature as outright stopping of the body. They also recruit the STN. And the more the STN was engaged – or the more that part of the brain responded to the unexpected sound – the more it affected the subjects’ working memory and the more they lost hold of what they were trying to keep in mind.

“For now,” said Wessel, “we’ve shown that unexpected, or surprising, events recruit the same brain system we use to actively stop our actions, which, in turn, appears to influence the degree to which such surprising events affect our ongoing trains of thought.”

A role for the STN in stopping the body and interrupting working memory does fit anatomical models of how the nucleus is situated within circuitry in the brain. Yet more research is needed, the researchers write, to determine if there’s a causal link between the activity observed in the STN and the loss in working memory.

“An unexpected event appears to clear out what you were thinking,” Aron said. “The radically new idea is that just as the brain’s stopping mechanism is involved in stopping what we’re doing with our bodies, it might also be responsible for interrupting and flushing out our thoughts.”


The study analyzes signals from the scalp in healthy volunteers as well as signals from electrode implants in the brains of people with Parkinson’s disease. (Credit: Nathalie Belanger)

A possible future line of investigation, Aron said, is to see if the STN and associated circuitry play a role in conditions characterized by distractibility, like Attention Deficit Hyperactivity Disorder (ADHD). “This is highly speculative,” he said, “but it could be fruitful to explore if the STN is more readily triggered in ADHD.”

Wessel added: “It might also be potentially interesting to see if this system could be engaged deliberately – and actively used to interrupt intrusive thoughts or unwanted memories.”

If further research bears out the connection suggested by the current study, between the STN and losing your train of thought after an unexpected event, the researchers say this response may be an adaptive feature of the brain, something we evolved long ago as a way to clear our cognition and re-focus on something new. Aron suggests this example: You’re walking along one morning on the African savannah, going to gather firewood. You’re daydreaming about the meal you’re going to prepare when you hear a rustle in the grass. You make a sudden stop – and all thoughts of dinner are gone as you shift your focus to figure out what might be in the grass. In this case, it’s a good thing to forget what you had been thinking about.

 

(Originally published by UC San Diego)

IEEE Transactions on Biomedical Engineering Devotes Entire March Issue to BRAIN Initiative

Posted on April 11th, 2016

The articles cover a wide range of topics related to BRAIN Initiative goals, from multi-scale neural recordings to deep brain stimulation to technologies for recording activity in human brains such as EEG and ECoG.

The journal IEEE Transactions on Biomedical Engineering devoted its March issue, 22 articles in all, to BRAIN Initiative research. “These articles reflect a rich spectrum of BRAIN research on neurotechnologies for recording, imaging, interfacing and modulating the brain at multiple scales,” write the editors.

The articles describe research conducted by grantees of several BRAIN Initiative federal partners, including NIH, NSF, DARPA, and IARPA. The issue also includes papers by industry partners such as NeuroNexus. In addition, the issue reflects the global effort to understand the brain, with authors residing in the U.S., Australia, Belgium, Canada, Ireland, Italy, South Korea, Thailand, and the U.K.

NIH BRAIN Initiative grantees contributed the following three articles:

“Chronic in vivo evaluation of PEDOT/CNT for stable neural recordings” by BRAIN grantees Kensall Wise and Euisik Yoon and colleagues. This paper discusses the design of a new type of coating for ultra-small microelectrodes. Although ultra-small microelectrodes can be used for long-term recordings, they tend to have high impedance, which makes them unable to isolate electrical signals from individual neurons. Wise and Yoon et al. developed a poly(3,4-ethylenedioxythiophene) (PEDOT) coating doped with carboxyl functionalized multi-walled carbon nanotubes (CNTs) that effectively lowered electrode resistance to allow single neuron recording. The PEDOT/CNT coating resulted in chronic recordings that were more stable and longer lasting than the current state-of-the-art coating for these types of microelectrodes.


Dual shank silicon PEDOT neural probe. Image credit: IEEE Transactions on Biomedical Engineering. DOI: 10.1109/TBME.2015.2445713.

“Combined Single Unit Neuron Activity and Local Field Potential Oscillations in a Human Visual Recognition Task” by Gregory Worrell and colleagues. This paper compares the action potentials from single neurons and the local field potential—a measure of activity averaged across many neurons—measured with intracranial hybrid electrodes in epilepsy patients performing a recognition memory task. Worrell et al. found that local field potential oscillations were more sensitive to novel images and affectively charged versus neutral images than single neurons.


Neural probe punctuated with alternating macro-electrode and micro-electrode clusters that record local field potential and single neuron spikes, respectively. Image credit: IEEE.

“Close-Packed Silicon Microelectrodes for Scalable Spatially Oversampled Neural Recording” by BRAIN grantee Ed Boyden and colleagues. This paper describes the design and implementation of close-packed silicon microelectrodes. The probes are fabricated in a hybrid lithography process, resulting in a dense array of recording sites connected to submicron-dimension wiring. Boyden et al. demonstrated their microelectrodes in mammalian brain recording sessions, using a series of probes comprising 1000 recording sites, each recording from an area roughly 9 microns X 9 microns.


Closely packed recording sites on a neural probe. Each square can record from an area as small as 9 microns X 9 microns. Image credit: scalablephysiology.org

All of the IEEE Transactions on Biomedical Engineering articles are available here.