Saturday, January 21, 2017

Highlights from the 2017 Neural Computation and Engineering Connection

Once a year, researchers meet at the University of Washington (UW) in Seattle as part of the Neural Computation and Engineering Connection to discuss what's new in neuroengineering and computational neuroscience. Organized by the UW Institute for Neuroengineering, this year's topics ranged from brain-computer interfaces to rehabilitative robotics and deep learning, with plenary speakers such as Marcia O'Malley (Rice), Maria Geffen (University of Pennsylvania), and Michael Berry (Princeton).


Day 1: Thursday, 19 January 2017

Laura Specker Sullivan (CSNE/UW Philosophy): Neuroethics. Sullivan kicked off Day 1 with an introduction to neuroethics, a term novel enough to trigger phones to auto-correct it to "neurotics". She made an important distinction between the ethics of neuroscience (i.e., how to do neuroscience ethically) and the neuroscience of ethics (i.e., the brain processes that underlie how we make decisions about ethical issues). At the UW Center for Sensorimotor Neural Engineering, Sullivan is conducting conceptual and empirical research to identify and close gaps between researcher and end-user priorities, addressing topics such as ethics in medical diagnoses, fair subject selection, favorable risk-benefit ratios, and informed consent, among others.

Elliott Abe (UWIN/Comp Neuro Undergraduate Fellow): Quantifying the timing characteristics of adult zebra finch songs. Songs sung by zebra finches are highly stereotyped and change in systematic ways throughout adolescence into adulthood. Abe's research focuses on analyzing regularities in the silences between syllables in songs that are either directed at a potential mate or sung undirected. Abe showed that these silences shorten when the song is directed at a potential mate, and he is now using this insight to build a mathematical model that can predict the neural activity driving these variations.

Darby Losey (UWIN/CSNE Undergraduate Fellow): Facilitating foot-related object identification with transcranial magnetic stimulation of the dorsal premotor cortex. Motor areas have been shown to influence perceptual decisions about hand- or foot-related items. Losey went a step further and showed that transcranial magnetic stimulation (TMS) had a specific effect on the decision-making process: TMS allowed subjects to make decisions more rapidly, without a drop in accuracy. More research is needed to understand how TMS affects the underlying neuronal circuitry to mediate this improved performance.

Nile Wilson (UWIN/CSNE Graduate Fellow): Error-related potentials in continuous-control ECoG BCI. Wilson's research focuses on improving current brain-computer interfaces (BCI). Traditionally in BCI, brain signals are decoded and interpreted by a computer, which then instructs an external device to perform an action. For example, devices have been developed that allow users to control a mouse cursor with their thoughts. However, more often than not, robustly decoding brain activity is challenging, and the computer has no way of knowing how well it is doing. Wilson therefore proposes to provide feedback to the computer in the form of the error-related negativity, an EEG signal elicited whenever a user perceives an error. This signal can then be used by the computer to adapt over time and improve its performance.
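
To make the idea concrete, here is a minimal, hypothetical sketch (not Wilson's actual pipeline) of a binary intention decoder that is updated online whenever an ErrP indicates its last prediction was wrong. The synthetic data, the perfect error detector, and the scikit-learn decoder are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
n_features = 16

# Synthetic stand-ins for decoded neural features and true user intentions.
X = rng.normal(size=(200, n_features))
w_true = rng.normal(size=n_features)
y = (X @ w_true > 0).astype(int)

decoder = SGDClassifier()
decoder.partial_fit(X[:20], y[:20], classes=np.array([0, 1]))  # warm start

for x_t, y_t in zip(X[20:], y[20:]):
    pred = decoder.predict(x_t.reshape(1, -1))[0]
    # Stand-in for ErrP detection: a real BCI would flag the error from EEG;
    # here we simply compare against ground truth.
    if pred != y_t:
        # For a binary task, the corrected label is just the other class.
        decoder.partial_fit(x_t.reshape(1, -1), [1 - pred])
```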

Nancy Wang (UWIN/eScience Graduate Fellow): Unsupervised decoding of long-term, naturalistic human neural recordings with automated video and audio annotations. Wang is taking BCI a step further, with the goal of automating the process even in unstructured environments. Using various machine learning algorithms, she was able to make sense of a large audio/video/ECoG dataset recorded over one week of an ECoG patient's recovery from surgery. A deep neural network extracted relevant features such as limb movements and facial expressions from video, which could then be paired with audio data to identify moments of social interaction, and correlated with high-dimensional ECoG data.

Kaitlyn Casimo (UWIN/CSNE Graduate Fellow): Connectivity in the resting state. Casimo studies both spontaneous and learning-related variation in resting-state connectivity (i.e., functional brain connectivity when you're not doing anything but letting your mind "wander"). Although these properties have already been investigated in fMRI (and to some extent in EEG), they have not previously been investigated in electrocorticography (ECoG). Specifically, her research focuses on teasing apart anatomical connections (i.e., those that are physically present in the brain) from functional connections (i.e., covarying activity across brain areas) and effective connections (i.e., the causal portion of all functional connections), and on how these connections change when people are learning a skill.

Kameron Decker Harris (BDGN Graduate Fellow): Challenges in connectome inference and analysis. Harris presented a new algorithm to analyze whole-brain neural connectivity data collected as part of the Allen Mouse Brain Connectivity Atlas experiments. These data provide a weighted, non-negative adjacency matrix among 100 μm brain "voxels", using viral tracer data that reveal the connections between a source injection site and other voxels in the brain. Harris' machine learning algorithm is a linear regression model that combines a matrix-completion loss for missing data, a smoothing-spline penalty to regularize the problem, and (optionally) a low-rank factorization. Using both synthetic and Allen Mouse Brain Connectivity Atlas data, Harris showed that this new algorithm significantly outperforms state-of-the-art methods.
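
Schematically, the objective might be written as below (notation mine, for illustration rather than verbatim from the talk): Y holds the observed projections, X the injection patterns, W ≥ 0 the voxel-to-voxel connectivity, P_Ω masks missing entries, and L is a discrete Laplacian enforcing spatial smoothness.

```latex
\min_{W \ge 0} \;
  \bigl\| P_\Omega ( W X - Y ) \bigr\|_F^2
  + \lambda \, \| L W \|_F^2 ,
\qquad \text{optionally with } W = U V^\top \text{ (low rank)}
```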

Kathleen Champion (Comp Neuro Graduate Fellow): Inferring brain-wide dynamics from wide-field Calcium imaging data. The goal of Champion's research is to understand the spatial and temporal structure of brain-wide activity, and how it depends on behavioral state. During her talk, Champion presented preliminary results from fitting a number of models to wide-field Calcium imaging data of the awake, behaving mouse (collected by the Allen Institute). Among the models were probabilistic principal component analysis (PPCA) and several linear models, although none of them could predict much of the variability in the data. In the future, Champion plans to perform dynamic mode decomposition (DMD) and to compare region-region interactions with anatomical connections.
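
For readers unfamiliar with it, DMD fits a best-fit linear operator mapping each data snapshot to the next, x_{t+1} ≈ A x_t, via a rank-truncated SVD. A minimal sketch on synthetic data (illustrative only, not Champion's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 10, 200
data = rng.normal(size=(n, T)).cumsum(axis=1)  # stand-in for imaging time series

X, Xp = data[:, :-1], data[:, 1:]              # snapshots at t and t+1
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 5                                          # truncation rank
U_r, s_r, V_r = U[:, :r], s[:r], Vh[:r].conj().T

# Reduced-order linear operator approximating x_{t+1} = A x_t.
A_r = (U_r.conj().T @ Xp @ V_r) / s_r
eigvals, W = np.linalg.eig(A_r)                # eigenvalues give each mode's dynamics
modes = (Xp @ V_r / s_r) @ W                   # DMD modes in the original space
```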

Ben Shuman (UWIN Graduate Fellow): Do muscle synergies change after treatments in cerebral palsy? Cerebral palsy (CP) is a group of permanent movement disorders that appear in early childhood, possibly caused by abnormal development or lesions of the primary motor cortex. Patients with CP often present with poor coordination, stiff or weak muscles, and tremors. Shuman's research focuses both on assessing the severity of the disorder in patients and on evaluating current treatment strategies using quantifiable metrics. One such metric captures the complexity of muscle groups (called synergies) that are usually activated together (e.g., during locomotion). Synergy complexity tends to be radically decreased in CP patients, but the hope is that it can be increased with current rehabilitation strategies. However, when Shuman performed a retrospective analysis of clinical data, he found that synergy complexity did not increase as a result of treatment; if anything, it slightly decreased. More research is needed to identify effective treatments, and Shuman's research provides the quantitative metrics to do so.
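
For context, synergy complexity in this literature is typically quantified by factorizing electromyography (EMG) recordings with non-negative matrix factorization (NMF) and asking how few synergies are needed to explain most of the signal. A hedged sketch on synthetic data (the EMG matrix and the scoring choice below are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_muscles, n_samples = 8, 1000
emg = np.abs(rng.normal(size=(n_muscles, n_samples)))  # rectified-EMG stand-in

def variance_accounted_for(emg, n_synergies):
    """Fraction of the squared EMG signal explained by n_synergies components."""
    model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500)
    W = model.fit_transform(emg)   # synergy weights (muscles x synergies)
    H = model.components_          # activation profiles over time
    residual = emg - W @ H
    return 1 - (residual ** 2).sum() / (emg ** 2).sum()

# Fewer synergies explaining most of the signal = lower synergy complexity.
for k in range(1, 5):
    print(k, round(variance_accounted_for(emg, k), 3))
```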


Maria Geffen (University of Pennsylvania): Cortical circuits supporting dynamic auditory perception. The first plenary lecture of the meeting was delivered by Maria Geffen, who talked about how acoustic scene analysis and the cocktail party problem are supported by neurons in auditory cortex, auditory thalamus, and the inferior colliculus. Using a series of optogenetics experiments, Geffen was able to reveal the neuronal circuits that underlie stimulus-specific adaptation in auditory cortex: the fact that neurons in primary auditory cortex (A1) reduce their firing rate in response to common tones, but not to unexpected or "oddball" tones. One idea is that adaptation arises from depression in narrowly tuned channels, such that inhibition could facilitate adaptation. Indeed, using optogenetics, Geffen showed that suppressing inhibitory inputs significantly reduces adaptation. More specifically, somatostatin-expressing (SOM) interneurons affect only the adapted regime (specific to the common tone), whereas parvalbumin-expressing (PV) interneurons affect inhibition during both common and oddball tones. In another set of experiments, Geffen showed that fear learning can lead to higher acuity if the fear is stimulus-specific (and the animal is not just generally more afraid); sensory acuity seems to be negatively correlated with the degree of generalization of fear. To the researchers' surprise, inactivation of auditory cortex abolished the changes in acuity, but preserved the potential for differential emotional learning.

Day 2: Friday, 20 January 2017

Saskia de Vries (Allen Institute): Exploring visual computations in the Allen Brain Observatory. Day 2 was kicked off by de Vries, who talked about the immense collection of data provided by the Allen Brain Observatory. An interesting subset of these data are neuronal responses of visual cortex in the awake mouse, collected using 2-photon calcium imaging. Mice were shown a range of both artificial and naturalistic stimuli, including static/drifting gratings, sparse noise, natural scenes, and natural movies, and epochs of spontaneous activity were recorded as well. The Allen Brain Observatory readily provides a range of useful raw data and analysis modules to search for, aggregate, and visualize the coding properties of single-cell and population responses in the mouse brain. And if that is not enough, have a look at the Allen Software Development Kit (SDK), which provides an API to interact with the data contained in the Allen Brain Observatory, Cell Types Database, and Mouse Brain Connectivity Atlas.
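
Getting started takes only a few lines of Python with the AllenSDK. The calls below follow the SDK's public Brain Observatory interface as I understand it; consult the current documentation before relying on them.

```python
from allensdk.core.brain_observatory_cache import BrainObservatoryCache

boc = BrainObservatoryCache(manifest_file="brain_observatory_manifest.json")

# Find experiment containers recorded in primary visual cortex (VISp).
containers = boc.get_experiment_containers(targeted_structures=["VISp"])
print(len(containers), "containers in VISp")

# Download one session and pull out the dF/F fluorescence traces.
exps = boc.get_ophys_experiments(experiment_container_ids=[containers[0]["id"]])
data_set = boc.get_ophys_experiment_data(exps[0]["id"])
timestamps, dff = data_set.get_dff_traces()
```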

Jason Yeatman (UW Speech and Hearing Sciences): Network-level interactions drive response properties in word-selective cortex. Somewhere high up in the visual cortex, there is a brain region that is activated whenever we read a word. Using diffusion tensor imaging (DTI), Yeatman's lab is measuring white matter tracts in order to build a wiring diagram of how the visual word form area (VWFA) is connected to other parts of the brain. Not surprisingly, word-selective cortex turns out to lie at the intersection of vision and language. Using a simple computational model, Yeatman was able to show that a specific area in the intraparietal sulcus (IPS) acts as a source of top-down modulation for the VWFA. Moreover, all of his software endeavors are open-source and community-maintained, such as DiPy and pyAFQ.

Daniela Witten (UW Statistics and Biostatistics): Statistical modeling for problems in neuroscience. Focusing on two-dimensional Calcium fluorescence imaging data collected by the Allen Institute, Witten wanted to find a way to automate the labeling process (manual labeling is not feasible for terabytes of data collected at roughly 100,000 frames per hour) in order to answer questions about the location, activity, and connectivity of neurons in the tissue. To find the locations of all the neurons in the data, Witten built a large dictionary of spatial components (i.e., a huge superset of the locations of all the neurons that could possibly be in the data) and trimmed it down using temporal constraints. In contrast to matrix factorization into spatial and temporal components, Witten argued, her approach results in a convex (i.e., mathematically "nicer") optimization problem.
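
One plausible convex formulation of this idea (notation mine, assumed for illustration rather than taken from the talk): with the dictionary D held fixed, solving for the temporal activity X of every candidate is convex, and a group-sparsity penalty prunes candidates whose activity stays at zero.

```latex
% Y: movie (pixels x time), D: fixed dictionary of candidate spatial
% footprints (pixels x candidates), X: temporal activity (candidates x time).
\min_{X \ge 0} \; \| Y - D X \|_F^2 + \lambda \sum_i \| X_{i,\cdot} \|_2
```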

Michael Berry (Princeton): Predictive coding of novel vs. familiar stimuli in the primary visual cortex. The retina supports a neural code of incredible anatomical complexity, with a large family of retinal ganglion cells encoding all sorts of spatial and temporal features of visual input. When presented with smooth motion, some retinal cells show sustained firing with anticipatory timing, whereas others show transient bursts of firing following sudden motion reversal. This suggests that the retina encodes at least two distinct pieces of information at the population level: predictable information vs. surprising information. After making significant contributions to our understanding of the neural code in the retina, Berry has now shifted his research to the primary visual cortex (V1) to ask a follow-up question: Is this sort of predictive code also found in V1? Using 2-photon Calcium imaging of the awake, behaving mouse, Berry studied V1 neuronal activity in response to sequences of smooth motion (with frames that violate the sequence thrown in randomly). And indeed, similar to the retina, two types of responses emerged: some cells showed sustained firing to the sequence (a sparse code), whereas others began, after only two or three repetitions, to respond to violating frames, thus effectively encoding novelty or surprise. Berry suggests that these transient responses are not just an "alert" signal, but instead help to improve decoding performance for downstream neurons.


Miriam Ben-Hamo (UW Biology): Long-term sleep recording in non-human primates. Ben-Hamo is interested in studying the sleep cycles of Azara's night monkeys (Aotus azarae; also known as southern night monkeys), which are indigenous to South America. Interestingly, these animals switch regularly between diurnal and nocturnal activity across the lunar cycle (full moon: nocturnal; new moon: diurnal). In order to track these animals, Ben-Hamo has been developing a battery-free wireless recording system for ECoG, which has so far been validated in silico. Her hope is that the device will be used in the wild in Argentina, and that it could eventually be used in human subjects, too.

Eatai Roth (UW Biology): Robustness via redundancy: Multisensory control of flight in hawkmoths. While hovering in front of a flower, a feeding moth receives information about how the flower is moving from two sensory modalities: visual information from the eye and mechanosensory information from the proboscis in contact with the flower. By building a two-part artificial flower that allows for independent manipulation of visual and mechanosensory cues, Roth disentangled the contribution of each sensory modality to the moth's flower-following behavior. His research shows that the moth brain linearly sums information from the visual and mechanosensory domains to maintain this behavior. He also demonstrated that either sensory modality alone would be sufficient for this behavior, but that redundancy makes the behavior robust to changes in the availability of sensory information.
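
Because the two-part flower decouples the visual and mechanosensory cues, the gain on each modality can be estimated by ordinary least squares. A purely illustrative sketch on synthetic signals (not Roth's analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
visual = np.sin(2 * np.pi * 1.0 * t)   # visual flower motion (independent cue)
mech = np.sin(2 * np.pi * 1.7 * t)     # mechanosensory flower motion
# Hypothetical moth response: a weighted sum of both cues plus noise.
response = 0.6 * visual + 0.4 * mech + 0.05 * rng.normal(size=t.size)

A = np.column_stack([visual, mech])
(g_v, g_m), *_ = np.linalg.lstsq(A, response, rcond=None)
print(f"visual gain ~ {g_v:.2f}, mechanosensory gain ~ {g_m:.2f}")
```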

Ariel Rokem (eScience Institute): Data Science: Tools and practices for the era of brain observatories. With the era of Big Data entering the field of neuroscience (so-called "brain observatories" such as the Allen Brain Observatory, UK Biobank, and the Human Connectome Project provide terabytes of data), it has become increasingly difficult (yet incredibly important) to develop scientific tools that are both reproducible and scalable. Scalability is not always easy to achieve, as Rokem demonstrated with a comparative analysis of popular scientific computing frameworks such as Dask, Myriad, Spark, SciDB, and TensorFlow. In a diffusion MRI use case, the first three frameworks fared best, scaling well and supporting user-defined functions. In terms of reproducibility, going open-source is a good first step, says Rokem, as it allows code and data to be shared and maintained by the community. However, a common problem is that technology evolves so quickly that many tools rapidly become obsolete. Rokem thus suggested interpreting reproducibility as an action rather than an attribute: all code degrades over time unless it is properly maintained.
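
As a flavor of what these frameworks buy you, here is a small Dask example (illustrative, not Rokem's benchmark code): an array too large for memory is declared lazily in chunks, and the computation then runs in parallel, chunk by chunk.

```python
import dask.array as da

# An ~80 GB array, declared lazily; no memory is allocated yet.
x = da.random.random((100_000, 100_000), chunks=(5_000, 5_000))

# Operations build a task graph; nothing runs until .compute().
column_means = x.mean(axis=0)
result = column_means.compute()  # executed in parallel, chunk by chunk
```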

Steve Brunton (UW Mechanical Engineering): Observing and controlling the non-linear world in a linear framework. Brunton introduced the audience to the concept of the Koopman operator and showed how it can be used to linearize nonlinear dynamical systems. If you trade a finite-dimensional nonlinear dynamical system for an infinite number of measurements, the dynamics become linear, allowing us to apply decades of control-theory insights to the now-linear problem. The problem, of course, is that finding the right embedding is hard. However, Brunton's latest work applies sparse regression (almost Lasso) to a large library of candidate nonlinear features, allowing him to find what are, so to speak, the "principal components" of a dynamical system.
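
In spirit, this resembles SINDy-style system identification: regress the time derivative onto a large library of candidate nonlinear features and keep only the sparse set of terms that survive. A toy sketch (assuming a known 1-D system, a polynomial feature library, and scikit-learn's Lasso standing in for the "almost Lasso" of the talk):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=500)
dx = -1.5 * x + 0.5 * x**3            # true dynamics: dx/dt = -1.5 x + 0.5 x^3

# Candidate feature library: polynomials up to degree 5.
library = np.column_stack([x**k for k in range(6)])
names = [f"x^{k}" for k in range(6)]

model = Lasso(alpha=0.01, fit_intercept=False).fit(library, dx)
for name, coef in zip(names, model.coef_):
    if abs(coef) > 1e-3:
        print(name, round(coef, 2))   # surviving terms: roughly x^1 and x^3
```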

Marcia O'Malley (Rice University): Challenge and engagement: Ensuring effective upper limb robotic rehabilitation. The final talk of the meeting was given by O'Malley, who talked about robotics research used in the rehabilitation of spinal cord injury (SCI) patients. Interestingly, what patients long to get back most is their hand function, says O'Malley. This is understandable, as without hand function patients constantly have to rely on others in their day-to-day activities. SCI patients tend to have poor upper-limb coordination and poor grasp, something O'Malley is trying to address with robot-assisted rehabilitation therapy. Her robotics platforms are fairly sophisticated, allowing for three different operating modes: passive (user guided by robot), triggered (user initiates with a force input, then the robot completes the movement), and active constrained (robot resists movement with a force proportional to velocity). Intermediate modes are also possible, in order to tailor the difficulty of therapy to each individual patient. In her latest work, the robot actually tries to learn from the patient: when the patient fatigues, the robot can take over and simplify the task for a while; and, vice versa, when the patient excels, the robot can increase the force needed to perform the task or reduce the allowable error range. O'Malley calls this a subject-adaptive controller (SAC), and she hopes to improve current therapeutic measures by building assistive robots that ensure continuous engagement, participation, and challenge.
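
The adaptation logic can be caricatured in a few lines (a toy sketch of the idea, not O'Malley's SAC): nudge task difficulty up or down so that a running estimate of the patient's performance tracks a target level.

```python
def update_difficulty(difficulty, success_rate, target=0.7, gain=0.1):
    """Nudge difficulty so the patient's success rate tracks the target."""
    return max(0.0, difficulty + gain * (success_rate - target))

difficulty = 0.5
for success_rate in [0.9, 0.85, 0.6, 0.4, 0.8]:  # synthetic session history
    difficulty = update_difficulty(difficulty, success_rate)
    print(round(difficulty, 3))
```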

The full schedule can be found here.