Wednesday, January 31, 2018

Highlights from the 2018 Neural Computation & Engineering Connection (NCEC)

Once a year, researchers meet at the University of Washington (UW) in Seattle as part of the Neural Computation and Engineering Connection to discuss what's new in neuroengineering and computational neuroscience. Organized by the UW Institute for Neuroengineering, this year's topics ranged from brain-computer interfaces to rehabilitative robotics and deep learning, with plenary speakers such as Allison Okamura (Stanford), Takaki Komiyama (UC San Diego), Loren Frank (UC San Francisco), Ila Fiete (UT Austin), and David Reinkensmeyer (UC Irvine).


Day 1: Thursday, 18 January 2018

Allison Okamura (Stanford): Engineering Haptic Illusions. Okamura kicked off Day 1 with the story of the man who lost his body: Ian Waterman, who at 19 caught a virus that destroyed half of his nervous system and robbed him of his proprioception, leaving him unable to sense the relative positions of his limbs in space and whether or not they are in motion. Experiences such as Waterman's demonstrate how important the sense of touch is in our everyday lives. In the Charm Lab at Stanford, Dr. Okamura develops a variety of haptic devices that find use in rehabilitation robotics, brain-computer interfaces (BCIs), and robot-assisted surgery. For the latter, she has incorporated kinesthetic and haptic feedback into the daVinci surgical system, which was shown to greatly reduce error in teleoperated surgery. Her technology is also featured in the daVinci trainer, a surgery simulator that lets surgeons familiarize themselves with the daVinci system. Dr. Okamura is also experimenting with wearable haptic devices, which she envisions being used by stroke patients during physical therapy.

Thomas Mohren (UWIN Graduate Fellow): Sparse sensing by arrays of wing mechanosensors for insect flight control. Working at the intersection of neuroscience and robotics, Mohren seeks to understand how insect mechanosensors work. In flight control specifically, insects need to combine information from distributed sensors to assess rotational movement. Mohren combines computational modeling with machine learning to decode flapping and rotational movements from sparse mechanosensor activity.

Phil Mardoum (UWIN Graduate Fellow): Synaptic specialization and convergence of visual channels in the retina. By exposing zebrafish to lights of different wavelengths while disrupting cell signaling in their retina, Mardoum aims to shed light* on the different receptor mechanisms involved in the processing of rod and cone inputs.

Claire Rusch (UWIN Graduate Fellow): Visual learning and processing in the honeybee, Apis mellifera. Rusch asks how honeybees are capable of complex learning and behavior with so few neurons in their brains (1 million neurons total in the honeybee brain, 170,000 in the mushroom body, the center of learning/memory). She records activity from the honeybee brain while the bee is discriminating between blue squares and green circles in VR, and assesses how learning changes neural activity.


David Caldwell (UWIN/BDGN graduate fellow): Engineering direct cortical stimulation in humans. Caldwell’s goal is to enhance neural connectivity through electrical cortical stimulation. He looks at the behavioral and neural effects of cortical stimulation in humans, working with patients implanted with electrocorticographic (ECoG) grids in preparation for epilepsy surgery.

Jesse Resnick (Computational Neuroscience Training Grant graduate fellow): Simulating axon health's impacts in the setting of cochlear implants. Even with cochlear implants, some hearing tasks remain challenging for people who have lost their hearing: localizing the source of a sound and recognizing speech in noisy environments. Both tasks require assessing the difference in a sound's arrival time between the two ears (the interaural time difference, ITD). By building a computational model of demyelination in auditory neurons, Resnick is working to assess whether variability in demyelination severity contributes to differences in ITD detection ability.
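The ITD cue at the heart of this work can be illustrated with a minimal sketch (my own toy example, not Resnick's model): the classic way to estimate an ITD from two ear signals is to find the cross-correlation lag that best aligns them.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (ITD, in seconds):
    the cross-correlation lag that best aligns the two ear signals.
    A positive value means the right-ear signal lags the left."""
    corr = np.correlate(right, left, mode="full")
    lags = np.arange(-len(left) + 1, len(right))
    return lags[np.argmax(corr)] / fs

# Toy example: a 500 Hz tone reaching the right ear 0.3 ms later,
# roughly the ITD of a sound source well off to the left.
fs = 100_000                            # sample rate (Hz)
t = np.arange(0, 0.02, 1 / fs)          # 20 ms of signal
tone = np.sin(2 * np.pi * 500 * t)
delay = 30                              # 0.3 ms at 100 kHz
left = tone
right = np.concatenate([np.zeros(delay), tone[:-delay]])

print(estimate_itd(left, right, fs))    # recovers the 0.3 ms delay
```

Healthy auditory neurons preserve spike timing well enough to resolve ITDs of tens of microseconds; demyelination adds timing jitter, which is what degrades this computation.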

Adree Songco-Aguas (Computational Neuroscience Training Grant undergraduate fellow): Rod-cone flicker cancellation: retinal processing and perception in intermediate light. The rods and cones in our retina make vision seamless across day and night. Songco-Aguas is interested in the intermediate light levels at which both operate, assessing how retinal ganglion cells respond when photoreceptors are stimulated at those levels.

Abhishek De (Computational Neuroscience Training Grant graduate fellow): Spatiochromatic integration by V1 double opponent neurons. De addresses the question of how color is processed spatially: how do neurons in visual cortical area V1 combine color signals across their receptive fields? He records extracellularly from V1 neurons in primates to find out.

Day 2: Friday, 19 January 2018

Day 2 started with breakfast and a lively discussion of ethics in neuroscience by a panel of UW and visiting faculty. The panel began with effective collaboration between theorists and experimentalists, particularly as it pertains to authorship on publications. The session then shifted to our responsibilities as scientists when presenting results to the public, and how to balance highlighting the novelty and broad interest of research against realism about the data, especially given the pressure to produce exciting results to secure funding.

Sheri Mizumori (Professor, UW Department of Psychology): Behavioral Implementation of Mnemonic Processing. Mizumori addresses the question of how the hippocampus influences behavior; in other words, how memory drives behavioral choice. During her lecture, she spoke about possible connectivity routes in the brain related to the question, and discussed her research showing the lateral habenula's (LHb) role in reinforcement learning. Her research provides evidence that the hippocampus and LHb are part of the same memory-driven system, linked to a theta-generating network, and that the LHb is involved in the same flexible behaviors as the hippocampus.

Azadeh Yazdan (Washington Research Foundation Innovation Assistant Professor of Neuroengineering, UW Departments of Bioengineering and Electrical Engineering): Optogenetic stimulation leads to connectivity changes across sensorimotor cortex in non-human primates. One focus of Yazdan’s lab is the development of efficient stimulation-based therapies for stroke. She developed a large-scale interface for optogenetics in non-human primates, and is looking at how this technique can be used to change functional connectivity between somatosensory and motor cortices. Her future work in stimulation-based stroke therapies will focus on three components: mechanisms of stimulation-induced plasticity, stroke studies in non-human primates, and large-scale interfaces.


Ila Fiete (Associate Professor, Department of Neuroscience, UT Austin): Emergence of dynamically reconfigurable hippocampal responses by learning to perform probabilistic spatial reasoning. Fiete’s goal is to learn how the brain computes and to better understand neural coding and dynamics. She asks how neural computation unfolds over time, and how neural responses underlie the computations that are performed. One difficult problem her lab considers is how to simultaneously know where you are in the environment, and also build a map of where you are. Her lab trained a neural network to solve these problems, to report an accurate estimation of position in complex tasks.

David Reinkensmeyer (Professor, Departments of Mechanical and Aerospace Engineering, Anatomy and Neurobiology, and Biomedical Engineering, UC Irvine): Robotic-assisted movement training after stroke: Why does it work and how can it be made to work better? Robot-assisted therapy can provide improvements to stroke patients, but outcomes vary widely. By what mechanisms of plasticity or motor learning does robot-assisted therapy work? To answer this question, Reinkensmeyer's lab is building computational models of recovery after stroke, testing and refining them with data from patients receiving robotic-assisted therapy. During his lecture, he showed examples of various types of robotic-assisted therapy, including rehabilitation devices available to the public.

Gabrielle Gutierrez (UWIN postdoctoral fellow, UW Applied Mathematics Department): Info in a bottleneck. Gutierrez asks how signal processing in the retina preserves image information. In the retina, many bipolar cells converge onto one retinal ganglion cell (RGC). By examining the response functions between bipolar cells and RGCs, she found that the convergence of many bipolar cells onto a single RGC helps the circuit cope with noise.
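The intuition behind convergence as a noise remedy can be sketched with a toy simulation (an illustrative assumption of linear pooling, not Gutierrez's actual model): if each bipolar cell carries the same signal plus independent noise, an RGC that averages N inputs sees the noise standard deviation shrink by a factor of √N.

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 1.0        # common visual input shared by all bipolar cells
noise_sd = 0.5      # independent noise in each bipolar cell
n_trials = 100_000

for n_cells in (1, 4, 16):
    # Each trial: n_cells noisy bipolar responses pooled by one RGC.
    bipolar = signal + noise_sd * rng.standard_normal((n_trials, n_cells))
    rgc = bipolar.mean(axis=1)   # simple linear-pooling assumption
    # Residual noise is ~ noise_sd / sqrt(n_cells): 0.5, 0.25, 0.125
    print(n_cells, round(rgc.std(), 3))
```

In the real retina the bipolar-to-RGC transformation is nonlinear, which is part of what makes the question interesting; the sketch captures only the averaging intuition.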

Michael Beyeler (UWIN postdoctoral fellow, UW Psychology Department & the UW eScience Institute): Modeling the perceptual experience of retinal prosthesis patients. Retinal prostheses can restore some sight to individuals who have lost vision to conditions such as macular degeneration and retinitis pigmentosa. But what do people with these implants actually see? Visual perception with a retinal prosthesis is highly distorted relative to normal vision. Beyeler is developing a computational model that predicts what implant users will see, using data from patients with retinal prostheses. His ultimate goal is to use the model to improve and optimize the stimulation protocols for these devices, and with them the quality of prosthetic vision.

Loren Frank (Professor, Kavli Institute for Fundamental Neuroscience, Department of Physiology, UC San Francisco): Neural substrates of prospection. One role memories play is in decision-making. Frank discussed his lab's work using large-scale recording techniques to record activity in the hippocampus and other brain areas, relating patterns of activity to the basic cognitive functions of the structure. He shared research showing that neural activity in the hippocampus represents not just past locations but also possible future paths through space, and that this activity guides future behavior.

Amy Orsborn (Clare Boothe Luce Assistant Professor, UW Electrical Engineering & Bioengineering Departments): Interfaces to monitor and manipulate large-scale neural circuits in primates. Brain-machine interfaces (BMIs) can restore motor abilities; individuals who are paralyzed or missing a limb can control robotic limbs through motor cortex activity in their brain. Orsborn studies motor learning in order to improve BMIs. She discussed research working to improve the algorithms used to map neural activity onto control of robotic devices in BMIs. Orsborn also talked about tools she is developing to study neural networks across multiple spatial scales.

Nino Ramirez (Professor, UW Neurological Surgery Department, Director, Center for Integrative Brain Research, Seattle Children’s Research Institute): Dynamic Mechanisms underlying rhythm generation in cortical and brainstem microcircuits. Ramirez discussed microcircuits in the brain that establish rhythms for breathing, and specifically presented work on the role of inhibition and excitation in breathing mechanisms. Understanding these mechanisms has implications for understanding conditions associated with disordered respiratory rhythms.

Takaki Komiyama (Associate Professor of Neurobiology and Neurosciences, UC San Diego): Circuit basis for behavioral flexibility. Komiyama studies how circuits of neurons enable behavioral flexibility, noting how remarkably flexible the brain and our behavior are. He addresses the questions of which circuits are involved in motor learning over many repetitions of practice, and whether the relationship between brain activity and movement is stable. Komiyama uses wide-field calcium imaging to assess how brain activity changes with motor learning.

The full schedule can be found here.

*I dedicate this awful pun to Tom Daniel. I tried my best! To all others: I sincerely apologize.
