This year's COSYNE meeting (Feb 25–28) brought together leading theoretical and computational scientists working on fundamental problems in systems neuroscience. Keynote speakers included Xiao-Jing Wang (NYU), Paul Smolensky (Johns Hopkins), Mala Murthy (Princeton), Leslie Vosshall (Rockefeller), Greg DeAngelis (Rochester), Richard Mooney (Duke), Marisa Carrasco (NYU), and Blaise Agüera y Arcas (Google).
Last year's meeting sparked some controversy about declining acceptance rates, but the organizers decided to keep the size of the meeting the same this year. As a fun fact, they performed some statistical analyses on the submitted abstracts, effectively putting to rest the myth of "If it weren't for that darn 3rd reviewer!!". It turns out that excluding the least favorable review from the decision process left the number and selection of accepted abstracts largely unchanged. These results suggest that the 3rd reviewer does not categorically hate you more than all the others, but might instead simply be another independent reviewer... ;-)
The following is a (selective) collection of notes from talks at both the main meeting and the workshops. You can find more detailed information about the authors and their presentations at cosyne.org.
Main Meeting—Day 1
Xiao-Jing Wang, NYU. Building a large-scale brain model: a dynamics- and function-based approach.
This year's Cosyne must have been very busy for Xiao-Jing Wang: Not only did he kick off the meeting with his keynote talk, but he also gave three workshop talks and found his name on a total of seven accepted abstracts (see also Cosyne 2016 by the numbers).
In his first talk, Wang presented some recent findings from his large-scale circuit models of macaque and mouse cortex. By taking into account quantitative heterogeneity across cortical areas (such as that reported by Markov et al. (2012)), they found that large-scale models of cortex give rise to a hierarchy of timescales: early sensory areas respond rapidly to an external input and the response decays away immediately after stimulus offset (appropriate for sensory processing), whereas association areas higher in the hierarchy can integrate inputs over long timescales and exhibit persistent activity (suitable for decision-making and working memory). Their models could also explain how one might find different dynamical hierarchies based on the same underlying anatomy: Changing the type of sensory input from (e.g.) visual to somatosensory changes the time-course of processing, which in turn changes the role of different brain areas as well as their position in the functional hierarchy (see Chaudhuri et al. (2015)).
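The core intuition behind the timescale hierarchy can be sketched with a toy linear rate model (my own minimal construction, not Wang's actual large-scale simulation): the only difference between the two "areas" below is the local recurrent strength w, which sets the effective time constant tau/(1 − w).

```python
import numpy as np

# Toy sketch: two linear rate units differing only in recurrent
# strength w. The effective time constant is tau / (1 - w), so stronger
# local recurrence ("association area") decays more slowly after
# stimulus offset, while weak recurrence ("sensory area") decays fast.
def simulate(w, tau=0.02, dt=0.001, t_on=0.5, t_total=1.0):
    r, trace = 0.0, []
    for i in range(int(t_total / dt)):
        stim = 1.0 if i * dt < t_on else 0.0     # step stimulus
        r += dt / tau * (-(1.0 - w) * r + stim)  # Euler update
        trace.append(r)
    return np.array(trace)

sensory = simulate(w=0.2)  # weak recurrence: fast "sensory" area
assoc = simulate(w=0.9)    # strong recurrence: slow "association" area

off = 500                  # index of stimulus offset (t = 0.5 s)
# fraction of activity remaining 200 ms after stimulus offset
sensory_ret = sensory[off + 200] / sensory[off - 1]
assoc_ret = assoc[off + 200] / assoc[off - 1]
print(sensory_ret, assoc_ret)
```

With these (arbitrary) parameters the sensory unit has lost virtually all of its activity 200 ms after offset, while the association unit still retains roughly a third of it, i.e., persistent-like activity from recurrence alone.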
The second part of his talk focused on how disinhibition can act as a local-circuit motif to implement sensory gating. For instance, when you try to read a book in a noisy café, it is desirable for your brain to "gate in" visual information while "gating out" auditory inputs. Inhibitory neurons in a disinhibitory circuit motif may provide a biophysical mechanism to implement this gating.
Main Meeting—Day 2
Blaise Agüera y Arcas, Google. Engineering neural-ish systems. Agüera y Arcas gave a nice overview of what Google Research has been up to, which of course included their deep learning approach to Go. He pointed out that while these artificial systems are not usually designed to be biologically plausible in their implementation details, they are decidedly more "neural" than previous approaches to AI or feature-engineered machine learning. These "neural-ish" systems make it possible to draw comparisons between ANN hidden-layer activity and electrophysiological findings. However, it remains to be demonstrated whether these resemblances go beyond Gabor filters and sparse representations.
Noga Weiss-Mosheiff, Hebrew University, Israel. Efficient coding of a dynamic trajectory predicts nonuniform allocation of grid cells to modules. Weiss-Mosheiff used information theory to show that decoding position from grid cell activity is most efficient when the spacings of grid cell modules follow a near-geometric progression.
Sakyasingha Dasgupta, RIKEN, Japan. Efficient signal processing in random networks that generate variability. Dasgupta studied how chaotic activity arising from the stochastic nature of spiking neurons affects signal processing, highlighting a family of randomly connected networks that allow for efficient signal processing across the spectrum from deterministic to stochastic networks.
Itamar Landau, Hebrew University, Israel. Slow adaptation facilitates excitation-inhibition balance in the presence of structural heterogeneity. Landau showed how the excitation-inhibition balance in local microcircuits can be controlled with strong synapses and dominant inhibition, where the spike-frequency adaptation of local inhibitory neurons gives rise to a homeostatic mechanism that can be used to regulate neuronal activity.
Ashesh Dhawale, Harvard. Long-term stability in behaviorally relevant neural circuit dynamics. Dhawale and colleagues presented the Fast Automatic Spike Tracker (FAST), a toolchain that facilitates long-term recording of physiological signals over days to weeks as an animal moves through different behavioral states.
Main Meeting—Day 3
Stephanie Palmer, U Chicago. Understanding early vision through the lens of prediction.
The Efficient Coding Hypothesis (ECH) posits that sensory systems may employ statistical knowledge about their inputs to improve the efficiency of their encoding. As a result, the brain may be "tuned" to its most likely inputs.
Related to this idea is the Predictive Coding Hypothesis (PCH), which posits that sensory systems should only encode information they could not predict.
As a result, the brain may encode stimuli that are most "surprising" or novel in a statistical sense. Supporting evidence has been found for both ECH and PCH. But is this true even in the retina?
Studies have shown that sensory processing is quite complex and composite even in the retina, with retinal cells exhibiting motion anticipation, object motion sensitivity, response reversal, omitted stimulus responses, and lag normalization. Palmer argues that prediction is necessary even in the retina due to sensorimotor lags. Under the assumption of the PCH, the scientific question then becomes whether the retina carries as much information as possible about the future (in a Bayesian sense).
Using information-theoretic measures, Palmer investigated this question by looking at how much information retinal cells encode about the past and the future, and compared their information content to that of an LN model. In what she termed the "bottleneck problem", retinal cells should be concerned with compressing information about the past as much as possible, under the condition that as much information about the future as possible is preserved. She was able to show 1) that this is a hard problem, 2) that retinal cells operate suspiciously close to the optimum, and 3) that a simple LN model of the retina cannot capture this phenomenon (see Palmer et al., PNAS, 2015).
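The formalism behind this is the information bottleneck. In a generic form (my notation, not necessarily Palmer's exact formulation), the retina's compression problem can be written as:

```latex
\min_{p(z \mid x_{\text{past}})} \; I(Z; X_{\text{past}}) \;-\; \beta \, I(Z; X_{\text{future}})
```

Here Z is the compressed retinal representation, and the trade-off parameter β sets how many bits of predictive information about the future must be retained per bit spent encoding the past; sweeping β traces out the optimal bound that the recorded cells were compared against.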
J. Sacramento, U Bern, Switzerland. Bayesian multisensory integration by dendrites. Sacramento presented a compartmental model that is capable of optimally integrating (in the Bayesian sense) multimodal sensory cues by integrating dendritic voltages. Sensory information is represented by voltage, whereas reliability is given by the total excitatory and inhibitory conductances across the dendritic tree. An important ingredient of the model is conductance-based synapses.
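The optimality claim refers to the textbook Gaussian cue-combination rule, which the dendritic voltage (estimate) and total conductance (reliability) are proposed to implement. Here is the rule itself as a generic sketch (my notation, not the compartmental model):

```python
# Textbook Gaussian cue combination: the Bayes-optimal fused estimate
# weights each cue by its reliability (inverse variance), and the fused
# variance is smaller than either cue's alone.
def fuse(mu1, var1, mu2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2        # reliabilities
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)  # reliability-weighted mean
    var = 1.0 / (w1 + w2)                   # sharper than either cue
    return mu, var

# Cue 1 is four times more reliable, so it dominates the fused estimate
mu, var = fuse(0.0, 1.0, 2.0, 4.0)
print(mu, var)  # -> 0.4 0.8
```

In the model's terms, the voltage plays the role of mu and the summed synaptic conductance plays the role of 1/var, which is why conductance-based synapses are essential.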
D. Wilson, Max Planck Florida. Orientation selectivity and functional clustering of synaptic inputs in V1. Wilson studied orientation selectivity in V1 and found that synaptic tuning width influences somatic tuning width in nonintuitive ways. Dendritic organization was found to be most important: The spatial organization of dendritic integration shapes somatic specificity, but similar inputs on different dendritic branches can lead to diverging somatic selectivity.
Gregory DeAngelis, U Rochester. Neural computations underlying perception of depth from motion. Depth cues come in two flavors, either as pictorial cues (e.g., from relative object sizes, occlusions, texture gradients) or as geometric cues (e.g., when a scene is viewed from multiple vantage points). Conventionally, it has been assumed that extraretinal signals, such as an efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. However, DeAngelis presented some strong evidence that the visual system can infer eye rotations from global patterns of image motion without relying on extraretinal signals. As the eye rotates to maintain fixation on an object, perspective bends the object, creating a rocking motion that changes over time. By sensing these "dynamic perspective cues", the visual system is able to signal depth sign from motion parallax (i.e., to tell whether an object is near or far). Moreover, the depth-sign selectivity generated by dynamic perspective cues was generally consistent with that produced by smooth eye movements.
H. Lin, Janelia Farm Research Campus. Neural correlate of visual prey selection. Lin gave a fantastic talk detailing a large indoor flight arena they built to study the sophisticated hunting and flying maneuvers of dragonflies. By making rapid flights that predict and intercept the prey insect's trajectory, dragonflies capture nearly all of the insects they chase. The final premotor computations in dragonfly prey capture are implemented by a set of target-selective descending neurons (TSDNs) in the ventral nerve cord. This set is composed of 16 identified neurons that map visual space and are sensitive to the motion of small moving targets, yet also modulate wing muscle activity. In order to record neural activity from TSDNs while the dragonfly is behaving, Lin and colleagues developed custom electrodes connected to a wireless multichannel neural amplifier that dragonflies can carry during prey capture. Lin used this setup to reveal that a dragonfly's pursuit decision relies on prey distance, speed, and size. In addition, TSDN activity reliably predicts prey position and selection.
Byron Yu, CMU. Dimensionality reduction of neural population activity during sensorimotor control. Yu highlighted the need for internal models during sensorimotor control, which allow subjects to estimate current motor states by integrating outdated sensory feedback with motor commands. Using a brain-computer interface (BCI), Yu provided evidence that when subjects make errors, these errors might be due to a mismatch between the internal model and reality.
Alex Pouget, U Geneva, Switzerland. Confidence and certainty: Distinct probabilistic quantities for different goals. This year's second most productive PI (6 accepted abstracts) tried to give a mathematical account of probabilistic quantities such as confidence and certainty, but found himself amidst a heated discussion about semantics. He defined confidence in decision-making as the (objective) probability that a decision is correct, and then looked for cortical areas that could contain a population-level representation of such a quantity. However, the audience seemed to fundamentally disagree with his definition of confidence, highlighting that there might be a distinctively subjective nature to the term. Someone in the audience suggested that subjectivity might enter the equation via the perception of the "amount of evidence" a person is using to infer the probability that a decision is correct, but the speaker categorically rejected this suggestion without further explanation.
R. Memmesheimer, Columbia. Learning versatile computations with RNNs. Memmesheimer presented rigorous math that allows the adaptation of reservoir computing to spiking neural networks (SNNs), terming the resulting neural network model continuous signal coding SNNs (CSNs). Although the result is a versatile and general-purpose architecture, its relationship to liquid state machines remained unclear.
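For readers unfamiliar with the paradigm being adapted, here is a minimal classical rate-based echo-state reservoir with a trained linear readout (my own toy example with arbitrary sizes and task; the talk's contribution, the spiking counterpart, is not captured here):

```python
import numpy as np

# Classical reservoir computing: a fixed random recurrent network is
# driven by an input, and only a linear readout of its states is
# trained (here by ridge regression) to solve the task.
rng = np.random.default_rng(1)
N, T = 200, 1000
W = rng.standard_normal((N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1
w_in = rng.standard_normal(N)

u = np.sin(np.linspace(0.0, 20.0 * np.pi, T))    # input: slow sine wave
target = np.roll(u, 5)                           # toy task: 5-step memory

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])             # reservoir update
    states[t] = x

warm = 50                                        # discard initial transient
A, b = states[warm:], target[warm:]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ b)  # ridge readout
nrmse = np.sqrt(np.mean((A @ w_out - b) ** 2)) / np.std(b)
print(nrmse)
```

The recurrent weights W are never trained; all task-specific learning lives in w_out, which is what makes the paradigm attractive as a model of cortical computation.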
Stefano Fusi, Columbia. Weird neurons for flexible representations. Hiding behind this nondescript title is the idea that neural populations might employ high-dimensional representations of input data (encoding) in order to allow for efficient low-dimensional readout mechanisms (decoding). Although this idea is not necessarily novel, Fusi presented some insightful examples where this coding strategy might apply in the brain, such as in the prefrontal cortex (PFC) and the dentate gyrus (DG). For example, place cells in DG are thought to signal the allocentric position of an animal during a behavioral task. However, place cells make up only a fraction of cells in DG, whereas other cells in DG have less succinct tuning curves (Note: These are the "weird" cells). Fusi and colleagues were able to decode a rat's position during a behavioral task from DG neuronal activity using just a linear decoder. Surprisingly, this was possible even when all "nice-looking" place cells were excluded from the population, highlighting the functional importance of cells in DG that have nonintuitive tuning curves.
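The encode-high/decode-low logic can be illustrated with a classic toy example (my own construction, not Fusi's data): XOR labels are not linearly decodable from the raw 2-D input, but a random nonlinear "mixed selectivity" expansion makes a plain linear readout succeed.

```python
import numpy as np

# XOR is the canonical linearly non-separable problem. A population of
# random nonlinear "neurons" with mixed tuning to both input variables
# lifts it into a higher-dimensional space where a simple linear
# least-squares readout solves it exactly.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])                 # XOR as +/-1 labels

def readout_accuracy(F, y):
    F1 = np.hstack([F, np.ones((len(F), 1))])    # add bias column
    w, *_ = np.linalg.lstsq(F1, y, rcond=None)   # linear readout
    return np.mean(np.abs(F1 @ w - y) < 0.5)     # within-margin hits

acc_raw = readout_accuracy(X, y)                 # fails on raw 2-D input

# 50 random "neurons" nonlinearly mixing both input variables
H = np.tanh(X @ rng.standard_normal((2, 50)) + rng.standard_normal(50))
acc_mixed = readout_accuracy(H, y)               # perfect linear readout
print(acc_raw, acc_mixed)
```

In this view, the "weird", hard-to-interpret tuning curves in DG or PFC are not noise but the random mixed basis that makes downstream linear decoding possible.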
Tim Buschman, MIT & Princeton. Neural dynamics for flexible networks in cognitive control. Buschman elaborated on the idea of synchronous oscillations as a means of "tying together" different signals in stimulus-response association tasks. He showed that neurons in the lateral PFC show mixed selectivity for task-relevant parameters, and that synchronous oscillations might be one way in which these neuronal ensembles could be tied together (or "coupled") in a timely and task-dependent fashion.
Angela Yu, UCSD. Decision-making model of inhibitory control. Yu presented a model of the dorsal anterior cingulate cortex (dACC) for "stop-and-go"-type decision-making tasks, which suggests that subjects might repeatedly weigh the expected cost of "go" and "wait" responses in order to choose the least costly action. She also observed that subjects exhibit what might be "strategic waiting" in order to allow for more time to estimate associated costs more accurately.
Jochen Triesch, Frankfurt. Where's the noise? Cortical circuits are assumed to be very noisy, but Triesch reminded us that trial-to-trial responses can be astonishingly well predicted from spontaneous activity right before stimulus onset (i.e., the "state" of the network). Both spontaneous and evoked activity are shaped by recurrent connectivity, which in turn is shaped by synaptic plasticity. He thus went on to show how self-organizing recurrent networks (SORNs) can give rise to seemingly "noisy" population activity using spike-timing-dependent plasticity (STDP) and homeostasis. Although SORNs are inherently deterministic, they can emulate sampling-based inference, exhibit pseudo-randomized sequence replay, and reproduce synaptic weight distributions found throughout the cortex.
Omri Barak. Local dynamics in trained RNNs. Barak pointed out that the field of machine learning lacks theory, mostly because complex networks are hard to analyze analytically. He therefore proposes to start with rigorous analysis of very simple networks engaging in very simple tasks, which might provide the "building blocks" of understanding larger and more complicated networks. However, the networks he studied were so simple that it remained unclear how any gained insights could scale to networks of practical size or complexity.
J. Brown, Indiana. Hierarchical prediction errors in mPFC. Brown presented his hierarchical error representation (HER) model of decision-making and working memory, demonstrating how the frontal lobe might guide hierarchically structured, goal-oriented behavior in a hierarchical context switching task (the 12-AX task). He argued that the prediction error in mPFC can be used to train working memory representations in the lPFC, and that the job of top-down modulation is to predict the prediction error.
Richard Krauzlis, National Eye Institute. Interactions between cortical and subcortical signals. Krauzlis gave an engaging talk about functional pathways involving the superior colliculus (SC) and visual cortex. The SC is important for the internal monitoring of eye movements (such as saccades), with electrical stimulation of the SC leading to attention-like shifts in behavioral performance. Inactivating neurons in the SC leads to severe perceptual and behavioral deficits, but the functional pathways through which the SC acts on cortex remain poorly understood. His studies revealed that SC inactivation has no influence on neuronal firing, Fano factor, or spike correlation in areas such as MT, MST, LIP, and FEF. However, SC seems to affect cortical processing through connections to the fundus of the superior temporal area (FST).
Ruth Rosenholtz, MIT. Attention: What have we learned by studying peripheral vision. Rosenholtz argued that even though some visual search tasks might be challenging for subjects, these tasks are not conceptually difficult. For example, even though it might take subjects a relatively long time to spot an "L" in a field of "T"s, once the location of the oddball is pointed out to them, the difference is readily apparent. She went on to suggest that many of these perceptual effects reflect the limits of peripheral vision rather than attention, demonstrating (e.g.) that visual crowding has larger effects on performance than acuity, and that vision might be less "dynamic" than previously thought.
Unfortunately, I was unable to attend the Tuesday afternoon session due to my flight schedule, so I missed what I'm sure were high-caliber talks from people including Stefano Panzeri, Jeff Hawkins, James Bisley, Jim Cavanagh, and Tatyana Sharpee.