CNS 2016 Workshop > Jeju, South Korea, July 7


Il Memming Park, Stony Brook University
Ian Stevenson, University of Connecticut


New technologies for recording from large groups of neurons provide an exciting opportunity to understand how the nervous system implements the computations that underlie perception, cognition, and behavior. However, neural time series are complex and often high-dimensional, and statistical and computational methods for making sense of them remain a major bottleneck. This workshop will discuss statistical approaches to analyzing neural time series that can deepen our understanding of the neural code and computation. Scientific questions of interest include, but are not limited to:

  1. How can we incorporate neuroscience knowledge on the structure of the circuit or dynamics into neural data analysis?
  2. How can we make efficient use of noisy, limited data?
  3. What machine learning tools can be applied to nonlinear neural time series?


08:55-09:00 Welcome
09:00-09:45 Ian Stevenson
      A Sparse Common Input Model for Population Neural Activity
09:45-10:30 Shinsuke Koyama
      Fluctuation Scaling in Neural Spike Trains
10:30-10:50 Break
10:50-11:35 Sukbin Lim
      A Network Model with Plastic Recurrent Connectivity that Reproduces the Time Course of Visual Responses to Novel and Familiar Images in IT Cortex
11:35-13:30 Lunch
13:30-14:10 Justin Dauwels
      Modeling Time-Varying Functional Networks in Multichannel Neural Recordings

Abstract: We consider the problem of learning dynamic graphical models that capture time-evolving interactions among a set of data streams. Specifically, we propose a tuning-free Bayesian inference method. In the proposed model, sparsity-promoting priors are imposed on the graphical models at all time points, as well as on the differences between each pair of consecutive graphical models. This encourages adjacent sparse graphs to share similar structures. Efficient variational Bayes algorithms are then developed to learn the model. We apply the proposed method to multichannel neural recordings and track the dynamics of functional networks through epileptic seizures. Our results suggest that the functional networks are typically dense at the onset and end of seizures but sparse in between.
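The notion of a time-varying sparse functional network can be illustrated with a much simpler frequentist stand-in: partial correlations estimated in sliding windows and hard-thresholded into adjacency matrices. This is only a sketch of the general idea; the window length `win`, ridge term, and threshold `thresh` are arbitrary illustration parameters, whereas the talk's tuning-free Bayesian model avoids such tuning and additionally couples consecutive graphs through priors on their differences.

```python
import numpy as np

def sliding_window_networks(X, win, thresh=0.2):
    """Crude time-varying functional-network estimate from multichannel
    data X (time x channels): partial correlations in non-overlapping
    windows, thresholded into sparse boolean adjacency matrices.
    Illustration only -- not the talk's Bayesian method."""
    T, C = X.shape
    nets = []
    for t0 in range(0, T - win + 1, win):
        W = X[t0:t0 + win]
        cov = np.cov(W, rowvar=False) + 1e-3 * np.eye(C)  # ridge for numerical stability
        prec = np.linalg.inv(cov)                          # precision matrix
        d = np.sqrt(np.diag(prec))
        pcorr = -prec / np.outer(d, d)                     # partial correlation matrix
        np.fill_diagonal(pcorr, 0.0)
        nets.append(np.abs(pcorr) > thresh)                # edges = strong partial correlations
    return nets
```

Each returned matrix is a symmetric graph for one window; tracking how edge density changes across windows mimics, very roughly, the dense-sparse-dense pattern reported for seizures.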

14:10-14:50 Shin Ishii
      Machine Learning Approaches to Decoding Neural Activities

Abstract: Recent progress in imaging technologies allows us to observe the activity of neural systems with high throughput, and machine learning techniques are needed to extract information from such large-scale data. In this talk, I present two decoding studies recently carried out in our group. The first asks how a small animal recognizes its surrounding environment under natural conditions. Our collaborators performed calcium imaging of a thermosensory neuron while worms (C. elegans) freely performed thermotactic behaviors. We estimated the response function of the sensory neuron and found that it translates the worm's thermal environment into activity with high sensitivity. Our decoding analysis further showed that the worm's sensory response is consistent across individuals, and that the activity of this single sensory neuron suffices to reconstruct the thermal environment. The second study decodes a person's spatial attention from electroencephalography (EEG). For brain-machine interface applications in real-life settings, the need to collect calibration data from each individual can be a burden. To reduce this burden, we developed a subject-transfer decoding technique, in which a decoder for a target subject is constructed by transforming a decoder built from non-target subjects' data. This transferability was achieved by combining dictionary learning over multiple subjects' data with calibration based on resting-state brain activity. We found that the dictionary learning extracted EEG bases common across subjects, and that the resting-activity-based calibration compensated for individual variability, making those bases applicable to new target subjects.

14:50-15:20 Break
15:20-16:00 Memming Park
      Scalable Latent Trajectory Inference for Identifying Neural Computation
16:00-16:40 Taro Toyoizumi
      Untangling Brain-Wide Dynamics by Cross-Embedding

Abstract: Brain-wide interactions generating complex neural dynamics are considered crucial for emergent cognitive functions. However, the irreducible nature of nonlinear and high-dimensional dynamical interactions challenges conventional reductionist approaches. We introduce a model-free method, based on embedding theorems in nonlinear state-space reconstruction, that permits a simultaneous characterization of complexity in local dynamics, directed interactions between brain areas, and how the complexity is produced by the interactions. We demonstrate this method in large-scale electrophysiological recordings from awake and anesthetized monkeys. The method reveals a consciousness-related hierarchy of cortical areas, where dynamical complexity increases along with cross-area information flow. These findings demonstrate the advantages of the cross-embedding method in deciphering large-scale and heterogeneous neuronal systems, suggesting a crucial contribution by sensory-frontoparietal interactions to the emergence of complex brain dynamics during consciousness.
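The state-space reconstruction underlying this approach builds on time-delay embedding (Takens-style): one series' attractor is reconstructed from lagged copies of itself, and how well another series can be predicted from that embedding indicates a dynamical coupling. The sketch below is a deliberately simplified nearest-neighbor cross-mapping estimate in that spirit (closer to convergent cross mapping than to the talk's exact method); the parameters `dim`, `tau`, and `k` are assumed illustration values.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a scalar series x into dim-dimensional states."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def cross_map_skill(x, y, dim=3, tau=1, k=4):
    """Predict y from the delay embedding of x by distance-weighted
    k-nearest-neighbor averaging; the correlation between predictions
    and truth measures how much of y's dynamics is recoverable from x.
    Simplified cross-mapping sketch, not the talk's exact estimator."""
    E = delay_embed(x, dim, tau)
    yv = y[(dim - 1) * tau:]                  # align y with embedded states
    preds = np.empty(len(E))
    for i, e in enumerate(E):
        d = np.linalg.norm(E - e, axis=1)
        d[i] = np.inf                         # exclude the query point itself
        nn = np.argsort(d)[:k]                # k nearest neighbors on x's attractor
        w = np.exp(-d[nn] / (d[nn][0] + 1e-12))
        preds[i] = np.dot(w, yv[nn]) / w.sum()
    return np.corrcoef(preds, yv)[0, 1]
```

Applied pairwise across brain areas, such skill scores yield a directed-interaction map; the talk's method extends this style of analysis to characterize complexity and information flow across many recorded areas simultaneously.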

16:40-17:00 Closing Remarks