PRNI 2011

16-18 May 2011, Korea University, Seoul, Korea


IEEE International Workshop on Pattern Recognition in NeuroImaging

Keynote Speakers

Keynote speech 1

Title: Prediction, and Activation-Pattern Sparsity and Stability in fMRI Analysis

Speaker: Dr. Stephen Strother
Senior Scientist, Rotman Research Institute, Baycrest, and Professor of Medical Biophysics, University of Toronto, Canada

Abstract

While the so-called "mind-reading" literature has tended to emphasize high classification accuracy/prediction, it is rare that the goal does not also include a desire to interpret the extracted salient spatial voxels as activation patterns representing the neural basis of cognition. This makes explicit study of the potential tradeoffs between prediction, sparsity, and stability metrics for the associated salient spatial patterns of critical concern. However, these issues have often been ignored in the neuroimaging literature. I will briefly introduce the split-half subsampling approach we have dubbed NPAIRS (Strother et al., NI 2002; NI 2004; CompStat 2010), which we have been using for the last decade, and indicate its links to recent results in the theory of subsampling for stability and sparse variable selection. In particular, we have used NPAIRS to explore empirical tradeoffs between prediction, sparsity, and particular stability/reproducibility metrics for linear discriminant models in fMRI analysis. In general, we have found that there are at best limited benefits from using modern non-linear kernel techniques, that comprehensive model regularization is often much more important than the particular model used, and that prediction and various sparsity objectives (e.g., the elastic net) provide inconsistent objectives for reliable variable selection. Finally, our recent results show that in typical group fMRI analyses the i.i.d. assumption underlying cross-validation, bootstrap, and subsampling may be violated, making predictive modeling in fMRI even more challenging than it currently appears.

Work performed with Nathan Churchill and Grigori Yourganov, Rotman Research Institute/University of Toronto, and Peter Rasmussen, Kristoffer Madsen, and Lars Kai Hansen, Danish Research Centre for MR and the Technical University of Denmark.
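The split-half idea behind NPAIRS can be sketched in a few lines: fit a discriminant on each half of the data, score prediction by classifying each half with the map learned on the other, and score reproducibility as the correlation between the two spatial maps. The sketch below is an illustrative toy (synthetic data, a difference-of-means discriminant standing in for a regularized LDA); all names and numbers are assumptions, not the NPAIRS software.

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_half, n_vox = 40, 500
signal = np.zeros(n_vox)
signal[:20] = 1.0                      # a small "active" pattern (toy)

def make_half():
    # simulate one half of the data: two brain states, label-dependent pattern
    y = rng.integers(0, 2, n_per_half)
    X = rng.normal(size=(n_per_half, n_vox)) + np.outer(y, signal)
    return X, y

def fit_direction(X, y):
    # difference-of-class-means direction (a heavily regularized discriminant)
    return X[y == 1].mean(0) - X[y == 0].mean(0)

(X1, y1), (X2, y2) = make_half(), make_half()
w1, w2 = fit_direction(X1, y1), fit_direction(X2, y2)

def accuracy(w, X, y):
    # classify by thresholding the projection onto the other half's map
    scores = X @ w
    return np.mean((scores > scores.mean()) == y)

# Prediction metric: cross-classification between the two halves
pred = 0.5 * (accuracy(w1, X2, y2) + accuracy(w2, X1, y1))

# Reproducibility metric: correlation of the two spatial maps
repro = np.corrcoef(w1, w2)[0, 1]
```

Sweeping a regularization parameter and plotting `pred` against `repro` gives the prediction/reproducibility curves the talk refers to.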

  
Keynote speech 2

Title: Enhancing Supervised Learning-Based Real-Time fMRI

Speaker: Dr. Stephen LaConte
Baylor College of Medicine, USA
Abstract
Supervised learning methods enable us to decode brain states from functional magnetic resonance imaging (fMRI) data; that is, to determine what the subject was "doing" during the experiment, e.g., receiving sensory input, effecting motor output, or otherwise internally focusing on a prescribed task or thought. Moreover, it is possible to apply these techniques to obtain real-time fMRI (rtfMRI) neurofeedback that is controlled by updated measurements of brain state. rtfMRI technology has exciting potential to enable an entirely new level of experimental flexibility and to facilitate learning and plasticity in rehabilitation and therapeutic applications. Breakthroughs will only be possible, though, through a growing research community that is willing to expand current real-time capabilities, share software, and develop experimental frameworks that serve as tangible examples of adaptive paradigms. Going beyond the limitations of conventional stimuli to harness the utility of feedback is an exciting new frontier for scientific discovery. This talk discusses the potential of rtfMRI to advance neuroimaging and focuses on current limitations and future challenges.
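The closed loop described above can be caricatured as: a decoder trained offline is applied to every incoming volume, and its output drives the feedback shown to the subject. The sketch below is a minimal toy of that loop, not the speaker's software; the pretrained weights, the simulated acquisition, and the tanh feedback mapping are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox = 100
w = rng.normal(size=n_vox)            # pretrained decoder weights (assumed)

def acquire_volume(t):
    # stand-in for a real-time acquisition interface: alternating task
    # blocks of 10 "TRs" that push activity along the decoder direction
    state = 1.0 if (t // 10) % 2 else -1.0
    return state * w / np.linalg.norm(w) + rng.normal(scale=0.5, size=n_vox)

feedback = []
for t in range(40):                   # 40 "TRs" of simulated scanning
    vol = acquire_volume(t)
    score = float(vol @ w)            # decoder output for this volume
    feedback.append(np.tanh(score))   # bounded feedback signal in [-1, 1]
```

In a real system the acquisition stub would be replaced by the scanner's real-time export, and the feedback value would update the subject's display each TR.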
   
Keynote speech 3

Title: Decoding Visual Representations in Human Brain Activity

Speaker: Dr. Yukiyasu Kamitani
Computational Neuroscience Laboratories, ATR, Japan

Abstract
Objective assessment of mental experience in terms of brain activity represents a major challenge in neuroscience. Despite its widespread use in human brain mapping, functional magnetic resonance imaging (fMRI) has been thought to lack the resolution to probe into putative neural representations of perceptual and behavioral features, which are often found in neural clusters smaller than the size of single fMRI voxels. As a consequence, the potential for reading out mental contents from human brain activity, or 'neural decoding', has not been fully explored. In this talk, I present our recent work on the decoding of fMRI signals based on machine learning-based analysis. I first show that visual features represented in 'subvoxel' neural structures can be decoded from ensemble fMRI responses, using a machine learning model ('decoder') trained on sample fMRI responses to visual features. Decoding of stimulus features is then extended to a method for 'neural mind-reading', which predicts a person's subjective state using a decoder trained with unambiguous stimulus presentation. Various applications of this approach will be presented, including an fMRI-based brain-machine interface. We next discuss how a multivoxel pattern can represent more information than the sum of individual voxels, and how an effective set of voxels for decoding can be selected from all available ones. Finally, a modular decoding approach is presented in which a wide variety of contents can be predicted by combining the outputs of multiple modular decoders. I demonstrate an example of visual image reconstruction where binary 10 x 10-pixel images (2^100 possible states) can be accurately reconstructed from single-trial or single-volume fMRI signals, using a small amount of training data. Our approach thus provides an effective means to read out complex mental states from brain activity while discovering information representation in multi-voxel patterns.
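The basic decoder setup the abstract describes, training a classifier on labelled fMRI responses and predicting the stimulus label of new responses, can be sketched as below. The toy data and the nearest-centroid classifier are illustrative assumptions, not the analysis used in the talk (which uses linear classifiers on real multivoxel patterns).

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_train, n_test = 200, 60, 30
n_classes = 4                         # e.g. four stimulus orientations (toy)

# Assume each stimulus class evokes a distinct multivoxel pattern
prototypes = rng.normal(size=(n_classes, n_vox))

# Training set: balanced labels, noisy class patterns
y_train = rng.permutation(np.repeat(np.arange(n_classes), n_train // n_classes))
X_train = prototypes[y_train] + rng.normal(scale=1.5, size=(n_train, n_vox))

# Test set: new noisy responses to random stimuli
y_test = rng.integers(0, n_classes, n_test)
X_test = prototypes[y_test] + rng.normal(scale=1.5, size=(n_test, n_vox))

# "Training" the decoder: one mean pattern (centroid) per class
centroids = np.stack([X_train[y_train == k].mean(0) for k in range(n_classes)])

# Decoding: assign each test pattern to the nearest class centroid
d = ((X_test[:, None, :] - centroids[None]) ** 2).sum(-1)
y_pred = d.argmin(1)
acc = np.mean(y_pred == y_test)
```

The modular-decoding idea extends this by training several such decoders (e.g., one per image patch) and combining their outputs to reconstruct a full stimulus.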
  
E-mail comments to prni2011@image.korea.ac.kr
Last update: Mar. 9, 2011