Cross-Subject and Cross-Experiment Prediction of Image Rapid Serial Visual Presentation Events with Deep Learning

Abstract: We report the first comprehensive investigation of deep learning (DL) methods for target prediction in time-locked rapid serial visual presentation (RSVP) experiments. We report a >6% improvement for within-subject prediction by DL classifiers over the state-of-the-art hierarchical discriminant component analysis (HDCA) algorithm. For cross-subject and cross-experiment predictions, we show that DL classifiers are considerably more robust: they reduce the performance degradation of cross-subject prediction relative to within-subject performance from 13.5% (HDCA) to 4%, and that of cross-experiment prediction from 16% to 12%. We also propose a novel transfer-learning Deep Stacking Network architecture and show that it can transfer discriminant information from other subjects to achieve improved performance. Finally, we developed a new approach and a Matlab-based software tool to assist in uncovering and visualizing the robust, subject-specific discriminant DL EEG patterns for both target and non-target events. Our study suggests that deep learning has great potential to be a powerful tool for RSVP target prediction and for brain-computer interaction research in general.

Read More »

EEG Deep Features Visualization Tool

This software visualizes the feature representations of neural activity learned by a deep stacking network (DSN). We include a set of previously computed results as input files for demonstration.
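As a rough illustration of how such precomputed results might be inspected, the following Matlab sketch renders learned feature weights as a channel-by-time map. The file name, the variable W, and the sampling rate are assumptions for illustration only, not the actual contents or interface of the released package.

    % Minimal visualization sketch (illustrative assumptions throughout).
    % Assumes a result file holding W, a [nChannels x nTimepoints] matrix
    % of learned DSN feature weights.
    load('dsn_features.mat', 'W');
    fs = 256;                               % assumed EEG sampling rate (Hz)
    t  = (0:size(W, 2) - 1) / fs * 1000;    % time axis in ms
    imagesc(t, 1:size(W, 1), W);            % heat map: channels vs. time
    xlabel('Time after stimulus onset (ms)');
    ylabel('EEG channel');
    title('DSN feature weights');
    colorbar;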

Download Matlab-Based Software »

Deep Stacking Network For Transfer Learning

We propose a novel DSN architecture for transfer learning (DSNTF). The uniqueness of DSNTF is that it is essentially an augmentation of a subject-of-interest (SOI) DSN with a cross-subject DSN. To train a DSNTF, we first train a cross-subject DSN using data from other subjects. Then, the SOI DSN is trained using only data from the SOI. Before this training, however, the hidden units in each module are expanded to include those from the trained cross-subject DSN, and the corresponding RBM weights for the cross-subject hidden units are retained. Training thus involves learning the weights for the SOI hidden units as well as updating the weights for the cross-subject hidden units using the subject-specific data. Because the training of each DSNTF module aims to maximize classification performance, updating the weights of the cross-subject hidden units serves to transfer the discriminant information about the target event learned from other subjects into the individualized DSN designed for the SOI. A minimal sketch of one such module follows.
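The Matlab sketch below shows one DSNTF module under simplifying assumptions (closed-form upper-layer weights via the pseudoinverse, as is common for DSN modules, and plain gradient descent on the hidden-layer weights). Function and variable names are illustrative and the released code may differ in its training details.

    function [W, U] = train_dsntf_module(X, T, W_cross, nHiddenSOI, nEpochs, lr)
    % Illustrative sketch of training a single DSNTF module.
    %   X       - [nFeatures x nSamples] module input
    %   T       - [nClasses x nSamples] targets (e.g., one-hot labels)
    %   W_cross - hidden-unit weights copied from the trained cross-subject DSN
    % New SOI hidden units are appended to the cross-subject ones, and BOTH
    % sets of weights are then updated with SOI data; this updating is what
    % transfers the cross-subject discriminant information.
    W_soi = 0.1 * randn(nHiddenSOI, size(X, 1));  % new SOI hidden units
    W = [W_cross; W_soi];                         % augmented hidden layer
    for epoch = 1:nEpochs
        H = 1 ./ (1 + exp(-W * X));               % sigmoid hidden activations
        U = T * pinv(H);                          % closed-form upper weights
        E = U * H - T;                            % output error
        G = (U' * E) .* H .* (1 - H);             % backprop through sigmoid
        W = W - lr * (G * X');                    % update ALL hidden weights,
    end                                           % cross-subject ones included
    end

In a full DSN, the module output U*H would also be appended to the raw input to form the next module's input; the sketch omits that stacking step.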

Download Matlab Code »

Static Motion & CT2WS RSVP DataSet

The static motion experiment includes the presentation of color images of enemy soldiers/combatants (targets) versus background images (non-targets), such as village street scenes. In the CT2WS experiment, images are in grayscale: target images contain moving people and vehicle animations, whereas non-targets are other types of animation, such as plants or buildings. Each subject performed four sessions in the static motion experiment but only one session in the CT2WS experiment, with each session lasting about 15 minutes. There were a total of 16 and 15 subjects in the static motion and CT2WS experiments, respectively. We have divided these two datasets into training, testing, and validation sections for double cross-validation, as shown below (an illustrative splitting sketch follows this paragraph).
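For concreteness, a double (nested) cross-validation split over subjects might be set up as in the Matlab sketch below. The fold counts and sizes here are assumptions for illustration; the actual partition used for the two datasets is the one shown below.

    % Minimal sketch of a double (nested) cross-validation split over
    % subjects. Fold counts are illustrative, not the released partition.
    nSubjects = 16;                          % e.g., the static motion experiment
    subjects  = randperm(nSubjects);         % shuffle subject order
    nOuter    = 4;                           % outer folds -> held-out test sets
    foldSize  = nSubjects / nOuter;
    for k = 1:nOuter
        testIdx = subjects((k - 1) * foldSize + (1:foldSize)); % outer test fold
        devIdx  = setdiff(subjects, testIdx, 'stable');        % train + validation
        for j = 1:(nOuter - 1)               % inner loop: rotate validation fold
            valIdx   = devIdx((j - 1) * foldSize + (1:foldSize));
            trainIdx = setdiff(devIdx, valIdx, 'stable');
            % ... train on trainIdx, tune hyperparameters on valIdx ...
        end
        % ... retrain with the selected hyperparameters, evaluate on testIdx ...
    end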