
Mind-heart-reader

An experiment with EEG Signals


Introduction

Project Presentation

Project Goals and Research Questions

We used the IAPS [1] and IADS [2] datasets to compare the patterns in brain waves in response to visual and aural stimuli. The key question we had was whether users' emotional response to a certain stimulus differs based on the medium of that stimulus (in this case, images vs. sounds). For consistency, we used 25 files from IADS and 25 files from IAPS that correspond to the same themes for testing.

Potential Applications

* Targeted Advertising: focus on showing images that can elicit the desired behavior from the user based on their history. The same goes for stimulus selection: if a user is known to respond better to sounds than to images, then advertisements should have a strong sound element. Sound or image can be emphasized or de-emphasized, and the content can be made more adaptive to the user's responses without any direct feedback.
* Standardized Tests: by understanding users' EEG patterns, standardized psychological and personality tests could be improved by reducing the likelihood of a user lying to fool the test.
* Education: if one type of stimulus is proven to be better than the other at eliciting certain types of responses (e.g. attention, appreciation), classroom instruction methods can be redesigned or diversified to exploit these results for more effective teaching.
* Cinema/Filmmaking/Media: the results can better inform filmmakers about which elements of a stimulus might be more effective in which cinematographic contexts.
* Marketing/Advertising: by identifying which type of medium produces the stronger brainwave response, marketers can learn which medium best captures the audience's attention. Comparing the peaks for each medium can help determine which allows for optimal interest and focus.

Interviews and Ideation Plan

We built a user interface that displays images and plays sounds, and asks the subject to rate each in one of three categories: Like, Dislike, or Neutral. We adapted the interview methodology used in the IAPS and IADS studies, which was paper based and asked users to rate the images and sounds in a booklet; it closely follows the Self-Assessment Manikin (SAM, Lang 1980) methodology. Parameters such as how long the image/sound is presented to the user are to be determined.

Components

Dataset:

The IAPS and IADS datasets are standardized and have been used extensively in research. We aim to use subsets of each dataset that are equivalent to each other, selecting the images and sounds for each experiment carefully. We aim to use stimuli that are more extreme along the emotional dimensions the datasets provide: our plan is to pick files that are likely to elicit highly positive or highly negative emotion, with neutral files serving as a control for both. This will, hopefully, reduce the dimensionality of user responses and improve our chances of predicting those responses. As described above, instead of the paper booklets used in the original studies, subjects rate each image or sound as Like, Dislike, or Neutral in our user interface.

Hardware:

* Mindwave: we will be using the Mindwave device to capture brainwave data.
* HR Sensor: to capture heart rate data. This was planned, but we could not find an HR sensor that reports data once per second; the available devices only provided per-minute readings.

Software:

Python/Web: We are using a Python web app (with Flask) to conduct the experiments, analyze the data, and visualize the conclusions. This architecture allows us to combine UI elements with backend data processing. Also, because timing is critical, we need to timestamp user responses accurately to synchronize them with data inputs from the sensors (a minimal sketch of this follows the software list below). Using Python also allows us to read the Mindwave sensor in real time if possible; we have extended the indra-client module to allow us (and anyone else) to run it in parallel with our application. One downside to this is that we won't be able to use NeuroSky metrics, which need to be post-calculated.

Scikit-Learn: Python library for machine learning

NeuroSky Apps: to capture and calculate mindwave data and metrics

Tableau: for visual analysis of the data.
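
To make the timing point above concrete, here is a minimal sketch of the timestamping idea. The `/rate` route, payload fields, and SQLite schema are illustrative assumptions, not the project's actual code: the point is that the server records its own clock for every rating, so responses can later be joined against the sensor stream recorded on the same machine.

```python
# Minimal sketch: a Flask endpoint that timestamps ratings server-side so they
# can be aligned with the EEG stream afterwards. Route name, payload fields,
# and schema are illustrative, not the project's actual code.
import sqlite3
import time

from flask import Flask, jsonify, request

app = Flask(__name__)
DB = "experiment.db"

def init_db():
    with sqlite3.connect(DB) as con:
        con.execute("""CREATE TABLE IF NOT EXISTS ratings (
                           subject_id TEXT, stimulus_id TEXT,
                           rating TEXT,            -- 'like' | 'dislike' | 'neutral'
                           shown_at REAL, rated_at REAL)""")

@app.route("/rate", methods=["POST"])
def rate():
    payload = request.get_json()
    # Server-side clock: the same clock that timestamps the sensor stream,
    # so ratings and EEG samples can be joined by time later.
    rated_at = time.time()
    with sqlite3.connect(DB) as con:
        con.execute("INSERT INTO ratings VALUES (?, ?, ?, ?, ?)",
                    (payload["subject_id"], payload["stimulus_id"],
                     payload["rating"], payload["shown_at"], rated_at))
    return jsonify(status="ok", rated_at=rated_at)

if __name__ == "__main__":
    init_db()
    app.run(debug=True)
```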

Implementation

Approach

* Build the technical backend to conduct the experiment and capture user responses.
* Conduct several experiments informally within the I School friends-and-family network to capture bio data and responses.
* Prepare bio data for analysis:
  * calculate Mindwave metrics and load them into the database

Image Selection:

The IAPS and IADS datasets use three metrics based on the Self-Assessment Manikin (SAM) methodology: Pleasure, Arousal, and Dominance. We used these three metrics to find the extreme images and sounds for the experiment, sampling the files that were 1.5 to 2 standard deviations away from the mean. We also inspected the files manually to ensure that the content is extreme. To pick neutral stimuli, we sampled files around the mean. We built scatter plots for the combinations of pleasure, arousal, and dominance to find the right set of files, and repeated this process to build a set for males and a set for females for each stimulus type (image and sound). In all, 4 sets were created and used in the experiments. A sketch of the sampling step appears below.
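
A minimal sketch of this sampling step, assuming the normative ratings are exported to a CSV with `pleasure`, `arousal`, and `dominance` columns (the file name and column names are illustrative, not the datasets' actual layout):

```python
# Sketch of the stimulus-sampling step: pick files whose normative ratings sit
# 1.5+ SD from the mean on some dimension, plus a near-mean neutral set.
# File name and column names are assumptions, not the real norms layout.
import pandas as pd

norms = pd.read_csv("iaps_norms.csv")   # one row per picture/sound
dims = ["pleasure", "arousal", "dominance"]

# Standardize each affective dimension so thresholds are in SD units.
z = (norms[dims] - norms[dims].mean()) / norms[dims].std()

extreme = norms[(z.abs() >= 1.5).any(axis=1)]   # candidates for the extreme sets
neutral = norms[(z.abs() <= 0.25).all(axis=1)]  # candidates near the mean on all dims

print(len(extreme), "extreme candidates,", len(neutral), "neutral candidates")
# The shortlists were then inspected manually before building the final sets.
```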

Experiment App

We built the Mind-Heart-Reader app and backend database to display the stimuli and capture user responses.

Experiments


The demo video shows a user signing up for the experiment and then completing it with the IAPS data set.

Because users were unfamiliar with the device, we helped them set up the Mindwave headset.

Users were shown images and asked to rate how they felt about each one as Like, Dislike, or Neutral.

Reactions varied based on what the user encountered.

Visualization
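
Tableau handled the visual analysis (see the Software section). For a quick programmatic look at the raw stream, a minimal matplotlib sketch such as the following could be used; the file path and column names are assumptions about the exported data, not the project's actual schema.

```python
# Quick programmatic look at the raw EEG stream (the real visual analysis was
# done in Tableau). Path and column names are assumed, not the actual schema.
import matplotlib.pyplot as plt
import pandas as pd

eeg = pd.read_csv("analysis/eeg_stream.csv")

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(eeg["timestamp"], eeg["alpha"], label="alpha")
ax.plot(eeg["timestamp"], eeg["beta"], label="beta")
ax.set_xlabel("time (s)")
ax.set_ylabel("band power")
ax.legend()
plt.tight_layout()
plt.show()
```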

Classification
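
As an illustration of how scikit-learn fits in here, a minimal classification sketch might look like the following. The features and labels are synthetic stand-ins; the real pipeline would use per-stimulus EEG features and the recorded Like/Dislike/Neutral ratings.

```python
# Illustrative only: a scikit-learn pipeline for predicting the Like/Dislike/
# Neutral rating from per-stimulus EEG features. The data below is a synthetic
# stand-in, not the project's actual feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                   # e.g. mean band powers per stimulus
y = rng.choice(["like", "dislike", "neutral"], size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```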

Evaluation
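
Under the same synthetic-data caveat, a sketch of the evaluation step: stratified cross-validation with a per-class report guards against a classifier that simply predicts the majority rating.

```python
# Sketch: cross-validated evaluation of a pipeline like the one above, with a
# per-class report so a majority-class classifier can't hide behind accuracy.
# Features and labels remain synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                   # synthetic stand-in features
y = rng.choice(["like", "dislike", "neutral"], size=200)

clf = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=cv)
print(classification_report(y, y_pred))
```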

Analysis & Results

More details are in the presentation.

Project Presentation

Raw data and Tableau analysis workbooks are available in the repository under the analysis folder.

Challenges

References

[1] Lang, P.J., Bradley, M.M., & Cuthbert, B.N. (2008). International affective picture system (IAPS): Affective ratings of pictures and instruction manual. Technical Report A-8. University of Florida, Gainesville, FL.

[2] Bradley, M. M., & Lang, P. J. (1999). International affective digitized sounds (IADS): Stimuli, instruction manual and affective ratings (Tech. Rep. No. B-2). Gainesville, FL: The Center for Research in Psychophysiology, University of Florida.

[3] Tableau

[4] Scikit-learn: Machine Learning in Python, Pedregosa et al., JMLR 12, pp. 2825-2830, 2011.