
@DiGyt
Last active August 25, 2020 00:37

MNE-Python:
Improve Time Frequency Analysis in Source Space

My project for Google Summer of Code 2019

by Dirk Gütlin.

Hello and welcome to my project gist for Google Summer of Code 2019. On this page, you can find a summary of my GSoC project, as well as various links to different parts of the project.

What is 'Google Summer of Code'?

Google Summer of Code (GSoC) is a program launched by Google that helps students from all around the world get in touch with open-source software development. Each year, students submit project proposals to different open source organizations. If a project is accepted, the student can spend the summer realizing it, while receiving a stipend from Google to do so.

What's my project?

I worked on a project for the Python Software Foundation, which served as an umbrella organization for various Python-based open source software projects.

The actual project was realized for MNE-Python, a leading open source software package for neurophysiological data analysis. The somewhat clumsy title "Improve Time Frequency Analysis in Source Space" is my shortest possible description of the attempt to combine two main functionalities of MNE: the analysis of oscillatory differences in brain activity, which can be measured with MEG/EEG, and the projection of this measured data onto a digitized model of a subject's brain.
While MNE already provided functions that return time-frequency analyses at the source level, they were restricted to one specific method (a Morlet wavelet analysis). The goal of my project was to create an alternative solution: enabling time-frequency transforms to be calculated directly on existing SourceEstimate data containers, expanding the number of methods that can be used, and adding optionality and flexibility for operations in this domain.
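For intuition, a Morlet wavelet analysis boils down to convolving the signal with a complex, Gaussian-windowed sinusoid and taking the squared modulus as power. The following toy NumPy sketch (unrelated to MNE's actual implementation; all parameters are made up) localizes a 10 Hz burst in time:

```python
import numpy as np

sfreq = 200.0                          # sampling rate in Hz (made up)
f, n_cycles = 10.0, 7.0                # wavelet frequency and width
sigma_t = n_cycles / (2 * np.pi * f)   # temporal std of the Gaussian envelope

# complex Morlet wavelet: a sinusoid windowed by a Gaussian
t = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / sfreq)
wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))
wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # normalize to unit energy

# a 10 Hz burst between 0.5 s and 1.5 s, silence elsewhere
times = np.arange(0, 2, 1 / sfreq)
signal = np.where((times >= 0.5) & (times < 1.5),
                  np.sin(2 * np.pi * 10 * times), 0.0)

# convolving gives the complex TFR coefficients at 10 Hz;
# the squared modulus is the induced power
power_10hz = np.abs(np.convolve(signal, wavelet, mode='same')) ** 2
```

The resulting power trace is large during the burst and decays outside it, smeared by roughly the temporal width sigma_t of the wavelet.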

For more information on the project goals, you can check out my proposal on Google Docs.

How did the project go?

Obviously, there were easier and harder parts; some went nearly on the fly, others kept my head buzzing for weeks. If you're interested in the development process, check out the project blog I wrote on python-gsoc.org.

What did we achieve?

Nearly all options mentioned in my proposal could be realized and are now under review before being merged into the master branch.

We built support for the different classes of MNE source data containers (SourceEstimate, VolSourceEstimate, VectorSourceEstimate, and VolVectorSourceEstimate) to work with the three planned time-frequency transformation methods (Morlet wavelet, multitaper, and Stockwell transform). We also made sure that all additional options of these functions were covered, such as returning the inter-trial coherence across multiple trials of recorded brain data.
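For reference, inter-trial coherence measures how consistently the phase repeats across trials at each time-frequency point: it is the modulus of the across-trials mean of the unit-length phase vectors, reaching 1 for perfect phase locking and approaching 0 for random phases. A minimal NumPy illustration (not MNE's implementation; the fake TFR coefficients are constructed directly):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 100, 256

def itc(tfr_coeffs):
    """ITC from complex TFR coefficients of shape (n_trials, n_times)."""
    phase_vectors = tfr_coeffs / np.abs(tfr_coeffs)  # discard amplitude
    return np.abs(phase_vectors.mean(axis=0))        # length of the mean vector

# phase-locked trials: nearly the same phase every trial -> ITC close to 1
locked = np.exp(1j * (0.5 + 0.1 * rng.standard_normal((n_trials, n_times))))

# jittered trials: uniformly random phases -> ITC close to 0
jittered = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_trials, n_times)))
```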
In order to save memory and computing time, we built in support for lists and generator objects, as well as a "kernel trick" that allows computing the time-frequency transform on the small set of sensor-level data before projecting it to the source level, where the data would otherwise expand into a huge array.
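The kernel trick rests on the fact that the inverse projection (a matrix multiplication) and the time-frequency transform (a convolution along time) are both linear, so their order can be swapped. A toy demonstration of that commutation, with made-up shapes and wavelet:

```python
import numpy as np

rng = np.random.default_rng(42)
n_dipoles, n_sensors, n_times = 500, 10, 200

K = rng.standard_normal((n_dipoles, n_sensors))      # toy inverse kernel
sensor_data = rng.standard_normal((n_sensors, n_times))
wavelet = np.exp(2j * np.pi * 0.1 * np.arange(20))   # toy complex wavelet

def tfr(data):
    """A linear time-frequency transform: convolve each row with the wavelet."""
    return np.array([np.convolve(row, wavelet, mode='same') for row in data])

# naive order: project up to 500 dipoles first, then transform the big array
naive = tfr(K @ sensor_data)

# kernel trick: transform the 10 small sensor rows, then project
trick = K @ tfr(sensor_data)

assert np.allclose(naive, trick)  # same result, far less work in the transform
```

With realistic source spaces (thousands of dipoles versus a few hundred sensors), transforming before projecting saves the bulk of the computation.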
The transformed data can be stored in a new class called SourceTFR, which supports the different types of SourceEstimates as well as the different time-frequency methods. We also enabled visualization of such data by relying on the plotting functions that already exist for SourceEstimates. In my proposal, I aimed to implement a new type of plot that would allow switching between the computed frequencies manually through the plotting GUI. However, this could not yet be implemented, as it requires a lot of additional work, and will have to be done later on.

Here are the links to the Pull Requests related to this project:

Additionally, I spent a part of the project implementing an IO data reader for MNE. Find those Pull Requests here:



What can you actually do with this?

After this boring and incomprehensible review, the best way to demonstrate the newly gained functionality is to look at one of the explanatory examples I created:

Compute induced power and ITC for SourceEstimates using the Stockwell transform

Compute induced power and inter-trial coherence in source space, using a Stockwell time-frequency transform on a list of SourceEstimate objects.
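Before diving into the MNE example: the Stockwell transform is, roughly, a Fourier transform with a frequency-scaled Gaussian window, commonly computed in the frequency domain by shifting the signal's spectrum and windowing it before an inverse FFT. A toy single-channel version for intuition (MNE's tfr_stockwell is considerably more elaborate):

```python
import numpy as np

def stockwell(x):
    """Toy Stockwell transform; rows are frequency bins 1..n//2."""
    n = len(x)
    X = np.fft.fft(x)
    p = np.fft.fftfreq(n) * n                # signed spectral offsets
    S = np.empty((n // 2, n), dtype=complex)
    for i, m in enumerate(range(1, n // 2 + 1)):
        gauss = np.exp(-2 * np.pi ** 2 * p ** 2 / m ** 2)   # wider with m
        S[i] = np.fft.ifft(np.roll(X, -m) * gauss)  # localized, demodulated
    return S

# a pure sinusoid with 10 cycles: power should concentrate in bin 10
n = 128
x = np.cos(2 * np.pi * 10 * np.arange(n) / n)
S = stockwell(x)
peak_bin = np.argmax(np.abs(S).mean(axis=1)) + 1  # rows start at bin 1
```

Because the Gaussian window narrows in time for higher frequencies, the transform trades frequency resolution for time resolution as frequency increases, much like the Morlet wavelet approach.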

# Authors: Dirk Gütlin <dirk.guetlin@gmail.com>
#
# License: BSD (3-clause)

import os.path as op

import mne
from mne.datasets import somato
from mne.minimum_norm import make_inverse_operator, apply_inverse_epochs
from mne.time_frequency import tfr_stockwell


print(__doc__)

Prepare the data

data_path = somato.data_path()
fname_raw = op.join(data_path, 'sub-01', 'meg', 'sub-01_task-somato_meg.fif')
fname_fwd = op.join(data_path, 'derivatives', 'sub-01',
                    'sub-01_task-somato-fwd.fif')
subjects_dir = op.join(data_path, 'derivatives', 'freesurfer', 'subjects')

raw = mne.io.read_raw_fif(fname_raw)

# Set picks, use a single sensor type
picks = mne.pick_types(raw.info, meg='grad', exclude='bads')

# Read epochs
events = mne.find_events(raw)[:20]  # crop the events to save computation time
tmin, tmax = -0.2, 0.648  # use 256 samples to avoid Stockwell zero-padding
epochs = mne.Epochs(raw, events, event_id=1, tmin=tmin, tmax=tmax, picks=picks)

# estimate the noise covariance
noise_cov = mne.compute_covariance(epochs, tmax=0, method='shrunk', rank=None)

# Read forward operator
fwd = mne.read_forward_solution(fname_fwd)

# make an inverse operator
inverse_operator = make_inverse_operator(epochs.info, fwd, noise_cov,
                                         loose=0.2, depth=0.8)

Apply the inverse solution. If we set return_generator to True, the time-frequency transform will process each element of stcs sequentially, reducing the memory load. We can also reduce computation time drastically by setting delayed to True, which allows the time-frequency transform to be computed on the sensor-space data and then automatically projected into source space.

Note: If you call stc.data for any stc in stcs before doing the time-frequency transform, the full source time series will be computed. In this case, the time-frequency transform will be calculated for each dipole instead of each sensor (more time-consuming).

snr = 3.0
lambda2 = 1.0 / snr ** 2

stcs = apply_inverse_epochs(epochs, inverse_operator, lambda2=lambda2,
                            method="dSPM", pick_ori="normal", prepared=False,
                            return_generator=True, delayed=True)

Compute power and inter-trial coherence using a Stockwell transform. We will investigate the alpha band, from 8 Hz to 12 Hz.

fmin, fmax = 8, 12
power, itc = tfr_stockwell(stcs, fmin=fmin, fmax=fmax, width=1.2,
                           return_itc=True)

As a result, we get SourceTFR objects, a class for storing time-frequency data in source space. The underlying source type is automatically adopted from the SourceEstimate type used to create the data.

power

Plot the induced power and ITC, averaged over the first two Stockwell frequencies.

fmin, fmax = power.freqs[:2]
initial_time = 0.1

power.plot(fmin=fmin, fmax=fmax, subjects_dir=subjects_dir,
           subject='01', initial_time=initial_time)
itc.plot(fmin=fmin, fmax=fmax, subjects_dir=subjects_dir,
         subject='01', initial_time=initial_time)

(Figures: power plot and ITC plot)


Thanks

To finish off this GSoC project, I'd like to thank all my mentors and reviewers.

Special thanks are due to @thht and the Salzburg Brain Dynamics Lab for laying the foundations of this project and providing me with the project idea, @larsoner for always helping me out with tips and ideas when I got stuck, and most of all @massich and @agramfort, for again and again finding time to deal with my code and me as a person :).
