@alexrockhill
Created September 10, 2022 06:13
GSOC 2022 Time-Frequency Visualization Final Report

For my GSoC this year, I made a graphical user interface (GUI) to view estimates of the time course of brain activity from non-invasive electroencephalography (EEG) and magnetoencephalography (MEG) recordings. Although the main pull request (PR) is ready, it hasn't yet been merged (mne-tools/mne-python#10920); I am very happy with the end result and will see it through to the finish in the very near future. Since this was my second GSoC, I took on a project more central to the function of MNE-Python, viewing estimates of brain activity, than my previous GSoC, which extended MNE-Python to support intracranial electrophysiology. Because of that centrality, I hope this upgrade in functionality empowers many users. For the same reason, I tried hard to keep the development stable and exemplary, and I'm happy with the result, although a few PRs remain outstanding (see below).

The first task I took on, indirectly related to this project, was implementing a widget backend that works well with both pyvistaqt and notebook rendering. There was previously some backend code for this, but not all of the functionality existed in the notebook backend and, more importantly, it was in real need of a top-to-bottom rewrite: it had been built up like a jalopy, where the first few additions worked just fine but the structure couldn't support adding new widget components (each additional widget would have had to be reimplemented more than five times across pyvistaqt and notebook, once for each kind of layout it appeared in, which was impractical). This took a couple of weeks of the GSoC but was a very worthwhile PR for the future of GUIs in MNE-Python (mne-tools/mne-python#10913).
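
To illustrate the kind of abstraction this PR enables, here is a minimal sketch of the pattern, with hypothetical class and method names (MNE-Python's actual private classes are organized differently): GUI code is written once against an abstract widget interface, and each rendering backend implements that interface exactly once, so adding a new widget type means one method per backend rather than one per layout.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the backend-abstraction pattern, not MNE-Python's
# actual private classes: one widget-creation API, one implementation per
# rendering backend (pyvistaqt/Qt and notebook/ipywidgets).
class AbstractWidgetBackend(ABC):
    @abstractmethod
    def add_slider(self, name, callback, rng):
        """Create a slider and return a backend-specific handle."""

    @abstractmethod
    def add_button(self, name, callback):
        """Create a push button and return a backend-specific handle."""

class QtBackend(AbstractWidgetBackend):  # used with pyvistaqt
    def add_slider(self, name, callback, rng):
        from qtpy.QtWidgets import QSlider
        slider = QSlider()
        slider.setRange(*rng)
        slider.valueChanged.connect(callback)
        return slider

    def add_button(self, name, callback):
        from qtpy.QtWidgets import QPushButton
        button = QPushButton(name)
        button.clicked.connect(lambda checked=False: callback())
        return button

class NotebookBackend(AbstractWidgetBackend):  # used in Jupyter notebooks
    def add_slider(self, name, callback, rng):
        from ipywidgets import IntSlider
        slider = IntSlider(description=name, min=rng[0], max=rng[1])
        slider.observe(lambda change: callback(change["new"]), names="value")
        return slider

    def add_button(self, name, callback):
        from ipywidgets import Button
        button = Button(description=name)
        button.on_click(lambda _button: callback())
        return button

# GUI code is then written once against the abstract interface:
def build_controls(backend):
    backend.add_slider("slice", callback=print, rng=(0, 255))
    backend.add_button("reset", callback=lambda: print("reset"))
```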

A second side task, also indirectly related to the main project, was to document which application programming interface (API) entries in the MNE-Python project are used in the executable examples (sphinx-gallery/sphinx-gallery#983, with sphinx-gallery/sphinx-gallery#997 and sphinx-gallery/sphinx-gallery#1001 to clean up leftover issues). This task was inspired by trying to understand the source estimation techniques for this project while seeing them spread out among different examples. In MNE-Python, as is typical for this kind of software project, the code and prose are written at different times and by different people, so each example is a product of the state of the software and the authors' knowledge and involvement at that particular time. Since the software and the authors change over time, approaches, terminology and perspectives can differ across examples of similar things within a project. In an effort to harmonize that, I made graphs of which functions in a module (the beamformer module, for instance) are used in which examples. This will hopefully help developers harmonize the examples in a way that makes sense at the level of modules. And since it was added to Sphinx-Gallery, I hope it has a broad impact beyond MNE-Python.
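
To give a flavor of the idea (this is not Sphinx-Gallery's actual implementation, which hooks into its documentation machinery), here is a hedged sketch that maps dotted API references such as mne.beamformer.make_dics to the example scripts that use them:

```python
import ast
from pathlib import Path

# Illustrative only: collect dotted references like mne.beamformer.make_dics
# from example scripts, producing a mapping from API entries to the examples
# that exercise them.
def api_usage(examples_dir, module_name="mne.beamformer"):
    usage = {}  # dotted API name -> example scripts that reference it
    for script in Path(examples_dir).glob("**/*.py"):
        tree = ast.parse(script.read_text())
        for node in ast.walk(tree):
            if not isinstance(node, ast.Attribute):
                continue
            # walk down the attribute chain to rebuild the dotted name
            parts, obj = [], node
            while isinstance(obj, ast.Attribute):
                parts.append(obj.attr)
                obj = obj.value
            if isinstance(obj, ast.Name):
                dotted = ".".join(reversed(parts + [obj.id]))
                if dotted.startswith(module_name + "."):
                    entries = usage.setdefault(dotted, [])
                    if script.name not in entries:
                        entries.append(script.name)
    return usage

# e.g. api_usage("examples/") might map "mne.beamformer.make_dics"
# to the list of example files that call it
```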

Finally, the main task was really in two parts: 1) make a source estimate from MEG and EEG data resolved in both time and frequency, and 2) display that source estimate in a GUI. Before starting this project, honestly, I was under the impression that the first item was already solved, but I realized that this was in fact not the case at all (besides one approach from a 2017 GSoC by one of the mentors, which I only found out about halfway through the project as it unfortunately never made it into the MNE-Python documentation itself). I started out with the idea that linearly-constrained minimum-variance (LCMV) beamforming would be the way to go since it is time-resolved; this was the approach taken by one of the mentors in her GSoC. Since I didn't know much about beamformers coming in, I really had to learn what was going on in the code, and in doing so I realized just how similar LCMV and dynamic imaging of coherent sources (DICS) beamforming are for time-frequency estimation. For consistent terminology, we decided to stick with the DICS nomenclature, since it is associated with frequency resolution and the code handled complex-valued estimates, whereas the LCMV beamformer implementation would have had to discard the phase component (implemented in mne-tools/mne-python#11096). Much easier was to simply iterate over the frequencies of time-frequency epochs data and apply the minimum norm estimation methods to the data at each frequency independently (implemented in mne-tools/mne-python#11095; a conceptual sketch follows below).

For the second component, I was able to abstract out the part of the intracranial electrode localization GUI (from my previous GSoC) that browses through slices of a magnetic resonance (MR) image. This meant that I was most of the way to a functional GUI right out of the gate; I just had to overlay the estimate of brain activity over time and make a plot for the chosen vertex (a time-frequency spectrogram). These were not too difficult, except that the MR image had been reoriented from head-first supine to right-anterior-superior (RAS) and the GUI used Freesurfer surface RAS coordinates. This meant that the source estimate image also had to be reoriented to align with the MR image: to reach the MR image's surface RAS, the source estimate had to go from its own voxels to scanner RAS (the space shared by both images), then to MR voxels, and finally to MR surface RAS. These coordinate transforms were a bit complicated and took some debugging and time to figure out (see the affine-composition sketch below). Once everything was finally aligned and all the buttons and sliders worked, the tool turned out great and I think it is immediately useful for viewing this high-dimensional data.
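
Here is a conceptual sketch of the per-frequency minimum norm idea; the helper name and details are hypothetical and the merged API in mne-tools/mne-python#11095 may differ. The key observation is that the minimum norm inverse is a linear operator, so the real and imaginary parts of the complex time-frequency data can be projected separately and recombined:

```python
import numpy as np
import mne

# Conceptual sketch only (hypothetical helper, simplified details): apply
# the linear minimum norm inverse at each frequency of complex
# time-frequency epochs independently.
def apply_inverse_per_frequency(epochs_tfr, inverse_operator, lambda2=1.0 / 9.0):
    stcs = []  # stcs[i][j]: complex (n_sources, n_times) array, freq i, epoch j
    for freq_idx in range(len(epochs_tfr.freqs)):
        # EpochsTFR data has shape (n_epochs, n_channels, n_freqs, n_times)
        data = epochs_tfr.data[:, :, freq_idx, :]
        # real and imaginary parts are each linear in the source activity,
        # so each can be run through the standard inverse
        epochs_re = mne.EpochsArray(np.ascontiguousarray(data.real),
                                    epochs_tfr.info, tmin=epochs_tfr.times[0])
        epochs_im = mne.EpochsArray(np.ascontiguousarray(data.imag),
                                    epochs_tfr.info, tmin=epochs_tfr.times[0])
        stcs_re = mne.minimum_norm.apply_inverse_epochs(
            epochs_re, inverse_operator, lambda2, method="MNE")
        stcs_im = mne.minimum_norm.apply_inverse_epochs(
            epochs_im, inverse_operator, lambda2, method="MNE")
        # recombine into complex source-space data for this frequency
        stcs.append([stc_re.data + 1j * stc_im.data
                     for stc_re, stc_im in zip(stcs_re, stcs_im)])
    return stcs
```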
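
The coordinate logic can be expressed compactly as a composition of affines with nibabel. This is only a sketch of the transform chain, not the GUI's actual code, and the file names are placeholders:

```python
import numpy as np
import nibabel as nib

# Sketch of the affine chain: scanner RAS is the space shared by both
# images, each image's .affine maps its own voxels to scanner RAS, and
# Freesurfer's vox2ras-tkr maps MR voxels to surface RAS.
func_img = nib.load("source_estimate.nii")  # functional/source image
mr_img = nib.load("T1.mgz")                 # Freesurfer MR image

func_vox2scanner_ras = func_img.affine
scanner_ras2mr_vox = np.linalg.inv(mr_img.affine)
mr_vox2surface_ras = mr_img.header.get_vox2ras_tkr()

# compose: functional voxels -> scanner RAS -> MR voxels -> MR surface RAS
func_vox2surface_ras = (
    mr_vox2surface_ras @ scanner_ras2mr_vox @ func_vox2scanner_ras
)

# e.g., the surface RAS position of functional voxel (i, j, k)
ijk = np.array([32, 32, 16, 1.0])  # homogeneous coordinates
xyz_surface_ras = func_vox2surface_ras @ ijk
```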

As a footnote, there were some more PRs that were part of the process, mostly for developing the source estimation method:

- adding provenance to time-frequency epoch baseline correction (mne-tools/mne-python#10979)
- allowing time-frequency epochs to be decimated (mne-tools/mne-python#10940)
- allowing cross spectral density to be calculated from time-frequency epochs rather than re-computing the time-frequency decomposition on time epochs (mne-tools/mne-python#10986)
- allowing vector DICS source estimates (mne-tools/mne-python#10980)
- adding the Hilbert transform to the time-frequency methods comparison example (mne-tools/mne-python#11116; sketched below)
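
For that last item, here is a minimal sketch of the Hilbert approach on a toy signal (the 8-12 Hz band and the signal itself are illustrative assumptions): band-limit the signal, then take the analytic signal, whose magnitude is the amplitude envelope and whose angle is the instantaneous phase.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Toy example: a 10 Hz burst under a Gaussian envelope
sfreq = 1000.0
t = np.arange(0, 2, 1 / sfreq)
signal = np.sin(2 * np.pi * 10 * t) * np.exp(-((t - 1) ** 2) / 0.05)

b, a = butter(4, [8, 12], btype="bandpass", fs=sfreq)  # 8-12 Hz band
narrow = filtfilt(b, a, signal)

analytic = hilbert(narrow)   # complex analytic signal
envelope = np.abs(analytic)  # instantaneous amplitude
phase = np.angle(analytic)   # instantaneous phase
```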

The main outstanding issue is to use the backend abstraction in the source time course viewer, partially implemented here (mne-tools/mne-python#10803). This shouldn't take more than a day or two once the main GUI PR is merged. As a longer-term goal, it would be nice to get the GUI elements out of the mne.viz.Brain class and use this GUI for surface source estimates as well; since I made the project with this in mind, that should also not be too difficult. Lastly, it would be really nice to see an actual analysis that uses time-frequency-resolved source estimation, to show how the GUI can be helpful in understanding this high-dimensional data. Maybe the first two projects within MNE-Python will be taken over by the next GSoC-er; I hope to do the data analysis project myself at some point during my research. All in all, a very productive summer of open-source contribution and of learning about electrophysiology signal processing and physics methods.
