Setup Python Environment

Instructions to run the tutorial notebooks for the [Allen Brain Observatory dataset](http://observatory.brain-map.org/visualcoding/sdk/index).

Install Python from https://www.continuum.io/downloads.

Install git: https://git-scm.com/book/en/v2/Getting-Started-Installing-Git

(On Windows, use Git Bash as the shell to run the following commands.)

Download all the files:

git clone https://github.com/alleninstitute/AllenSDK/
cd AllenSDK/doc_template/examples/nb

For now the AllenSDK is only compatible with Python 2, so if you have Python 2 you can install it directly:

pip install allensdk
jupyter notebook

You can browse through the notebooks and run them cell by cell.

If you have Python 3, you can run the notebooks within a Python 2 conda environment:

conda create --name py2allen python=2.7 anaconda
activate py2allen (on Windows)
source activate py2allen (on Linux, Mac)
pip install allensdk
jupyter notebook

Once you are done with your work, you can deactivate the environment:

deactivate (on Windows)
source deactivate (on Linux, Mac)

Start Jupyter Notebooks

  • Get the notebooks:
  git clone https://github.com/valentina-s/AllenSDK

This creates a directory AllenSDK in the current folder. The notebooks are in the subfolder doc_template/examples/nb:

  cd AllenSDK/doc_template/examples/nb
  ls
  • Start the notebooks:
  jupyter notebook

The brain_observatory.ipynb notebook is a good place to start.

Download Experiment Data

When running the boc.get_ophys_experiment_data function, several files are created in boc/ophys_experiment_data/, each with the extension .nwb, i.e. the Neurodata Without Borders format.
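
For reference, here is a minimal sketch of how those files end up in the cache, assuming the BrainObservatoryCache is created with a manifest inside a boc/ directory (the manifest path and the choice of the first experiment are illustrative assumptions, not part of the original instructions):

from __future__ import print_function
from allensdk.core.brain_observatory_cache import BrainObservatoryCache

# The manifest file tells the cache where to store downloads;
# NWB files will land in boc/ophys_experiment_data/ (path is an assumption).
boc = BrainObservatoryCache(manifest_file='boc/manifest.json')

# Fetch the experiment metadata (a small download, no NWB files yet).
experiments = boc.get_ophys_experiments()
print(len(experiments))

# Downloading the data for one experiment creates an .nwb file in the cache.
data_set = boc.get_ophys_experiment_data(experiments[0]['id'])
print(data_set.get_metadata())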

These files can be treated as HDF5 files and read with any HDF5 reader (in Python, with the h5py package). They have a nested structure, so one needs to go deeper into the hierarchy to get to the data.
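
A minimal sketch of inspecting one of the downloaded files with h5py (the glob pattern is an assumption; it simply picks the first .nwb file in the cache directory used above):

from __future__ import print_function
import glob
import h5py

# Pick the first downloaded NWB file from the cache directory.
path = glob.glob('boc/ophys_experiment_data/*.nwb')[0]

with h5py.File(path, 'r') as f:
    print(list(f.keys()))  # top-level groups
    f.visit(print)         # walk the full nested hierarchy of groups and datasets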
