@rly
rly / trim_beans.py
Last active May 10, 2021 21:31
Script to trim the Frank Lab beans20190718.nwb file to 2 GB
import pynwb
import ndx_franklab_novela
with pynwb.NWBHDF5IO('beans20190718.nwb', 'r') as io:
    nwbfile = io.read()
    orig_eseries = nwbfile.acquisition['e-series']
    n_timestamps = 4000000  # / 20000 Hz sampling rate = 200 seconds
    data = orig_eseries.data[0:n_timestamps, :]
    ts = orig_eseries.timestamps[0:n_timestamps]
    electrodes = nwbfile.create_electrode_table_region(
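        # hedged completion of the truncated call above; the region and
        # description values are illustrative, not necessarily the gist's
        region=list(range(len(nwbfile.electrodes))),  # keep every electrode
        description='all electrodes',
    )
    # the trimmed data, timestamps, and region can then back a new
    # ElectricalSeries; the full gist presumably swaps it in for the original
    # and writes the smaller file, which is not shown here
    trimmed_eseries = pynwb.ecephys.ElectricalSeries(
        name='e-series',
        data=data,
        timestamps=ts,
        electrodes=electrodes,
    )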
@rly
rly / write_unit_marks.py
Last active May 13, 2021 15:05
Script to add marks per electrode per spike time per unit (doubly indexed column) to Units table
import datetime
import numpy as np
from pynwb import NWBFile, NWBHDF5IO, validate
nwbfile = NWBFile(
    session_description='session_description',
    identifier='identifier',
    session_start_time=datetime.datetime.now(datetime.timezone.utc),
)
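A hedged sketch of the part the preview cuts off: adding the doubly indexed column and one unit, assuming a PyNWB/HDMF version where add_unit_column accepts an integer index. The column name, values, and output file name are illustrative.
nwbfile.add_unit_column(
    name='marks',
    description='mark features per electrode per spike time',
    index=2,  # two levels of indexing: unit -> spike time -> electrode
)
nwbfile.add_unit(
    spike_times=[0.1, 0.2],
    marks=[
        [1.0, 2.0, 3.0, 4.0],  # one mark per electrode at spike time 0.1
        [5.0, 6.0, 7.0, 8.0],  # one mark per electrode at spike time 0.2
    ],
)
with NWBHDF5IO('units_with_marks.nwb', 'w') as io:  # illustrative file name
    io.write(nwbfile)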
@rly
rly / export_move_container.py
Last active January 21, 2022 17:23
Demonstration of how to move a container from one location to another after it has been written to disk.
from pynwb import NWBFile, NWBHDF5IO
from pynwb.ecephys import LFP, ElectricalSeries
import numpy as np
import datetime
# Create a test file
nwb = NWBFile(
    session_description='session_description',
    identifier='identifier',
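    # hedged completion of the constructor, mirroring the preview above
    session_start_time=datetime.datetime.now(datetime.timezone.utc),
)
# hedged sketch of the rest of the demo using the documented NWBHDF5IO.export
# pattern; file names are illustrative, and the actual container move is left
# as a placeholder because the preview does not show it
with NWBHDF5IO('test_move_container.nwb', 'w') as io:
    io.write(nwb)

with NWBHDF5IO('test_move_container.nwb', 'r') as read_io:
    read_nwbfile = read_io.read()
    # ... detach the container from its original group and add it to its new
    # location here (the full gist shows the exact mechanism) ...
    with NWBHDF5IO('test_move_container_moved.nwb', 'w') as export_io:
        export_io.export(src_io=read_io, nwbfile=read_nwbfile)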
@rly
rly / demo_insert_labmemberinfo.ipynb
Last active January 25, 2022 01:21
Demonstration of how to insert a row into the LabMemberInfo table in nwb_datajoint
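Since the notebook is not rendered here, a hedged sketch of the kind of insert it demonstrates; the import path and field names are assumptions about the nwb_datajoint schema, and the values are made up.
from nwb_datajoint.common import LabMember  # import path is an assumption

LabMember.LabMemberInfo.insert1(
    {
        'lab_member_name': 'Jane Doe',            # illustrative values
        'google_user_name': 'jane.doe@example.com',
        'datajoint_user_name': 'jdoe',
    },
    skip_duplicates=True,
)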
@rly
rly / example_iterator.py
Created June 20, 2022 16:36
Demonstration of how to trim and repack (change compression, chunking, etc.) of a very large HDF5 dataset in an NWB file
from pynwb import NWBHDF5IO
import pynwb
from hdmf.data_utils import GenericDataChunkIterator
from hdmf.backends.hdf5.h5_utils import H5DataIO
filepath = r"D:\GiocomoData_dandiset53\000053\sub-npI1\sub-npI1_ses-20190413_behavior+ecephys.nwb"
class H5DatasetDataChunkIterator(GenericDataChunkIterator):
"""A data chunk iterator that reads chunks over the 0th dimension of an HDF5 dataset up to a max length.
@rly
rly / print_namespaces.py
Created July 1, 2022 16:11
Demonstration of the loading of NWB namespaces in different contexts
import pynwb
from pynwb.spec import NWBDatasetSpec, NWBGroupSpec, NWBNamespace
from hdmf.spec import NamespaceCatalog
from hdmf.build import TypeMap
def print_namespace_versions(type_map):
"""Print the namespace name and version of all namespaces in the given type map."""
for ns_name in type_map.namespace_catalog.namespaces:
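        # hedged completion of the loop body; SpecNamespace exposing .version
        # is an assumption about the hdmf API
        ns = type_map.namespace_catalog.get_namespace(ns_name)
        print(ns_name, ns.version)

# e.g. print the namespaces registered in pynwb's global type map
print_namespace_versions(pynwb.get_type_map())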
@rly
rly / print_namespaces.ipynb
Last active July 6, 2022 06:51
Notebook demonstrating the loading of NWB namespaces in different contexts
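Since the notebook is not rendered here, a short hedged illustration of the idea: which namespaces pynwb knows about depends on context, e.g. the set grows when an extension package is imported.
import pynwb

print(pynwb.available_namespaces())  # namespaces registered by pynwb itself

import ndx_franklab_novela  # importing an extension typically registers its namespace

print(pynwb.available_namespaces())  # now also lists the extension namespace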
@rly
rly / get_neurodata_type_objects.py
Last active July 25, 2023 14:12
Function to return all PyNWB objects in the file that are instances of the neurodata_type class
from pynwb import NWBHDF5IO, NWBFile
def get_neurodata_type_objs_io(io: NWBHDF5IO, neurodata_type: str, namespace: str):
"""Return all PyNWB objects in the file that have a neurodata type from a namespace.
This works regardless of whether the extension was imported earlier in the python execution.
All objects that are instances of the class associated with the given neurodata_type in the
given namespace will be returned. This includes objects that are instances of a subclass.
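    """
    # hedged sketch of the body; that hdmf's TypeMap exposes
    # get_dt_container_cls and that NWBFile.objects reaches every object in
    # the file are assumptions
    nwbfile = io.read()
    cls = io.manager.type_map.get_dt_container_cls(neurodata_type, namespace)
    return [obj for obj in nwbfile.objects.values() if isinstance(obj, cls)]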
@rly
rly / get_num_chunks.ipynb
Created January 10, 2023 22:59
Get the number of chunks in an h5py.Dataset
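Since the notebook is not rendered here, a hedged sketch of two ways to get the count; the file name and dataset path are illustrative.
import math
import h5py

with h5py.File('example.nwb', 'r') as f:
    dset = f['acquisition/e-series/data']
    # chunks actually allocated on disk (low-level h5py API, requires a chunked dataset)
    print(dset.id.get_num_chunks())
    # total chunks implied by the shape and chunk shape, including any not yet written
    print(math.prod(math.ceil(s / c) for s, c in zip(dset.shape, dset.chunks)))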
@rly
rly / adjust_nwb.py
Last active February 24, 2023 23:31
Tailored Python script to replace particular values in an NWB file to conform with DANDI upload requirements
import glob
import h5py
import numpy as np
import argparse
import pynwb
STR_DTYPE = h5py.special_dtype(vlen=str)
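A hedged sketch of the kind of in-place edit such a script performs; the helper name, dataset path, and replacement value are illustrative, not the gist's.
def replace_scalar_string(h5_path, dataset_path, new_value):
    """Delete and recreate a scalar string dataset with a new value."""
    with h5py.File(h5_path, 'r+') as f:
        if dataset_path in f:
            del f[dataset_path]
        f.create_dataset(dataset_path, data=new_value, dtype=STR_DTYPE)

# e.g. fill in a missing subject sex so DANDI validation passes
# replace_scalar_string('session.nwb', 'general/subject/sex', 'U')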