prototypes for UniverseCollection
{
"cells": [
{
"cell_type": "code",
"execution_count": 36,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import MDAnalysis as mda"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from mdsynthesis import Sim\n",
"import datreant.core as dtr"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Currently assuming:\n",
"\n",
"**trajectories**\n",
"* passed in as Universes \n",
"* *Later*: allow topol (one or multi)/traj; Sims if using Option 3\n",
"\n",
"**auxiliaries**\n",
"* already present in the universe\n",
"* pass in the list of common auxnames or update from common auxs across all universes after init\n",
"* *Later*: allow to pass in args to set up auxiliaries in init\n",
"* (don't really need to store aux_fields, unless we want to check when adding other simulations later they have all the expected auxs? - can just look for fields are common across all; could still pass in an initial list to check they all have the auxs we want)\n",
"\n",
"**data**\n",
"* pass in names/values as dict (as `{'data_field1': [val_for_univ_0, val_for_univ_1, ...], 'data_field_2': ..}` - values will be stored in the appropriate place\n",
"* OR could set prior to init for each universe + update data_fields list with fields common to all universes in Option 2 (and similarly in Option 3 when passing in Sims is allowed)\n",
"* *Later*: pass in just list of field names, initiate as None if not already existing?\n",
"* (again, don't really need to store data_fields, except to check when we initiate/when we add a new trajectory they've got the expected fields/set a default value if not - can just look for common fields?)"
]
},
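{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the metadata layout assumed above (the field names here are just placeholders, not part of any API): one list per field, with one value per universe, in the same order the universes are passed in."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# hypothetical example of the assumed metadata dict - one list per field, one value per universe\n",
"example_metadata = {'pull_rate': [0.01, 0.05], 'replicate': [1, 2]}\n",
"# e.g. (once the classes below are defined and the universes exist):\n",
"# UniverseCollection_container([univ_a, univ_b], names=['a', 'b'], metadata=example_metadata)"
]
},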
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Common stuff**"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"class UniverseCollection(object):\n",
" def __init__(self, univs, **kwargs):\n",
" # keep the record of our input order (change later to change index order)\n",
" self.ordered_list = kwargs.get('names', [str(i) for i in range(len(univs))])\n",
" self.aux_fields = kwargs.get('auxs', [])\n",
" \n",
" def update_data_fields(self):\n",
" self.data_fields = self.find_data_fields()\n",
" \n",
" def update_aux_fields(self):\n",
" self.aux_fields = self.find_aux_fields() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Option 1**: Storing metadata in the container"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"class UniverseCollection_container(UniverseCollection):\n",
" def __init__(self, univs, **kwargs):\n",
" super(UniverseCollection_container, self).__init__(univs, **kwargs)\n",
"\n",
" self.universes = {self.ordered_list[i]: univs[i] for i in range(len(univs))}\n",
" \n",
" data = kwargs.get('metadata', {})\n",
" if isinstance(data, dict):\n",
" self.data_fields = data.keys()\n",
" # if we want to make new data_fields for individual univs without having to set the\n",
" # others to some default value, could change self.data to {'univ': {'field': val, ...}, ...}\n",
" # instead of current {'field': [val, ...], }\n",
" self.data = data\n",
"\n",
" \n",
" def __len__(self):\n",
" return len(self.universes)\n",
" \n",
" def get_univ_index(self, u_key):\n",
" if isinstance(u_key, str):\n",
" return self.ordered_list.index(u_key)\n",
" if isinstance(u_key, int):\n",
" return u_key\n",
" \n",
" def get_univ_name(self, u_key):\n",
" if isinstance(u_key, str):\n",
" return u_key\n",
" if isinstance(u_key, int):\n",
" return self.ordered_list[u_key]\n",
" \n",
" def __getitem__(self, u_key):\n",
" if isinstance(u_key, (str, int)):\n",
" return self.universes[self.get_univ_name(u_key)]\n",
" \n",
" def get_data(self, u_key, fieldname):\n",
" u_index = self.get_univ_index(u_key)\n",
" return self.data[fieldname][u_index]\n",
"\n",
" def set_data(self, u_key, fieldname, value):\n",
" u_index = self.get_univ_index(u_key)\n",
" try:\n",
" self.data[fieldname][u_index] = value\n",
" except KeyError:\n",
" # fieldname doesn't exist yet - will have to set the entry for all other univs to some default\n",
" self.data[fieldname] = [None]*len(self)\n",
" self.data[fieldname][u_index] = value\n",
" \n",
" def get_data_series(self, fieldname):\n",
" return self.data[fieldname]\n",
" \n",
" def set_data_series(self, fieldname, values):\n",
" # assume values is list in same as originally - otherwise, could pass in with names as dict?\n",
" self.data[fieldname] = values\n",
" self.update_data_fields()\n",
" \n",
" def find_data_fields(self):\n",
" # find only those fields in data where none of the univs have the 'default' entry? Would need to\n",
" # make sure default entry isn't something we're likely to want to actually set as a value...\n",
" # or switch to storing data indexed by univ first, rather than fieldname first\n",
" return [field for field, vals in self.data.items() if None not in vals]\n",
" \n",
" def find_aux_fields(self):\n",
" auxs = set(self[0].trajectory.aux_list)\n",
" for u in self:\n",
" auxs = set.intersection(auxs, set(u.trajectory.aux_list))\n",
" return list(auxs)\n",
" "
]
},
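{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sketch of the default naming (the `collection` variable below is hypothetical): if no `names` kwarg is given, the base class sets `ordered_list` to the stringified input indices, so a universe can be looked up either by position or by its default name."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# sketch only - with no 'names' kwarg, ordered_list defaults to ['0', '1', ...]\n",
"# collection = UniverseCollection_container([univ_a, univ_b])\n",
"# collection[0] is collection['0']   # both map to the same universe"
]
},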
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Option 2**: Storing metadata in the universe"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"class UniverseCollection_universe(UniverseCollection):\n",
" def __init__(self, univs, **kwargs):\n",
" super(UniverseCollection_universe, self).__init__(univs, **kwargs)\n",
" \n",
" self.universes = {self.ordered_list[i]: univs[i] for i in range(len(univs))}\n",
"\n",
" data = kwargs.get('metadata', {})\n",
" if isinstance(data, dict):\n",
" self.data_fields = data.keys()\n",
" for name, u in self.universes.items():\n",
" new_data = {key: vals[self.ordered_list.index(name)] \n",
" for key, vals in data.items()}\n",
" try:\n",
" u.data.update(new_data)\n",
" except AttributeError:\n",
" u.data = new_data\n",
"\n",
" \n",
" def __len__(self):\n",
" return len(self.universes)\n",
" \n",
" def __getitem__(self, u_key):\n",
" if isinstance(u_key, str):\n",
" return self.universes[u_key]\n",
" if isinstance(u_key, int):\n",
" return self.universes[self.ordered_list[u_key]]\n",
" \n",
" def get_data(self, u_key, fieldname):\n",
" u = self[u_key]\n",
" return u.data[fieldname]\n",
" \n",
" def set_data(self, u_key, fieldname, value):\n",
" u = self[u_key]\n",
" u.data[fieldname] = value\n",
" \n",
" def get_data_series(self, fieldname):\n",
" return [self.get_data(u_key, fieldname) for u_key in self.ordered_list]\n",
"\n",
" def set_data_series(self, fieldname, values):\n",
" [self.set_data(i, fieldname, values[i]) for i in range(len(self))]\n",
" self.update_data_fields()\n",
"\n",
" def find_data_fields(self):\n",
" data_fields = set(self[0].data.keys())\n",
" # go through each univ + keep only fields present in all those checked so far...\n",
" # better way to do this?\n",
" for u in self:\n",
" data_fields = set.intersection(data_fields, set(u.data.keys()))\n",
" return list(data_fields)\n",
" \n",
" def find_aux_fields(self):\n",
" auxs = set(self[0].trajectory.aux_list)\n",
" for u in self:\n",
" auxs = set.intersection(auxs, set(u.trajectory.aux_list))\n",
" return list(auxs)"
]
},
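{
"cell_type": "markdown",
"metadata": {},
"source": [
"For comparison, a rough sketch of where the same metadata ends up under the two options above, using the 'test' field from the testing section further down."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# rough sketch only, using the 'test' field set up in the testing section below:\n",
"# Option 1 (container):  collection.data == {'test': ['1', '2']}\n",
"# Option 2 (universe):   univ_a.data == {'test': '1'}  and  univ_b.data == {'test': '2'}\n",
"# (in Option 2, .data is an attribute set on each Universe by the collection itself;\n",
"#  the try/except above handles the case where the universe doesn't already have one)"
]
},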
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Option 3**: Wrap universes using MDSynthesis.Sims + store metadata in wrapper; group Sims as a Bundle"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"class UniverseCollection_wrapper(UniverseCollection):\n",
" def __init__(self, univs, **kwargs):\n",
" super(UniverseCollection_wrapper, self).__init__(univs, **kwargs)\n",
"\n",
" # force to create new since we're testing with the same files etc\n",
" # might want to set name as a category as well/instead - if passing in Sims, Sim name/name in kwargs\n",
" # will not necessarily match up (could take Sim names in place of passing in?)\n",
" sims = [Sim(name, new=True) for name in self.ordered_list]\n",
" for i, sim in enumerate(sims):\n",
" sim.universe = univs[i]\n",
" \n",
" data = kwargs.get('metadata', {})\n",
" if isinstance(data, dict):\n",
" self.data_fields = data.keys()\n",
" for i, sim in enumerate(sims):\n",
" sim.categories.add({key: vals[i]\n",
" for key, vals in data.items()})\n",
" \n",
" self.bundle = dtr.Bundle(sims)\n",
" \n",
" def __len__(self):\n",
" return len(self.bundle)\n",
" \n",
" def __getitem__(self, u_key):\n",
" # returning the Sim, but could directly return the universe?\n",
" if isinstance(u_key, int):\n",
" return self.bundle[u_key]\n",
" if isinstance(u_key, str):\n",
" # returns as bundle; but if we're assuming unique names, should be only one\n",
" # take first to return as Sim\n",
" return self.bundle[u_key][0]\n",
" \n",
" def get_data(self, u_key, fieldname):\n",
" s = self[u_key]\n",
" return s.categories[fieldname]\n",
" \n",
" def set_data(self, u_key, fieldname, value):\n",
" s = self[u_key]\n",
" s.categories[fieldname] = value\n",
" \n",
" def get_data_series(self, fieldname):\n",
" return self.bundle.categories[fieldname]\n",
" \n",
" def set_data_series(self, fieldname, values):\n",
" self.bundle.categories[fieldname] = values\n",
" self.update_data_fields()\n",
"\n",
" def find_data_fields(self):\n",
" return self.bundle.categories.keys()\n",
" \n",
" def find_aux_fields(self):\n",
" auxs = set(self[0].universe.trajectory.aux_list)\n",
" for s in self:\n",
" auxs = set.intersection(auxs, set(s.universe.trajectory.aux_list))\n",
" return list(auxs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Option 3b** Wrap with MDSynthesis.Sim + store in wrapper, but without using Bundle (eg. how a very simple wrapper in MDAnalysis might work...)\n"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"class UniverseCollection_wrapper_part(UniverseCollection):\n",
" def __init__(self, univs, **kwargs):\n",
" super(UniverseCollection_wrapper_part, self).__init__(univs, **kwargs)\n",
" \n",
" self.sims = {name: Sim(name, new=True) for name in self.ordered_list}\n",
" for name, sim in self.sims.items():\n",
" sim.universe = univs[self.ordered_list.index(name)]\n",
"\n",
" data = kwargs.get('metadata', {})\n",
" if isinstance(data, dict):\n",
" self.data_fields = data.keys()\n",
" for name, sim in self.sims.items():\n",
" sim.categories.add({key: vals[self.ordered_list.index(name)]\n",
" for key, vals in data.items()})\n",
" \n",
" \n",
" def __len__(self):\n",
" return len(self.sims)\n",
" \n",
" def __getitem__(self, u_key):\n",
" if isinstance(u_key, str):\n",
" return self.sims[u_key]\n",
" if isinstance(u_key, int):\n",
" return self.sims[self.ordered_list[u_key]]\n",
" \n",
" def get_data(self, u_key, fieldname):\n",
" s = self[u_key]\n",
" return s.categories[fieldname]\n",
" \n",
" def set_data(self, u_key, fieldname, value):\n",
" s = self[u_key]\n",
" s.categories[fieldname] = value\n",
" \n",
" def get_data_series(self, fieldname):\n",
" return [self.get_data(u_key, fieldname) for u_key in self.ordered_list]\n",
" \n",
" def set_data_series(self, fieldname, values):\n",
" [self.set_data(i, fieldname, values[i]) for i in range(len(self))]\n",
" self.update_data_fields()\n",
"\n",
" def find_data_fields(self):\n",
" data_fields = set(self[0].categories.keys())\n",
" for s in self:\n",
" data_fields = set.intersection(data_fields, set(s.categories.keys()))\n",
" return list(data_fields)\n",
" \n",
" def find_aux_fields(self):\n",
" auxs = set(self[0].universe.trajectory.aux_list)\n",
" for s in self:\n",
" auxs = set.intersection(auxs, set(s.universe.trajectory.aux_list))\n",
" return list(auxs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## TESTING"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# set up test files our example universes\n",
"univ_a = mda.Universe('./TEST_a.gro', './TEST_a.xtc')\n",
"univ_b = mda.Universe('./TEST_b.pdb', './TEST_b.xtc')\n",
"# we're not really doing anything with auxs so just use the same file\n",
"univ_a.trajectory.add_auxiliary('pullf', './TEST_short.xvg')\n",
"univ_b.trajectory.add_auxiliary('pullf', './TEST_short.xvg')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"All options should work in the same way.\n",
"Setting up..."
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"test_cont = UniverseCollection_container([univ_a, univ_b], metadata={'test':['1', '2']}, \n",
" names = ['a', 'b'])\n",
"test_univ = UniverseCollection_universe([univ_a, univ_b], metadata={'test':['1', '2']},\n",
" names = ['a', 'b'])\n",
"test_wrap = UniverseCollection_wrapper([univ_a, univ_b], metadata={'test':['1', '2']},\n",
" names = ['a', 'b'])\n",
"test_wrap_part = UniverseCollection_wrapper_part([univ_a, univ_b], metadata={'test':['1', '2']},\n",
" names = ['a', 'b'])"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"tests = [test_cont, test_univ, test_wrap, test_wrap_part]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"length should be 2"
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2\n",
"2\n",
"2\n",
"2\n"
]
}
],
"source": [
"for test in tests:\n",
" print len(test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Indexing by number, i.e. input order; for *cont* and *univ* should return universes and for *wrap* Sims"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<Universe with 14158 atoms> <Universe with 5687 atoms>\n",
"<Universe with 14158 atoms> <Universe with 5687 atoms>\n",
"/sansom/n15/linc3610/TEST/a /sansom/n15/linc3610/TEST/b\n",
"/sansom/n15/linc3610/TEST/a /sansom/n15/linc3610/TEST/b\n"
]
}
],
"source": [
"for test in tests:\n",
" print test[0], test[1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Indexing by name; should be same as above"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<Universe with 14158 atoms> <Universe with 5687 atoms>\n",
"<Universe with 14158 atoms> <Universe with 5687 atoms>\n",
"/sansom/n15/linc3610/TEST/a /sansom/n15/linc3610/TEST/b\n",
"/sansom/n15/linc3610/TEST/a /sansom/n15/linc3610/TEST/b\n"
]
}
],
"source": [
"for test in tests:\n",
" print test['a'], test['b']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get individual data - should return '1' and '2'"
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1 2\n",
"1 2\n",
"1 2\n",
"1 2\n"
]
}
],
"source": [
"for test in tests:\n",
" print test.get_data('a', 'test'), test.get_data(1, 'test')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Change existing data series - should change 'test' from [1,2] to [A,B]"
]
},
{
"cell_type": "code",
"execution_count": 50,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['1', '2'] --> ['A', 'B']\n",
"['1', '2'] --> ['A', 'B']\n",
"[u'1', u'2'] --> [u'A', u'B']\n",
"[u'1', u'2'] --> [u'A', u'B']\n"
]
}
],
"source": [
"for test in tests:\n",
" print test.get_data_series('test'), '-->',\n",
" test.set_data_series('test', ['A', 'B'])\n",
" print test.get_data_series('test')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Add a new data series; should update data_fields list"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['test'] --> ['test', 'test2'] \t\t['11', '22']\n",
"['test'] --> ['test', 'test2'] \t\t['11', '22']\n",
"[u'test'] --> [u'test', u'test2'] \t\t[u'11', u'22']\n",
"[u'test'] --> [u'test', u'test2'] \t\t[u'11', u'22']\n"
]
}
],
"source": [
"for test in tests:\n",
" print test.data_fields, '-->',\n",
" test.set_data_series('test2', ['11', '22'])\n",
" print test.data_fields,\n",
" print '\\t\\t'+str(test.get_data_series('test2'))\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Change individual data"
]
},
{
"cell_type": "code",
"execution_count": 52,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['A', 'B'] --> ['1', 'B']\n",
"['A', 'B'] --> ['1', 'B']\n",
"[u'A', u'B'] --> [u'1', u'B']\n",
"[u'A', u'B'] --> [u'1', u'B']\n"
]
}
],
"source": [
"for test in tests:\n",
" print test.get_data_series('test'), '-->',\n",
" test.set_data('a', 'test', '1')\n",
" print test.get_data_series('test')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Set a new datafield for a trajectory - for *container*, values for other trajectories will be set to default value of None; for *wrapper*, will return value of 'None' for other trajectories; *wrapper-part* and *universe* will error when we try to get the data series. In all cases, the data_field won't show up when we find_data_fields. "
]
},
{
"cell_type": "code",
"execution_count": 53,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['x', None] \t\t['test', 'test2']\n",
"Can't get series \t\t['test', 'test2']\n",
"[u'x', None] \t\t[u'test', u'test2']\n",
"Can't get series \t\t[u'test', u'test2']\n"
]
}
],
"source": [
"for test in tests:\n",
" test.set_data('a', 'new', 'x')\n",
" try:\n",
" print test.get_data_series('new'),\n",
" except KeyError:\n",
" print \"Can't get series\",\n",
" print '\\t\\t'+str(test.find_data_fields())\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can add a new datafield to each trajectory separately; find_data_field should now identify it along with the existing data fields"
]
},
{
"cell_type": "code",
"execution_count": 54,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['test', 'new', 'test2']\n",
"['test', 'new', 'test2']\n",
"[u'test', u'new', u'test2']\n",
"[u'test', u'new', u'test2']\n"
]
}
],
"source": [
"for test in tests:\n",
" test.set_data('b', 'new', 'y')\n",
" print test.find_data_fields()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Updating data_fields should then add 'new'..."
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['test', 'test2'] --> ['test', 'new', 'test2']\n",
"[u'test', u'test2'] --> [u'test', u'new', u'test2']\n",
"[u'test', u'test2'] --> [u'test', u'new', u'test2']\n"
]
}
],
"source": [
"for test in tests:\n",
" if test != test_cont:\n",
" print test.data_fields, '-->', \n",
" test.update_data_fields()\n",
" print test.data_fields"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Find auxs held by all trajectories; should pick up 'pullf'"
]
},
{
"cell_type": "code",
"execution_count": 56,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['pullf']\n",
"['pullf']\n",
"['pullf']\n",
"['pullf']\n"
]
}
],
"source": [
"for test in tests:\n",
" print test.find_aux_fields()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Update aux fields to add this..."
]
},
{
"cell_type": "code",
"execution_count": 57,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[] --> ['pullf']\n",
"[] --> ['pullf']\n",
"[] --> ['pullf']\n",
"[] --> ['pullf']\n"
]
}
],
"source": [
"for test in tests:\n",
" print test.aux_fields, '-->',\n",
" test.update_aux_fields()\n",
" print test.aux_fields"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Do some analysis for each trajectory by looping through. *cont* and *univ* return Universes directly, *wrap* returns Sims, so need to pull the universe from this."
]
},
{
"cell_type": "code",
"execution_count": 58,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"76.7231125257 49.8010170215\n",
"76.7231125257 49.8010170215\n",
"76.7231125257 49.8010170215\n",
"76.7231125257 49.8010170215\n"
]
}
],
"source": [
"for test in tests:\n",
" # loop through added universes\n",
" for i in test:\n",
" # wrappers return Sims, rather than Universes, at the moment, so pick out the universe first...\n",
" if isinstance(i, Sim):\n",
" u = i.universe\n",
" if isinstance(i, mda.Universe):\n",
" u = i\n",
" \n",
" coms = []\n",
" for ts in u.trajectory:\n",
" coms.append(u.select_atoms('all').center_of_mass()[2])\n",
" print max(coms),\n",
" print"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "mda_dev"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 0
}