{
"metadata": {
"name": "2. Data Wrangling with Pandas"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data Wrangling with Pandas\n",
"\n",
"Now that we have been exposed to the basic functionality of Pandas, lets explore some more advanced features that will be useful when addressing more complex data management tasks.\n",
"\n",
"As most statisticians/data analysts will admit, often the lion's share of the time spent implementing an analysis is devoted to preparing the data itself, rather than to coding or running a particular model that uses the data. This is where Pandas and Python's standard library are beneficial, providing high-level, flexible, and efficient tools for manipulating your data as needed.\n"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import pandas as pd\n",
"import numpy as np\n",
"\n",
"# Set some Pandas options\n",
"pd.set_option('display.notebook_repr_html', False)\n",
"pd.set_option('display.max_columns', 20)\n",
"pd.set_option('display.max_rows', 25)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 1
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Date/Time data handling\n",
"\n",
"Date and time data are inherently problematic. There are an unequal number of days in every month, an unequal number of days in a year (due to leap years), and time zones that vary over space. Yet information about time is essential in many analyses, particularly in the case of time series analysis."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `datetime` built-in library handles temporal information down to the nanosecond."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"from datetime import datetime"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 2
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"now = datetime.now()\n",
"now"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 3,
"text": [
"datetime.datetime(2013, 6, 24, 13, 44, 48, 183558)"
]
}
],
"prompt_number": 3
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"now.day"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 4,
"text": [
"24"
]
}
],
"prompt_number": 4
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"now.weekday()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 5,
"text": [
"0"
]
}
],
"prompt_number": 5
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In addition to `datetime` there are simpler objects for date and time information only, respectively."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"from datetime import date, time"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 6
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"time(3, 24)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 7,
"text": [
"datetime.time(3, 24)"
]
}
],
"prompt_number": 7
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"date(1970, 9, 3)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 8,
"text": [
"datetime.date(1970, 9, 3)"
]
}
],
"prompt_number": 8
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Having a custom data type for dates and times is convenient because we can perform operations on them easily. For example, we may want to calculate the difference between two times:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"my_age = now - datetime(1970, 9, 3)\n",
"my_age"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 9,
"text": [
"datetime.timedelta(15635, 49488, 183558)"
]
}
],
"prompt_number": 9
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"my_age.days/365."
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 10,
"text": [
"42.83561643835616"
]
}
],
"prompt_number": 10
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this section, we will manipulate data collected from ocean-going vessels on the eastern seaboard. Vessel operations are monitored using the Automatic Identification System (AIS), a safety at sea navigation technology which vessels are required to maintain and that uses transponders to transmit very high frequency (VHF) radio signals containing static information including ship name, call sign, and country of origin, as well as dynamic information unique to a particular voyage such as vessel location, heading, and speed. \n",
"\n",
"The International Maritime Organization\u2019s (IMO) International Convention for the Safety of Life at Sea requires functioning AIS capabilities on all vessels 300 gross tons or greater and the US Coast Guard requires AIS on nearly all vessels sailing in U.S. waters. The Coast Guard has established a national network of AIS receivers that provides coverage of nearly all U.S. waters. AIS signals are transmitted several times each minute and the network is capable of handling thousands of reports per minute and updates as often as every two seconds. Therefore, a typical voyage in our study might include the transmission of hundreds or thousands of AIS encoded signals. This provides a rich source of spatial data that includes both spatial and temporal information.\n",
"\n",
"For our purposes, we will use summarized data that describes the transit of a given vessel through a particular administrative area. The data includes the start and end time of the transit segment, as well as information about the speed of the vessel, how far it travelled, etc."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"segments = pd.read_csv(\"data/AIS/transit_segments.csv\")\n",
"segments.head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 11,
"text": [
" mmsi name transit segment seg_length avg_sog min_sog \\\n",
"0 1 Us Govt Ves 1 1 5.1 13.2 9.2 \n",
"1 1 Dredge Capt Frank 1 1 13.5 18.6 10.4 \n",
"2 1 Us Gov Vessel 1 1 4.3 16.2 10.3 \n",
"3 1 Us Gov Vessel 2 1 9.2 15.4 14.5 \n",
"4 1 Dredge Capt Frank 2 1 9.2 15.4 14.6 \n",
"\n",
" max_sog pdgt10 st_time end_time \n",
"0 14.5 96.5 2/10/09 16:03 2/10/09 16:27 \n",
"1 20.6 100.0 4/6/09 14:31 4/6/09 15:20 \n",
"2 20.5 100.0 4/6/09 14:36 4/6/09 14:55 \n",
"3 16.1 100.0 4/10/09 17:58 4/10/09 18:34 \n",
"4 16.2 100.0 4/10/09 17:59 4/10/09 18:35 "
]
}
],
"prompt_number": 11
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, we might be interested in the distribution of transit lengths, so we can plot them as a histogram:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"segments.seg_length.hist(bins=500)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 12,
"text": [
"<matplotlib.axes.AxesSubplot at 0x111d53f10>"
]
},
{
"metadata": {},
"output_type": "display_data",
"png": "iVBORw0KGgoAAAANSUhEUgAAAY0AAAECCAYAAAACQYvcAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzt3X9sVfd9//GnuTaxTWLf3FtsK7Fot/hnJhTGNXGRO3Bi\n2ipupSFlMNmblhgVpDk1oJqRqJoU3MjgJjfFLbpuBEEhbTdplawiTfLWLdc/hOkQ1zGOLIWoN25o\nYiXOFdzLBQf/uvb5/sGXmziAfcEfx8fc1+Of7hzfe/P5vHS9F+d9z4UUy7IsREREErBiqRcgIiLL\nh0pDREQSptIQEZGEqTRERCRhKg0REUmYSkNERBKWOt8D3nrrLbq7u5mamqK0tJRnn32WsbExjhw5\nQigUIjc3l4aGBtLT0wHo6Oigs7MTh8NBXV0dJSUlAAwPD9PW1sbk5CQej4eamhoAYrEYx44dY2ho\niKysLHbv3o3T6VzELYuIyN2a80pjdHSU3/3ud/zrv/4rhw4d4pNPPmFgYID29naKi4vxer0UFhbS\n3t4OXC+Grq4uWlpaaGxsxOfzceNrID6fjx07duD1erlw4QIDAwMA+P1+0tLS8Hq9bNmyhRMnTizu\njkVE5K7NWRorV64E4Nq1a0xOTjIxMcGqVavo6+tj8+bNAFRWVhIIBAAIBAJUVFSQmppKTk4OeXl5\nBINBIpEI4+PjFBQUALBp0ybOnj0LMOu1ysvLGRwcXJydiojIgs05nlq5ciU/+MEPeO6550hLS+Op\np56isLCQaDQaHyFlZ2cTjUYBiEQiFBYWxp/vdrsJh8Okpqbicrni510uF+FwGIBwOIzb7QbA4XCQ\nmZnJ6Ogo999/v9mdiojIgs1ZGleuXOH111/n8OHDrFq1ip/97Ge8/fbbsx6TkpKyqAsUERH7mLM0\n3n//fQoLC8nLywNg48aNnD9/nuzsbC5fvozT6SQSiZCdnQ1cv4K4dOlS/PmXLl3C7XbPurL44vkb\nz7l48SIul4vp6WmuXbs251XG//zP/+BwOO5+xyIiScjpdOLxeBb8OnOWRklJCW+88Qajo6Pcd999\nnDt3jurqagC6u7vZunUrPT09bNiwAYCysjJ+/vOf8/3vf59wOMzIyAgFBQWkpKSQkZFBMBikoKCA\nU6dO8dRTT8Wf09PTQ1FREWfOnGHt2rVzLtjhcLB+/foFb1xEJJn09/cbeZ2U+f6W2+7ubrq6upic\nnOSxxx5j+/btTExMzHnLrd/vj99yW1paCnx+y+3ExAQej4fa2lrg+i23R48ejd9yu2fPnjlvufX7\n/SoNg3p7e/nWt7611Mu4JyhLs5SnWf39/VRVVS34deYtDbtRaZilX0xzlKVZytMsU6Whb4QnOf1S\nmqMszVKe9qTSEBGRhKk0klxvb+9SL+GeoSzNUp72pNIQEZGEqTSSnObG5ihLs5SnPak0REQkYSqN\nJKe5sTnK0izlaU8qDRERSZhKI8lpbmyOsjRLedqTSkNERBKm0khymhuboyzNUp72pNIQEZGEqTSS\nnObG5ihLs5SnPak0REQkYSqNJKe5sTnK0izlaU8qDRERSZhKI8lpbmyOsjRLedrTsi6NT65M8MmV\niaVehohI0ljWpREanSQ0OrnUy1jWNDc2R1mapTztKXW+B3z88ce0trbGjz/99FP+/u//nieeeIIj\nR44QCoXIzc2loaGB9PR0ADo6Oujs7MThcFBXV0dJSQkAw8PDtLW1MTk5icfjoaamBoBYLMaxY8cY\nGhoiKyuL3bt343Q6F2O/IiKyAPNeaTz00EO8/PLLvPzyy7S0tHDffffx+OOP097eTnFxMV6vl8LC\nQtrb24HrxdDV1UVLSwuNjY34fD4sywLA5/OxY8cOvF4vFy5cYGBgAAC/309aWhper5ctW7Zw4sSJ\nxduxzKK5sTnK0izlaU93NJ4aHBwkLy+Pr33ta/T19bF582YAKisrCQQCAAQCASoqKkhNTSUnJ4e8\nvDyCwSCRSITx8XEKCgoA2LRpE2fPngWY9Vrl5eUMDg4a26CIiJhzR6Vx+vRpKioqAIhGo/ERUnZ2\nNtFoFIBIJILb7Y4/x+12Ew6HiUQiuFyu+HmXy0U4HAYgHA7Hn+NwOMjMzGR0dHQB25JEaW5sjrI0\nS3naU8KlEYvFePvtt9m4ceNNP0tJSTG6KBERsaeES+PcuXP85V/+JVlZWcD1q4vLly8D168usrOz\ngetXEJcuXYo/79KlS7jd7llXFl88f+M5Fy9eBGB6eppr165x//3333YtX/wTSDQanXXc29ur4zs4\nvnHOLutZzsff+ta3bLWe5X6sPBfn932hUqwbn1LPo7W1lXXr1lFZWQnAb37zG+6//362bt3KyZMn\n+eyzz/iHf/gHhoeH+fnPf86hQ4cIh8O89NJL/OIXvyAlJYUf//jH1NXVUVBQQEtLC0899RTr1q3j\n97//PR9++CE7d+7k9OnTBAIB9u7de8t1+P1+1q9fD8A7H18F4LGHHjAQhYjIvau/v5+qqqoFv05C\nVxrj4+MMDg5SXl4eP/f000/zxz/+kX379hEMBnn66acByM/P54knnuD555/H6/VSX18fH1/V19fz\nxhtvsG/fPr7+9a+zbt06AKqqqpiamqKxsZG33nqLZ599dsEbk8SY/lNIMlOWZilPe5r3exoA6enp\nHD9+fNa5jIwM9u/ff8vHV1dXU11dfdP5/Px8Dh48ePMiUlOpr69PZCkiIrKElvU3wmXhdC+8OcrS\nLOVpTyoNERFJmEojyWlubI6yNEt52pNKQ0REEqbSSHKaG5ujLM1Snvak0hARkYSpNJKc5sbmKEuz\nlKc9qTRERCRhKo0kp7mxOcrSLOVpTyoNERFJmEojyWlubI6yNEt52pNKQ0REEqbSSHKaG5ujLM1S\nnvak0hARkYSpNJKc5sbmKEuzlKc9qTRERCRhKo0kp7mxOcrSLOVpTyoNERFJmEojyWlubI6yNEt5\n2lNC/0b4+Pg4r7/+Oh9++CFTU1PU19eTn5/PkSNHCIVC5Obm0tDQQHp6OgAdHR10dnbicDioq6uj\npKQEgOHhYdra2picnMTj8VBTUwNALBbj2LFjDA0NkZWVxe7du3E6nYu0ZRERuVsJXWm8/vrrPPro\no7z88st4vV4efvhh2tvbKS4uxuv1UlhYSHt7O3C9GLq6umhpaaGxsRGfz4dlWQD4fD527NiB1+vl\nwoULDAwMAOD3+0lLS8Pr9bJlyxZOnDixOLuVm2hubI6yNEt52tO8pXHt2jXee+89nnzySQAcDgeZ\nmZn09fWxefNmACorKwkEAgAEAgEqKipITU0lJyeHvLw8gsEgkUiE8fFxCgoKANi0aRNnz54FmPVa\n5eXlDA4Omt+piIgs2LylEQqFyMrKwufz0djYyGuvvcbExATRaDQ+QsrOziYajQIQiURwu93x57vd\nbsLhMJFIBJfLFT/vcrkIh8MAhMPh+HNulNLo6
Ki5XcptaW5sjrI0S3na07ylMT09zdDQEOXl5Rw6\ndIhYLMb//d//zXpMSkrKoi3wVr74ZopGo7OOe3t7dXwHx4ODg7Zaj451rOPFOzYhxbrxgcNtXL58\nmcbGRo4fPw7AuXPn6Onp4c9//jMvvvgiTqeTSCRCU1MTra2tnDx5EoCtW7cC0NzczPbt21m9ejVN\nTU0cPnw4vpnz58+zc+dOmpub2bZtG0VFRUxPT7Nr1674f+/L/H4/69evB+Cdj68C8NhDDxiIQkTk\n3tXf309VVdWCX2feKw2n0xn/XGJmZob+/n7Wrl2Lx+Ohu7sbgJ6eHjZs2ABAWVkZp0+fJhaLEQqF\nGBkZoaCgAKfTSUZGBsFgEMuyOHXq1Kzn9PT0AHDmzBnWrl274I2JiIh5815pAHz88cf4fD6uXLnC\nmjVraGhowLKsOW+59fv98VtuS0tLgc9vuZ2YmMDj8VBbWwtcv+X26NGj8Vtu9+zZc9tbbnWlYVZv\nb6/uUjFEWZqlPM0ydaWRUGnYiUrDLP1imqMszVKeZn1l4ym5t+mX0hxlaZbytCeVhoiIJEylkeRM\n346XzJSlWcrTnlQaIiKSMJVGktPc2BxlaZbytCeVhoiIJEylkeQ0NzZHWZqlPO1JpSEiIglTaSQ5\nzY3NUZZmKU97UmmIiEjCVBpJTnNjc5SlWcrTnlQaIiKSMJVGktPc2BxlaZbytCeVhoiIJEylkeQ0\nNzZHWZqlPO1JpSEiIglTaSQ5zY3NUZZmKU97UmmIiEjCVBpJTnNjc5SlWcrTnlITedBzzz1HRkYG\nK1aswOFwcOjQIcbGxjhy5AihUIjc3FwaGhpIT08HoKOjg87OThwOB3V1dZSUlAAwPDxMW1sbk5OT\neDweampqAIjFYhw7doyhoSGysrLYvXs3TqdzkbYsIiJ3K+ErjQMHDvDyyy9z6NAhANrb2ykuLsbr\n9VJYWEh7eztwvRi6urpoaWmhsbERn8+HZVkA+Hw+duzYgdfr5cKFCwwMDADg9/tJS0vD6/WyZcsW\nTpw4YXibcjuaG5ujLM1SnvaUcGnc+H/8N/T19bF582YAKisrCQQCAAQCASoqKkhNTSUnJ4e8vDyC\nwSCRSITx8XEKCgoA2LRpE2fPnr3ptcrLyxkcHFz4zkRExLiESiMlJYWf/OQn7N+/n7feeguAaDQa\nHyFlZ2cTjUYBiEQiuN3u+HPdbjfhcJhIJILL5Yqfd7lchMNhAMLhcPw5DoeDzMxMRkdHDWxP5qO5\nsTnK0izlaU8Jfabx0ksv8eCDDzI8PMyhQ4d4+OGHZ/08JSVlURZ3O729vfFL12g0Su+f3okf33ij\n6Tix4xtXdXZZj451rOPFOzYhxfry3Gkeb775Ji6XC7/fz4EDB3A6nUQiEZqammhtbeXkyZMAbN26\nFYDm5ma2b9/O6tWraWpq4vDhw/HNnD9/np07d9Lc3My2bdsoKipienqaXbt2cfz48Vv+9/1+P+vX\nrwfgnY+vAvDYQw/c3e5FRJJEf38/VVVVC36decdTExMTjI2NAXDlyhXOnTvHmjVrKCsro7u7G4Ce\nnh42bNgAQFlZGadPnyYWixEKhRgZGaGgoACn00lGRgbBYBDLsjh16tSs5/T09ABw5swZ1q5du+CN\niYiIefNeaYRCIV555RUAHnjgATZu3Mi3v/3teW+59fv98VtuS0tLgc9vuZ2YmMDj8VBbWwtcv+X2\n6NGj8Vtu9+zZc9tbbnWlYdYXR32yMMrSLOVplqkrjTseTy01lYZZ+sU0R1mapTzN+srGU3Jv0y+l\nOcrSLOVpTyoNERFJmEojyeleeHOUpVnK055UGiIikjCVRpLT3NgcZWmW8rQnlYaIiCRMpZHkNDc2\nR1mapTztSaUhIiIJU2kkOc2NzVGWZilPe1JpiIhIwlQaSU5zY3OUpVnK055UGiIikjCVRpLT3Ngc\nZWmW8rQnlYaIiCRMpZHkNDc2R1mapTztSaUhIiIJU2kkOc2NzVGWZilPe1JpiIhIwlQaSU5zY3OU\npVnK055SE3nQzMwML7zwAi6XixdeeIGxsTGOHDlCKBQiNzeXhoYG0tPTAejo6KCzsxOHw0FdXR0l\nJSUADA8P09bWxuTkJB6Ph5qaGgBisRjHjh1jaGiIrKwsdu/ejdPpXKTtiojIQiR0pdHR0UF+fj4p\nKSkAtLe3U1xcjNfrpbCwkPb2duB6MXR1ddHS0kJjYyM+nw/LsgDw+Xzs2LEDr9fLhQsXGBgYAMDv\n95OWlobX62XLli2cOHFiEbYpt6O5sTnK0izlaU/zlsalS5c4d+4cTz75ZLwA+vr62Lx5MwCVlZUE\nAgEAAoEAFRUVpKamkpOTQ15eHsFgkEgkwvj4OAUFBQBs2rSJs2fP3vRa5eXlDA4Omt+liIgYMW9p\nvPnmm/zjP/4jK1Z8/tBoNBofIWVnZxONRgGIRCK43e7449xuN+FwmEgkgsvlip93uVyEw2EAwuFw\n/DkOh4PMzExGR0cNbE0SobmxOcrSLOVpT3OWxttvv01WVhZ/8Rd/Eb/K+LIbI6uv0hffTNFodNZx\nb2+vju/geHBw0Fbr0bGOdbx4xyakWLdrA+Df//3fOXXqFCtWrGBqaoqxsTEef/xxhoaGOHDgAE6n\nk0gkQlNTE62trZw8eRKArVu3AtDc3Mz27dtZvXo1TU1NHD58OL6R8+fPs3PnTpqbm9m2bRtFRUVM\nT0+za9cujh8/ftsF+/1+1q9fD8A7H18F4LGHHjCThojIPaq/v5+qqqoFv86cVxq1tbX88pe/xOfz\nsXfvXv7qr/6KhoYGysrK6O7uBqCnp4cNGzYAUFZWxunTp4nFYoRCIUZGRigoKMDpdJKRkUEwGMSy\nLE6dOjXrOT09PQCcOXOGtWvXLnhTIiKyOO7oexo3RlFPP/00f/zjH9m3bx/BYJCnn34agPz8fJ54\n4gmef/55vF4v9fX18efU19fzxhtvsG/fPr7+9a+zbt06AKqqqpiamqKxsZG33nqLZ5991uD2ZD6m\nL12TmbI0S3naU0Lf0wB49NFHefTRRwHIyMhg//79t3xcdXU11dXVN53Pz8/n4MGDNy8gNZX6+vpE\nlyEiIkto2X8jfNVKx1IvYVnTvfDmKEuzlKc9LfvSuP8+lYaIyFdl2ZeGLIzmxuYoS7OUpz2pNERE\nJGEqjSSnubE5ytIs5WlPKg0REUmYSiPJaW5sjrI0S3nak0pDREQSptJIcpobm6MszVKe9qTSEBGR\nhKk0kpzmxuYoS7OUpz2pNEREJGEqjSSnubE5ytIs5WlPKg0REUmYSiPJaW5sjrI0S3nak0pDREQS\nptJIcpobm6MszVKe9qTSEBGRhKk0kpzmxuYoS7OUpz3N+W+ET05OcuDAAaampli5ciUbN27k+9//\nPmNjYxw5coRQKERubi4NDQ2kp6cD0NHRQWdnJw6Hg7q6OkpKSgAYHh6mra2NyclJPB4PNTU1AMRi\nMY4d
O8bQ0BBZWVns3r0bp9O5yNsWEZG7MeeVxsqVK3nxxRd55ZVXOHDgAF1dXXzyySe0t7dTXFyM\n1+ulsLCQ9vZ24HoxdHV10dLSQmNjIz6fD8uyAPD5fOzYsQOv18uFCxcYGBgAwO/3k5aWhtfrZcuW\nLZw4cWJxdyyzaG5sjrI0S3na07zjqfvuuw+A8fFxZmZmSEtLo6+vj82bNwNQWVlJIBAAIBAIUFFR\nQWpqKjk5OeTl5REMBolEIoyPj1NQUADApk2bOHv2LMCs1yovL2dwcND8LkVExIh5S2NmZoZ/+Zd/\nYefOnXz3u9/la1/7GtFoND5Cys7OJhqNAhCJRHC73fHnut1uwuEwkUgEl8sVP+9yuQiHwwCEw+H4\ncxwOB5mZmYyOjprbocxJc2NzlKVZytOe5i2NFStW8Morr/CLX/yC3//+93zwwQezfp6SkrJoi7ud\nL76ZJsYnZh339vbq+A6OBwcHbbUeHetYx4t3bEKKdeNDhwT86le/wu1287//+78cOHAAp9NJJBKh\nqamJ1tZWTp48CcDWrVsBaG5uZvv27axevZqmpiYOHz4c38j58+fZuXMnzc3NbNu2jaKiIqanp9m1\naxfHjx+/7Rr8fj/r168H4J2Pr5L7wEryHrjvrgMQEUkG/f39VFVVLfh15rzSuHLlCp999hkAV69e\nZWBggDVr1lBWVkZ3dzcAPT09bNiwAYCysjJOnz5NLBYjFAoxMjJCQUEBTqeTjIwMgsEglmVx6tSp\nWc/p6ekB4MyZM6xdu3bBmxIRkcUx55XGhx9+iM/nY2ZmBqfTycaNG3nyySfnveXW7/fHb7ktLS0F\nPr/ldmJiAo/HQ21tLXD9ltujR4/Gb7nds2fPnLfc6krDrN7eXt2lYoiyNEt5mmXqSuOOxlN2oNIw\nS7+Y5ihLs5SnWV/JeEruffqlNEdZmqU87UmlISIiCVNpJDnTt+MlM2VplvK0J5WGiIgkTKWR5DQ3\nNkdZmqU87UmlISIiCVNpJDnNjc1RlmYpT3tSaYiISMJUGklOc2NzlKVZytOeVBoiIpIwlUaS09zY\nHGVplvK0J5WGiIgkTKWR5DQ3NkdZmqU87UmlISIiCVNpJDnNjc1RlmYpT3tSaYiISMKWfWl8NjnN\nJ1cmlnoZy5bmxuYoS7OUpz0t+9IIX5siNDq51MsQEUkKy740ZGE0NzZHWZqlPO0pdb4HXLx4EZ/P\nRzQaJSsri8rKSiorKxkbG+PIkSOEQiFyc3NpaGggPT0dgI6ODjo7O3E4HNTV1VFSUgLA8PAwbW1t\nTE5O4vF4qKmpASAWi3Hs2DGGhobIyspi9+7dOJ3ORdy2iIjcjXmvNFJTU3nmmWf42c9+xo9+9CP+\n7d/+jeHhYdrb2ykuLsbr9VJYWEh7eztwvRi6urpoaWmhsbERn8+HZVkA+Hw+duzYgdfr5cKFCwwM\nDADg9/tJS0vD6/WyZcsWTpw4sXg7llk0NzZHWZqlPO1p3tJwOp184xvfACArK4tHHnmEcDhMX18f\nmzdvBqCyspJAIABAIBCgoqKC1NRUcnJyyMvLIxgMEolEGB8fp6CgAIBNmzZx9uxZgFmvVV5ezuDg\noPGNiojIwt3RZxojIyMMDw9TVFRENBqNj5Cys7OJRqMARCIR3G53/Dlut5twOEwkEsHlcsXPu1wu\nwuEwAOFwOP4ch8NBZmYmo6OjC9uZJERzY3OUpVnK054SLo3x8XFaW1t55pln4p9d3JCSkmJ8YXP5\n4ptpZsaKF9aNn33x5zqe+3hwcNBW69GxjnW8eMcmpFg3PnCYQywW46c//Snr1q3je9/7HgB79+7l\nwIEDOJ1OIpEITU1NtLa2cvLkSQC2bt0KQHNzM9u3b2f16tU0NTVx+PDh+GbOnz/Pzp07aW5uZtu2\nbRQVFTE9Pc2uXbs4fvz4Ldfi9/tZv349AO98fJXJ6RlWOlbw2EMPLDwNEZF7VH9/P1VVVQt+nXmv\nNCzL4rXXXiM/Pz9eGABlZWV0d3cD0NPTw4YNG+LnT58+TSwWIxQKMTIyQkFBAU6nk4yMDILBIJZl\ncerUqVnP6enpAeDMmTOsXbt2wRsTERHz5r3SeO+993jxxRdZs2ZNfAxVW1tLcXHxnLfc+v3++C23\npaWlwOe33E5MTODxeKitrQWuX8kcPXo0fsvtnj17bnvLra40zOrt7dVdKoYoS7OUp1mmrjTm/Z5G\nSUkJ//Ef/3HLn+3fv/+W56urq6murr7pfH5+PgcPHrx5Eamp1NfXz7cUERFZYvpGeJLTn+TMUZZm\nKU97UmmIiEjCVBpJzvTteMlMWZqlPO1JpSEiIglTaSQ5zY3NUZZmKU97UmmIiEjCVBpJTnNjc5Sl\nWcrTnlQaIiKSsHuiNBwr0L8Tfpc0NzZHWZqlPO3pniiN6Pi0/p1wEZGvwD1RGnL3NDc2R1mapTzt\nSaUhIiIJU2kkOc2NzVGWZilPe1JpiIhIwlQaSU5zY3OUpVnK055UGiIikjCVRpLT3NgcZWmW8rQn\nlYaIiCRs2ZbGJ1cmmJ77nzeXBGhubI6yNEt52tO8/0Z4W1sb586dIysri1dffRWAsbExjhw5QigU\nIjc3l4aGBtLT0wHo6Oigs7MTh8NBXV0dJSUlAAwPD9PW1sbk5CQej4eamhoAYrEYx44dY2hoiKys\nLHbv3o3T6Zx34aHRSaZnVBoiIl+lea80nnjiCX784x/POtfe3k5xcTFer5fCwkLa29uB68XQ1dVF\nS0sLjY2N+Hw+rP9/NeDz+dixYwder5cLFy4wMDAAgN/vJy0tDa/Xy5YtWzhx4oThLcpcNDc2R1ma\npTztad7SKC0tZdWqVbPO9fX1sXnzZgAqKysJBAIABAIBKioqSE1NJScnh7y8PILBIJFIhPHxcQoK\nCgDYtGkTZ8+evem1ysvLGRwcNLc7EREx6q4+04hGo/ERUnZ2NtFoFIBIJILb7Y4/zu12Ew6HiUQi\nuFyu+HmXy0U4HAYgHA7Hn+NwOMjMzGR0dPTudiN3THNjc5SlWcrTnhb8QXhKSoqJddyRL76ZZmYs\nYrHYrJ998ec6nvt4cHDQVuvRsY51vHjHJqRY1vy3IIVCIX7605/GPwjfu3cvBw4cwOl0EolEaGpq\norW1lZMnTwKwdetWAJqbm9m+fTurV6+mqamJw4cPxzdy/vx5du7cSXNzM9u2baOoqIjp6Wl27drF\n8ePHb7sWv9/P+vXreefjq0xOzwAwOW1x/0oHjz30wMLSEBG5R/X391NVVbXg17mrK42ysjK6u7sB\n6OnpYcOGDfHzp0+fJhaLEQqFGBkZoaCgAKfTSUZGBsFgEMuyOHXq1Kzn9PT0AHDmzBnWrl274E2J\niMjimPdKo7W1lfPnz3P16lWys7PZvn073/zmN+e85dbv98dvuS0tLQU+v+V2YmICj8dDbW0tcP2W\n26NHj8Zvud2zZ8+ct9zqSsOs3t5e3aViiLI0S3maZepKY97va
ezdu/eW5/fv33/L89XV1VRXV990\nPj8/n4MHD968gNRU6uvr51uGiIjYwLL9RriYoT/JmaMszVKe9qTSEBGRhKk0kpzp2/GSmbI0S3na\n07IsjU+uTMQ/BL9h1UrHEq1GRCR5LMvSCI1OMjk9+6av++9TadwNzY3NUZZmKU97WpalISIiS0Ol\nkeQ0NzZHWZqlPO1JpSEiIglTaSQ5zY3NUZZmKU97UmmIiEjCVBpJTnNjc5SlWcrTnu6Z0vhscppP\nrkws9TJERO5p90xphK9NERqdXOplLDuaG5ujLM1SnvZ0z5QGgGMFXJ2Izf9AERG5K/dUaUTHp/ls\ncnqpl7GsaG5sjrI0S3na0z1VGiIisrhUGklOc2NzlKVZytOeVBoiIpIw25TGu+++y/PPP8++ffv4\nr//6r6VeTtLQ3NgcZWmW8rQnW5TGzMwMv/zlL2lsbKSlpYXOzk6Gh4eXelkiIvIltiiN999/n7y8\nPHJyckhNTaWiooK+vr67eq3PJqcZuniNS5/pOxuJ0NzYHGVplvK0p9SlXgBAOBzG7XbHj10uF++/\n//7dvda1qev/QFPK9WP3qpUmligiItjkSmMxhK9NcXk8xtDFa3xw6Vr8f2/837oSuU5zY3OUpVnK\n055scaXLan8fAAAE7klEQVThcrm4dOlS/PjSpUu4XK5bPtbpdHJ5JEjGF85l3OZ/o3P8N6PAn+9u\nufeUzMxM+vv7l3oZ9wRlaZbyNMvpdBp5HVuUxiOPPMLIyAihUAiXy8Uf/vAH9uzZc8vHejyer3h1\nIiJyQ4plWdZSLwKu33J74sQJpqenqaqqorq6eqmXJCIiX2Kb0hAREfu7Zz8IFxER81QaIiKSMFt8\nEJ6Id999lzfffDP+mcdTTz211EtaFp577jkyMjJYsWIFDoeDQ4cOMTY2xpEjRwiFQuTm5tLQ0EB6\nejoAHR0ddHZ24nA4qKuro6SkZIl3sHTa2to4d+4cWVlZvPrqqwB3ld3w8DBtbW1MTk7i8XioqalZ\nsj0tpVvl+dvf/pbOzk6ysrIAqKmp4a//+q8B5Tmfixcv4vP5iEajZGVlUVlZSWVl5eK/R61lYHp6\n2vrhD39offrpp9bU1JS1b98+66OPPlrqZS0L9fX11tWrV2ed+/Wvf22dPHnSsizL+t3vfmf95je/\nsSzLsj766CNr37591tTUlPXpp59aP/zhD63p6emvfM128e6771p/+tOfrB/96Efxc3eS3czMjGVZ\nlvXCCy9YwWDQsizLOnjwoHXu3LmveCf2cKs8f/vb31r/+Z//edNjlef8IpGI9cEHH1iWZVnRaNT6\nwQ9+YH300UeL/h5dFuMpk3/NSDKyvnSvQ19fH5s3bwagsrKSQCAAQCAQoKKigtTUVHJycsjLy7vr\nb+bfC0pLS1m1atWsc3eSXTAYJBKJMD4+TkFBAQCbNm3i7NmzX+1GbOJWecLN709QnolwOp184xvf\nACArK4tHHnmEcDi86O/RZTGeMvnXjCSblJQUfvKTn5CSksJ3vvMdtmzZQjQajX/RJzs7m2j0+tcg\nI5EIhYWF8ee63W7C4fCSrNuu7jS71NTUWV9UdblcyvRL/vu//5vOzk6Kior4p3/6J1atWqU879DI\nyAjDw8MUFRUt+nt0WZSG3L2XXnqJBx98kOHhYQ4dOsTDDz886+cpKSlzPn++nyczZbNw3/nOd/i7\nv/s7xsbG+PWvf82vfvUr/vmf/3mpl7WsjI+P09rayjPPPBP/7OKGxXiPLovx1J38NSMy24MPPghA\nfn4+jz/+OO+//z7Z2dlcvnwZuP6nj+zsbEA5J+JOsnO73Tf9qU2ZzpadnU1KSgqZmZl897vfjU8Q\nlGdiYrEYr776Kn/zN3/Dhg0bgMV/jy6L0vjiXzMSi8X4wx/+QFlZ2VIvy/YmJiYYGxsD4MqVK5w7\nd441a9ZQVlZGd3c3AD09PfE3W1lZGadPnyYWixEKhRgZGYnPOeW6O83O6XSSkZFBMBjEsixOnTrF\n448/voQ7sJdIJALA9PQ0vb29rFmzBlCeibAsi9dee438/Hy+973vxc8v9nt02XwjXH/NyJ0LhUK8\n8sorADzwwANs3LiRb3/72/Pekuf3++O35JWWli7lFpZUa2sr58+f5+rVq2RnZ7N9+3a++c1v3nF2\nN25nnJiYwOPxUFtbu5TbWjI38rxy5QpOp5Nt27bx7rvvcuHCBVJTUyktLeVv//Zv4/N45Tm39957\njxdffJE1a9bEx1C1tbUUFxcv6nt02ZSGiIgsvWUxnhIREXtQaYiISMJUGiIikjCVhoiIJEylISIi\nCVNpiIhIwlQaIiKSMJWGiIgk7P8BidKuliAOOEQAAAAASUVORK5CYII=\n",
"text": [
"<matplotlib.figure.Figure at 0x10b2cfd90>"
]
}
],
"prompt_number": 12
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Though most of the transits appear to be short, there are a few longer distances that make the plot difficult to read. This is where a transformation is useful:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"segments.seg_length.apply(np.log).hist(bins=500)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 13,
"text": [
"<matplotlib.axes.AxesSubplot at 0x10d30d8d0>"
]
},
{
"metadata": {},
"output_type": "display_data",
"png": "iVBORw0KGgoAAAANSUhEUgAAAYMAAAECCAYAAAAciLtvAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzt3X9s1Pd9x/Hn+cxhjLGvdy14m8fSYQeyKIPFJFHkDEyo\nKpHkDySkTDSVAijrFFO6KKDA0lVKFrWw4SwQ70ylIbWRur8mK1ErMa0KP1KctsKAiaLSBoeEpl5M\nrviOO/+6+/p+7A/nLsY+G2N/v/5+P/B6SFH4fv09/Lqv8b3v+35/v9/z5fP5PCIickcrczuAiIi4\nT8VARERUDERERMVARERQMRAREVQMREQEKJ/ui+3t7XR3d1NdXc2rr74KwMjICG1tbUSjUZYtW8au\nXbuoqKgA4NixY5w4cQK/38/27dtZtWoVAL29vbS3t2NZFo2NjWzduhWATCbDf/7nf3L58mWqq6v5\nzne+QzAYdPL5iohICdMeGWzYsIEXX3zxhnUdHR2sXLmS1tZWGhoa6OjoAMZe8E+ePMmBAwfYvXs3\nkUiEwiUMkUiEHTt20NraypUrV7hw4QIAx48fZ8GCBbS2tvK1r32NH//4xw48RRERuZlpi8E999zD\n4sWLb1h39uxZ1q9fD0BzczNdXV0AdHV10dTURHl5OUuXLqW2tpaenh7i8TipVIr6+noA1q1bx5kz\nZyb9XQ899BDvv/++vc9ORERm5JZnBolEotjKqampIZFIABCPxwmHw8XtwuEwsViMeDxOKBQqrg+F\nQsRiMQBisVjxMX6/n8rKSgYHB2f/bEREZFbmNED2+Xx25RARERdNO0AupaamhuvXrxMMBonH49TU\n1ABj7/j7+/uL2/X39xMOh284Ehi/vvCYa9euEQqFyGazDA8PU1VVNeX3/vnPf47f77/VyCIid7Rg\nMEhjY+O029xyMVi7di2nTp1i8+bNvPPOOzzwwAPF9YcPH+aJJ54gFotx9epV6uvr8fl8LFq0iJ6e\nHurr6zl9+jSbNm0qPuadd97h7rvv5te//jX33XfftN/b7/dz//3332pkEZE72vnz52+6zbRtokOH\nDvG9732Pvr4+nn32WU6ePMmWLVu4dOkSe/bsoaenhy1btgBQV1fHhg0b2Lt3L62trbS0tBTbSC0t\nLfzoRz9iz549/MVf/AVr1qwBYOPGjYyOjrJ7927efvtttm3bNsen7A2dnZ1uR5gR5bSXCTlNyAjK\n6YZpjwyee+65kutfeOGFkusfe+wxHnvssUnr6+rq+MEPfjD5m5eX09LSMpOcIiLiIJ9Jn2dw/Phx\ntYlERG7R+fPn2bhx47Tb6HYUIiKiYuAEU/qIymkvE3KakBGU0w0qBiIiopmBiMjtTjMDERGZERUD\nB5jSR1ROe5mQ04SMoJxuUDEQERHNDEREbneaGYjx+pJp+pJpt2OI3PZUDBxgSh/RhJzRQYue//uj\n2zFmxIT9aUJGUE43qBiIiIhmBuJt7306AMDqP13ichIRc2lmICIiM6Ji4ABT+oim5Cx8zrbXmbA/\nTcgIyukGFQMREdHMQLxNMwORudPMQEREZkTFwAGm9BFNyamZgX1MyAjK6QYVAxER0cxAvE0zA5G5\n08xARERmRMXAAab0EU3JqZmBfUzICMrpBhUDERHRzEC8TTMDkbnTzEBERGZExcABpvQRTcmpmYF9\nTMgIyukGFQMREdHMQLxNMwORudPMQEREZkTFwAGm9BFNyTl+ZtCXTNOXTLuYZmom7E8TMoJyukHF\nQIwSHbSIDlpuxxC57WhmIJ42cWagGYLIrdPMQEREZkTFwAGm9BFNyanrDOxjQkZQTjeoGIiIyOxn\nBm+//TanTp1idHSUe+65h23btjEyMkJbWxvRaJRly5axa9cuKioqADh27BgnTpzA7/ezfft2Vq1a\nBUBvby/t7e1YlkVjYyNbt26d8ntqZnDn0cxAZO4cmxkMDg7y5ptv8s///M/s37+fvr4+Lly4QEdH\nBytXrqS1tZWGhgY6OjqAsRf8kydPcuDAAXbv3k0kEqFQgyKRCDt27KC1tZUrV65w4cKF2UQSEZE5\nmFUxCAQCAAwPD2NZFul0msWLF3P27FnWr18PQHNzM11dXQB0dXXR1NREeXk5S5cupba2lp6eHuLx\nOKlUivr6egDWrVvHmTNn7HherjKlj2hKTs0M7GNCRlBON5TP5kGBQIBnnnmGnTt3smDBAjZt2kRD\nQwOJRIJgMAhATU1N8Zc4Ho/T0NBQfHw4HCYWi1FeXk4oFCquD4VCxGKxuTwfERGZhVkdGSSTSY4e\nPcprr71GJBLh0qVLnDt37oZtfD6fLQEnGl+JOzs7Pbn8yCOPeCrPVMuFdV7JU2o5lU5TU1NTXB5/\nlOCFfKbtz4lZ3c4z1fIjjzziqTxTLY/nhTw3+/c5nVkNkM+fP88vfvELnnvuOQB+/vOfE41GOXv2\nLC+99BLBYJB4PM7LL7/MoUOHeOuttwDYvHkzAN///vd58skn+cpXvsLLL7/Ma6+9Vgx/8eJFvvWt\nb5X8vhog33k0QBaZO8cGyKtWreLy5csMDg4yOjpKd3c3q1evZu3atZw6dQqAd955hwceeACAtWvX\n8u6775LJZIhGo1y9epX6+nqCwSCLFi2ip6eHfD7P6dOnefDBB2cTyVNupRq7yZScmhnYx4SMoJxu\nmNXMoLKyki1btnDw4EEsy2L16tXce++91NfX09bWxp49e4qnlgLU1dWxYcMG9u7di9/vp6WlpdhG\namlpob29nXQ6TWNjI2vWrLHv2cltoaZyodsRRG57ujeReNp7nw6wbEmA2iULi8ugNpHIrdC9iURE\nZEZUDBxgSh/RlJzplDc/v2AiE/anCRlBOd2gYiAiIpoZiLdpZiAyd5oZiIjIjKgYOMCUPqIpOTUz\nsI8JGUE53aBiIJ7Vl0yTNaeLKWI0zQzEs977dAArm+PPgxWaGYjMgWYGIiIyIyoGDjClj2hKTs0M\n7GNCRlBON6gYiIiIZgbiXZoZiNhDMwMREZkRFQMHmNJHNCWnZgb2MSEjKKcbVAxEREQzA/EuzQxE\n7KGZgYiIzIiKgQNM6SOaklMzA/uYkBGU0w0qBiIiopmBeE9fcuxIIDpoaWYgYoOZzAzK5ymLyIxF\nBy23I4jccdQmcoApfURTcmpmYB8TMoJyukHFQERENDMQ7ynMBQDNDERsoOsMRERkRlQMHGBKH9GU\nnJoZ2MeEjKCcblAxEBERzQzEezQzELGXZgYiIjIjKgYOMKWPaEpOzQzsY0JGUE43qBiIiIhmBuI9\nmhmI2Ev3JpLbRuHmdSLiDLWJHGBKH9GUnMkRi95EyvM3sDNhf5qQEZTTDSoG4nnxkSxW1phupoiR\nZj0zSKVSHD16lE8++YTR0VFaWlqoq6ujra2NaDTKsmXL2LVrFxUVFQAcO3aMEydO4Pf72b59O6tW\nrQKgt7eX9vZ2LMuisbGRrVu3Tvk9NTO4M0ycG
Yz9P09VwF9cr5mByMw5ep3B0aNH+au/+iv+7d/+\njdbWVv7sz/6Mjo4OVq5cSWtrKw0NDXR0dABjL/gnT57kwIED7N69m0gkQqEGRSIRduzYQWtrK1eu\nXOHChQuzjSS3ucXjioGI2GtWxWB4eJjf/e53PProowD4/X4qKys5e/Ys69evB6C5uZmuri4Aurq6\naGpqory8nKVLl1JbW0tPTw/xeJxUKkV9fT0A69at48yZM3Y8L1eZ0kc0JWcuN/bGoWqht4uBCfvT\nhIygnG6Y1dlE0WiU6upqIpEIH330EQ0NDWzfvp1EIkEwGASgpqaGRCIBQDwep6Ghofj4cDhMLBaj\nvLycUChUXB8KhYjFYnN5PiIiMguzOjLIZrNcvnyZhx56iP3795PJZPjVr351wzY+n8+WgCZ65JFH\n3I4wI6bkLCsz49+SCfvThIygnG6YVTEIh8NUVVWxdu1aAoEATU1NXLhwgWAwyPXr14Gxo4Gamhpg\n7B1/f39/8fH9/f2Ew+FJRwL9/f03HCmUMv6wrLOzU8u38XIikSi2iGDsthSFo82pHv/JJ594Jr+W\nteyl5ZuZ9dlE3/3ud9m2bRsrVqzgRz/6EXfddRd9fX1UVVWxefNm3nrrLYaGhnjqqafo7e3l8OHD\n7N+/n1gsxiuvvMLrr7+Oz+fjxRdfZPv27dTX13PgwAE2bdrEmjVrSn5PU84m6uzsNOIdg1dzTjyb\nKJfLk8nDivAiPhuwWBzwU//lShcTlubV/TmeCRlBOe3m6BXIO3fuJBKJkEwmWb58OU899RT5fJ62\ntjb27NlTPLUUoK6ujg0bNrB37178fj8tLS3FNlJLSwvt7e2k02kaGxunLAQiBV4fJIuYSPcmEs95\n79MBFgf8DFnZG64zKBwZLFsSKN6rSERuTp9nIMbSu3+R+aVi4IBbGdq4yZSc44fIXmbC/jQhIyin\nG1QMRERExcAJJpxdAObk1HUG9jEhIyinG1QMRERExcAJpvQRTcmpmYF9TMgIyukGFQMREdF1BuI9\n7306wLIlAT4bsG64zqB2SYDY8OgNn4ksIjen6wzkthIbHtUnnok4RMXAAab0Eb2cc/zVx5oZ2MeE\njKCcblAxEE/SUYDI/NLMQDznvU8HsLI5rGyegH/sGoPCnwv3KNLMQGTmNDOQO05fMk1fMu12DBHj\nqBg4wJQ+oik5b2VmEB20iA5aDqaZmgn704SMoJxuUDEQT+lLpouDYxGZPyoGDjDlfiVezBkdtCYN\njnVvIvuYkBGU0w0qBiIiomLgBFP6iKbknDgzGLKyDKQzLqWZmgn704SMoJxuUDEQ48SGRxmysm7H\nELmtqBg4wJQ+oik5NTOwjwkZQTndoGIgIiIqBk4wpY9oSk7dm8g+JmQE5XSDioGIiKgYOMGUPqIp\nOTUzsI8JGUE53aBiICIiKgZOMKWPaEpOzQzsY0JGUE43qBiIiIiKgRNM6SOaklMzA/uYkBGU0w0q\nBiIiomLgBFP6iKbk1MzAPiZkBOV0g4qBiIioGDjBlD6iKTk1M7CPCRlBOd2gYiAiIioGTjClj2hK\nTs0M7GNCRlBON6gYiIiIioETTOkjmpJTMwP7mJARlNMN5XN5cC6XY9++fYRCIfbt28fIyAhtbW1E\no1GWLVvGrl27qKioAODYsWOcOHECv9/P9u3bWbVqFQC9vb20t7djWRaNjY1s3bp17s9KRERuyZyO\nDI4dO0ZdXR0+39g7t46ODlauXElraysNDQ10dHQAYy/4J0+e5MCBA+zevZtIJEI+P9YHjkQi7Nix\ng9bWVq5cucKFCxfm+JTcZ0of0ZScmhnYx4SMoJxumHUx6O/vp7u7m0cffbT4wn727FnWr18PQHNz\nM11dXQB0dXXR1NREeXk5S5cupba2lp6eHuLxOKlUivr6egDWrVvHmTNn5vqc5A7Vl0yT/fzfYl8y\nTV8y7XIiEXPMuhi88cYbfPOb36Ss7Iu/IpFIEAwGAaipqSGRSAAQj8cJh8PF7cLhMLFYjHg8TigU\nKq4PhULEYrHZRvIMU/qIpuSc6cwgOmiR/fwoIjpoER20nIw1iQn704SMoJxumFUxOHfuHNXV1Xz1\nq18tHhVMVGgdiYiI982qGHzwwQecO3eOnTt3cvjwYX7zm9/Q1tZGTU0N169fB8aOBmpqaoCxd/z9\n/f3Fx/f39xMOhycdCfT3999wpFDK+B5dZ2enJ5cL67ySZ6rlI0eOeCpPZ2dn8WgSIJPJkMvlizOD\nXC5PJpMpfr370u/pvvT74nIikbhhvpBIJO74/Tlx+ciRI57KM9XyxN8lt/NMtWzK/pwJX36qt/Yz\ndPHiRX7605+yb98+fvKTn1BVVcXmzZt56623GBoa4qmnnqK3t5fDhw+zf/9+YrEYr7zyCq+//jo+\nn48XX3yR7du3U19fz4EDB9i0aRNr1qwp+b2OHz/O/fffP5e486Kzs9OIw0cv5nzv0wEGrSwBvw8r\nmyfg940VgTw3rPvzYAWfDYy1gVb/6ZLiY61sji8tWsCQlb3ha/PBi/tzIhMygnLa7fz582zcuHHa\nbeZ0amlBoSW0ZcsW2tra2LNnT/HUUoC6ujo2bNjA3r178fv9tLS0FB/T0tJCe3s76XSaxsbGKQuB\nSUz4xwHm5Cwr80F25u9Zqhb6i8VgPpmwP03ICMrphjkfGcwnU44MZPZKHRkAxT/P5Mig1NdE7mQz\nOTLQFcgOuJU+nZtMyanrDOxjQkZQTjeoGIiIiIqBE0zpI5qSU/cmso8JGUE53aBiICIiKgZOMKWP\naErOUjODISuLlc25kGZqJuxPEzKCcrpBxUCMFBsexbqF001FZHoqBg4wpY9oSs5bnRm4ddRgwv40\nISMopxtUDOS2o6MGkVunYuAAU/qIpuTUdQb2MSEjKKcbVAxERETFwAmm9BFNyXmzmUFfMs1AOjPt\nNvPBhP1pQkZQTjfYcqM6ETdFBy18voDbMUSMpiMDB5jSRzQlp2YG9jEhIyinG1QMxGj+Mjx38ZmI\nidQmcoApfURTck73eQaJ1Px/bsFUTNifJmQE5XSDjgxERETFwAmm9BFNyamZgX1MyAjK6QYVAxER\nUTFwgil9RFNy6vMM7GNCRlBON6gYiIiIioETTOkjmpJTMwP7mJARlNMNKgYiIqJi4ART+oim5Jzr\nzKAvmaYvmbYpzdRM2J8mZATldIOKgdz2ooMW0UHL7RginqZi4ABT+oim5NTMwD4mZATldIOKgXhG\nXzJt632GFgf8tv1dIrc7Xz6fN+NtF3D8+HHuv/9+t2OIQ977dIBBa+xeQwG/DyubJ+AfmxcU/jxx\nXcGK8CI+G7CKxcTK5lkRXkQ+D72JFIsWlBGuDPAn1Qvn+VmJuO/8+fNs3Lhx2m10ZCC3teighZXN\nk0hlNTcQmYaKgQNM6SOaknMmM4MhK+v6raxN2J8mZATldIOKgXiK3ze700hjw6M3tI1E5NaoGDjA\nlHOP
vZjTX+Jf5HTXGcy2eDjBi/tzIhMygnK6QcVAjFaqeEy37XxcfCZiIhUDB5jSRzQlp13XGTg9\nRDZhf5qQEZTTDSoGIiKiYuAEU/qIpuTU5xnYx4SMoJxuUDGQ256XhswiXlU+mwddu3aNSCRCIpGg\nurqa5uZmmpubGRkZoa2tjWg0yrJly9i1axcVFRUAHDt2jBMnTuD3+9m+fTurVq0CoLe3l/b2dizL\norGxka1bt9r37FzS2dlpxDsGU3LOdWbgL4Ns9ovlwhDZ7quRTdifJmQE5XTDrI4MysvLefrpp/n3\nf/93nn/+ef7rv/6L3t5eOjo6WLlyJa2trTQ0NNDR0QGMveCfPHmSAwcOsHv3biKRCIW7YEQiEXbs\n2EFraytXrlzhwoUL9j07MYbd9yWaju5iKjLZrIpBMBjkrrvuAqC6upoVK1YQi8U4e/Ys69evB6C5\nuZmuri4Aurq6aGpqory8nKVLl1JbW0tPTw/xeJxUKkV9fT0A69at48yZMzY8LXeZ8k7BSzkLt40o\nZbYzg/FXJc9Hq8hL+3MqJmQE5XTDnGcGV69epbe3l7vvvptEIkEwGASgpqaGRCIBQDweJxwOFx8T\nDoeJxWLE43FCoVBxfSgUIhaLzTWSCHDjVcm3cj2CyJ1oTr8iqVSKQ4cO8fTTTxdnAwU+h96JjT+v\nt7Oz05PLhXVeyTPV8pEjRzyVJ5PJ3DAfKCwX1uVyeTKZzE23Lyi1feENihP5vbY/Sy0fOXLEU3mm\nWp74u+R2nqmWTdmfMzHrW1hnMhn+9V//lTVr1vD4448D8Nxzz/HSSy8RDAaJx+O8/PLLHDp0iLfe\neguAzZs3A/D973+fJ598kq985Su8/PLLvPbaa8XwFy9e5Fvf+lbJ72nKLaw7O80YKnkpZ+H21eNv\nU134fy6XJ5O/+W2tZ3LL65oKPyOjOb60aAH1X6609Tl4aX9OxYSMoJx2c+wW1vl8nh/+8IfU1dUV\nCwHA2rVrOXXqFADvvPMODzzwQHH9u+++SyaTIRqNcvXqVerr6wkGgyxatIienh7y+TynT5/mwQcf\nnE0kTzHhHweYk9PO6wwSqSxWNk/VQvs/+MaE/WlCRlBON8zq1NIPPviA06dPs3z5cl544QUAvvGN\nb7Blyxba2trYs2dP8dRSgLq6OjZs2MDevXvx+/20tLQU20gtLS20t7eTTqdpbGxkzZo1Nj01kakN\nWVn6kml92I3I5/RJZw4w5dDRSznnq01UEPD7CPjLWP2nS2x7Dl7an1MxISMop930SWci0/CXwUA6\nc/MNRe4AKgYOMOGdApiT06l7EyVSWYas7M03nCET9qcJGUE53aBiIK6bz6uPdZ8ikdJUDBxwK+f2\nuskrOae7+hjs+zwDcPbiM6/sz+mYkBGU0w0qBiIiomLgBFP6iKbk1OcZ2MeEjKCcblAxEBERFQMn\nmNJHNCWnnTMDJ5mwP03ICMrpBhUDcdV8nkkkIlNTMXCAKX1EL+S82ZlEoJmBnUzICMrpBhUDERFR\nMXCCKX1EU3JqZmAfEzKCcrpBxUA8QVcGi7hLxcABpvQRvZRzuiuDnZwZDFlZ+ocsW/4uL+3PqZiQ\nEZTTDSoGckeLDY8yakgbSsRJKgYOMKWPaEpOzQzsY0JGUE43qBiIiIiKgRNM6SOaklPXGdjHhIyg\nnG5QMRDX6UwiEfepGDjAlD6iV3Le7DMG5mNm0JdM05dMz+nv8Mr+nI4JGUE53VDudgARtw1ZWWLD\nowT8ZfxJ9UK344i4wpfP5804VQM4fvw4999/v9sxxCZ9yTS9iVRx2crmCfh9k/5f6mtObF9T4Sdc\nGZi2IPQl01Qt9LNkod5HiTnOnz/Pxo0bp91GbSJxzUxuUjefBtM5ooPTX4AWHbQYsrLzlEhk/qgY\nOMCUPqJbOfuSaS5fGyY7w4PS+brOwF829t9Us4O+ZHrazCb83E3ICMrpBh3ryryLDloMWtlii8ZL\nEqksFeXZYkEY3zKKDlpkDbkATuRW6cjAAaace+xUTjvOzBlvvq8z8PmgN5G6actoIhN+7iZkBOV0\ng4qB2C46aE35QnqzVosXxIZHyebGWkYD6YzbcUTmhYqBA0zpI9qdszALsLI5Fgf8N6z7fXyEy9eG\n+b9k+pZbLW7cm8hfNtYyKgyLZ/LxnCb83E3ICMrpBs0MxDaFWQCMtVouXxsmnsqQzuQmncppiiEr\ny+VrwyTSmWL+IWtspqBrEuR2omLgAFP6iE7mjA2P2vbiX1bmA5dOQS31PGLDowz6s5OKgQk/dxMy\ngnK6QW0isYUJswARmZrRxcDus1bsYkofcaY5x+/nifu8L5mmf8hy9LRLfZ6BfUzICMrpBqPbRIUz\nVkr1bkudJy43N3G/FW4ZEfj8bnK9iRSLFpQxbGWpDPiJDlosWxJwLa+I2MPoYjCd8YVivguDKX3E\nUjknnhJauGVEwP/Fn61slgRZqj4/Y2giu29J7ebMYCr+Mugfsggv/qIQmvBzNyEjKKcbbttiMN50\nRxAw+fYDOpqYXBRuxc1uSX07SKSyLFowdoaRjkzlduCJX9uLFy+yd+9e9uzZw//8z//M+/cvXCQ1\n3cVSE5WaVxTOqe++9HsnYtqiL5kuXkg1Vb9z8YR3/OPf6Rf+PH7dkJW96Tn4c+HVmUFseJT+YYv+\nIYu+ZLr4c+9Lpvm/ZJpEKuO5mZYpPW7lnH+uHxnkcjmOHDnC9773PUKhEP/0T//EfffdR11dndvR\nbjDxl7pUG6pwnn0266cvmS721ad75wjcsF1fMl08J3/YyrJ0SYDBdJaqhX4G09kbvlZQ5oOKBWMv\n4KnRLLl86XXlfh/RQYtQ5QKiAxZfXnEvv4+PkMnmKff7yGTzjOby+HxgZXMsWVjOQDqDvwwWlpcV\n/5zNfnFTNyubt/U0UtMUjhASqQxUVPFx/zDXhkeBsauXM7lccf9P9W9BxAtcLwYffvghtbW1LF26\nFICmpibOnj3ruWIw3RHDxK/V1NQUC0NViReAiUPZ8dsVBrKFxy9eODak9fkCk75WEPD7CPjHlq1s\nrvjCvGhBlpHR3Ofrv7iH/xcv3mUMFGcCX1wUVvj6wLgLrcb/uSCRmp+bzXlxZjBe7PMXfyvnJzuh\nMCZSYz8DK5snXLmAYStLmQ+qK8oJLw5M2U4qNcif7s3FTJnS41bO+ed6MYjFYoTD4eJyKBTiww8/\ndDGR88YPZZ2USI0ViDvxHbsXjS+oOeD6SIZ4KkNFuY/U6NjPavwR3JKF5cXi0T8ydiX3+IJSsUBH\nGmIf14vB7SiRSFBTU+N2jJvK5fLzfkfQ2fDqzGCiTCZDwL9gRtuOb60lUrmSR3CljsbGr1uyMF8s\nDIXWYOH/UHrd4PAwVZWVxRZiQWo0WywuhVbl+DOlZsquwXlnZ+ekd91eHMqXymkq1z/28tKlS/z3\nf/833/3udwF488038fl8bN68edK2586d4/r16/MdUUTEaMFgk
MbGxmm3cf3IYMWKFVy9epVoNEoo\nFOKXv/wl//iP/1hy25s9GRERmR3Xjwxg7NTSH//4x2SzWTZu3Mhjjz3mdiQRkTuKJ4qBiIi4yxMX\nnYmIiLtUDERExP0B8kxdvHiRN954ozhX2LRpk9uRJmlvb6e7u5vq6mpeffVVt+NM6dq1a0QiERKJ\nBNXV1TQ3N9Pc3Ox2rBtYlsVLL73E6OgogUCAhx9+mCeeeMLtWFPK5XLs27ePUCjEvn373I5T0s6d\nO1m0aBFlZWX4/X7279/vdqSSUqkUR48e5ZNPPmF0dJRnn32Wu+++2+1YN/j00085dOhQcfmzzz7j\n7/7u7zw573z77bc5deoUo6Oj3HPPPWzbtq30hnkDZLPZ/Le//e38Z599lh8dHc3v2bMn/4c//MHt\nWJNcvHgx/9FHH+Wff/55t6NMKx6P5z/++ON8Pp/PJxKJ/DPPPOPJ/ZlKpfL5fD5vWVb++eefz/f1\n9bmcaGo/+9nP8ocPH84fOHDA7ShTamlpyQ8MDLgd46ba2tryx48fz+fz+Xwmk8kPDQ25nGh62Ww2\n//d///f5P/7xj25HmWRgYCDf0tKSHxkZyWez2fwPfvCDfHd3d8ltjWgTjb9lRXl5efGWFV5zzz33\nsHjxYrenaMLJAAADrUlEQVRj3FQwGOSuu+4CoLq6mhUrVhCPx90NVcLChWMXF6VSKbLZLOXl3jyQ\n7e/vp7u7m0cffZS8x8/H8Hq+4eFhfve73/Hoo48C4Pf7qaysdDnV9N5//32WLVvGl7/8ZbejTBII\njF04ODw8jGVZpNNpqqqqSm7rzd+uCe7EW1bMl6tXr9Lb20tDQ4PbUSbJ5XLs3buXP/zhD2zbts2T\nv2wAb7zxBt/85jcZGRlxO8q0fD4f//Iv/4LP5+PrX/86X/va19yONEk0GqW6uppIJMJHH31EQ0MD\nO3bsKL6oedG7777r2auQA4EAzzzzDDt37mTBggVs2rSJ+vr6ktsacWQgzkilUhw6dIinn36aiooK\nt+NMUlZWxsGDB3n99df53//9Xz7++GO3I01y7tw5qqur+epXv+r5d92vvPIKBw8e5Dvf+Q5vvvkm\nv/3tb92ONEk2m+Xy5cs89NBD7N+/n0wmw69+9Su3Y00pk8lw7tw5Hn74YbejlJRMJjl69CivvfYa\nkUiES5cucf78+ZLbGlEMQqEQ/f39xeX+/n5CoZCLicyXyWR49dVX+du//VseeOABt+NMa+nSpfzN\n3/wNFy9edDvKJB988AHnzp1j586dHD58mN/85jf8x3/8h9uxSvrSl74EQF1dHQ8++KAnj67D4TBV\nVVWsXbuWQCBAU1MT3d3dbseaUnd3N3/5l39JdXW121FK+vDDD2loaKC2tpYlS5bw8MMPT/l7ZEQx\nGH/Likwmwy9/+UvWrl3rdixj5fN5fvjDH1JXV8fjjz/udpySkskkQ0NDAAwMDHDhwgWWL1/ucqrJ\nvvGNb3DkyBEikQjPPfcc9957L9/+9rfdjjVJOp0utrGSySTd3d2e3J/BYJDa2lp6enrI5XKcP3+e\nv/7rv3Y71pTeffddmpqa3I4xpVWrVnH58mUGBwcZHR2lu7ub1atXl9zWiJmB3+/n2WefpbW1tXhq\nqdc+7wDg0KFD/Pa3v2VgYIBnn32WJ598kg0bNrgda5IPPviA06dPs3z5cl544QVg7EVtzZo1Lif7\nwvXr14lEIuRyOYLBIE888QT33Xef27Fuymfz5z/bJZFIcPDgQQCWLFnC448/PuWLgtt27txJJBIh\nmUyyfPlynnrqKbcjlZRKpXj//ff5h3/4B7ejTKmyspItW7Zw8OBBLMti9erV3HvvvSW31e0oRETE\njDaRiIg4S8VARERUDERERMVARERQMRAREVQMREQEFQMREUHFQEREgP8HXM8GaS2rZLkAAAAASUVO\nRK5CYII=\n",
"text": [
"<matplotlib.figure.Figure at 0x10da5df10>"
]
}
],
"prompt_number": 13
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that although there are date/time fields in the dataset, they are not in any specialized format, such as `datetime`."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"segments.st_time.dtype"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 15,
"text": [
"dtype('O')"
]
}
],
"prompt_number": 15
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Our first order of business will be to convert these data to `datetime`. The `strptime` method parses a string representation of a date and/or time field, according to the expected format of this information."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"datetime.strptime(segments.st_time.ix[0], '%m/%d/%y %H:%M')"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 16,
"text": [
"datetime.datetime(2009, 2, 10, 16, 3)"
]
}
],
"prompt_number": 16
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `dateutil` package includes a parser that attempts to detect the format of the date strings, and convert them automatically."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"from dateutil.parser import parse"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 17
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"parse(segments.st_time.ix[0])"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 18,
"text": [
"datetime.datetime(2009, 2, 10, 16, 3)"
]
}
],
"prompt_number": 18
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can convert all the dates in a particular column by using the `apply` method."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"segments.st_time.apply(lambda d: datetime.strptime(d, '%m/%d/%y %H:%M'))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 19,
"text": [
"0 2009-02-10 16:03:00\n",
"1 2009-04-06 14:31:00\n",
"2 2009-04-06 14:36:00\n",
"3 2009-04-10 17:58:00\n",
"4 2009-04-10 17:59:00\n",
"5 2010-03-20 16:06:00\n",
"6 2010-03-20 18:05:00\n",
"7 2011-05-04 11:28:00\n",
"8 2010-06-05 11:23:00\n",
"9 2010-06-08 11:03:00\n",
"...\n",
"262515 2010-05-31 14:27:00\n",
"262516 2010-06-05 05:25:00\n",
"262517 2010-06-27 02:35:00\n",
"262518 2010-07-01 03:49:00\n",
"262519 2010-07-02 03:30:00\n",
"262520 2010-06-13 10:32:00\n",
"262521 2010-06-15 12:49:00\n",
"262522 2010-06-15 21:32:00\n",
"262523 2010-06-17 19:16:00\n",
"262524 2010-06-18 02:52:00\n",
"262525 2010-06-18 10:19:00\n",
"Name: st_time, Length: 262526, dtype: datetime64[ns]"
]
}
],
"prompt_number": 19
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a convenience, Pandas has a `to_datetime` method that will parse and convert an entire Series of formatted strings into `datetime` objects."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.to_datetime(segments.st_time)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 20,
"text": [
"0 2009-02-10 16:03:00\n",
"1 2009-04-06 14:31:00\n",
"2 2009-04-06 14:36:00\n",
"3 2009-04-10 17:58:00\n",
"4 2009-04-10 17:59:00\n",
"5 2010-03-20 16:06:00\n",
"6 2010-03-20 18:05:00\n",
"7 2011-05-04 11:28:00\n",
"8 2010-06-05 11:23:00\n",
"9 2010-06-08 11:03:00\n",
"...\n",
"262515 2010-05-31 14:27:00\n",
"262516 2010-06-05 05:25:00\n",
"262517 2010-06-27 02:35:00\n",
"262518 2010-07-01 03:49:00\n",
"262519 2010-07-02 03:30:00\n",
"262520 2010-06-13 10:32:00\n",
"262521 2010-06-15 12:49:00\n",
"262522 2010-06-15 21:32:00\n",
"262523 2010-06-17 19:16:00\n",
"262524 2010-06-18 02:52:00\n",
"262525 2010-06-18 10:19:00\n",
"Name: st_time, Length: 262526, dtype: datetime64[ns]"
]
}
],
"prompt_number": 20
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Pandas also has a custom NA value for missing datetime objects, `NaT`."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.to_datetime([None])"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 21,
"text": [
"<class 'pandas.tseries.index.DatetimeIndex'>\n",
"[NaT]\n",
"Length: 1, Freq: None, Timezone: None"
]
}
],
"prompt_number": 21
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Also, if `to_datetime()` has problems parsing any particular date/time format, you can pass the spec in using the `format=` argument."
]
},
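{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example (a sketch only, not executed here), the AIS timestamps could be parsed by passing the same format string used with `strptime` above:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Sketch: pass the expected format explicitly to to_datetime\n",
"pd.to_datetime(segments.st_time, format='%m/%d/%y %H:%M')"
],
"language": "python",
"metadata": {},
"outputs": []
},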
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Merging and joining DataFrame objects"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have the vessel transit information as we need it, we may want a little more information regarding the vessels themselves. In the `data/AIS` folder there is a second table that contains information about each of the ships that traveled the segments in the `segments` table."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"vessels = pd.read_csv(\"data/AIS/vessel_information.csv\", index_col='mmsi')\n",
"vessels.head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 22,
"text": [
" num_names names sov \\\n",
"mmsi \n",
"1 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y \n",
"9 3 000000009/Raven/Shearwater N \n",
"21 1 Us Gov Vessel Y \n",
"74 2 Mcfaul/Sarah Bell N \n",
"103 3 Ron G/Us Navy Warship 103/Us Warship 103 Y \n",
"\n",
" flag flag_type num_loas loa \\\n",
"mmsi \n",
"1 Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 \n",
"9 Unknown Unknown 2 50.0/62.0 \n",
"21 Unknown Unknown 1 208.0 \n",
"74 Unknown Unknown 1 155.0 \n",
"103 Unknown Unknown 2 26.0/155.0 \n",
"\n",
" max_loa num_types type \n",
"mmsi \n",
"1 156 4 Dredging/MilOps/Reserved/Towing \n",
"9 62 2 Pleasure/Tug \n",
"21 208 1 Unknown \n",
"74 155 1 Unknown \n",
"103 155 2 Tanker/Unknown "
]
}
],
"prompt_number": 22
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"[v for v in vessels.type.unique() if v.find('/')==-1]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 23,
"text": [
"['Unknown',\n",
" 'Other',\n",
" 'Tug',\n",
" 'Towing',\n",
" 'Pleasure',\n",
" 'Cargo',\n",
" 'WIG',\n",
" 'Fishing',\n",
" 'BigTow',\n",
" 'MilOps',\n",
" 'Tanker',\n",
" 'Passenger',\n",
" 'SAR',\n",
" 'Sailing',\n",
" 'Reserved',\n",
" 'Law',\n",
" 'Dredging',\n",
" 'AntiPol',\n",
" 'Pilot',\n",
" 'HSC',\n",
" 'Diving',\n",
" 'Resol-18',\n",
" 'Tender',\n",
" 'Spare',\n",
" 'Medical']"
]
}
],
"prompt_number": 23
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"vessels.type.value_counts()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 28,
"text": [
"Cargo 5622\n",
"Tanker 2440\n",
"Pleasure 601\n",
"Tug 221\n",
"Sailing 205\n",
"Fishing 200\n",
"Other 178\n",
"Passenger 150\n",
"Towing 117\n",
"Unknown 106\n",
"...\n",
"BigTow/Tanker/Towing/Tug 1\n",
"Fishing/SAR/Unknown 1\n",
"BigTow/Reserved/Towing/Tug/WIG 1\n",
"Reserved/Tanker/Towing/Tug 1\n",
"Cargo/Reserved/Unknown 1\n",
"Reserved/Towing/Tug 1\n",
"BigTow/Unknown 1\n",
"Fishing/Law 1\n",
"BigTow/Towing/WIG 1\n",
"Towing/Unknown/WIG 1\n",
"AntiPol/Fishing/Pleasure 1\n",
"Length: 206, dtype: int64"
]
}
],
"prompt_number": 28
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The challenge, however, is that several ships have travelled multiple segments, so there is not a one-to-one relationship between the rows of the two tables. The table of vessel information has a *one-to-many* relationship with the segments.\n",
"\n",
"In Pandas, we can combine tables according to the value of one or more *keys* that are used to identify rows, much like an index. Using a trivial example:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"df1 = pd.DataFrame(dict(id=range(4), age=np.random.randint(18, 31, size=4)))\n",
"df2 = pd.DataFrame(dict(id=range(3)+range(3), score=np.random.random(size=6)))\n",
"\n",
"df1, df2"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 30,
"text": [
"( age id\n",
"0 27 0\n",
"1 21 1\n",
"2 30 2\n",
"3 29 3,\n",
" id score\n",
"0 0 0.471604\n",
"1 1 0.910118\n",
"2 2 0.163896\n",
"3 0 0.811360\n",
"4 1 0.799790\n",
"5 2 0.951837)"
]
}
],
"prompt_number": 30
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.merge(df1, df2)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 31,
"text": [
" age id score\n",
"0 27 0 0.471604\n",
"1 27 0 0.811360\n",
"2 21 1 0.910118\n",
"3 21 1 0.799790\n",
"4 30 2 0.163896\n",
"5 30 2 0.951837"
]
}
],
"prompt_number": 31
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that without any information about which column to use as a key, Pandas did the right thing and used the `id` column in both tables. Unless specified otherwise, `merge` will used any common column names as keys for merging the tables. \n",
"\n",
"Notice also that `id=3` from `df1` was omitted from the merged table. This is because, by default, `merge` performs an **inner join** on the tables, meaning that the merged table represents an intersection of the two tables."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.merge(df1, df2, how='outer')"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 32,
"text": [
" age id score\n",
"0 27 0 0.471604\n",
"1 27 0 0.811360\n",
"2 21 1 0.910118\n",
"3 21 1 0.799790\n",
"4 30 2 0.163896\n",
"5 30 2 0.951837\n",
"6 29 3 NaN"
]
}
],
"prompt_number": 32
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The **outer join** above yields the union of the two tables, so all rows are represented, with missing values inserted as appropriate. One can also perform **right** and **left** joins to include all rows of the right or left table (*i.e.* first or second argument to `merge`), but not necessarily the other."
]
},
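{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sketch (not executed here), the join type is selected with the `how` argument; a left join keeps every row of the first table:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Sketch: keep all rows of df1 (left), regardless of matches in df2\n",
"pd.merge(df1, df2, how='left')"
],
"language": "python",
"metadata": {},
"outputs": []
},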
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looking at the two datasets that we wish to merge:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"segments.head(1)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 33,
"text": [
" mmsi name transit segment seg_length avg_sog min_sog max_sog \\\n",
"0 1 Us Govt Ves 1 1 5.1 13.2 9.2 14.5 \n",
"\n",
" pdgt10 st_time end_time \n",
"0 96.5 2/10/09 16:03 2/10/09 16:27 "
]
}
],
"prompt_number": 33
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"vessels.head(1)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 34,
"text": [
" num_names names sov \\\n",
"mmsi \n",
"1 8 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... Y \n",
"\n",
" flag flag_type num_loas loa \\\n",
"mmsi \n",
"1 Unknown Unknown 7 42.0/48.0/57.0/90.0/138.0/154.0/156.0 \n",
"\n",
" max_loa num_types type \n",
"mmsi \n",
"1 156 4 Dredging/MilOps/Reserved/Towing "
]
}
],
"prompt_number": 34
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"we see that there is a `mmsi` value (a vessel identifier) in each table, but it is used as an index for the `vessels` table. In this case, we have to specify to join on the index for this table, and on the `mmsi` column for the other."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"segments_merged = pd.merge(vessels, segments, left_index=True, right_on='mmsi')"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 35
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"segments_merged.head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 36,
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"Int64Index: 5 entries, 0 to 4\n",
"Data columns (total 21 columns):\n",
"num_names 5 non-null values\n",
"names 5 non-null values\n",
"sov 5 non-null values\n",
"flag 5 non-null values\n",
"flag_type 5 non-null values\n",
"num_loas 5 non-null values\n",
"loa 5 non-null values\n",
"max_loa 5 non-null values\n",
"num_types 5 non-null values\n",
"type 5 non-null values\n",
"mmsi 5 non-null values\n",
"name 5 non-null values\n",
"transit 5 non-null values\n",
"segment 5 non-null values\n",
"seg_length 5 non-null values\n",
"avg_sog 5 non-null values\n",
"min_sog 5 non-null values\n",
"max_sog 5 non-null values\n",
"pdgt10 5 non-null values\n",
"st_time 5 non-null values\n",
"end_time 5 non-null values\n",
"dtypes: float64(6), int64(6), object(9)"
]
}
],
"prompt_number": 36
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this case, the default inner join is suitable; we are not interested in observations from either table that do not have corresponding entries in the other. \n",
"\n",
"Notice that `mmsi` field that was an index on the `vessels` table is no longer an index on the merged table."
]
},
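{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we want `mmsi` back as the index, one option (a sketch, not executed here) is to set it explicitly on the merged table:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Sketch: restore mmsi as the index of the merged table\n",
"segments_merged.set_index('mmsi').head()"
],
"language": "python",
"metadata": {},
"outputs": []
},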
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, we used the `merge` function to perform the merge; we could also have used the `merge` method for either of the tables:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"vessels.merge(segments, left_index=True, right_on='mmsi').head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 37,
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"Int64Index: 5 entries, 0 to 4\n",
"Data columns (total 21 columns):\n",
"num_names 5 non-null values\n",
"names 5 non-null values\n",
"sov 5 non-null values\n",
"flag 5 non-null values\n",
"flag_type 5 non-null values\n",
"num_loas 5 non-null values\n",
"loa 5 non-null values\n",
"max_loa 5 non-null values\n",
"num_types 5 non-null values\n",
"type 5 non-null values\n",
"mmsi 5 non-null values\n",
"name 5 non-null values\n",
"transit 5 non-null values\n",
"segment 5 non-null values\n",
"seg_length 5 non-null values\n",
"avg_sog 5 non-null values\n",
"min_sog 5 non-null values\n",
"max_sog 5 non-null values\n",
"pdgt10 5 non-null values\n",
"st_time 5 non-null values\n",
"end_time 5 non-null values\n",
"dtypes: float64(6), int64(6), object(9)"
]
}
],
"prompt_number": 37
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Occasionally, there will be fields with the same in both tables that we do not wish to use to join the tables; they may contain different information, despite having the same name. In this case, Pandas will by default append suffixes `_x` and `_y` to the columns to uniquely identify them."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"segments['type'] = 'foo'\n",
"pd.merge(vessels, segments, left_index=True, right_on='mmsi').head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 38,
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"Int64Index: 5 entries, 0 to 4\n",
"Data columns (total 22 columns):\n",
"num_names 5 non-null values\n",
"names 5 non-null values\n",
"sov 5 non-null values\n",
"flag 5 non-null values\n",
"flag_type 5 non-null values\n",
"num_loas 5 non-null values\n",
"loa 5 non-null values\n",
"max_loa 5 non-null values\n",
"num_types 5 non-null values\n",
"type_x 5 non-null values\n",
"mmsi 5 non-null values\n",
"name 5 non-null values\n",
"transit 5 non-null values\n",
"segment 5 non-null values\n",
"seg_length 5 non-null values\n",
"avg_sog 5 non-null values\n",
"min_sog 5 non-null values\n",
"max_sog 5 non-null values\n",
"pdgt10 5 non-null values\n",
"st_time 5 non-null values\n",
"end_time 5 non-null values\n",
"type_y 5 non-null values\n",
"dtypes: float64(6), int64(6), object(10)"
]
}
],
"prompt_number": 38
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This behavior can be overridden by specifying a `suffixes` argument, containing a list of the suffixes to be used for the columns of the left and right columns, respectively."
]
},
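{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example (a sketch, not executed here), the overlapping `type` columns could be given more descriptive suffixes:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Sketch: custom suffixes for columns that appear in both tables\n",
"pd.merge(vessels, segments, left_index=True, right_on='mmsi',\n",
"         suffixes=('_vessel', '_seg')).head()"
],
"language": "python",
"metadata": {},
"outputs": []
},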
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Concatenation\n",
"\n",
"A common data manipulation is appending rows or columns to a dataset that already conform to the dimensions of the exsiting rows or colums, respectively. In NumPy, this is done either with `concatenate` or the convenience functions `c_` and `r_`:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"np.concatenate([np.random.random(5), np.random.random(5)])"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 39,
"text": [
"array([ 0.87689804, 0.87175082, 0.79683534, 0.48575766, 0.44686505,\n",
" 0.96962006, 0.6165222 , 0.02947637, 0.034778 , 0.16926134])"
]
}
],
"prompt_number": 39
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"np.r_[np.random.random(5), np.random.random(5)]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 40,
"text": [
"array([ 0.51606204, 0.16778644, 0.11720138, 0.47316661, 0.04843167,\n",
" 0.83395133, 0.15165154, 0.38555231, 0.91867151, 0.7072754 ])"
]
}
],
"prompt_number": 40
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"np.c_[np.random.random(5), np.random.random(5)]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 41,
"text": [
"array([[ 0.02698923, 0.34078316],\n",
" [ 0.42019534, 0.22171457],\n",
" [ 0.8503663 , 0.78958016],\n",
" [ 0.21265774, 0.27176873],\n",
" [ 0.65934945, 0.53319755]])"
]
}
],
"prompt_number": 41
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This operation is also called *binding* or *stacking*.\n",
"\n",
"With Pandas' indexed data structures, there are additional considerations as the overlap in index values between two data structures affects how they are concatenate.\n",
"\n",
"Lets import two microbiome datasets, each consisting of counts of microorganiams from a particular patient. We will use the first column of each dataset as the index."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mb1 = pd.read_excel('data/microbiome/MID1.xls', 'Sheet 1', index_col=0, header=None)\n",
"mb2 = pd.read_excel('data/microbiome/MID2.xls', 'Sheet 1', index_col=0, header=None)\n",
"mb1.shape, mb2.shape"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 42,
"text": [
"((272, 1), (288, 1))"
]
}
],
"prompt_number": 42
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mb1.head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 43,
"text": [
" 1\n",
"0 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7\n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus 2\n",
"Archaea \"Crenarchaeota\" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus 3\n",
"Archaea \"Crenarchaeota\" Thermoprotei Thermoproteales Thermofilaceae Thermofilum 3\n",
"Archaea \"Euryarchaeota\" \"Methanomicrobia\" Methanocellales Methanocellaceae Methanocella 7"
]
}
],
"prompt_number": 43
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's give the index and columns meaningful labels:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mb1.columns = mb2.columns = ['Count']"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 44
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mb1.index.name = mb2.index.name = 'Taxon'"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 45
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mb1.head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 46,
"text": [
" Count\n",
"Taxon \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7\n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus 2\n",
"Archaea \"Crenarchaeota\" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus 3\n",
"Archaea \"Crenarchaeota\" Thermoprotei Thermoproteales Thermofilaceae Thermofilum 3\n",
"Archaea \"Euryarchaeota\" \"Methanomicrobia\" Methanocellales Methanocellaceae Methanocella 7"
]
}
],
"prompt_number": 46
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The index of these data is the unique biological classification of each organism, beginning with *domain*, *phylum*, *class*, and for some organisms, going all the way down to the genus level.\n",
"\n",
"![classification](http://upload.wikimedia.org/wikipedia/commons/thumb/a/a5/Biological_classification_L_Pengo_vflip.svg/150px-Biological_classification_L_Pengo_vflip.svg.png)"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mb1.index[:3]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 47,
"text": [
"Index([u'Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera', u'Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus', u'Archaea \"Crenarchaeota\" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus'], dtype=object)"
]
}
],
"prompt_number": 47
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mb1.index.is_unique"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 48,
"text": [
"True"
]
}
],
"prompt_number": 48
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we concatenate along `axis=0` (the default), we will obtain another data frame with the the rows concatenated:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.concat([mb1, mb2], axis=0).shape"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 49,
"text": [
"(560, 1)"
]
}
],
"prompt_number": 49
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"However, the index is no longer unique, due to overlap between the two DataFrames."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.concat([mb1, mb2], axis=0).index.is_unique"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 50,
"text": [
"False"
]
}
],
"prompt_number": 50
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Concatenating along `axis=1` will concatenate column-wise, but respecting the indices of the two DataFrames."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.concat([mb1, mb2], axis=1).shape"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 51,
"text": [
"(438, 2)"
]
}
],
"prompt_number": 51
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.concat([mb1, mb2], axis=1).head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 52,
"text": [
" Count \\\n",
"Archaea \"Crenarchaeota\" Thermoprotei Acidilobales Acidilobaceae Acidilobus NaN \n",
"Archaea \"Crenarchaeota\" Thermoprotei Acidilobales Caldisphaeraceae Caldisphaera NaN \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Sulfophobococcus NaN \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Thermosphaera NaN \n",
"\n",
" Count \n",
"Archaea \"Crenarchaeota\" Thermoprotei Acidilobales Acidilobaceae Acidilobus 2 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Acidilobales Caldisphaeraceae Caldisphaera 14 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 23 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Sulfophobococcus 1 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Thermosphaera 2 "
]
}
],
"prompt_number": 52
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.concat([mb1, mb2], axis=1).values[:5]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 53,
"text": [
"array([[ nan, 2.],\n",
" [ nan, 14.],\n",
" [ 7., 23.],\n",
" [ nan, 1.],\n",
" [ nan, 2.]])"
]
}
],
"prompt_number": 53
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we are only interested in taxa that are included in both DataFrames, we can specify a `join=inner` argument."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.concat([mb1, mb2], axis=1, join='inner').head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 54,
"text": [
" Count \\\n",
"Taxon \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus 2 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus 3 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Thermoproteales Thermofilaceae Thermofilum 3 \n",
"Archaea \"Euryarchaeota\" \"Methanomicrobia\" Methanocellales Methanocellaceae Methanocella 7 \n",
"\n",
" Count \n",
"Taxon \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 23 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus 2 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus 10 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Thermoproteales Thermofilaceae Thermofilum 9 \n",
"Archaea \"Euryarchaeota\" \"Methanomicrobia\" Methanocellales Methanocellaceae Methanocella 9 "
]
}
],
"prompt_number": 54
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we wanted to use the second table to fill values absent from the first table, we could use `combine_first`."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mb1.combine_first(mb2).head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 55,
"text": [
" Count\n",
"Taxon \n",
"Archaea \"Crenarchaeota\" Thermoprotei Acidilobales Acidilobaceae Acidilobus 2\n",
"Archaea \"Crenarchaeota\" Thermoprotei Acidilobales Caldisphaeraceae Caldisphaera 14\n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7\n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Sulfophobococcus 1\n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Thermosphaera 2"
]
}
],
"prompt_number": 55
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also create a hierarchical index based on keys identifying the original tables."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.concat([mb1, mb2], keys=['patient1', 'patient2']).head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 58,
"text": [
" Count\n",
" Taxon \n",
"patient1 Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7\n",
" Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus 2\n",
" Archaea \"Crenarchaeota\" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus 3\n",
" Archaea \"Crenarchaeota\" Thermoprotei Thermoproteales Thermofilaceae Thermofilum 3\n",
" Archaea \"Euryarchaeota\" \"Methanomicrobia\" Methanocellales Methanocellaceae Methanocella 7"
]
}
],
"prompt_number": 58
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.concat([mb1, mb2], keys=['patient1', 'patient2']).index.is_unique"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 59,
"text": [
"True"
]
}
],
"prompt_number": 59
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, you can pass keys to the concatenation by supplying the DataFrames (or Series) as a dict."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.concat(dict(patient1=mb1, patient2=mb2), axis=1).head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 60,
"text": [
" patient1 \\\n",
" Count \n",
"Archaea \"Crenarchaeota\" Thermoprotei Acidilobales Acidilobaceae Acidilobus NaN \n",
"Archaea \"Crenarchaeota\" Thermoprotei Acidilobales Caldisphaeraceae Caldisphaera NaN \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 7 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Sulfophobococcus NaN \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Thermosphaera NaN \n",
"\n",
" patient2 \n",
" Count \n",
"Archaea \"Crenarchaeota\" Thermoprotei Acidilobales Acidilobaceae Acidilobus 2 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Acidilobales Caldisphaeraceae Caldisphaera 14 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera 23 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Sulfophobococcus 1 \n",
"Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Thermosphaera 2 "
]
}
],
"prompt_number": 60
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want `concat` to work like `numpy.concatanate`, you may provide the `ignore_index=True` argument."
]
},
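{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, a quick sketch using the `mb1` and `mb2` tables from above: with `ignore_index=True` the taxon labels are dropped and the concatenated rows are relabelled with a default integer index."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# row-wise concatenation that discards the original Taxon index,\n",
"# analogous to numpy.concatenate on the underlying values\n",
"pd.concat([mb1, mb2], axis=0, ignore_index=True).head()"
],
"language": "python",
"metadata": {},
"outputs": []
},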
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise\n",
"\n",
"In the *data/microbiome* subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10th file that describes the content of each. Write code that imports each of the data spreadsheets and combines them into a single `DataFrame`, adding the identifying information from the metadata spreadsheet as columns in the combined `DataFrame`."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 60
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reshaping DataFrame objects\n",
"\n",
"In the context of a single DataFrame, we are often interested in re-arranging the layout of our data. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This dataset in from Table 6.9 of [Statistical Methods for the Analysis of Repeated Measurements](http://www.amazon.com/Statistical-Methods-Analysis-Repeated-Measurements/dp/0387953701) by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. sites.\n",
"\n",
"* Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)\n",
"* Response variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)\n",
"* TWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia = pd.read_csv(\"data/cdystonia.csv\", index_col=None)\n",
"cdystonia.head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 61,
"text": [
" patient obs week site id treat age sex twstrs\n",
"0 1 1 0 1 1 5000U 65 F 32\n",
"1 1 2 2 1 1 5000U 65 F 30\n",
"2 1 3 4 1 1 5000U 65 F 24\n",
"3 1 4 8 1 1 5000U 65 F 37\n",
"4 1 5 12 1 1 5000U 65 F 39"
]
}
],
"prompt_number": 61
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This dataset includes repeated measurements of the same individuals (longitudinal data). Its possible to present such information in (at least) two ways: showing each repeated measurement in their own row, or in multiple columns representing mutliple measurements.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `stack` method rotates the data frame so that columns are represented in rows:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"stacked = cdystonia.stack()\n",
"stacked"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 62,
"text": [
"0 patient 1\n",
" obs 1\n",
" week 0\n",
" site 1\n",
" id 1\n",
" treat 5000U\n",
" age 65\n",
" sex F\n",
" twstrs 32\n",
"1 patient 1\n",
"...\n",
"629 sex M\n",
" twstrs 36\n",
"630 patient 109\n",
" obs 6\n",
" week 16\n",
" site 9\n",
" id 11\n",
" treat 5000U\n",
" age 57\n",
" sex M\n",
" twstrs 51\n",
"Length: 5679, dtype: object"
]
}
],
"prompt_number": 62
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To complement this, `unstack` pivots from rows back to columns."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"stacked.unstack().head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 63,
"text": [
" patient obs week site id treat age sex twstrs\n",
"0 1 1 0 1 1 5000U 65 F 32\n",
"1 1 2 2 1 1 5000U 65 F 30\n",
"2 1 3 4 1 1 5000U 65 F 24\n",
"3 1 4 8 1 1 5000U 65 F 37\n",
"4 1 5 12 1 1 5000U 65 F 39"
]
}
],
"prompt_number": 63
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For this dataset, it makes sense to create a hierarchical index based on the patient and observation:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia2 = cdystonia.set_index(['patient','obs'])\n",
"cdystonia2.head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 64,
"text": [
" week site id treat age sex twstrs\n",
"patient obs \n",
"1 1 0 1 1 5000U 65 F 32\n",
" 2 2 1 1 5000U 65 F 30\n",
" 3 4 1 1 5000U 65 F 24\n",
" 4 8 1 1 5000U 65 F 37\n",
" 5 12 1 1 5000U 65 F 39"
]
}
],
"prompt_number": 64
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia2.index.is_unique"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 65,
"text": [
"True"
]
}
],
"prompt_number": 65
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we want to transform this data so that repeated measurements are in columns, we can `unstack` the `twstrs` measurements according to `obs`."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"twstrs_wide = cdystonia2['twstrs'].unstack('obs')\n",
"twstrs_wide.head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 80,
"text": [
"obs 1 2 3 4 5 6\n",
"patient \n",
"1 32 30 24 37 39 36\n",
"2 60 26 27 41 65 67\n",
"3 44 20 23 26 35 35\n",
"4 53 61 64 62 NaN NaN\n",
"5 53 35 48 49 41 51"
]
}
],
"prompt_number": 80
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia_long = cdystonia[['patient','site','id','treat','age','sex']].drop_duplicates().merge(\n",
" twstrs_wide, right_index=True, left_on='patient', how='inner').head()\n",
"cdystonia_long"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 100,
"text": [
" patient site id treat age sex 1 2 3 4 5 6\n",
"0 1 1 1 5000U 65 F 32 30 24 37 39 36\n",
"6 2 1 2 10000U 70 F 60 26 27 41 65 67\n",
"12 3 1 3 5000U 64 F 44 20 23 26 35 35\n",
"18 4 1 4 Placebo 59 F 53 61 64 62 NaN NaN\n",
"22 5 1 5 10000U 76 F 53 35 48 49 41 51"
]
}
],
"prompt_number": 100
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs'].unstack('week').head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 103,
"text": [
"week 0 2 4 8 12 16\n",
"patient site id treat age sex \n",
"1 1 1 5000U 65 F 32 30 24 37 39 36\n",
"2 1 2 10000U 70 F 60 26 27 41 65 67\n",
"3 1 3 5000U 64 F 44 20 23 26 35 35\n",
"4 1 4 Placebo 59 F 53 61 64 62 NaN NaN\n",
"5 1 5 10000U 76 F 53 35 48 49 41 51"
]
}
],
"prompt_number": 103
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To convert our \"wide\" format back to long, we can use the `melt` function, appropriately parameterized:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.melt(cdystonia_long, id_vars=['patient','site','id','treat','age','sex'], \n",
" var_name='obs', value_name='twsters').head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 104,
"text": [
" patient site id treat age sex obs twsters\n",
"0 1 1 1 5000U 65 F 1 32\n",
"1 2 1 2 10000U 70 F 1 60\n",
"2 3 1 3 5000U 64 F 1 44\n",
"3 4 1 4 Placebo 59 F 1 53\n",
"4 5 1 5 10000U 76 F 1 53"
]
}
],
"prompt_number": 104
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This illustrates the two formats for longitudinal data: **long** and **wide** formats. Its typically better to store data in long format because additional data can be included as additional rows in the database, while wide format requires that the entire database schema be altered by adding columns to every row as data are collected.\n",
"\n",
"The preferable format for analysis depends entirely on what is planned for the data, so it is imporant to be able to move easily between them."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pivoting\n",
"\n",
"The `pivot` method allows a DataFrame to be transformed easily between long and wide formats in the same way as a pivot table is created in a spreadsheet. It takes three arguments: `index`, `columns` and `values`, corresponding to the DataFrame index (the row headers), columns and cell values, respectively.\n",
"\n",
"For example, we may want the `twstrs` variable (the response variable) in wide format according to patient:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia.pivot(index='patient', columns='obs', values='twstrs').head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 105,
"text": [
"obs 1 2 3 4 5 6\n",
"patient \n",
"1 32 30 24 37 39 36\n",
"2 60 26 27 41 65 67\n",
"3 44 20 23 26 35 35\n",
"4 53 61 64 62 NaN NaN\n",
"5 53 35 48 49 41 51"
]
}
],
"prompt_number": 105
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we omit the `values` argument, we get a `DataFrame` with hierarchical columns, just as when we applied `unstack` to the hierarchically-indexed table:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia.pivot('patient', 'obs')"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 106,
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"Int64Index: 109 entries, 1 to 109\n",
"Data columns (total 42 columns):\n",
"(week, 1) 109 non-null values\n",
"(week, 2) 103 non-null values\n",
"(week, 3) 106 non-null values\n",
"(week, 4) 104 non-null values\n",
"(week, 5) 104 non-null values\n",
"(week, 6) 105 non-null values\n",
"(site, 1) 109 non-null values\n",
"(site, 2) 103 non-null values\n",
"(site, 3) 106 non-null values\n",
"(site, 4) 104 non-null values\n",
"(site, 5) 104 non-null values\n",
"(site, 6) 105 non-null values\n",
"(id, 1) 109 non-null values\n",
"(id, 2) 103 non-null values\n",
"(id, 3) 106 non-null values\n",
"(id, 4) 104 non-null values\n",
"(id, 5) 104 non-null values\n",
"(id, 6) 105 non-null values\n",
"(treat, 1) 109 non-null values\n",
"(treat, 2) 103 non-null values\n",
"(treat, 3) 106 non-null values\n",
"(treat, 4) 104 non-null values\n",
"(treat, 5) 104 non-null values\n",
"(treat, 6) 105 non-null values\n",
"(age, 1) 109 non-null values\n",
"(age, 2) 103 non-null values\n",
"(age, 3) 106 non-null values\n",
"(age, 4) 104 non-null values\n",
"(age, 5) 104 non-null values\n",
"(age, 6) 105 non-null values\n",
"(sex, 1) 109 non-null values\n",
"(sex, 2) 103 non-null values\n",
"(sex, 3) 106 non-null values\n",
"(sex, 4) 104 non-null values\n",
"(sex, 5) 104 non-null values\n",
"(sex, 6) 105 non-null values\n",
"(twstrs, 1) 109 non-null values\n",
"(twstrs, 2) 103 non-null values\n",
"(twstrs, 3) 106 non-null values\n",
"(twstrs, 4) 104 non-null values\n",
"(twstrs, 5) 104 non-null values\n",
"(twstrs, 6) 105 non-null values\n",
"dtypes: float64(30), object(12)"
]
}
],
"prompt_number": 106
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A related method, `pivot_table`, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia.pivot_table(rows=['site', 'treat'], cols='week', values='twstrs', aggfunc=max).head(20)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 107,
"text": [
"week 0 2 4 8 12 16\n",
"site treat \n",
"1 10000U 60 41 48 49 65 67\n",
" 5000U 44 32 34 43 42 46\n",
" Placebo 53 61 64 62 32 38\n",
"2 10000U 65 60 60 64 67 66\n",
" 5000U 67 64 65 64 62 64\n",
" Placebo 53 56 52 57 61 54\n",
"3 10000U 50 43 51 46 49 56\n",
" 5000U 52 44 47 50 50 49\n",
" Placebo 43 38 40 48 49 44\n",
"4 10000U 54 52 52 54 51 57\n",
" 5000U 52 34 43 45 47 46\n",
" Placebo 52 55 51 52 54 57\n",
"5 10000U 50 50 32 46 54 57\n",
" 5000U 60 53 55 62 67 26\n",
" Placebo 60 57 53 52 53 58\n",
"6 10000U 55 56 47 53 51 51\n",
" 5000U 59 55 50 56 59 53\n",
" Placebo 54 53 51 57 57 57\n",
"7 10000U 53 47 45 45 50 53\n",
" 5000U 53 45 52 51 52 53"
]
}
],
"prompt_number": 107
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a simple cross-tabulation of group frequencies, the `crosstab` function (not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.crosstab(cdystonia.sex, cdystonia.site)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 108,
"text": [
"site 1 2 3 4 5 6 7 8 9\n",
"sex \n",
"F 52 53 42 30 22 54 66 48 28\n",
"M 18 29 30 18 11 33 6 58 33"
]
}
],
"prompt_number": 108
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data transformation\n",
"\n",
"There are a slew of additional operations for DataFrames that we would collectively refer to as \"transformations\" that include tasks such as removing duplicate values, replacing values, and grouping values."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Dealing with duplicates\n",
"\n",
"We can easily identify and remove duplicate values from `DataFrame` objects. For example, say we want to removed ships from our `vessels` dataset that have the same name:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"vessels.duplicated(cols='names')"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 109,
"text": [
"mmsi\n",
"1 False\n",
"9 False\n",
"21 False\n",
"74 False\n",
"103 False\n",
"310 False\n",
"3011 False\n",
"4731 False\n",
"15151 False\n",
"46809 False\n",
"...\n",
"812719000 False\n",
"857632392 False\n",
"866946820 True\n",
"888888882 True\n",
"888888888 False\n",
"900000000 False\n",
"919191919 False\n",
"967191190 True\n",
"975318642 True\n",
"987654321 False\n",
"999999999 True\n",
"Length: 10771, dtype: bool"
]
}
],
"prompt_number": 109
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"vessels.drop_duplicates(['names'])"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 110,
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"Int64Index: 10253 entries, 1 to 987654321\n",
"Data columns (total 10 columns):\n",
"num_names 10253 non-null values\n",
"names 10253 non-null values\n",
"sov 10253 non-null values\n",
"flag 10253 non-null values\n",
"flag_type 10253 non-null values\n",
"num_loas 10253 non-null values\n",
"loa 10253 non-null values\n",
"max_loa 10253 non-null values\n",
"num_types 10253 non-null values\n",
"type 10253 non-null values\n",
"dtypes: float64(1), int64(3), object(6)"
]
}
],
"prompt_number": 110
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Value replacement\n",
"\n",
"Frequently, we get data columns that are encoded as strings that we wish to represent numerically for the purposes of including it in a quantitative analysis. For example, consider the treatment variable in the cervical dystonia dataset:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia.treat.value_counts()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 111,
"text": [
"10000U 213\n",
"5000U 211\n",
"Placebo 207\n",
"dtype: int64"
]
}
],
"prompt_number": 111
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A logical way to specify these numerically is to change them to integer values, perhaps using \"Placebo\" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the `map` method to implement the changes."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 112
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia['treatment'] = cdystonia.treat.map(treatment_map)\n",
"cdystonia.treatment"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 113,
"text": [
"0 1\n",
"1 1\n",
"2 1\n",
"3 1\n",
"4 1\n",
"5 1\n",
"6 2\n",
"7 2\n",
"8 2\n",
"9 2\n",
"...\n",
"620 2\n",
"621 2\n",
"622 2\n",
"623 2\n",
"624 2\n",
"625 2\n",
"626 1\n",
"627 1\n",
"628 1\n",
"629 1\n",
"630 1\n",
"Name: treatment, Length: 631, dtype: int64"
]
}
],
"prompt_number": 113
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternately, if we simply want to replace particular values in a `Series` or `DataFrame`, we can use the `replace` method. \n",
"\n",
"An example where replacement is useful is dealing with zeros in certain transformations. For example, if we try to take the log of a set of values:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"vals = pd.Series([float(i)**10 for i in range(10)])\n",
"vals"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 114,
"text": [
"0 0\n",
"1 1\n",
"2 1024\n",
"3 59049\n",
"4 1048576\n",
"5 9765625\n",
"6 60466176\n",
"7 282475249\n",
"8 1073741824\n",
"9 3486784401\n",
"dtype: float64"
]
}
],
"prompt_number": 114
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"np.log(vals)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 115,
"text": [
"0 -inf\n",
"1 0.000000\n",
"2 6.931472\n",
"3 10.986123\n",
"4 13.862944\n",
"5 16.094379\n",
"6 17.917595\n",
"7 19.459101\n",
"8 20.794415\n",
"9 21.972246\n",
"dtype: float64"
]
}
],
"prompt_number": 115
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In such situations, we can replace the zero with a value so small that it makes no difference to the ensuing analysis. We can do this with `replace`."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"vals = vals.replace(0, 1e-6)\n",
"np.log(vals)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 116,
"text": [
"0 -13.815511\n",
"1 0.000000\n",
"2 6.931472\n",
"3 10.986123\n",
"4 13.862944\n",
"5 16.094379\n",
"6 17.917595\n",
"7 19.459101\n",
"8 20.794415\n",
"9 21.972246\n",
"dtype: float64"
]
}
],
"prompt_number": 116
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also perform the same replacement that we used `map` for with `replace`:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2})"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 117,
"text": [
"patient obs\n",
"1 1 1\n",
" 2 1\n",
" 3 1\n",
" 4 1\n",
" 5 1\n",
" 6 1\n",
"2 1 2\n",
" 2 2\n",
" 3 2\n",
" 4 2\n",
"...\n",
"108 1 2\n",
" 2 2\n",
" 3 2\n",
" 4 2\n",
" 5 2\n",
" 6 2\n",
"109 1 1\n",
" 2 1\n",
" 4 1\n",
" 5 1\n",
" 6 1\n",
"Name: treat, Length: 631, dtype: object"
]
}
],
"prompt_number": 117
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Inidcator variables\n",
"\n",
"For some statistical analyses (*e.g.* regression models or analyses of variance), categorical or group variables need to be converted into columns of indicators--zeros and ones--to create a so-called **design matrix**. The Pandas function `get_dummies` (indicator variables are also known as *dummy variables*) makes this transformation straightforward.\n",
"\n",
"Let's consider the DataFrame containing the ships corresponding to the transit segments on the eastern seaboard. The `type` variable denotes the class of vessel; we can create a matrix of indicators for this. For simplicity, lets filter out the 5 most common types of ships:\n"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"top5 = vessels.type.apply(lambda s: s in vessels.type.value_counts().index[:5])\n",
"vessels5 = vessels[top5]"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 120
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.get_dummies(vessels5.type).head(10)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 121,
"text": [
" Cargo Pleasure Sailing Tanker Tug\n",
"mmsi \n",
"15151 0 0 0 0 1\n",
"80404 0 1 0 0 0\n",
"366235 1 0 0 0 0\n",
"587370 0 0 0 0 1\n",
"693559 0 0 0 0 1\n",
"1233916 0 1 0 0 0\n",
"3041300 1 0 0 0 0\n",
"3663760 1 0 0 0 0\n",
"3688360 1 0 0 0 0\n",
"7718175 1 0 0 0 0"
]
}
],
"prompt_number": 121
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Discretization\n",
"\n",
"Pandas' `cut` function can be used to group continuous or countable data in to bins. Discretization is generally a very **bad idea** for statistical analysis, so use this function responsibly!\n",
"\n",
"Lets say we want to bin the ages of the cervical dystonia patients into a smaller number of groups:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia.age.describe()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 122,
"text": [
"count 631.000000\n",
"mean 55.616482\n",
"std 12.123910\n",
"min 26.000000\n",
"25% 46.000000\n",
"50% 56.000000\n",
"75% 65.000000\n",
"max 83.000000\n",
"dtype: float64"
]
}
],
"prompt_number": 122
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's transform these data into decades, beginnnig with individuals in their 20's and ending with those in their 90's:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90])[:30]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 123,
"text": [
"Categorical: \n",
"array(['(60, 70]', '(60, 70]', '(60, 70]', '(60, 70]', '(60, 70]',\n",
" '(60, 70]', '(60, 70]', '(60, 70]', '(60, 70]', '(60, 70]',\n",
" '(60, 70]', '(60, 70]', '(60, 70]', '(60, 70]', '(60, 70]',\n",
" '(60, 70]', '(60, 70]', '(60, 70]', '(50, 60]', '(50, 60]',\n",
" '(50, 60]', '(50, 60]', '(70, 80]', '(70, 80]', '(70, 80]',\n",
" '(70, 80]', '(70, 80]', '(70, 80]', '(50, 60]', '(50, 60]'], dtype=object)\n",
"Levels (7): Index(['(20, 30]', '(30, 40]', '(40, 50]', '(50, 60]',\n",
" '(60, 70]', '(70, 80]', '(80, 90]'], dtype=object)"
]
}
],
"prompt_number": 123
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The parentheses indicate an open interval, meaning that the interval includes values up to but *not including* the endpoint, whereas the square bracket is a closed interval, where the endpoint is included in the interval. We can switch the closure to the left side by setting the `right` flag to `False`:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90], right=False)[:30]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 124,
"text": [
"Categorical: \n",
"array(['[60, 70)', '[60, 70)', '[60, 70)', '[60, 70)', '[60, 70)',\n",
" '[60, 70)', '[70, 80)', '[70, 80)', '[70, 80)', '[70, 80)',\n",
" '[70, 80)', '[70, 80)', '[60, 70)', '[60, 70)', '[60, 70)',\n",
" '[60, 70)', '[60, 70)', '[60, 70)', '[50, 60)', '[50, 60)',\n",
" '[50, 60)', '[50, 60)', '[70, 80)', '[70, 80)', '[70, 80)',\n",
" '[70, 80)', '[70, 80)', '[70, 80)', '[50, 60)', '[50, 60)'], dtype=object)\n",
"Levels (7): Index(['[20, 30)', '[30, 40)', '[40, 50)', '[50, 60)',\n",
" '[60, 70)', '[70, 80)', '[80, 90)'], dtype=object)"
]
}
],
"prompt_number": 124
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the data are now **ordinal**, rather than numeric, we can give them labels:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.cut(cdystonia.age, [20,40,60,80,90], labels=['young','middle-aged','old','ancient'])[:30]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 125,
"text": [
"Categorical: \n",
"array(['old', 'old', 'old', 'old', 'old', 'old', 'old', 'old', 'old',\n",
" 'old', 'old', 'old', 'old', 'old', 'old', 'old', 'old', 'old',\n",
" 'middle-aged', 'middle-aged', 'middle-aged', 'middle-aged', 'old',\n",
" 'old', 'old', 'old', 'old', 'old', 'middle-aged', 'middle-aged'], dtype=object)\n",
"Levels (4): Index(['young', 'middle-aged', 'old', 'ancient'], dtype=object)"
]
}
],
"prompt_number": 125
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A related function `qcut` uses empirical quantiles to divide the data. If, for example, we want the quartiles -- (0-25%], (25-50%], (50-70%], (75-100%] -- we can just specify 4 intervals, which will be equally-spaced by default:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.qcut(cdystonia.age, 4)[:30]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 126,
"text": [
"Categorical: \n",
"array(['(56, 65]', '(56, 65]', '(56, 65]', '(56, 65]', '(56, 65]',\n",
" '(56, 65]', '(65, 83]', '(65, 83]', '(65, 83]', '(65, 83]',\n",
" '(65, 83]', '(65, 83]', '(56, 65]', '(56, 65]', '(56, 65]',\n",
" '(56, 65]', '(56, 65]', '(56, 65]', '(56, 65]', '(56, 65]',\n",
" '(56, 65]', '(56, 65]', '(65, 83]', '(65, 83]', '(65, 83]',\n",
" '(65, 83]', '(65, 83]', '(65, 83]', '(56, 65]', '(56, 65]'], dtype=object)\n",
"Levels (4): Index(['[26, 46]', '(46, 56]', '(56, 65]', '(65, 83]'], dtype=object)"
]
}
],
"prompt_number": 126
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, one can specify custom quantiles to act as cut points:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"quantiles = pd.qcut(segments.seg_length, [0, 0.01, 0.05, 0.95, 0.99, 1])\n",
"quantiles[:30]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 127,
"text": [
"Categorical: \n",
"array(['(1.8, 7.8]', '(7.8, 45.4]', '(1.8, 7.8]', '(7.8, 45.4]',\n",
" '(7.8, 45.4]', '(7.8, 45.4]', '(45.4, 89.7]', '(7.8, 45.4]',\n",
" '(7.8, 45.4]', '(7.8, 45.4]', '(1.8, 7.8]', '(7.8, 45.4]',\n",
" '(7.8, 45.4]', '(7.8, 45.4]', '(7.8, 45.4]', '(45.4, 89.7]',\n",
" '(45.4, 89.7]', '(7.8, 45.4]', '(7.8, 45.4]', '(7.8, 45.4]',\n",
" '(1.8, 7.8]', '(1.8, 7.8]', '(7.8, 45.4]', '(7.8, 45.4]',\n",
" '(7.8, 45.4]', '(7.8, 45.4]', '(7.8, 45.4]', '(7.8, 45.4]',\n",
" '(7.8, 45.4]', '(7.8, 45.4]'], dtype=object)\n",
"Levels (5): Index(['[1, 1.8]', '(1.8, 7.8]', '(7.8, 45.4]',\n",
" '(45.4, 89.7]', '(89.7, 1882]'], dtype=object)"
]
}
],
"prompt_number": 127
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that you can easily combine discretiztion with the generation of indicator variables shown above:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pd.get_dummies(quantiles).head(10)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 128,
"text": [
" (1.8, 7.8] (45.4, 89.7] (7.8, 45.4] (89.7, 1882] [1, 1.8]\n",
"0 1 0 0 0 0\n",
"1 0 0 1 0 0\n",
"2 1 0 0 0 0\n",
"3 0 0 1 0 0\n",
"4 0 0 1 0 0\n",
"5 0 0 1 0 0\n",
"6 0 1 0 0 0\n",
"7 0 0 1 0 0\n",
"8 0 0 1 0 0\n",
"9 0 0 1 0 0"
]
}
],
"prompt_number": 128
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Permutation and sampling\n",
"\n",
"For some data analysis tasks, such as simulation, we need to be able to randomly reorder our data, or draw random values from it. Calling NumPy's `permutation` function with the length of the sequence you want to permute generates an array with a permuted sequence of integers, which can be used to re-order the sequence."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"new_order = np.random.permutation(len(segments))\n",
"new_order[:30]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 129,
"text": [
"array([225097, 46523, 41804, 65128, 144844, 175569, 58230, 174470,\n",
" 149787, 65003, 5727, 242140, 88508, 234383, 11938, 195217,\n",
" 118912, 107266, 75017, 8464, 169805, 36107, 200345, 55297,\n",
" 61600, 243057, 120760, 171551, 223671, 232734])"
]
}
],
"prompt_number": 129
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using this sequence as an argument to the `take` method results in a reordered DataFrame:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"segments.take(new_order).head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 131,
"text": [
" mmsi name transit segment seg_length \\\n",
"225097 538070509 Bolero 9 1 27.2 \n",
"46523 265492000 Manon 4 1 41.3 \n",
"41804 257673000 Jo Ask 18 1 21.9 \n",
"65128 311316000 Explorer Of The Seas 197 1 19.3 \n",
"144844 367063030 Emerald Princess Ii 274 1 19.9 \n",
"\n",
" avg_sog min_sog max_sog pdgt10 st_time end_time type \n",
"225097 14.5 11.8 14.8 100.0 6/12/10 15:33 6/12/10 17:27 foo \n",
"46523 10.1 7.6 11.3 68.3 3/4/09 4:11 3/4/09 7:05 foo \n",
"41804 11.0 5.2 13.4 74.1 9/18/09 14:55 9/18/09 16:57 foo \n",
"65128 16.9 9.3 19.1 98.3 10/23/10 22:04 10/23/10 23:14 foo \n",
"144844 4.9 1.1 11.9 31.6 7/17/09 15:41 7/17/09 19:41 foo "
]
}
],
"prompt_number": 131
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Compare this ordering with the original:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"segments.head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 132,
"text": [
" mmsi name transit segment seg_length avg_sog min_sog \\\n",
"0 1 Us Govt Ves 1 1 5.1 13.2 9.2 \n",
"1 1 Dredge Capt Frank 1 1 13.5 18.6 10.4 \n",
"2 1 Us Gov Vessel 1 1 4.3 16.2 10.3 \n",
"3 1 Us Gov Vessel 2 1 9.2 15.4 14.5 \n",
"4 1 Dredge Capt Frank 2 1 9.2 15.4 14.6 \n",
"\n",
" max_sog pdgt10 st_time end_time type \n",
"0 14.5 96.5 2/10/09 16:03 2/10/09 16:27 foo \n",
"1 20.6 100.0 4/6/09 14:31 4/6/09 15:20 foo \n",
"2 20.5 100.0 4/6/09 14:36 4/6/09 14:55 foo \n",
"3 16.1 100.0 4/10/09 17:58 4/10/09 18:34 foo \n",
"4 16.2 100.0 4/10/09 17:59 4/10/09 18:35 foo "
]
}
],
"prompt_number": 132
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise\n",
"\n",
"Its easy to see how this permutation approach allows us to draw a random sample **without replacement**. How would you sample **with replacement**? Generate a random sample of 5 ships from the `vessels` DataFrame using this scheme."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 132
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data aggregation and GroupBy operations\n",
"\n",
"One of the most powerful features of Pandas is its **GroupBy** functionality. On occasion we may want to perform operations on *groups* of observations within a dataset. For exmaple:\n",
"\n",
"* **aggregation**, such as computing the sum of mean of each group, which involves applying a function to each group and returning the aggregated results\n",
"* **slicing** the DataFrame into groups and then doing something with the resulting slices (*e.g.* plotting)\n",
"* group-wise **transformation**, such as standardization/normalization"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia_grouped = cdystonia.groupby(cdystonia.patient)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 133
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This *grouped* dataset is hard to visualize\n",
"\n"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia_grouped"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 134,
"text": [
"<pandas.core.groupby.DataFrameGroupBy at 0x11d966510>"
]
}
],
"prompt_number": 134
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"However, the grouping is only an intermediate step; for example, we may want to **iterate** over each of the patient groups:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"for patient, group in cdystonia_grouped:\n",
" print patient\n",
" print group\n",
" print"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"1\n",
" patient obs week site id treat age sex twstrs treatment\n",
"0 1 1 0 1 1 5000U 65 F 32 1\n",
"1 1 2 2 1 1 5000U 65 F 30 1\n",
"2 1 3 4 1 1 5000U 65 F 24 1\n",
"3 1 4 8 1 1 5000U 65 F 37 1\n",
"4 1 5 12 1 1 5000U 65 F 39 1\n",
"5 1 6 16 1 1 5000U 65 F 36 1\n",
"\n",
"2\n",
" patient obs week site id treat age sex twstrs treatment\n",
"6 2 1 0 1 2 10000U 70 F 60 2\n",
"7 2 2 2 1 2 10000U 70 F 26 2\n",
"8 2 3 4 1 2 10000U 70 F 27 2\n",
"9 2 4 8 1 2 10000U 70 F 41 2\n",
"10 2 5 12 1 2 10000U 70 F 65 2\n",
"11 2 6 16 1 2 10000U 70 F 67 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"3\n",
" patient obs week site id treat age sex twstrs treatment\n",
"12 3 1 0 1 3 5000U 64 F 44 1\n",
"13 3 2 2 1 3 5000U 64 F 20 1\n",
"14 3 3 4 1 3 5000U 64 F 23 1\n",
"15 3 4 8 1 3 5000U 64 F 26 1\n",
"16 3 5 12 1 3 5000U 64 F 35 1\n",
"17 3 6 16 1 3 5000U 64 F 35 1\n",
"\n",
"4\n",
" patient obs week site id treat age sex twstrs treatment\n",
"18 4 1 0 1 4 Placebo 59 F 53 0\n",
"19 4 2 2 1 4 Placebo 59 F 61 0\n",
"20 4 3 4 1 4 Placebo 59 F 64 0\n",
"21 4 4 8 1 4 Placebo 59 F 62 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"5\n",
" patient obs week site id treat age sex twstrs treatment\n",
"22 5 1 0 1 5 10000U 76 F 53 2\n",
"23 5 2 2 1 5 10000U 76 F 35 2\n",
"24 5 3 4 1 5 10000U 76 F 48 2\n",
"25 5 4 8 1 5 10000U 76 F 49 2\n",
"26 5 5 12 1 5 10000U 76 F 41 2\n",
"27 5 6 16 1 5 10000U 76 F 51 2\n",
"\n",
"6\n",
" patient obs week site id treat age sex twstrs treatment\n",
"28 6 1 0 1 6 10000U 59 F 49 2\n",
"29 6 2 2 1 6 10000U 59 F 34 2\n",
"30 6 3 4 1 6 10000U 59 F 43 2\n",
"31 6 4 8 1 6 10000U 59 F 48 2\n",
"32 6 5 12 1 6 10000U 59 F 48 2\n",
"33 6 6 16 1 6 10000U 59 F 51 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"7\n",
" patient obs week site id treat age sex twstrs treatment\n",
"34 7 1 0 1 7 5000U 72 M 42 1\n",
"35 7 2 2 1 7 5000U 72 M 32 1\n",
"36 7 3 4 1 7 5000U 72 M 32 1\n",
"37 7 4 8 1 7 5000U 72 M 43 1\n",
"38 7 5 12 1 7 5000U 72 M 42 1\n",
"39 7 6 16 1 7 5000U 72 M 46 1\n",
"\n",
"8\n",
" patient obs week site id treat age sex twstrs treatment\n",
"40 8 1 0 1 8 Placebo 40 M 34 0\n",
"41 8 2 2 1 8 Placebo 40 M 33 0\n",
"42 8 3 4 1 8 Placebo 40 M 21 0\n",
"43 8 4 8 1 8 Placebo 40 M 27 0\n",
"44 8 5 12 1 8 Placebo 40 M 32 0\n",
"45 8 6 16 1 8 Placebo 40 M 38 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"9\n",
" patient obs week site id treat age sex twstrs treatment\n",
"46 9 1 0 1 9 5000U 52 F 41 1\n",
"47 9 2 2 1 9 5000U 52 F 32 1\n",
"48 9 3 4 1 9 5000U 52 F 34 1\n",
"49 9 4 8 1 9 5000U 52 F 35 1\n",
"50 9 5 12 1 9 5000U 52 F 37 1\n",
"51 9 6 16 1 9 5000U 52 F 36 1\n",
"\n",
"10\n",
" patient obs week site id treat age sex twstrs treatment\n",
"52 10 1 0 1 10 Placebo 47 M 27 0\n",
"53 10 2 2 1 10 Placebo 47 M 10 0\n",
"54 10 3 4 1 10 Placebo 47 M 31 0\n",
"55 10 4 8 1 10 Placebo 47 M 32 0\n",
"56 10 5 12 1 10 Placebo 47 M 6 0\n",
"57 10 6 16 1 10 Placebo 47 M 14 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"11\n",
" patient obs week site id treat age sex twstrs treatment\n",
"58 11 1 0 1 11 10000U 57 F 48 2\n",
"59 11 2 2 1 11 10000U 57 F 41 2\n",
"60 11 3 4 1 11 10000U 57 F 32 2\n",
"61 11 4 8 1 11 10000U 57 F 35 2\n",
"62 11 5 12 1 11 10000U 57 F 57 2\n",
"63 11 6 16 1 11 10000U 57 F 51 2\n",
"\n",
"12\n",
" patient obs week site id treat age sex twstrs treatment\n",
"64 12 1 0 1 12 Placebo 47 F 34 0\n",
"65 12 2 2 1 12 Placebo 47 F 19 0\n",
"66 12 3 4 1 12 Placebo 47 F 21 0\n",
"67 12 4 8 1 12 Placebo 47 F 24 0\n",
"68 12 5 12 1 12 Placebo 47 F 28 0\n",
"69 12 6 16 1 12 Placebo 47 F 28 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"13\n",
" patient obs week site id treat age sex twstrs treatment\n",
"70 13 1 0 2 1 Placebo 70 F 49 0\n",
"71 13 2 2 2 1 Placebo 70 F 47 0\n",
"72 13 3 4 2 1 Placebo 70 F 44 0\n",
"73 13 4 8 2 1 Placebo 70 F 48 0\n",
"74 13 5 12 2 1 Placebo 70 F 44 0\n",
"75 13 6 16 2 1 Placebo 70 F 44 0\n",
"\n",
"14\n",
" patient obs week site id treat age sex twstrs treatment\n",
"76 14 1 0 2 2 5000U 49 F 46 1\n",
"77 14 2 2 2 2 5000U 49 F 35 1\n",
"78 14 3 4 2 2 5000U 49 F 45 1\n",
"79 14 4 8 2 2 5000U 49 F 49 1\n",
"80 14 5 12 2 2 5000U 49 F 53 1\n",
"81 14 6 16 2 2 5000U 49 F 56 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"15\n",
" patient obs week site id treat age sex twstrs treatment\n",
"82 15 1 0 2 3 10000U 59 F 56 2\n",
"83 15 2 2 2 3 10000U 59 F 44 2\n",
"84 15 3 4 2 3 10000U 59 F 48 2\n",
"85 15 4 8 2 3 10000U 59 F 54 2\n",
"86 15 5 12 2 3 10000U 59 F 49 2\n",
"87 15 6 16 2 3 10000U 59 F 60 2\n",
"\n",
"16\n",
" patient obs week site id treat age sex twstrs treatment\n",
"88 16 1 0 2 4 5000U 64 M 59 1\n",
"89 16 2 2 2 4 5000U 64 M 48 1\n",
"90 16 3 4 2 4 5000U 64 M 56 1\n",
"91 16 4 8 2 4 5000U 64 M 55 1\n",
"92 16 5 12 2 4 5000U 64 M 57 1\n",
"93 16 6 16 2 4 5000U 64 M 58 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"17\n",
" patient obs week site id treat age sex twstrs treatment\n",
"94 17 1 0 2 5 10000U 45 F 62 2\n",
"95 17 2 2 2 5 10000U 45 F 60 2\n",
"96 17 3 4 2 5 10000U 45 F 60 2\n",
"97 17 4 8 2 5 10000U 45 F 64 2\n",
"98 17 5 12 2 5 10000U 45 F 67 2\n",
"99 17 6 16 2 5 10000U 45 F 66 2\n",
"\n",
"18\n",
" patient obs week site id treat age sex twstrs treatment\n",
"100 18 1 0 2 6 Placebo 66 F 50 0\n",
"101 18 2 2 2 6 Placebo 66 F 53 0\n",
"102 18 3 4 2 6 Placebo 66 F 52 0\n",
"103 18 4 8 2 6 Placebo 66 F 57 0\n",
"104 18 5 12 2 6 Placebo 66 F 61 0\n",
"105 18 6 16 2 6 Placebo 66 F 54 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"19\n",
" patient obs week site id treat age sex twstrs treatment\n",
"106 19 1 0 2 7 10000U 49 F 42 2\n",
"107 19 2 2 2 7 10000U 49 F 42 2\n",
"108 19 3 4 2 7 10000U 49 F 43 2\n",
"109 19 4 8 2 7 10000U 49 F 33 2\n",
"110 19 5 12 2 7 10000U 49 F 37 2\n",
"111 19 6 16 2 7 10000U 49 F 43 2\n",
"\n",
"20\n",
" patient obs week site id treat age sex twstrs treatment\n",
"112 20 1 0 2 8 Placebo 54 F 53 0\n",
"113 20 2 2 2 8 Placebo 54 F 56 0\n",
"114 20 3 4 2 8 Placebo 54 F 52 0\n",
"115 20 4 8 2 8 Placebo 54 F 54 0\n",
"116 20 5 12 2 8 Placebo 54 F 55 0\n",
"117 20 6 16 2 8 Placebo 54 F 51 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"21\n",
" patient obs week site id treat age sex twstrs treatment\n",
"118 21 1 0 2 9 5000U 47 F 67 1\n",
"119 21 2 2 2 9 5000U 47 F 64 1\n",
"120 21 3 4 2 9 5000U 47 F 65 1\n",
"121 21 4 8 2 9 5000U 47 F 64 1\n",
"122 21 5 12 2 9 5000U 47 F 62 1\n",
"123 21 6 16 2 9 5000U 47 F 64 1\n",
"\n",
"22\n",
" patient obs week site id treat age sex twstrs treatment\n",
"124 22 1 0 2 10 Placebo 31 M 44 0\n",
"125 22 2 2 2 10 Placebo 31 M 40 0\n",
"126 22 3 4 2 10 Placebo 31 M 32 0\n",
"127 22 4 8 2 10 Placebo 31 M 36 0\n",
"128 22 5 12 2 10 Placebo 31 M 42 0\n",
"129 22 6 16 2 10 Placebo 31 M 43 0\n",
"\n",
"23\n",
" patient obs week site id treat age sex twstrs treatment\n",
"130 23 1 0 2 11 10000U 53 F 65 2\n",
"131 23 2 2 2 11 10000U 53 F 58 2\n",
"132 23 3 4 2 11 10000U 53 F 55 2\n",
"133 23 5 12 2 11 10000U 53 F 56 2\n",
"134 23 6 16 2 11 10000U 53 F 60 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"24\n",
" patient obs week site id treat age sex twstrs treatment\n",
"135 24 1 0 2 12 5000U 61 M 56 1\n",
"136 24 2 2 2 12 5000U 61 M 54 1\n",
"137 24 3 4 2 12 5000U 61 M 52 1\n",
"138 24 4 8 2 12 5000U 61 M 48 1\n",
"139 24 5 12 2 12 5000U 61 M 52 1\n",
"140 24 6 16 2 12 5000U 61 M 53 1\n",
"\n",
"25\n",
" patient obs week site id treat age sex twstrs treatment\n",
"141 25 1 0 2 13 Placebo 40 M 30 0\n",
"142 25 2 2 2 13 Placebo 40 M 33 0\n",
"143 25 3 4 2 13 Placebo 40 M 25 0\n",
"144 25 4 8 2 13 Placebo 40 M 29 0\n",
"145 25 5 12 2 13 Placebo 40 M 32 0\n",
"146 25 6 16 2 13 Placebo 40 M 32 0\n",
"\n",
"26\n",
" patient obs week site id treat age sex twstrs treatment\n",
"147 26 1 0 2 14 5000U 67 M 47 1\n",
"148 26 3 4 2 14 5000U 67 M 54 1\n",
"149 26 4 8 2 14 5000U 67 M 43 1\n",
"150 26 5 12 2 14 5000U 67 M 46 1\n",
"151 26 6 16 2 14 5000U 67 M 50 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"27\n",
" patient obs week site id treat age sex twstrs treatment\n",
"152 27 1 0 3 1 10000U 54 F 50 2\n",
"153 27 2 2 3 1 10000U 54 F 43 2\n",
"154 27 3 4 3 1 10000U 54 F 51 2\n",
"155 27 4 8 3 1 10000U 54 F 46 2\n",
"156 27 5 12 3 1 10000U 54 F 49 2\n",
"157 27 6 16 3 1 10000U 54 F 53 2\n",
"\n",
"28\n",
" patient obs week site id treat age sex twstrs treatment\n",
"158 28 1 0 3 2 Placebo 41 F 34 0\n",
"159 28 2 2 3 2 Placebo 41 F 29 0\n",
"160 28 3 4 3 2 Placebo 41 F 27 0\n",
"161 28 4 8 3 2 Placebo 41 F 21 0\n",
"162 28 5 12 3 2 Placebo 41 F 22 0\n",
"163 28 6 16 3 2 Placebo 41 F 22 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"29\n",
" patient obs week site id treat age sex twstrs treatment\n",
"164 29 1 0 3 3 5000U 66 M 39 1\n",
"165 29 2 2 3 3 5000U 66 M 41 1\n",
"166 29 3 4 3 3 5000U 66 M 33 1\n",
"167 29 4 8 3 3 5000U 66 M 39 1\n",
"168 29 5 12 3 3 5000U 66 M 37 1\n",
"169 29 6 16 3 3 5000U 66 M 37 1\n",
"\n",
"30\n",
" patient obs week site id treat age sex twstrs treatment\n",
"170 30 1 0 3 4 Placebo 68 F 43 0\n",
"171 30 2 2 3 4 Placebo 68 F 31 0\n",
"172 30 3 4 3 4 Placebo 68 F 29 0\n",
"173 30 4 8 3 4 Placebo 68 F 28 0\n",
"174 30 5 12 3 4 Placebo 68 F 33 0\n",
"175 30 6 16 3 4 Placebo 68 F 38 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"31\n",
" patient obs week site id treat age sex twstrs treatment\n",
"176 31 1 0 3 5 10000U 41 F 46 2\n",
"177 31 2 2 3 5 10000U 41 F 26 2\n",
"178 31 3 4 3 5 10000U 41 F 29 2\n",
"179 31 4 8 3 5 10000U 41 F 33 2\n",
"180 31 5 12 3 5 10000U 41 F 45 2\n",
"181 31 6 16 3 5 10000U 41 F 56 2\n",
"\n",
"32\n",
" patient obs week site id treat age sex twstrs treatment\n",
"182 32 1 0 3 6 5000U 77 M 52 1\n",
"183 32 2 2 3 6 5000U 77 M 44 1\n",
"184 32 3 4 3 6 5000U 77 M 47 1\n",
"185 32 4 8 3 6 5000U 77 M 50 1\n",
"186 32 5 12 3 6 5000U 77 M 50 1\n",
"187 32 6 16 3 6 5000U 77 M 49 1\n",
"\n",
"33\n",
" patient obs week site id treat age sex twstrs treatment\n",
"188 33 1 0 3 7 10000U 41 M 38 2\n",
"189 33 2 2 3 7 10000U 41 M 19 2\n",
"190 33 3 4 3 7 10000U 41 M 20 2\n",
"191 33 4 8 3 7 10000U 41 M 27 2\n",
"192 33 5 12 3 7 10000U 41 M 29 2\n",
"193 33 6 16 3 7 10000U 41 M 32 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"34\n",
" patient obs week site id treat age sex twstrs treatment\n",
"194 34 1 0 3 8 Placebo 56 M 33 0\n",
"195 34 2 2 3 8 Placebo 56 M 38 0\n",
"196 34 3 4 3 8 Placebo 56 M 40 0\n",
"197 34 4 8 3 8 Placebo 56 M 48 0\n",
"198 34 5 12 3 8 Placebo 56 M 49 0\n",
"199 34 6 16 3 8 Placebo 56 M 44 0\n",
"\n",
"35\n",
" patient obs week site id treat age sex twstrs treatment\n",
"200 35 1 0 3 9 5000U 46 F 28 1\n",
"201 35 2 2 3 9 5000U 46 F 16 1\n",
"202 35 3 4 3 9 5000U 46 F 11 1\n",
"203 35 4 8 3 9 5000U 46 F 7 1\n",
"204 35 5 12 3 9 5000U 46 F 13 1\n",
"205 35 6 16 3 9 5000U 46 F 21 1\n",
"\n",
"36\n",
" patient obs week site id treat age sex twstrs treatment\n",
"206 36 1 0 3 10 10000U 46 F 34 2\n",
"207 36 2 2 3 10 10000U 46 F 23 2\n",
"208 36 3 4 3 10 10000U 46 F 16 2\n",
"209 36 4 8 3 10 10000U 46 F 15 2\n",
"210 36 5 12 3 10 10000U 46 F 17 2\n",
"211 36 6 16 3 10 10000U 46 F 29 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"37\n",
" patient obs week site id treat age sex twstrs treatment\n",
"212 37 1 0 3 11 Placebo 47 F 39 0\n",
"213 37 2 2 3 11 Placebo 47 F 37 0\n",
"214 37 3 4 3 11 Placebo 47 F 39 0\n",
"215 37 4 8 3 11 Placebo 47 F 39 0\n",
"216 37 5 12 3 11 Placebo 47 F 45 0\n",
"217 37 6 16 3 11 Placebo 47 F 43 0\n",
"\n",
"38\n",
" patient obs week site id treat age sex twstrs treatment\n",
"218 38 1 0 3 12 5000U 35 M 29 1\n",
"219 38 2 2 3 12 5000U 35 M 42 1\n",
"220 38 3 4 3 12 5000U 35 M 35 1\n",
"221 38 4 8 3 12 5000U 35 M 24 1\n",
"222 38 5 12 3 12 5000U 35 M 29 1\n",
"223 38 6 16 3 12 5000U 35 M 42 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"39\n",
" patient obs week site id treat age sex twstrs treatment\n",
"224 39 1 0 4 1 Placebo 58 M 52 0\n",
"225 39 2 2 4 1 Placebo 58 M 55 0\n",
"226 39 3 4 4 1 Placebo 58 M 51 0\n",
"227 39 4 8 4 1 Placebo 58 M 52 0\n",
"228 39 5 12 4 1 Placebo 58 M 54 0\n",
"229 39 6 16 4 1 Placebo 58 M 57 0\n",
"\n",
"40\n",
" patient obs week site id treat age sex twstrs treatment\n",
"230 40 1 0 4 2 5000U 62 F 52 1\n",
"231 40 2 2 4 2 5000U 62 F 30 1\n",
"232 40 3 4 4 2 5000U 62 F 43 1\n",
"233 40 4 8 4 2 5000U 62 F 45 1\n",
"234 40 5 12 4 2 5000U 62 F 47 1\n",
"235 40 6 16 4 2 5000U 62 F 46 1\n",
"\n",
"41\n",
" patient obs week site id treat age sex twstrs treatment\n",
"236 41 1 0 4 3 10000U 73 F 54 2\n",
"237 41 2 2 4 3 10000U 73 F 52 2\n",
"238 41 3 4 4 3 10000U 73 F 52 2\n",
"239 41 4 8 4 3 10000U 73 F 54 2\n",
"240 41 5 12 4 3 10000U 73 F 51 2\n",
"241 41 6 16 4 3 10000U 73 F 57 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"42\n",
" patient obs week site id treat age sex twstrs treatment\n",
"242 42 1 0 4 4 10000U 52 F 52 2\n",
"243 42 2 2 4 4 10000U 52 F 44 2\n",
"244 42 3 4 4 4 10000U 52 F 33 2\n",
"245 42 4 8 4 4 10000U 52 F 54 2\n",
"246 42 5 12 4 4 10000U 52 F 46 2\n",
"247 42 6 16 4 4 10000U 52 F 47 2\n",
"\n",
"43\n",
" patient obs week site id treat age sex twstrs treatment\n",
"248 43 1 0 4 5 Placebo 53 F 47 0\n",
"249 43 2 2 4 5 Placebo 53 F 45 0\n",
"250 43 3 4 4 5 Placebo 53 F 41 0\n",
"251 43 4 8 4 5 Placebo 53 F 45 0\n",
"252 43 5 12 4 5 Placebo 53 F 43 0\n",
"253 43 6 16 4 5 Placebo 53 F 41 0\n",
"\n",
"44\n",
" patient obs week site id treat age sex twstrs treatment\n",
"254 44 1 0 4 6 5000U 69 M 44 1\n",
"255 44 2 2 4 6 5000U 69 M 34 1\n",
"256 44 3 4 4 6 5000U 69 M 29 1\n",
"257 44 4 8 4 6 5000U 69 M 28 1\n",
"258 44 5 12 4 6 5000U 69 M 35 1\n",
"259 44 6 16 4 6 5000U 69 M 41 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"45\n",
" patient obs week site id treat age sex twstrs treatment\n",
"260 45 1 0 4 7 Placebo 55 M 42 0\n",
"261 45 2 2 4 7 Placebo 55 M 39 0\n",
"262 45 3 4 4 7 Placebo 55 M 38 0\n",
"263 45 4 8 4 7 Placebo 55 M 47 0\n",
"264 45 5 12 4 7 Placebo 55 M 39 0\n",
"265 45 6 16 4 7 Placebo 55 M 39 0\n",
"\n",
"46\n",
" patient obs week site id treat age sex twstrs treatment\n",
"266 46 1 0 4 8 10000U 52 F 42 2\n",
"267 46 2 2 4 8 10000U 52 F 14 2\n",
"268 46 3 4 4 8 10000U 52 F 9 2\n",
"269 46 4 8 4 8 10000U 52 F 9 2\n",
"270 46 5 12 4 8 10000U 52 F 16 2\n",
"271 46 6 16 4 8 10000U 52 F 33 2\n",
"\n",
"47\n",
" patient obs week site id treat age sex twstrs treatment\n",
"272 47 1 0 5 1 10000U 51 F 44 2\n",
"273 47 2 2 5 1 10000U 51 F 34 2\n",
"274 47 3 4 5 1 10000U 51 F 32 2\n",
"275 47 4 8 5 1 10000U 51 F 35 2\n",
"276 47 5 12 5 1 10000U 51 F 54 2\n",
"277 47 6 16 5 1 10000U 51 F 53 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"48\n",
" patient obs week site id treat age sex twstrs treatment\n",
"278 48 1 0 5 2 Placebo 56 F 60 0\n",
"279 48 2 2 5 2 Placebo 56 F 57 0\n",
"280 48 3 4 5 2 Placebo 56 F 53 0\n",
"281 48 4 8 5 2 Placebo 56 F 52 0\n",
"282 48 5 12 5 2 Placebo 56 F 53 0\n",
"283 48 6 16 5 2 Placebo 56 F 58 0\n",
"\n",
"49\n",
" patient obs week site id treat age sex twstrs treatment\n",
"284 49 1 0 5 3 5000U 65 F 60 1\n",
"285 49 2 2 5 3 5000U 65 F 53 1\n",
"286 49 3 4 5 3 5000U 65 F 55 1\n",
"287 49 4 8 5 3 5000U 65 F 62 1\n",
"288 49 5 12 5 3 5000U 65 F 67 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"50\n",
" patient obs week site id treat age sex twstrs treatment\n",
"289 50 1 0 5 4 10000U 35 F 50 2\n",
"290 50 2 2 5 4 10000U 35 F 50 2\n",
"291 50 4 8 5 4 10000U 35 F 46 2\n",
"292 50 5 12 5 4 10000U 35 F 50 2\n",
"293 50 6 16 5 4 10000U 35 F 57 2\n",
"\n",
"51\n",
" patient obs week site id treat age sex twstrs treatment\n",
"294 51 1 0 5 5 5000U 43 M 38 1\n",
"295 51 2 2 5 5 5000U 43 M 27 1\n",
"296 51 3 4 5 5 5000U 43 M 16 1\n",
"297 51 4 8 5 5 5000U 43 M 19 1\n",
"298 51 5 12 5 5 5000U 43 M 23 1\n",
"299 51 6 16 5 5 5000U 43 M 26 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"52\n",
" patient obs week site id treat age sex twstrs treatment\n",
"300 52 1 0 5 6 Placebo 61 M 44 0\n",
"301 52 3 4 5 6 Placebo 61 M 46 0\n",
"302 52 4 8 5 6 Placebo 61 M 26 0\n",
"303 52 5 12 5 6 Placebo 61 M 30 0\n",
"304 52 6 16 5 6 Placebo 61 M 34 0\n",
"\n",
"53\n",
" patient obs week site id treat age sex twstrs treatment\n",
"305 53 1 0 6 1 Placebo 43 M 54 0\n",
"306 53 2 2 6 1 Placebo 43 M 53 0\n",
"307 53 3 4 6 1 Placebo 43 M 51 0\n",
"308 53 4 8 6 1 Placebo 43 M 56 0\n",
"309 53 5 12 6 1 Placebo 43 M 39 0\n",
"310 53 6 16 6 1 Placebo 43 M 9 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"54\n",
" patient obs week site id treat age sex twstrs treatment\n",
"311 54 1 0 6 2 10000U 64 F 54 2\n",
"312 54 2 2 6 2 10000U 64 F 32 2\n",
"313 54 3 4 6 2 10000U 64 F 40 2\n",
"314 54 4 8 6 2 10000U 64 F 52 2\n",
"315 54 5 12 6 2 10000U 64 F 42 2\n",
"316 54 6 16 6 2 10000U 64 F 47 2\n",
"\n",
"55\n",
" patient obs week site id treat age sex twstrs treatment\n",
"317 55 1 0 6 3 5000U 57 M 56 1\n",
"318 55 2 2 6 3 5000U 57 M 55 1\n",
"319 55 3 4 6 3 5000U 57 M 44 1\n",
"320 55 4 8 6 3 5000U 57 M 50 1\n",
"321 55 5 12 6 3 5000U 57 M 53 1\n",
"322 55 6 16 6 3 5000U 57 M 52 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"56\n",
" patient obs week site id treat age sex twstrs treatment\n",
"323 56 1 0 6 4 5000U 60 F 51 1\n",
"324 56 2 2 6 4 5000U 60 F 50 1\n",
"325 56 3 4 6 4 5000U 60 F 50 1\n",
"326 56 4 8 6 4 5000U 60 F 56 1\n",
"327 56 5 12 6 4 5000U 60 F 59 1\n",
"328 56 6 16 6 4 5000U 60 F 53 1\n",
"\n",
"57\n",
" patient obs week site id treat age sex twstrs treatment\n",
"329 57 1 0 6 5 10000U 44 F 53 2\n",
"330 57 2 2 6 5 10000U 44 F 56 2\n",
"331 57 3 4 6 5 10000U 44 F 47 2\n",
"332 57 4 8 6 5 10000U 44 F 53 2\n",
"333 57 5 12 6 5 10000U 44 F 51 2\n",
"334 57 6 16 6 5 10000U 44 F 51 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"58\n",
" patient obs week site id treat age sex twstrs treatment\n",
"335 58 1 0 6 6 Placebo 41 F 36 0\n",
"336 58 2 2 6 6 Placebo 41 F 29 0\n",
"337 58 3 4 6 6 Placebo 41 F 24 0\n",
"338 58 4 8 6 6 Placebo 41 F 32 0\n",
"339 58 5 12 6 6 Placebo 41 F 45 0\n",
"340 58 6 16 6 6 Placebo 41 F 36 0\n",
"\n",
"59\n",
" patient obs week site id treat age sex twstrs treatment\n",
"341 59 1 0 6 7 5000U 51 F 59 1\n",
"342 59 2 2 6 7 5000U 51 F 53 1\n",
"343 59 3 4 6 7 5000U 51 F 45 1\n",
"344 59 4 8 6 7 5000U 51 F 44 1\n",
"345 59 5 12 6 7 5000U 51 F 50 1\n",
"346 59 6 16 6 7 5000U 51 F 48 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"60\n",
" patient obs week site id treat age sex twstrs treatment\n",
"347 60 1 0 6 8 Placebo 57 F 49 0\n",
"348 60 2 2 6 8 Placebo 57 F 50 0\n",
"349 60 3 4 6 8 Placebo 57 F 48 0\n",
"350 60 4 8 6 8 Placebo 57 F 56 0\n",
"351 60 5 12 6 8 Placebo 57 F 49 0\n",
"352 60 6 16 6 8 Placebo 57 F 57 0\n",
"\n",
"61\n",
" patient obs week site id treat age sex twstrs treatment\n",
"353 61 1 0 6 9 10000U 42 F 50 2\n",
"354 61 2 2 6 9 10000U 42 F 38 2\n",
"355 61 3 4 6 9 10000U 42 F 42 2\n",
"356 61 4 8 6 9 10000U 42 F 43 2\n",
"357 61 5 12 6 9 10000U 42 F 42 2\n",
"358 61 6 16 6 9 10000U 42 F 46 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"62\n",
" patient obs week site id treat age sex twstrs treatment\n",
"359 62 1 0 6 10 Placebo 48 F 46 0\n",
"360 62 2 2 6 10 Placebo 48 F 48 0\n",
"361 62 3 4 6 10 Placebo 48 F 46 0\n",
"362 62 4 8 6 10 Placebo 48 F 57 0\n",
"363 62 5 12 6 10 Placebo 48 F 57 0\n",
"364 62 6 16 6 10 Placebo 48 F 49 0\n",
"\n",
"63\n",
" patient obs week site id treat age sex twstrs treatment\n",
"365 63 1 0 6 11 10000U 57 M 55 2\n",
"366 63 2 2 6 11 10000U 57 M 34 2\n",
"367 63 3 4 6 11 10000U 57 M 26 2\n",
"368 63 4 8 6 11 10000U 57 M 40 2\n",
"369 63 5 12 6 11 10000U 57 M 49 2\n",
"370 63 6 16 6 11 10000U 57 M 47 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"64\n",
" patient obs week site id treat age sex twstrs treatment\n",
"371 64 1 0 6 12 5000U 39 M 46 1\n",
"372 64 2 2 6 12 5000U 39 M 44 1\n",
"373 64 3 4 6 12 5000U 39 M 47 1\n",
"374 64 4 8 6 12 5000U 39 M 50 1\n",
"375 64 5 12 6 12 5000U 39 M 46 1\n",
"376 64 6 16 6 12 5000U 39 M 51 1\n",
"\n",
"65\n",
" patient obs week site id treat age sex twstrs treatment\n",
"377 65 1 0 6 13 10000U 67 M 34 2\n",
"378 65 2 2 6 13 10000U 67 M 31 2\n",
"379 65 3 4 6 13 10000U 67 M 25 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"66\n",
" patient obs week site id treat age sex twstrs treatment\n",
"380 66 1 0 6 14 5000U 39 F 57 1\n",
"381 66 2 2 6 14 5000U 39 F 48 1\n",
"382 66 3 4 6 14 5000U 39 F 50 1\n",
"383 66 4 8 6 14 5000U 39 F 50 1\n",
"384 66 5 12 6 14 5000U 39 F 50 1\n",
"385 66 6 16 6 14 5000U 39 F 49 1\n",
"\n",
"67\n",
" patient obs week site id treat age sex twstrs treatment\n",
"386 67 1 0 6 15 Placebo 69 M 41 0\n",
"387 67 2 2 6 15 Placebo 69 M 40 0\n",
"388 67 3 4 6 15 Placebo 69 M 42 0\n",
"389 67 4 8 6 15 Placebo 69 M 38 0\n",
"390 67 5 12 6 15 Placebo 69 M 50 0\n",
"391 67 6 16 6 15 Placebo 69 M 56 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"68\n",
" patient obs week site id treat age sex twstrs treatment\n",
"392 68 1 0 7 1 5000U 54 F 49 1\n",
"393 68 2 2 7 1 5000U 54 F 25 1\n",
"394 68 3 4 7 1 5000U 54 F 30 1\n",
"395 68 4 8 7 1 5000U 54 F 41 1\n",
"396 68 5 12 7 1 5000U 54 F 41 1\n",
"397 68 6 16 7 1 5000U 54 F 31 1\n",
"\n",
"69\n",
" patient obs week site id treat age sex twstrs treatment\n",
"398 69 1 0 7 2 Placebo 67 F 42 0\n",
"399 69 2 2 7 2 Placebo 67 F 30 0\n",
"400 69 3 4 7 2 Placebo 67 F 40 0\n",
"401 69 4 8 7 2 Placebo 67 F 43 0\n",
"402 69 5 12 7 2 Placebo 67 F 36 0\n",
"403 69 6 16 7 2 Placebo 67 F 45 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"70\n",
" patient obs week site id treat age sex twstrs treatment\n",
"404 70 1 0 7 3 10000U 58 F 31 2\n",
"405 70 2 2 7 3 10000U 58 F 18 2\n",
"406 70 3 4 7 3 10000U 58 F 23 2\n",
"407 70 4 8 7 3 10000U 58 F 26 2\n",
"408 70 5 12 7 3 10000U 58 F 33 2\n",
"409 70 6 16 7 3 10000U 58 F 41 2\n",
"\n",
"71\n",
" patient obs week site id treat age sex twstrs treatment\n",
"410 71 1 0 7 4 Placebo 72 F 50 0\n",
"411 71 2 2 7 4 Placebo 72 F 27 0\n",
"412 71 3 4 7 4 Placebo 72 F 43 0\n",
"413 71 4 8 7 4 Placebo 72 F 32 0\n",
"414 71 5 12 7 4 Placebo 72 F 40 0\n",
"415 71 6 16 7 4 Placebo 72 F 47 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"72\n",
" patient obs week site id treat age sex twstrs treatment\n",
"416 72 1 0 7 5 10000U 65 F 35 2\n",
"417 72 2 2 7 5 10000U 65 F 24 2\n",
"418 72 3 4 7 5 10000U 65 F 34 2\n",
"419 72 4 8 7 5 10000U 65 F 28 2\n",
"420 72 5 12 7 5 10000U 65 F 34 2\n",
"421 72 6 16 7 5 10000U 65 F 28 2\n",
"\n",
"73\n",
" patient obs week site id treat age sex twstrs treatment\n",
"422 73 1 0 7 6 5000U 68 F 38 1\n",
"423 73 2 2 7 6 5000U 68 F 25 1\n",
"424 73 3 4 7 6 5000U 68 F 21 1\n",
"425 73 4 8 7 6 5000U 68 F 33 1\n",
"426 73 5 12 7 6 5000U 68 F 42 1\n",
"427 73 6 16 7 6 5000U 68 F 53 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"74\n",
" patient obs week site id treat age sex twstrs treatment\n",
"428 74 1 0 7 7 10000U 75 F 53 2\n",
"429 74 2 2 7 7 10000U 75 F 40 2\n",
"430 74 3 4 7 7 10000U 75 F 38 2\n",
"431 74 4 8 7 7 10000U 75 F 44 2\n",
"432 74 5 12 7 7 10000U 75 F 47 2\n",
"433 74 6 16 7 7 10000U 75 F 53 2\n",
"\n",
"75\n",
" patient obs week site id treat age sex twstrs treatment\n",
"434 75 1 0 7 8 Placebo 26 F 42 0\n",
"435 75 2 2 7 8 Placebo 26 F 48 0\n",
"436 75 3 4 7 8 Placebo 26 F 26 0\n",
"437 75 4 8 7 8 Placebo 26 F 37 0\n",
"438 75 5 12 7 8 Placebo 26 F 37 0\n",
"439 75 6 16 7 8 Placebo 26 F 43 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"76\n",
" patient obs week site id treat age sex twstrs treatment\n",
"440 76 1 0 7 9 5000U 36 F 53 1\n",
"441 76 2 2 7 9 5000U 36 F 45 1\n",
"442 76 3 4 7 9 5000U 36 F 52 1\n",
"443 76 4 8 7 9 5000U 36 F 51 1\n",
"444 76 5 12 7 9 5000U 36 F 52 1\n",
"445 76 6 16 7 9 5000U 36 F 53 1\n",
"\n",
"77\n",
" patient obs week site id treat age sex twstrs treatment\n",
"446 77 1 0 7 10 10000U 72 M 46 2\n",
"447 77 2 2 7 10 10000U 72 M 47 2\n",
"448 77 3 4 7 10 10000U 72 M 45 2\n",
"449 77 4 8 7 10 10000U 72 M 45 2\n",
"450 77 5 12 7 10 10000U 72 M 50 2\n",
"451 77 6 16 7 10 10000U 72 M 52 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"78\n",
" patient obs week site id treat age sex twstrs treatment\n",
"452 78 1 0 7 11 Placebo 54 F 50 0\n",
"453 78 2 2 7 11 Placebo 54 F 42 0\n",
"454 78 3 4 7 11 Placebo 54 F 52 0\n",
"455 78 4 8 7 11 Placebo 54 F 60 0\n",
"456 78 5 12 7 11 Placebo 54 F 54 0\n",
"457 78 6 16 7 11 Placebo 54 F 59 0\n",
"\n",
"79\n",
" patient obs week site id treat age sex twstrs treatment\n",
"458 79 1 0 7 12 5000U 64 F 43 1\n",
"459 79 2 2 7 12 5000U 64 F 24 1\n",
"460 79 3 4 7 12 5000U 64 F 17 1\n",
"461 79 4 8 7 12 5000U 64 F 37 1\n",
"462 79 5 12 7 12 5000U 64 F 36 1\n",
"463 79 6 16 7 12 5000U 64 F 38 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"80\n",
" patient obs week site id treat age sex twstrs treatment\n",
"464 80 1 0 8 1 Placebo 39 F 46 0\n",
"465 80 2 2 8 1 Placebo 39 F 39 0\n",
"466 80 3 4 8 1 Placebo 39 F 25 0\n",
"467 80 4 8 8 1 Placebo 39 F 15 0\n",
"468 80 5 12 8 1 Placebo 39 F 21 0\n",
"469 80 6 16 8 1 Placebo 39 F 25 0\n",
"\n",
"81\n",
" patient obs week site id treat age sex twstrs treatment\n",
"470 81 1 0 8 2 10000U 54 M 41 2\n",
"471 81 2 2 8 2 10000U 54 M 30 2\n",
"472 81 3 4 8 2 10000U 54 M 44 2\n",
"473 81 4 8 8 2 10000U 54 M 46 2\n",
"474 81 5 12 8 2 10000U 54 M 46 2\n",
"475 81 6 16 8 2 10000U 54 M 44 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"82\n",
" patient obs week site id treat age sex twstrs treatment\n",
"476 82 1 0 8 3 5000U 48 M 33 1\n",
"477 82 2 2 8 3 5000U 48 M 27 1\n",
"478 82 3 4 8 3 5000U 48 M 25 1\n",
"479 82 4 8 8 3 5000U 48 M 30 1\n",
"480 82 5 12 8 3 5000U 48 M 28 1\n",
"481 82 6 16 8 3 5000U 48 M 30 1\n",
"\n",
"83\n",
" patient obs week site id treat age sex twstrs treatment\n",
"482 83 1 0 8 4 5000U 83 F 36 1\n",
"483 83 2 2 8 4 5000U 83 F 15 1\n",
"484 83 3 4 8 4 5000U 83 F 16 1\n",
"485 83 4 8 8 4 5000U 83 F 17 1\n",
"486 83 5 12 8 4 5000U 83 F 22 1\n",
"487 83 6 16 8 4 5000U 83 F 41 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"84\n",
" patient obs week site id treat age sex twstrs treatment\n",
"488 84 1 0 8 5 10000U 74 M 33 2\n",
"489 84 2 2 8 5 10000U 74 M 32 2\n",
"490 84 3 4 8 5 10000U 74 M 31 2\n",
"491 84 4 8 8 5 10000U 74 M 27 2\n",
"492 84 5 12 8 5 10000U 74 M 49 2\n",
"493 84 6 16 8 5 10000U 74 M 60 2\n",
"\n",
"85\n",
" patient obs week site id treat age sex twstrs treatment\n",
"494 85 1 0 8 6 Placebo 41 M 37 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"86\n",
" patient obs week site id treat age sex twstrs treatment\n",
"495 86 1 0 8 7 10000U 65 F 24 2\n",
"496 86 2 2 8 7 10000U 65 F 29 2\n",
"497 86 3 4 8 7 10000U 65 F 18 2\n",
"498 86 4 8 8 7 10000U 65 F 20 2\n",
"499 86 5 12 8 7 10000U 65 F 25 2\n",
"500 86 6 16 8 7 10000U 65 F 41 2\n",
"\n",
"87\n",
" patient obs week site id treat age sex twstrs treatment\n",
"501 87 1 0 8 8 5000U 79 M 42 1\n",
"502 87 2 2 8 8 5000U 79 M 23 1\n",
"503 87 3 4 8 8 5000U 79 M 30 1\n",
"504 87 4 8 8 8 5000U 79 M 36 1\n",
"505 87 5 12 8 8 5000U 79 M 41 1\n",
"506 87 6 16 8 8 5000U 79 M 43 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"88\n",
" patient obs week site id treat age sex twstrs treatment\n",
"507 88 1 0 8 9 Placebo 63 M 30 0\n",
"508 88 2 2 8 9 Placebo 63 M 22 0\n",
"509 88 3 4 8 9 Placebo 63 M 21 0\n",
"510 88 4 8 8 9 Placebo 63 M 25 0\n",
"511 88 5 12 8 9 Placebo 63 M 26 0\n",
"512 88 6 16 8 9 Placebo 63 M 33 0\n",
"\n",
"89\n",
" patient obs week site id treat age sex twstrs treatment\n",
"513 89 1 0 8 10 Placebo 63 F 42 0\n",
"514 89 2 2 8 10 Placebo 63 F 46 0\n",
"515 89 3 4 8 10 Placebo 63 F 41 0\n",
"516 89 4 8 8 10 Placebo 63 F 43 0\n",
"517 89 5 12 8 10 Placebo 63 F 49 0\n",
"518 89 6 16 8 10 Placebo 63 F 54 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"90\n",
" patient obs week site id treat age sex twstrs treatment\n",
"519 90 1 0 8 11 10000U 34 F 49 2\n",
"520 90 2 2 8 11 10000U 34 F 25 2\n",
"521 90 3 4 8 11 10000U 34 F 30 2\n",
"522 90 4 8 8 11 10000U 34 F 49 2\n",
"523 90 5 12 8 11 10000U 34 F 55 2\n",
"524 90 6 16 8 11 10000U 34 F 58 2\n",
"\n",
"91\n",
" patient obs week site id treat age sex twstrs treatment\n",
"525 91 1 0 8 12 5000U 42 M 58 1\n",
"526 91 2 2 8 12 5000U 42 M 46 1\n",
"527 91 3 4 8 12 5000U 42 M 46 1\n",
"528 91 4 8 8 12 5000U 42 M 50 1\n",
"529 91 5 12 8 12 5000U 42 M 56 1\n",
"530 91 6 16 8 12 5000U 42 M 60 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"92\n",
" patient obs week site id treat age sex twstrs treatment\n",
"531 92 1 0 8 13 Placebo 57 M 26 0\n",
"532 92 2 2 8 13 Placebo 57 M 26 0\n",
"533 92 3 4 8 13 Placebo 57 M 27 0\n",
"534 92 4 8 8 13 Placebo 57 M 22 0\n",
"535 92 5 12 8 13 Placebo 57 M 38 0\n",
"536 92 6 16 8 13 Placebo 57 M 35 0\n",
"\n",
"93\n",
" patient obs week site id treat age sex twstrs treatment\n",
"537 93 1 0 8 14 5000U 68 M 37 1\n",
"538 93 3 4 8 14 5000U 68 M 23 1\n",
"539 93 4 8 8 14 5000U 68 M 18 1\n",
"540 93 5 12 8 14 5000U 68 M 34 1\n",
"541 93 6 16 8 14 5000U 68 M 36 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"94\n",
" patient obs week site id treat age sex twstrs treatment\n",
"542 94 1 0 8 15 10000U 51 M 40 2\n",
"543 94 2 2 8 15 10000U 51 M 24 2\n",
"544 94 3 4 8 15 10000U 51 M 25 2\n",
"545 94 4 8 8 15 10000U 51 M 37 2\n",
"546 94 6 16 8 15 10000U 51 M 38 2\n",
"\n",
"95\n",
" patient obs week site id treat age sex twstrs treatment\n",
"547 95 1 0 8 16 5000U 51 F 33 1\n",
"548 95 2 2 8 16 5000U 51 F 10 1\n",
"549 95 3 4 8 16 5000U 51 F 13 1\n",
"550 95 4 8 8 16 5000U 51 F 16 1\n",
"551 95 5 12 8 16 5000U 51 F 32 1\n",
"552 95 6 16 8 16 5000U 51 F 16 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"96\n",
" patient obs week site id treat age sex twstrs treatment\n",
"553 96 1 0 8 17 10000U 61 F 41 2\n",
"554 96 2 2 8 17 10000U 61 F 50 2\n",
"555 96 3 4 8 17 10000U 61 F 22 2\n",
"556 96 4 8 8 17 10000U 61 F 28 2\n",
"557 96 5 12 8 17 10000U 61 F 34 2\n",
"558 96 6 16 8 17 10000U 61 F 36 2\n",
"\n",
"97\n",
" patient obs week site id treat age sex twstrs treatment\n",
"559 97 1 0 8 18 Placebo 42 M 46 0\n",
"560 97 3 4 8 18 Placebo 42 M 41 0\n",
"561 97 4 8 8 18 Placebo 42 M 41 0\n",
"562 97 5 12 8 18 Placebo 42 M 58 0\n",
"563 97 6 16 8 18 Placebo 42 M 53 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"98\n",
" patient obs week site id treat age sex twstrs treatment\n",
"564 98 1 0 8 19 10000U 73 F 40 2\n",
"565 98 2 2 8 19 10000U 73 F 28 2\n",
"566 98 3 4 8 19 10000U 73 F 29 2\n",
"567 98 4 8 8 19 10000U 73 F 30 2\n",
"568 98 5 12 8 19 10000U 73 F 37 2\n",
"569 98 6 16 8 19 10000U 73 F 44 2\n",
"\n",
"99\n",
" patient obs week site id treat age sex twstrs treatment\n",
"570 99 1 0 9 1 10000U 57 M 40 2\n",
"571 99 2 2 9 1 10000U 57 M 16 2\n",
"572 99 3 4 9 1 10000U 57 M 18 2\n",
"573 99 4 8 9 1 10000U 57 M 25 2\n",
"574 99 5 12 9 1 10000U 57 M 33 2\n",
"575 99 6 16 9 1 10000U 57 M 48 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"100\n",
" patient obs week site id treat age sex twstrs treatment\n",
"576 100 1 0 9 2 Placebo 59 M 61 0\n",
"577 100 2 2 9 2 Placebo 59 M 52 0\n",
"578 100 3 4 9 2 Placebo 59 M 61 0\n",
"579 100 4 8 9 2 Placebo 59 M 68 0\n",
"580 100 5 12 9 2 Placebo 59 M 59 0\n",
"581 100 6 16 9 2 Placebo 59 M 71 0\n",
"\n",
"101\n",
" patient obs week site id treat age sex twstrs treatment\n",
"582 101 1 0 9 3 5000U 57 M 35 1\n",
"583 101 2 2 9 3 5000U 57 M 21 1\n",
"584 101 3 4 9 3 5000U 57 M 29 1\n",
"585 101 4 8 9 3 5000U 57 M 30 1\n",
"586 101 5 12 9 3 5000U 57 M 35 1\n",
"587 101 6 16 9 3 5000U 57 M 48 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"102\n",
" patient obs week site id treat age sex twstrs treatment\n",
"588 102 1 0 9 4 Placebo 68 F 58 0\n",
"589 102 2 2 9 4 Placebo 68 F 38 0\n",
"590 102 3 4 9 4 Placebo 68 F 50 0\n",
"591 102 4 8 9 4 Placebo 68 F 53 0\n",
"592 102 5 12 9 4 Placebo 68 F 47 0\n",
"593 102 6 16 9 4 Placebo 68 F 59 0\n",
"\n",
"103\n",
" patient obs week site id treat age sex twstrs treatment\n",
"594 103 1 0 9 5 5000U 55 F 49 1\n",
"595 103 2 2 9 5 5000U 55 F 45 1\n",
"596 103 3 4 9 5 5000U 55 F 36 1\n",
"597 103 5 12 9 5 5000U 55 F 40 1\n",
"598 103 6 16 9 5 5000U 55 F 52 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"104\n",
" patient obs week site id treat age sex twstrs treatment\n",
"599 104 1 0 9 6 10000U 46 F 52 2\n",
"600 104 2 2 9 6 10000U 46 F 46 2\n",
"601 104 3 4 9 6 10000U 46 F 36 2\n",
"602 104 5 12 9 6 10000U 46 F 45 2\n",
"603 104 6 16 9 6 10000U 46 F 54 2\n",
"\n",
"105\n",
" patient obs week site id treat age sex twstrs treatment\n",
"604 105 1 0 9 7 Placebo 79 F 45 0\n",
"605 105 2 2 9 7 Placebo 79 F 46 0\n",
"606 105 3 4 9 7 Placebo 79 F 33 0\n",
"607 105 4 8 9 7 Placebo 79 F 44 0\n",
"608 105 5 12 9 7 Placebo 79 F 46 0\n",
"609 105 6 16 9 7 Placebo 79 F 48 0"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"106\n",
" patient obs week site id treat age sex twstrs treatment\n",
"610 106 1 0 9 8 5000U 43 M 67 1\n",
"611 106 2 2 9 8 5000U 43 M 63 1\n",
"612 106 3 4 9 8 5000U 43 M 71 1\n",
"613 106 4 8 9 8 5000U 43 M 66 1\n",
"614 106 5 12 9 8 5000U 43 M 68 1\n",
"615 106 6 16 9 8 5000U 43 M 71 1\n",
"\n",
"107\n",
" patient obs week site id treat age sex twstrs treatment\n",
"616 107 1 0 9 9 10000U 50 M 57 2\n",
"617 107 3 4 9 9 10000U 50 M 36 2\n",
"618 107 4 8 9 9 10000U 50 M 23 2\n",
"619 107 6 16 9 9 10000U 50 M 52 2"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n",
"108\n",
" patient obs week site id treat age sex twstrs treatment\n",
"620 108 1 0 9 10 10000U 39 F 63 2\n",
"621 108 2 2 9 10 10000U 39 F 51 2\n",
"622 108 3 4 9 10 10000U 39 F 46 2\n",
"623 108 4 8 9 10 10000U 39 F 50 2\n",
"624 108 5 12 9 10 10000U 39 F 50 2\n",
"625 108 6 16 9 10 10000U 39 F 54 2\n",
"\n",
"109\n",
" patient obs week site id treat age sex twstrs treatment\n",
"626 109 1 0 9 11 5000U 57 M 53 1\n",
"627 109 2 2 9 11 5000U 57 M 38 1\n",
"628 109 4 8 9 11 5000U 57 M 33 1\n",
"629 109 5 12 9 11 5000U 57 M 36 1\n",
"630 109 6 16 9 11 5000U 57 M 51 1"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"\n"
]
}
],
"prompt_number": 135
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A common data analysis procedure is the **split-apply-combine** operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.\n",
"\n",
"For example, we may want to aggregate our data with with some function.\n",
"\n",
"![split-apply-combine](http://f.cl.ly/items/0s0Z252j0X0c3k3P1M47/Screen%20Shot%202013-06-02%20at%203.04.04%20PM.png)\n",
"\n",
"<div align=\"right\">*(figure taken from \"Python for Data Analysis\", p.251)*</div>"
]
},
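{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of the idea (using a small throwaway DataFrame rather than the cervical dystonia data), grouping rows by a key column, applying `sum` to each group, and recombining the results looks like this:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Toy illustration of split-apply-combine:\n",
"# split rows on 'key', apply sum to each group's 'value', combine into a Series\n",
"toy = pd.DataFrame({'key': ['a', 'a', 'b', 'b'], 'value': [1, 2, 3, 4]})\n",
"toy.groupby('key')['value'].sum()"
],
"language": "python",
"metadata": {},
"outputs": []
},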
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can aggregate in Pandas using the `aggregate` (or `agg`, for short) method:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia_grouped.agg(mean).head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 136,
"text": [
" patient obs week site id age twstrs treatment\n",
"patient \n",
"1 1 3.5 7.0 1 1 65 33.000000 1\n",
"2 2 3.5 7.0 1 2 70 47.666667 2\n",
"3 3 3.5 7.0 1 3 64 30.500000 1\n",
"4 4 2.5 3.5 1 4 59 60.000000 0\n",
"5 5 3.5 7.0 1 5 76 46.166667 2"
]
}
],
"prompt_number": 136
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that the `treat` and `sex` variables are not included in the aggregation. Since it does not make sense to aggregate non-string variables, these columns are simply ignored by the method.\n",
"\n",
"Some aggregation functions are so common that Pandas has a convenience method for them, such as `mean`:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia_grouped.mean().head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 137,
"text": [
" patient obs week site id age twstrs treatment\n",
"patient \n",
"1 1 3.5 7.0 1 1 65 33.000000 1\n",
"2 2 3.5 7.0 1 2 70 47.666667 2\n",
"3 3 3.5 7.0 1 3 64 30.500000 1\n",
"4 4 2.5 3.5 1 4 59 60.000000 0\n",
"5 5 3.5 7.0 1 5 76 46.166667 2"
]
}
],
"prompt_number": 137
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `add_prefix` and `add_suffix` methods can be used to give the columns of the resulting table labels that reflect the transformation:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia_grouped.mean().add_suffix('_mean').head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 138,
"text": [
" patient_mean obs_mean week_mean site_mean id_mean age_mean \\\n",
"patient \n",
"1 1 3.5 7.0 1 1 65 \n",
"2 2 3.5 7.0 1 2 70 \n",
"3 3 3.5 7.0 1 3 64 \n",
"4 4 2.5 3.5 1 4 59 \n",
"5 5 3.5 7.0 1 5 76 \n",
"\n",
" twstrs_mean treatment_mean \n",
"patient \n",
"1 33.000000 1 \n",
"2 47.666667 2 \n",
"3 30.500000 1 \n",
"4 60.000000 0 \n",
"5 46.166667 2 "
]
}
],
"prompt_number": 138
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# The median of the `twstrs` variable\n",
"cdystonia_grouped['twstrs'].quantile(0.5)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 139,
"text": [
"patient\n",
"1 34.0\n",
"2 50.5\n",
"3 30.5\n",
"4 61.5\n",
"5 48.5\n",
"6 48.0\n",
"7 42.0\n",
"8 32.5\n",
"9 35.5\n",
"10 20.5\n",
"...\n",
"99 29.0\n",
"100 61.0\n",
"101 32.5\n",
"102 51.5\n",
"103 45.0\n",
"104 46.0\n",
"105 45.5\n",
"106 67.5\n",
"107 44.0\n",
"108 50.5\n",
"109 38.0\n",
"Length: 109, dtype: float64"
]
}
],
"prompt_number": 139
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we wish, we can easily aggregate according to multiple keys:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia.groupby(['week','site']).mean().head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 140,
"text": [
" patient obs id age twstrs treatment\n",
"week site \n",
"0 1 6.5 1 6.5 59.000000 43.083333 1.000000\n",
" 2 19.5 1 7.5 53.928571 51.857143 0.928571\n",
" 3 32.5 1 6.5 51.500000 38.750000 1.000000\n",
" 4 42.5 1 4.5 59.250000 48.125000 1.000000\n",
" 5 49.5 1 3.5 51.833333 49.333333 1.000000"
]
}
],
"prompt_number": 140
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternately, we can **transform** the data, using a function of our choice with the `transform` method:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"normalize = lambda x: (x - x.mean())/x.std()\n",
"\n",
"cdystonia_grouped.transform(normalize).head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 141,
"text": [
" patient obs week site id age twstrs treatment\n",
"0 NaN -1.336306 -1.135550 NaN NaN NaN -0.181369 NaN\n",
"1 NaN -0.801784 -0.811107 NaN NaN NaN -0.544107 NaN\n",
"2 NaN -0.267261 -0.486664 NaN NaN NaN -1.632322 NaN\n",
"3 NaN 0.267261 0.162221 NaN NaN NaN 0.725476 NaN\n",
"4 NaN 0.801784 0.811107 NaN NaN NaN 1.088214 NaN"
]
}
],
"prompt_number": 141
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is easy to do column selection within `groupby` operations, if we are only interested split-apply-combine operations on a subset of columns:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia_grouped['twstrs'].mean().head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 142,
"text": [
"patient\n",
"1 33.000000\n",
"2 47.666667\n",
"3 30.500000\n",
"4 60.000000\n",
"5 46.166667\n",
"Name: twstrs, dtype: float64"
]
}
],
"prompt_number": 142
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# This gives the same result as a DataFrame\n",
"cdystonia_grouped[['twstrs']].mean().head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 143,
"text": [
" twstrs\n",
"patient \n",
"1 33.000000\n",
"2 47.666667\n",
"3 30.500000\n",
"4 60.000000\n",
"5 46.166667"
]
}
],
"prompt_number": 143
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you simply want to divide your DataFrame into chunks for later use, its easy to convert them into a dict so that they can be easily indexed out as needed:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"chunks = dict(list(cdystonia_grouped))"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 144
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"chunks[4]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 145,
"text": [
" patient obs week site id treat age sex twstrs treatment\n",
"18 4 1 0 1 4 Placebo 59 F 53 0\n",
"19 4 2 2 1 4 Placebo 59 F 61 0\n",
"20 4 3 4 1 4 Placebo 59 F 64 0\n",
"21 4 4 8 1 4 Placebo 59 F 62 0"
]
}
],
"prompt_number": 145
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By default, `groupby` groups by row, but we can specify the `axis` argument to change this. For example, we can group our columns by type this way:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"dict(list(cdystonia.groupby(cdystonia.dtypes, axis=1)))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 146,
"text": [
"{dtype('int64'): <class 'pandas.core.frame.DataFrame'>\n",
"Int64Index: 631 entries, 0 to 630\n",
"Data columns (total 8 columns):\n",
"patient 631 non-null values\n",
"obs 631 non-null values\n",
"week 631 non-null values\n",
"site 631 non-null values\n",
"id 631 non-null values\n",
"age 631 non-null values\n",
"twstrs 631 non-null values\n",
"treatment 631 non-null values\n",
"dtypes: int64(8),\n",
" dtype('O'): <class 'pandas.core.frame.DataFrame'>\n",
"Int64Index: 631 entries, 0 to 630\n",
"Data columns (total 2 columns):\n",
"treat 631 non-null values\n",
"sex 631 non-null values\n",
"dtypes: object(2)}"
]
}
],
"prompt_number": 146
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Its also possible to group by one or more levels of a hierarchical index. Recall `cdystonia2`, which we created with a hierarchical index:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia2.head(10)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 147,
"text": [
" week site id treat age sex twstrs\n",
"patient obs \n",
"1 1 0 1 1 5000U 65 F 32\n",
" 2 2 1 1 5000U 65 F 30\n",
" 3 4 1 1 5000U 65 F 24\n",
" 4 8 1 1 5000U 65 F 37\n",
" 5 12 1 1 5000U 65 F 39\n",
" 6 16 1 1 5000U 65 F 36\n",
"2 1 0 1 2 10000U 70 F 60\n",
" 2 2 1 2 10000U 70 F 26\n",
" 3 4 1 2 10000U 70 F 27\n",
" 4 8 1 2 10000U 70 F 41"
]
}
],
"prompt_number": 147
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"cdystonia2.groupby(level='obs', axis=0)['twstrs'].mean()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 148,
"text": [
"obs\n",
"1 45.651376\n",
"2 37.611650\n",
"3 37.066038\n",
"4 39.807692\n",
"5 42.913462\n",
"6 45.628571\n",
"Name: twstrs, dtype: float64"
]
}
],
"prompt_number": 148
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Apply\n",
"\n",
"We can generalize the split-apply-combine methodology by using `apply` function. This allows us to invoke any function we wish on a grouped dataset and recombine them into a DataFrame."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The function below takes a DataFrame and a column name, sorts by the column, and takes the `n` largest values of that column. We can use this with `apply` to return the largest values from every group in a DataFrame in a single call. "
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"def top(df, column, n=5):\n",
" return df.sort_index(by=column, ascending=False)[:n]"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 149
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield `segments_merged`). Say we wanted to return the 3 longest segments travelled by each ship:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"top3segments = segments_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]\n",
"top3segments"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 150,
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"MultiIndex: 29464 entries, (1, 6) to (999999999, 262525)\n",
"Data columns (total 2 columns):\n",
"names 29464 non-null values\n",
"seg_length 29464 non-null values\n",
"dtypes: float64(1), object(1)"
]
}
],
"prompt_number": 150
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that additional arguments for the applied function can be passed via `apply` after the function name. It assumes that the DataFrame is the first argument."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"top3segments.head(20)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 151,
"text": [
" names seg_length\n",
"mmsi \n",
"1 6 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... 76.0\n",
" 5 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... 17.4\n",
" 7 Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho... 13.7\n",
"9 15 000000009/Raven/Shearwater 47.2\n",
" 14 000000009/Raven/Shearwater 31.4\n",
" 13 000000009/Raven/Shearwater 19.3\n",
"21 16 Us Gov Vessel 48.7\n",
" 25 Us Gov Vessel 25.3\n",
" 30 Us Gov Vessel 21.7\n",
"74 35 Mcfaul/Sarah Bell 7.4\n",
" 34 Mcfaul/Sarah Bell 1.4\n",
"103 37 Ron G/Us Navy Warship 103/Us Warship 103 87.5\n",
" 41 Ron G/Us Navy Warship 103/Us Warship 103 62.6\n",
" 43 Ron G/Us Navy Warship 103/Us Warship 103 59.1\n",
"310 51 Arabella 77.4\n",
" 58 Arabella 30.7\n",
" 49 Arabella 30.4\n",
"3011 74 Charleston 121.6\n",
" 69 Charleston 89.7\n",
" 77 Charleston 59.7"
]
}
],
"prompt_number": 151
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Recall the microbiome data sets that we used previously for the concatenation example. Suppose that we wish to aggregate the data at a higher biological classification than genus. For example, we can identify samples down to *class*, which is the 3rd level of organization in each index."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mb1.index[:3]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 152,
"text": [
"Index([u'Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera', u'Archaea \"Crenarchaeota\" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus', u'Archaea \"Crenarchaeota\" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus'], dtype=object)"
]
}
],
"prompt_number": 152
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using the string methods `split` and `join` we can create an index that just uses the first three classifications: domain, phylum and class."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"class_index = mb1.index.map(lambda x: ' '.join(x.split(' ')[:3]))"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 153
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mb_class = mb1.copy()\n",
"mb_class.index = class_index"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 154
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"However, since there are multiple taxonomic units with the same class, our index is no longer unique:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mb_class.head()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 155,
"text": [
" Count\n",
"Archaea \"Crenarchaeota\" Thermoprotei 7\n",
"Archaea \"Crenarchaeota\" Thermoprotei 2\n",
"Archaea \"Crenarchaeota\" Thermoprotei 3\n",
"Archaea \"Crenarchaeota\" Thermoprotei 3\n",
"Archaea \"Euryarchaeota\" \"Methanomicrobia\" 7"
]
}
],
"prompt_number": 155
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can re-establish a unique index by summing all rows with the same class, using `groupby`:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"mb_class.groupby(level=0).sum().head(10)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 156,
"text": [
" Count\n",
"Archaea \"Crenarchaeota\" Thermoprotei 15\n",
"Archaea \"Euryarchaeota\" \"Methanomicrobia\" 9\n",
"Archaea \"Euryarchaeota\" Archaeoglobi 2\n",
"Archaea \"Euryarchaeota\" Halobacteria 12\n",
"Archaea \"Euryarchaeota\" Methanococci 1\n",
"Archaea \"Euryarchaeota\" Methanopyri 12\n",
"Archaea \"Euryarchaeota\" Thermoplasmata 2\n",
"Bacteria \"Actinobacteria\" Actinobacteria 1740\n",
"Bacteria \"Aquificae\" Aquificae 11\n",
"Bacteria \"Bacteroidetes\" \"Bacteroidia\" 1"
]
}
],
"prompt_number": 156
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise\n",
"\n",
"Load the dataset in `titanic.xls`. It contains data on all the passengers that travelled on the Titanic."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"from IPython.core.display import HTML\n",
"HTML(filename='data/titanic.html')"
],
"language": "python",
"metadata": {},
"outputs": [
{
"html": [
"<hr><h2>Data frame:titanic3</h2>1309 observations and 14 variables, maximum # NAs:1188<hr>\n",
"<TABLE BORDER>\n",
"<tr><td>Name</td><td>Labels</td><td>Units</td><td>Levels</td><td>Storage</td><td>NAs</td></tr>\n",
"<tr><td>pclass</td><td></td><td></td><td><a href=\"#pclass\"><div align=right> 3</div></a></td><td>integer</td><td><div align=right> 0</div></td></tr>\n",
"<tr><td>survived</td><td>Survived</td><td></td><td><div align=right></div></td><td>double</td><td><div align=right> 0</div></td></tr>\n",
"<tr><td>name</td><td>Name</td><td></td><td><div align=right></div></td><td>character</td><td><div align=right> 0</div></td></tr>\n",
"<tr><td>sex</td><td></td><td></td><td><a href=\"#sex\"><div align=right> 2</div></a></td><td>integer</td><td><div align=right> 0</div></td></tr>\n",
"<tr><td>age</td><td>Age</td><td>Year</td><td><div align=right></div></td><td>double</td><td><div align=right> 263</div></td></tr>\n",
"<tr><td>sibsp</td><td>Number of Siblings/Spouses Aboard</td><td></td><td><div align=right></div></td><td>double</td><td><div align=right> 0</div></td></tr>\n",
"<tr><td>parch</td><td>Number of Parents/Children Aboard</td><td></td><td><div align=right></div></td><td>double</td><td><div align=right> 0</div></td></tr>\n",
"<tr><td>ticket</td><td>Ticket Number</td><td></td><td><div align=right></div></td><td>character</td><td><div align=right> 0</div></td></tr>\n",
"<tr><td>fare</td><td>Passenger Fare</td><td>British Pound (\\243)</td><td><div align=right></div></td><td>double</td><td><div align=right> 1</div></td></tr>\n",
"<tr><td>cabin</td><td></td><td></td><td><a href=\"#cabin\"><div align=right>187</div></a></td><td>integer</td><td><div align=right> 0</div></td></tr>\n",
"<tr><td>embarked</td><td></td><td></td><td><a href=\"#embarked\"><div align=right> 3</div></a></td><td>integer</td><td><div align=right> 2</div></td></tr>\n",
"<tr><td>boat</td><td></td><td></td><td><a href=\"#boat\"><div align=right> 28</div></a></td><td>integer</td><td><div align=right> 0</div></td></tr>\n",
"<tr><td>body</td><td>Body Identification Number</td><td></td><td><div align=right></div></td><td>double</td><td><div align=right>1188</div></td></tr>\n",
"<tr><td>home.dest</td><td>Home/Destination</td><td></td><td><div align=right></div></td><td>character</td><td><div align=right> 0</div></td></tr>\n",
"</TABLE>\n",
"<hr>\n",
"<TABLE BORDER>\n",
"<tr><td>Variable</td><td>Levels</td></tr>\n",
"<tr><td><a name=\"pclass\">pclass</a></td><td>1st</td></tr>\n",
"<tr><td></td><td>2nd</td></tr>\n",
"<tr><td></td><td>3rd</td></tr>\n",
"<tr><td><a name=\"sex\">sex</a></td><td>female</td></tr>\n",
"<tr><td></td><td>male</td></tr>\n",
"<tr><td><a name=\"cabin\">cabin</a></td><td></td></tr>\n",
"<tr><td></td><td>A10</td></tr>\n",
"<tr><td></td><td>A11</td></tr>\n",
"<tr><td></td><td>A14</td></tr>\n",
"<tr><td></td><td>A16</td></tr>\n",
"<tr><td></td><td>A18</td></tr>\n",
"<tr><td></td><td>A19</td></tr>\n",
"<tr><td></td><td>A20</td></tr>\n",
"<tr><td></td><td>A21</td></tr>\n",
"<tr><td></td><td>A23</td></tr>\n",
"<tr><td></td><td>A24</td></tr>\n",
"<tr><td></td><td>A26</td></tr>\n",
"<tr><td></td><td>A29</td></tr>\n",
"<tr><td></td><td>A31</td></tr>\n",
"<tr><td></td><td>A32</td></tr>\n",
"<tr><td></td><td>A34</td></tr>\n",
"<tr><td></td><td>A36</td></tr>\n",
"<tr><td></td><td>A5</td></tr>\n",
"<tr><td></td><td>A6</td></tr>\n",
"<tr><td></td><td>A7</td></tr>\n",
"<tr><td></td><td>A9</td></tr>\n",
"<tr><td></td><td>B10</td></tr>\n",
"<tr><td></td><td>B101</td></tr>\n",
"<tr><td></td><td>B102</td></tr>\n",
"<tr><td></td><td>B11</td></tr>\n",
"<tr><td></td><td>B18</td></tr>\n",
"<tr><td></td><td>B19</td></tr>\n",
"<tr><td></td><td>B20</td></tr>\n",
"<tr><td></td><td>B22</td></tr>\n",
"<tr><td></td><td>B24</td></tr>\n",
"<tr><td></td><td>B26</td></tr>\n",
"<tr><td></td><td>B28</td></tr>\n",
"<tr><td></td><td>B3</td></tr>\n",
"<tr><td></td><td>B30</td></tr>\n",
"<tr><td></td><td>B35</td></tr>\n",
"<tr><td></td><td>B36</td></tr>\n",
"<tr><td></td><td>B37</td></tr>\n",
"<tr><td></td><td>B38</td></tr>\n",
"<tr><td></td><td>B39</td></tr>\n",
"<tr><td></td><td>B4</td></tr>\n",
"<tr><td></td><td>B41</td></tr>\n",
"<tr><td></td><td>B42</td></tr>\n",
"<tr><td></td><td>B45</td></tr>\n",
"<tr><td></td><td>B49</td></tr>\n",
"<tr><td></td><td>B5</td></tr>\n",
"<tr><td></td><td>B50</td></tr>\n",
"<tr><td></td><td>B51 B53 B55</td></tr>\n",
"<tr><td></td><td>B52 B54 B56</td></tr>\n",
"<tr><td></td><td>B57 B59 B63 B66</td></tr>\n",
"<tr><td></td><td>B58 B60</td></tr>\n",
"<tr><td></td><td>B61</td></tr>\n",
"<tr><td></td><td>B69</td></tr>\n",
"<tr><td></td><td>B71</td></tr>\n",
"<tr><td></td><td>B73</td></tr>\n",
"<tr><td></td><td>B77</td></tr>\n",
"<tr><td></td><td>B78</td></tr>\n",
"<tr><td></td><td>B79</td></tr>\n",
"<tr><td></td><td>B80</td></tr>\n",
"<tr><td></td><td>B82 B84</td></tr>\n",
"<tr><td></td><td>B86</td></tr>\n",
"<tr><td></td><td>B94</td></tr>\n",
"<tr><td></td><td>B96 B98</td></tr>\n",
"<tr><td></td><td>C101</td></tr>\n",
"<tr><td></td><td>C103</td></tr>\n",
"<tr><td></td><td>C104</td></tr>\n",
"<tr><td></td><td>C105</td></tr>\n",
"<tr><td></td><td>C106</td></tr>\n",
"<tr><td></td><td>C110</td></tr>\n",
"<tr><td></td><td>C111</td></tr>\n",
"<tr><td></td><td>C116</td></tr>\n",
"<tr><td></td><td>C118</td></tr>\n",
"<tr><td></td><td>C123</td></tr>\n",
"<tr><td></td><td>C124</td></tr>\n",
"<tr><td></td><td>C125</td></tr>\n",
"<tr><td></td><td>C126</td></tr>\n",
"<tr><td></td><td>C128</td></tr>\n",
"<tr><td></td><td>C130</td></tr>\n",
"<tr><td></td><td>C132</td></tr>\n",
"<tr><td></td><td>C148</td></tr>\n",
"<tr><td></td><td>C2</td></tr>\n",
"<tr><td></td><td>C22 C26</td></tr>\n",
"<tr><td></td><td>C23 C25 C27</td></tr>\n",
"<tr><td></td><td>C28</td></tr>\n",
"<tr><td></td><td>C30</td></tr>\n",
"<tr><td></td><td>C31</td></tr>\n",
"<tr><td></td><td>C32</td></tr>\n",
"<tr><td></td><td>C39</td></tr>\n",
"<tr><td></td><td>C45</td></tr>\n",
"<tr><td></td><td>C46</td></tr>\n",
"<tr><td></td><td>C47</td></tr>\n",
"<tr><td></td><td>C49</td></tr>\n",
"<tr><td></td><td>C50</td></tr>\n",
"<tr><td></td><td>C51</td></tr>\n",
"<tr><td></td><td>C52</td></tr>\n",
"<tr><td></td><td>C53</td></tr>\n",
"<tr><td></td><td>C54</td></tr>\n",
"<tr><td></td><td>C55 C57</td></tr>\n",
"<tr><td></td><td>C6</td></tr>\n",
"<tr><td></td><td>C62 C64</td></tr>\n",
"<tr><td></td><td>C65</td></tr>\n",
"<tr><td></td><td>C68</td></tr>\n",
"<tr><td></td><td>C7</td></tr>\n",
"<tr><td></td><td>C70</td></tr>\n",
"<tr><td></td><td>C78</td></tr>\n",
"<tr><td></td><td>C80</td></tr>\n",
"<tr><td></td><td>C82</td></tr>\n",
"<tr><td></td><td>C83</td></tr>\n",
"<tr><td></td><td>C85</td></tr>\n",
"<tr><td></td><td>C86</td></tr>\n",
"<tr><td></td><td>C87</td></tr>\n",
"<tr><td></td><td>C89</td></tr>\n",
"<tr><td></td><td>C90</td></tr>\n",
"<tr><td></td><td>C91</td></tr>\n",
"<tr><td></td><td>C92</td></tr>\n",
"<tr><td></td><td>C93</td></tr>\n",
"<tr><td></td><td>C95</td></tr>\n",
"<tr><td></td><td>C97</td></tr>\n",
"<tr><td></td><td>C99</td></tr>\n",
"<tr><td></td><td>D</td></tr>\n",
"<tr><td></td><td>D10 D12</td></tr>\n",
"<tr><td></td><td>D11</td></tr>\n",
"<tr><td></td><td>D15</td></tr>\n",
"<tr><td></td><td>D17</td></tr>\n",
"<tr><td></td><td>D19</td></tr>\n",
"<tr><td></td><td>D20</td></tr>\n",
"<tr><td></td><td>D21</td></tr>\n",
"<tr><td></td><td>D22</td></tr>\n",
"<tr><td></td><td>D26</td></tr>\n",
"<tr><td></td><td>D28</td></tr>\n",
"<tr><td></td><td>D30</td></tr>\n",
"<tr><td></td><td>D33</td></tr>\n",
"<tr><td></td><td>D34</td></tr>\n",
"<tr><td></td><td>D35</td></tr>\n",
"<tr><td></td><td>D36</td></tr>\n",
"<tr><td></td><td>D37</td></tr>\n",
"<tr><td></td><td>D38</td></tr>\n",
"<tr><td></td><td>D40</td></tr>\n",
"<tr><td></td><td>D43</td></tr>\n",
"<tr><td></td><td>D45</td></tr>\n",
"<tr><td></td><td>D46</td></tr>\n",
"<tr><td></td><td>D47</td></tr>\n",
"<tr><td></td><td>D48</td></tr>\n",
"<tr><td></td><td>D49</td></tr>\n",
"<tr><td></td><td>D50</td></tr>\n",
"<tr><td></td><td>D56</td></tr>\n",
"<tr><td></td><td>D6</td></tr>\n",
"<tr><td></td><td>D7</td></tr>\n",
"<tr><td></td><td>D9</td></tr>\n",
"<tr><td></td><td>E10</td></tr>\n",
"<tr><td></td><td>E101</td></tr>\n",
"<tr><td></td><td>E12</td></tr>\n",
"<tr><td></td><td>E121</td></tr>\n",
"<tr><td></td><td>E17</td></tr>\n",
"<tr><td></td><td>E24</td></tr>\n",
"<tr><td></td><td>E25</td></tr>\n",
"<tr><td></td><td>E31</td></tr>\n",
"<tr><td></td><td>E33</td></tr>\n",
"<tr><td></td><td>E34</td></tr>\n",
"<tr><td></td><td>E36</td></tr>\n",
"<tr><td></td><td>E38</td></tr>\n",
"<tr><td></td><td>E39 E41</td></tr>\n",
"<tr><td></td><td>E40</td></tr>\n",
"<tr><td></td><td>E44</td></tr>\n",
"<tr><td></td><td>E45</td></tr>\n",
"<tr><td></td><td>E46</td></tr>\n",
"<tr><td></td><td>E49</td></tr>\n",
"<tr><td></td><td>E50</td></tr>\n",
"<tr><td></td><td>E52</td></tr>\n",
"<tr><td></td><td>E58</td></tr>\n",
"<tr><td></td><td>E60</td></tr>\n",
"<tr><td></td><td>E63</td></tr>\n",
"<tr><td></td><td>E67</td></tr>\n",
"<tr><td></td><td>E68</td></tr>\n",
"<tr><td></td><td>E77</td></tr>\n",
"<tr><td></td><td>E8</td></tr>\n",
"<tr><td></td><td>F</td></tr>\n",
"<tr><td></td><td>F E46</td></tr>\n",
"<tr><td></td><td>F E57</td></tr>\n",
"<tr><td></td><td>F E69</td></tr>\n",
"<tr><td></td><td>F G63</td></tr>\n",
"<tr><td></td><td>F G73</td></tr>\n",
"<tr><td></td><td>F2</td></tr>\n",
"<tr><td></td><td>F33</td></tr>\n",
"<tr><td></td><td>F38</td></tr>\n",
"<tr><td></td><td>F4</td></tr>\n",
"<tr><td></td><td>G6</td></tr>\n",
"<tr><td></td><td>T</td></tr>\n",
"<tr><td><a name=\"embarked\">embarked</a></td><td>Cherbourg</td></tr>\n",
"<tr><td></td><td>Queenstown</td></tr>\n",
"<tr><td></td><td>Southampton</td></tr>\n",
"<tr><td><a name=\"boat\">boat</a></td><td></td></tr>\n",
"<tr><td></td><td>1</td></tr>\n",
"<tr><td></td><td>10</td></tr>\n",
"<tr><td></td><td>11</td></tr>\n",
"<tr><td></td><td>12</td></tr>\n",
"<tr><td></td><td>13</td></tr>\n",
"<tr><td></td><td>13 15</td></tr>\n",
"<tr><td></td><td>13 15 B</td></tr>\n",
"<tr><td></td><td>14</td></tr>\n",
"<tr><td></td><td>15</td></tr>\n",
"<tr><td></td><td>15 16</td></tr>\n",
"<tr><td></td><td>16</td></tr>\n",
"<tr><td></td><td>2</td></tr>\n",
"<tr><td></td><td>3</td></tr>\n",
"<tr><td></td><td>4</td></tr>\n",
"<tr><td></td><td>5</td></tr>\n",
"<tr><td></td><td>5 7</td></tr>\n",
"<tr><td></td><td>5 9</td></tr>\n",
"<tr><td></td><td>6</td></tr>\n",
"<tr><td></td><td>7</td></tr>\n",
"<tr><td></td><td>8</td></tr>\n",
"<tr><td></td><td>8 10</td></tr>\n",
"<tr><td></td><td>9</td></tr>\n",
"<tr><td></td><td>A</td></tr>\n",
"<tr><td></td><td>B</td></tr>\n",
"<tr><td></td><td>C</td></tr>\n",
"<tr><td></td><td>C D</td></tr>\n",
"<tr><td></td><td>D</td></tr>\n",
"</TABLE>\n",
"<hr>\n"
],
"metadata": {},
"output_type": "pyout",
"prompt_number": 157,
"text": [
"<IPython.core.display.HTML at 0x11d966e50>"
]
}
],
"prompt_number": 157
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Women and children first?\n",
"\n",
"1. Use the `groupby` method to calculate the proportion of passengers that survived by sex.\n",
"2. Calculate the same proportion, but by class and sex.\n",
"3. Create age categories: children (under 14 years), adolescents (14-20), adult (21-64), and senior(65+), and calculate survival proportions by age category, class and sex."
]
},
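{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a starting hint (a sketch only; the path `data/titanic.xls` and the sheet name `titanic3` are assumptions based on the summary table above), the spreadsheet can be read into a DataFrame with `ExcelFile`:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Hint sketch: load the Titanic data before attempting the questions above.\n",
"# The file path and sheet name are assumptions, not given in the exercise.\n",
"titanic = pd.ExcelFile('data/titanic.xls').parse('titanic3')\n",
"titanic.head()"
],
"language": "python",
"metadata": {},
"outputs": []
},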
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 157
}
],
"metadata": {}
}
]
}