@chryss
Last active December 29, 2015 07:49
Python 3 testbed
This file has been truncated.
{
"metadata": {
"name": ""
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "heading",
"level": 1,
"metadata": {},
"source": [
"HOWTO access and display MODIS satellite data with Python 3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First steps in Python3! This example has been run with the python.org Python v. 3.3.3 (installed from the .dmg), with numpy, matplotlib, GDAL, and IPython (and its dependencies for this notebook) installed into a virtual environment with pip. \n",
"\n",
"First, let's check that we're really using Python 3:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import sys\n",
"print(sys.version)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"3.3.3 (v3.3.3:c3896275c0f6, Nov 16 2013, 23:39:35) \n",
"[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]\n"
]
}
],
"prompt_number": 2
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Very well. So let's load the pylab libraries, set them up for plotting inline, import everything else we need, and off we go. The goal is to open a MODIS Level 1B file, that is, a file that contains satellite data that has been corrected for sensor calibration, but not re-mapped or gridded in any way. \n",
"\n",
"This data comes from NASA in a format called HDF-EOS (http://hdfeos.org/), a hierachchical data format based on HDF4, which is maintained by the HDF Group. Theoretically, the pyhdf library would be suitable for interfacing with such a file, but it has the tendency to be fickle. The web page (http://pysclint.sourceforge.net/pyhdf/) does not give me confidence it has been ported to Python 3. \n",
"\n",
"No matter, the version of GDAL I have installed has been compiled with HDF4 and HDF-EOS support. GDAL supports a huge number of data formats and therefore provides somewhat of a convoluted kitchen-sink of an interface. But it'll do. "
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"%pylab inline\n",
"import os, os.path\n",
"from osgeo import gdal"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"Populating the interactive namespace from numpy and matplotlib\n"
]
}
],
"prompt_number": 3
},
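{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick, optional sanity check, we can ask GDAL whether the HDF4 drivers we rely on are actually available in this build: ``GetDriverByName`` returns ``None`` for a driver that hasn't been compiled in. (The driver short names \"HDF4\" and \"HDF4Image\" are the usual ones; adjust if your build names them differently.)"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Optional check: confirm this GDAL build includes the HDF4 drivers\n",
"# needed to read HDF-EOS files (None would mean the driver is missing).\n",
"print(gdal.GetDriverByName(\"HDF4\"))\n",
"print(gdal.GetDriverByName(\"HDF4Image\"))"
],
"language": "python",
"metadata": {},
"outputs": []
},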
{
"cell_type": "heading",
"level": 2,
"metadata": {},
"source": [
"Accessing the data in a MODIS L1B file"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is the file we want to look at, and it comes with a separate geolocation file. We open both and receive GDAL Dataset objects back. The file name tells us it contains data at (nominally) 1 km resolution, and dates from Julian day 199 in 2004. "
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"fn = \"MOD021KM.A2004199.2140.005.2010140125955.hdf\"\n",
"geofn = \"MOD03.A2004199.2140.005.2010140193808.hdf\""
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 4
},
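{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a small aside, the ``A2004199`` part of the file name encodes the acquisition year and day of year, which the standard library can turn into a calendar date:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Parse the acquisition date (year 2004, day-of-year 199) out of the file name.\n",
"from datetime import datetime\n",
"print(datetime.strptime(fn.split(\".\")[1][1:], \"%Y%j\").date())"
],
"language": "python",
"metadata": {},
"outputs": []
},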
{
"cell_type": "code",
"collapsed": false,
"input": [
"dat = gdal.Open(fn)\n",
"geodat = gdal.Open(geofn)\n",
"print(type(dat))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"<class 'osgeo.gdal.Dataset'>\n"
]
}
],
"prompt_number": 5
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Metadata can be accessed from the gdal.Dataset object, for example to confirm the date/time stamp of the data."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"metadata = dat.GetMetadata_Dict()\n",
"print(metadata['RANGEBEGINNINGDATE'], metadata['RANGEBEGINNINGTIME'])"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"2004-07-17 21:40:00.000000\n"
]
}
],
"prompt_number": 6
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Indeed, the data file knows the name of its own georeference data file, though the metadata key naming isn't immediately intuitive. ANCILLARYINPUTPOINTER? I wouldn't have guessed. (There are other ways of getting the same data, in fact.)"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"dat.GetMetadata_Dict()['ANCILLARYINPUTTYPE'], dat.GetMetadata_Dict()['ANCILLARYINPUTPOINTER']"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 7,
"text": [
"('Geolocation', 'MOD03.A2004199.2140.005.2010140193808.hdf')"
]
}
],
"prompt_number": 7
},
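{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you don't happen to know which metadata key to look for, a quick exploratory option is simply to scan the key names (the exact set of keys varies by product):"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Optional exploration: list the metadata keys that mention ANCILLARY.\n",
"print([key for key in sorted(metadata) if \"ANCILLARY\" in key])"
],
"language": "python",
"metadata": {},
"outputs": []
},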
{
"cell_type": "markdown",
"metadata": {},
"source": [
"HDF files are, as the name suggests, hierarchically organized. In the GDAL dataset API this means that a gdal.Dataset object contains subdatasets. For HDF-EOS files we stop at one level of subdatasetting: each subdatasets contains raster bands of the same dimension and data type. What subdatasets there are in a dataset, and what rasterbands in a subdataset, is laid down in NASA's data specification and user manuals for the data in question. \n",
"\n",
"In the case of MODIS, we need to know this: MODIS acquires data in 36 spectral bands, 2 at a nominal resolution of 250 m, 5 at 500 m, and the remaining 19 at 1 km. The data file we have is the 1 km one. It also contains all the higher-resolution datasets, aggregated to 1 km. For reasons that will become clear, I'd like to pull out the subdatasets that were generated from the 250 and 500 m data. Luckily (either because I've looked through all the subdatasets or because I've read NASA's documentation) I know which ones these are. Similarly, I know where to find latitude and longitude arrays in the georeference data."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"subdatasets = dat.GetSubDatasets()\n",
"geosubdatasets = geodat.GetSubDatasets()"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 8
},
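{
"cell_type": "markdown",
"metadata": {},
"source": [
"In case you haven't memorized the indices, an optional exploration step is to list the description of every subdataset and pick out the ones needed below:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Print every subdataset with its index and description; this is how the\n",
"# indices 4 and 7 used below could be found without the documentation.\n",
"for idx, (name, description) in enumerate(subdatasets):\n",
"    print(idx, description)"
],
"language": "python",
"metadata": {},
"outputs": []
},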
{
"cell_type": "code",
"collapsed": false,
"input": [
"subdatasets[4][1], subdatasets[7][1], geosubdatasets[8][1], geosubdatasets[9][1]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 9,
"text": [
"('[2x2030x1354] EV_250_Aggr1km_RefSB MODIS_SWATH_Type_L1B (16-bit unsigned integer)',\n",
" '[5x2030x1354] EV_500_Aggr1km_RefSB MODIS_SWATH_Type_L1B (16-bit unsigned integer)',\n",
" '[2030x1354] Latitude (32-bit floating-point)',\n",
" '[2030x1354] Longitude (32-bit floating-point)')"
]
}
],
"prompt_number": 9
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There they are. And the subdataset descriptions contain additional information:\n",
"\n",
"* the shape of the data arrays (2030x1354 samples for each raster band in the dataset)\n",
"* the shape of the latitude and longitude coordinate arrays (the same, luckily, so we don't have to interpolate the data)\n",
"* the data type (16-bit unsigned integer)\n",
"\n",
"We open the subdatasets and see that each indeed contains the expected number of rasterbands."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"highres = gdal.Open(subdatasets[4][0], gdal.GA_ReadOnly)\n",
"midres = gdal.Open(subdatasets[7][0], gdal.GA_ReadOnly)\n",
"latsds = gdal.Open(geosubdatasets[8][0], gdal.GA_ReadOnly)\n",
"lonsds = gdal.Open(geosubdatasets[9][0], gdal.GA_ReadOnly)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 10
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"print(highres.RasterCount, midres.RasterCount, latsds.RasterCount, lonsds.RasterCount)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"2 5 1 1\n"
]
}
],
"prompt_number": 11
},
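{
"cell_type": "markdown",
"metadata": {},
"source": [
"As another optional check, GDAL can report the data type of a band, which should match the 16-bit unsigned integer noted in the subdataset descriptions:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# A band's DataType attribute is a GDAL type code; GetDataTypeName turns it\n",
"# into a readable name (UInt16 is what the description leads us to expect).\n",
"print(gdal.GetDataTypeName(highres.GetRasterBand(1).DataType))"
],
"language": "python",
"metadata": {},
"outputs": []
},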
{
"cell_type": "markdown",
"metadata": {},
"source": [
"I would like to produce something close to a true color RGB image, so I'd like to access the bands closest to red, green and blue. These would be band 2 (the second from the 250 m subdataset), and bands 4 and 3 (the second and first from the 500 m data). Attention, trap: **rasterband indexing in GDAL starts at 1, not 0**. "
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"red = highres.GetRasterBand(1)\n",
"green = midres.GetRasterBand(2)\n",
"blue = midres.GetRasterBand(1)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 12
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, let's check that our three color RasterBand objects are of the same size. They are, because each has been aggregated to a resolution of 1 km (at nadir, which means looking straight down - it's actually more than 1 km at the left and right \n",
"edges)."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"for color in [red, green, blue]:\n",
" print(color.XSize, color.YSize)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"1354 2030\n",
"1354 2030\n",
"1354 2030\n"
]
}
],
"prompt_number": 17
},
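{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before moving on to plotting, here is a quick optional look at the geographic extent of the swath, using the latitude and longitude subdatasets opened earlier:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Read the geolocation arrays and print the bounding box of the swath.\n",
"lats = latsds.ReadAsArray()\n",
"lons = lonsds.ReadAsArray()\n",
"print(lats.min(), lats.max(), lons.min(), lons.max())"
],
"language": "python",
"metadata": {},
"outputs": []
},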
{
"cell_type": "heading",
"level": 2,
"metadata": {},
"source": [
"Visualizing MODIS L1B data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To plot the data as an image, we need to do two things: \n",
"\n",
"* make them available to a numpy array\n",
"* rescale them from 16 bit to 8 bit for easier plotting\n",
"\n",
"For the first, in this demonstation, we're going to be unconcerned about memory and simply allocate a whole $N \\times M \\times 3$ array containing 8-bit integer data. Note that the XSize is the number of columns and the YSize the number of rows, so in the array declaration they have to be reversed (rows come first). "
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"imgdata = numpy.zeros([blue.YSize, blue.XSize, 3], np.uint8)\n",
"imgdata.shape"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 18,
"text": [
"(2030, 1354, 3)"
]
}
],
"prompt_number": 18
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the second step, the maximum and minimum data value are calculated, any invalid data masked out, and the remaining data re-scaled to the interval between 0 and 255. This is done for each RGB band. Note that we get the data as a numpy array from the rasterband object via ``color.ReadAsArray()``."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"for idx, color in enumerate([red, green, blue]):\n",
" min, max = color.ComputeRasterMinMax()\n",
" data = np.ma.masked_greater_equal(color.ReadAsArray(), max)\n",
" imgdata[:, :, idx] = np.multiply(255 / (max - min), data - min).astype(np.uint8)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 19
},
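{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check that the rescaled values really do land in the 8-bit range:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# The image array should be unsigned 8-bit with values between 0 and 255.\n",
"print(imgdata.dtype, imgdata.min(), imgdata.max())"
],
"language": "python",
"metadata": {},
"outputs": []
},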
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The result isn't very \"true color\", simply because of the high dynamic range of the image in particular the cloud and ice areas, which are much brighter than the land. "
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"fig = plt.figure(figsize=(18, 27))\n",
"plt.imshow(imgadata)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"metadata": {},
"output_type": "pyout",
"prompt_number": 20,
"text": [
"<matplotlib.image.AxesImage at 0x1015e80d0>"
]
},
{
"metadata": {},
"output_type": "display_data",