Last active: May 3, 2019 22:07
Note that, in general, when alternatives are written as "this or that", the first option is the preferred one.

```python
import regions
```

```python
from glue.core import DataCollection

dc = DataCollection()
```

# HDUList / CCDData
```python
import io

from astropy.io import fits
from astropy.nddata import CCDData
```

```python
hdul = fits.open('https://astropy.stsci.edu/data/photometry/spitzer_example_image.fits')
hdul.info()
```
```
Filename: /Users/erik/.astropy/cache/download/py3/b6263420de7f51d7ded9797cfbfb16f5
No.    Name      Ver    Type      Cards   Dimensions   Format
  0  PRIMARY       1 PrimaryHDU     103   (1025, 513)   float32
```
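The download above needs network access; the same `info()` behavior can be exercised on an `HDUList` built entirely in memory. A minimal sketch, assuming only numpy and astropy (the array shape mirrors the example image):

```python
import io

import numpy as np
from astropy.io import fits

# Build a small HDUList in memory instead of downloading one
data = np.zeros((513, 1025), dtype=np.float32)
hdul_mem = fits.HDUList([fits.PrimaryHDU(data=data)])
hdul_mem.info()  # prints a summary table like the one above

# Round-trip through a BytesIO buffer, the way a downloaded file is parsed
bio = io.BytesIO()
hdul_mem.writeto(bio)
bio.seek(0)
reread = fits.open(bio)
```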
One option might be something like this:

```python
dc.add_data(image=hdul)
im = app.imshow(dc, 'image[0]')
```

But the data object should remain an `HDUList`, so that e.g. this works:
```python
dc['image'].info()
```
```
Filename: /Users/erik/.astropy/cache/download/py3/b6263420de7f51d7ded9797cfbfb16f5
No.    Name      Ver    Type      Cards   Dimensions   Format
  0  PRIMARY       1 PrimaryHDU     103   (1025, 513)   float32
```

But that might be too FITS-esoteric. A cleaner plan might be to work with individual HDUs as native `CCDData` objects instead:
```python
bio = io.BytesIO()
hdul.writeto(bio)
bio.seek(0)

ccd0 = CCDData.read(bio)
```
```python
dc.add_data(image=ccd0)
im = app.imshow(dc['image'])
```

```python
dc['image'].meta[:5]
```
```
SIMPLE  =                    T / file does conform to FITS standard
BITPIX  =                  -32 / number of bits per data pixel
NAXIS   =                    2 / number of data axes
NAXIS1  =                 1025 / length of data axis 1
NAXIS2  =                  513 / length of data axis 2
```

## Selection
Assuming the user selects a pixel region, it should be possible to do this:

```python
region = im.get_selection(...).as_pixel_region()
```

Or possibly:

```python
region = regions.RectanglePixelRegion(im.get_selection())  # or a classmethod could also work here
```

```python
assert isinstance(region, regions.RectanglePixelRegion)
```

(And corresponding *sky* regions that work similarly.)
# Spectrum

```python
from specutils import Spectrum1D, SpectralRegion
```

## Scalar case

```python
spec1d = Spectrum1D.read(...)
```
^^ assume this is a *scalar* `Spectrum1D` - i.e. `spec1d.flux` is 1D

```python
dc.add_data(spec=spec1d)
plot = app.profile(dc, 'spec')
```

```python
assert isinstance(dc['spec'], Spectrum1D)
```
Now when a region is brushed, the following should work:

```python
spec_region = plot.get_selection().as_region()
```

Or possibly:

```python
spec_region = SpectralRegion(plot.get_selection())  # or a classmethod
```

```python
assert isinstance(spec_region, SpectralRegion)
```
Which then allows:

```python
from specutils.analysis import equivalent_width

equivalent_width(dc['spec'], regions=spec_region)
```

Note that there could be pixel equivalents that give the regions in pixel units instead of `spectral_axis` units:
}, | |
{ | |
"cell_type": "code", | |
"execution_count": null, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"spec_pix_region = plot.get_selection().as_pixel_region()" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"## Vector case " | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": null, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"spec1d2 = Spectrum1D.read(...)" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"^^ assume this is a *scalar* Spectrum1D - i.e. `spec1d.flux` is (m, n_wavelength)" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": null, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"dc.add_data(spec=spec1d)\n", | |
"plot = app.profile(dc['spec'])" | |
] | |
}, | |
This should yield *multiple* plots on the same view. The region selection ends up the same as in the "regular" region case, since it is all in spectral-axis coordinates.

For pixels it is a bit murkier, but it would be something like:

```python
spec_pix_region2 = plot.get_selection().as_pixel_region()

assert isinstance(spec_pix_region2, list)
assert isinstance(spec_pix_region2[0], SpectralRegion)
```
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"Note a fall-back alternative for the vector case is just:" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": null, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"dc.add_data(spec0=spec1d[0])\n", | |
"app.profile(dc['spec'])" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"In which case the user has to manually do for-looping over the spectra but otherwise it's pretty much the same as the scalar case" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"# Table" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": null, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"from astropy.table import Table" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": null, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"tab = Table(names=['a', 'b'], data=...)" | |
] | |
}, | |
```python
dc.add_data(dataset=tab)
scat = app.scatter2d(dc['dataset'], 'a', 'b')
```

```python
assert isinstance(dc['dataset'], Table)
```

```python
subtab = dc['dataset'][scat.get_selection().as_mask()]
assert isinstance(subtab, Table)
```
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": {}, | |
"source": [ | |
"bonus but maybe not mandatory (presumably requires additional linking or something?):" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": null, | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"ab_pix_region = plot.get_selection().as_pixel_region()\n", | |
"assert (ab_pix_region, regions.RectangleSkyRegion) # if it was a rectangle selection and a and b are interpretable as RA/Dec" | |
] | |
}, | |
# get_selection/get_selections paradigm

The above all assumes a *single* subset/selection via `get_selection()`. The idea there is that `get_selection()` should always yield the "most recent" selection, meaning whatever the user most recently interacted with in the UI. But selections can also be addressed by number or by name - i.e.:

```python
sel1 = xxxx.get_selection()  # assume three selections have been made, the third most recently
sel2 = xxxx.get_selection(2)  # the third subset (0-based)
sel3 = xxxx.get_selection('subset 3')

assert sel1.as_region() == sel2.as_region() == sel3.as_region()
```
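To make the intended dispatch concrete, here is a minimal pure-Python sketch of how a viewer could resolve `get_selection()`'s argument (all names are hypothetical, not actual glue API):

```python
class SelectionRegistry:
    """Hypothetical sketch: selections stored in order, addressable three ways."""

    def __init__(self):
        self._selections = []   # list of (name, selection) in creation order
        self._latest = None     # index of the most recently touched selection

    def add(self, name, selection):
        self._selections.append((name, selection))
        self._latest = len(self._selections) - 1

    def get_selection(self, key=None):
        # No argument: the selection the user touched most recently
        if key is None:
            return self._selections[self._latest][1]
        # Integer: positional index (0-based, so 2 is the third subset)
        if isinstance(key, int):
            return self._selections[key][1]
        # String: look up by name
        for name, sel in self._selections:
            if name == key:
                return sel
        raise KeyError(key)

reg = SelectionRegistry()
reg.add('subset 1', 'A')
reg.add('subset 2', 'B')
reg.add('subset 3', 'C')
```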
This is relatively straightforward, because the various different types of "regions" for selection can then be handled as individual selections.

An alternative approach is `get_selections()`, which yields an iterator over the selections. That would make the following the way to do the above:

```python
sel1 = xxxx.get_selections()[xxxx.latest_selection]
sel2 = xxxx.get_selections()[2]
sel3 = xxxx.get_selections()['subset 3']
```

But that adds additional requirements on the return object to understand array semantics... So an alternative middle ground is for only `get_selection()` to support the "latest selection" and `'subset 3'` usages, while `get_selections()` still exists and simply yields a list of the selections.
(The second revision came after some out-of-band discussion with @astrofrog.)