Astropy issue comments
<issue> | |
<author>eteq</author> | |
Doc layout | |
<author>eteq</author> | |
This branch has a basic layout for documentation - mostly the standard sphinx stuff, but with numpydoc and gitwash pulled in, along with our development documents. Note that it *includes* the modifications to the coding guidelines from astrofrog/astropy_test@03e92d8808ea068898a5. | |
Also note I made a little logo with GIMP... this is definitely not permanent, as it isn't very pretty, but think of it as a placeholder. | |
A few items that still need to be done for the docs (but are perhaps easier done later): | |
* Determine exactly how we plan to use autosummary to build the docs, once we have code to test it on. | |
* Get the version number from the package into conf.py (I know how to do this, but it requires a definite versioning scheme for the main package). | |
* Customize the gitwash pages once we've finalized our workflow for external packages. | |
* Fix the warnings that numpydoc issues when actual code is built | |
* Customize the docs - I think the default color scheme of sphinx is a bit ugly... | |
* Make a better logo... | |
<author>astrofrog</author> | |
I need to look at this in more detail, but a few comments on the minor points: | |
* I think we should keep the version number at ``0.0.0`` until we actually start with the coding. Once we have a skeleton for a package, with a setup.py file, then we can switch to setting the version number to ``astropy.__version__``. By the way, I don't like using version numbers with things like ``'dev'`` in them, because it confuses tools that compare version numbers, but that's a discussion for later! | |
* I agree the ``'default'`` theme is ugly, but the ``'sphinxdoc'`` theme is quite nice (it's built-in). Have you tried it out? I think customizing themes should be a very low priority for now, and we should just pick the least bad default one. | |
<author>astrofrog</author> | |
There is an issue in the gitwash guidelines - some instances of ipython haven't been changed to astropy, e.g. on cloning the repository: | |
git clone git://github.com/ipython/astropy.git | |
I'm not 100% sure I understand how you pulled in the gitwash stuff, so it's probably easier for you to fix. | |
<author>astrofrog</author> | |
My last comment for now is that I hadn't realized how long the gitwash guidelines are. We'll definitely want to make sure that we have a simple one-page overview to summarize things - also, the gitwash pages mention multiple possible workflows, but we might want to simplify that (we can probably remove the stuff about patches being sent to mailing lists!). Finally, the gitwash guidelines mix how to do things in git with how to actually contribute to the package, which I see as separate issues to some extent. Anyway, all this to say that I think this is fine for now as a placeholder, but we definitely want to go through and keep only what we really need, and potentially separate the real workflow for AstroPy from the git tutorial. | |
<author>eteq</author> | |
The ipython bit was just a typo when I got the gitwash documentation... so that's corrected (and a small piece I forgot to do before that adds in a favicon to the page build). | |
I actually am not a huge fan of the 'sphinxdoc' theme for other projects because it always looks the same and is not customizable at all - it has to look exactly the same, font- and color-wise, as the sphinx docs. The default theme, meanwhile, can be totally altered and customized any way you want (fonts, colors, styles, etc.) - the astropysics docs were made that way, and as you can see they actually end up looking more like the sphinx docs - in principle you can get it to look nearly identical if you choose the same color and font scheme. | |
And yes, I 100% agree the gitwash pages need to get cleaned up for our scheme... it doesn't bother me that "how to use git" mixes with "how to contribute", because that allows actually useful examples (which is what I think most people want). We'll need to separate this out a bit more if we use the non-git VCS workflow options, though. | |
<author>astrofrog</author> | |
Ok, let's just leave the default theme for now, and we can always customize it later as you said. I've been thinking a bit more about the gitwash stuff and I guess I do agree that it makes sense to have the git 'tutorial' mesh with 'how to contribute', but I think we'll need to have three main sections in the instructions to reflect what's been discussed so far: | |
* Recommended (but optional) workflow for affiliated packages - it's up to them to decide if they want to follow that if they are using git, and it's their problem if they mess up the repositories. But at least providing recommended guidelines will help avoid that. | |
* Required strict workflow for requesting merging into the core package. This can include instructions for making a git fork from an hg repository for example, and also list the option of asking us to do it, etc. | |
* Required strict workflow for making contributions directly to the core package | |
We can try and think about this a bit more. I'd be happy to create a fork to try and re-arrange the docs in this way once this pull request is merged (and we can always choose not to use the modified version if it's not adequate). | |
<author>astrofrog</author> | |
I should have said, I'm fine with the layout and content otherwise. The suggestion about the gitwash stuff is for future, but this pull request is good to go as far as I'm concerned. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Created improved version of workflow documents based on gitwash documents | |
<author>astrofrog</author> | |
The gitwash version is still present on the developer documentation page. The new documents are in ``workflow/`` to differentiate them from the gitwash documents. | |
<author>eteq</author> | |
Overall, I very much like this re-formatting. I think we should just get rid of gitwash if we use this, as the content is basically the same. One important thing, though - the "submitting a patch" section of gitwash (or something similar) absolutely has to be present in the new version - the best way to get people to start helping is by lowering the barrier to entry, and a patch submission is pretty much the simplest thing one can do with git/github. | |
<author>astrofrog</author> | |
Thanks for checking over this! I'll implement your comments tomorrow and will update the pull request. | |
<author>astrofrog</author> | |
The above commits should address your comments - let me know if I missed anything! | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Color and logo | |
<author>eteq</author> | |
This pull request does two things: It adjusts the color scheme in the docs from the default to a somewhat different look, and adds in a logo produced by Kyle Barbary. | |
If you think the color scheme is not good, that can be removed/altered while leaving the logo in place (although the logo color may want to be tweaked if so, as the scheme is matched to the logo). | |
<author>astrofrog</author> | |
I'm not keen on this particular color scheme. I strongly feel that it's too early to be customizing the 'look' of the project, so I think that we keep one of the default color schemes for now (though ``sphinxdoc`` feels a little less 'boring' than ``default``), and I also think we should just keep the placeholder logo (or none at all). There's nothing wrong with this logo, but I think it'll be more fun to organize a little logo and theme competition once the coding is underway to get everyone involved. | |
<author>kbarbary</author> | |
I agree that it would be better to wait for this sort of thing. The logo/theme competition is a good idea. I was actually surprised to see a logo added to the project this early without input. So perhaps removing the logo or making clear that it is a placeholder would be best. | |
<author>eteq</author> | |
Well, I was about to say I wanted to reward Kyle's excellent work on the logo, but if he wants to wait and enter in the competition, then I withdraw that statement :) | |
But I was actually intending this to be a temporary scheme, anyway, to be updated later (I hadn't thought of a competition, but that's a good idea!). I'm also fine with just dropping this pull request and waiting until later, though, so if you agree, feel free to close this one without merging. | |
I agree ``sphinxdoc`` looks better than ``default``, but I'm a little concerned that if we switch to that we'll just stick with it (it's rather hard to customize)... and it's what everyone seems to use. But I suppose that's not a big deal, so I can just switch to that on master (given that it's a one-line commit), if you like. | |
<author>astrofrog</author> | |
Since we agree - and to be cautious, since I think that some people involved in this project would not necessarily be happy with not having any input on this - I'm going to close this pull request for now. | |
@eteq - I don't actually feel strongly about which default theme to choose, so since as you suggest the current one makes it clearer this is temporary, we can just leave it the way it is :) | |
</issue> | |
<issue> | |
<author>phn</author> | |
Emacs configuration examples. | |
<author>phn</author> | |
I have added a file docs/development/codeguide_emacs.rst, which lists some settings that can help in ensuring compliance with PEP8. The document codeguide.rst has a link to this document. | |
Sphinx gives a warning that the above file is not included in any toctree. I wasn't sure where to add this document in the current setup. Perhaps we can have an appendix.rst and add codeguide_emacs.rst and others to it. | |
Prasanth | |
<author>eteq</author> | |
Thanks for this contribution! One tiny thing to change: Your link to the emacs page on the coding guidelines page (new line 80) is in the PEP8 bullet point. Can you instead move it into the note that currently directs to the PEP8 checker script (old line 83-84)? | |
<author>astrofrog</author> | |
@phn - just in case you haven't used pull requests before, if you make commits to your branch they will be added to this pull request automatically (no need to open a new pull request) | |
<author>astrofrog</author> | |
Looks good to me - thanks! | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Package layout | |
<author>eteq</author> | |
This pile of commits defines the setup.py and initial layout for the package. | |
One item I wasn't sure about: versioning. As it stands right now, version numbers are set by the "version" module, which creates version numbers of the form "x.x.x" if the `release` variable is True, and otherwise produces "x.x.xdev-r##" where ## is the total number of commits in the github repository. This requires a little trick to get the version numbers to properly freeze when installing a development version (see the astropy_build_ext in setup.py), but it works fine as-is. | |
I think it's better to at least have "dev" at the end of the version name - pypi, setuptools, and distutils all behave properly when you do this - a version that ends in the string "dev" is always an "earlier" version than one without "dev", and the "-r##" means you always know if you have forgotten to re-run python setup.py install or the like. But I could change it to just "x.x.xdev" or "x.x.xalpha" or something like that without the revision number, if that's deemed a bad idea. | |
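A rough sketch of the kind of version string construction being described (the function name and the git invocation are illustrative, not the actual setup.py code):

```python
import subprocess

def get_version(base='0.0.0', release=False):
    """Return 'x.x.x' for a release, else 'x.x.xdev-r##' using the commit count."""
    if release:
        return base
    # Count every commit reachable from HEAD (assumes git is on the path
    # and we are inside a checkout of the repository).
    out = subprocess.Popen(['git', 'rev-list', 'HEAD'],
                           stdout=subprocess.PIPE).communicate()[0]
    ncommits = len(out.splitlines())
    return '%sdev-r%d' % (base, ncommits)
```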
<author>astrofrog</author> | |
This pull request seems fine to me. I like the layout. I definitely agree with adding ``dev`` (not alpha) at the end of the version numbers, and I agree that the equivalent revision number would be useful (note we could also just use the first 6 characters of the SHA, but I agree it's less intuitive). The only comment I have is regarding making __version__ available, but to be honest this can be added later. | |
<author>eteq</author> | |
I agree that adding the ``__version__`` variable makes sense, so I just updated it to do so. I also updated .gitignore to ignore some of the files we definitely want to ignore (after almost adding a .pyc file...) | |
<author>perrygreenfield</author> | |
I may want to have a couple of people from our group look at this. Is it typical to have the C extensions separated from the Python code like that? Usually they are tightly coupled with a particular module or package, and I wonder if it makes sense to put things in two different places. What is the advantage of doing that? | |
<author>eteq</author> | |
The advantage of a separate /extern directory for c code is that you can put "un-altered" C libraries in there so that they don't need to be downloaded separately, but can be easily updated in their own directories when the upstream library is updated. | |
The /astropy/wrappers directory is for the wrappers for the /extern c libraries only - it's in /astropy because it's clearer to have the Cython/C code be where it actually ends up in terms of the python package layout, but remain separated out so that it's clear that that code is directly interfacing with the /extern C libraries. | |
The intent is that any Cython or C code that is not directly related to an external library gets placed in the particular module it is used in, exactly as you say. These two separated directories are *only* meant for pre-existing C-libraries that need to be wrapped. | |
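To make the split concrete, the arrangement described above would look roughly like this (the module and library names below are only placeholders):

```
astropy/
    wrappers/          # Cython/C code that wraps the libraries in /extern
    somemodule/        # module-specific C/Cython code stays with its module
        fastroutine.pyx
extern/
    somephysicslib/    # an unaltered upstream C library, vendored as-is
```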
<author>embray</author> | |
This looks good to me as a starting template, other than my above comment about namespace packages. That's not really an actionable comment at this point though, as there has been no discussion about it. My point about namespace packages, however, is that if astropy is going to be a large platform, it may be desirable to break out at least parts of it into separate optional installables, in which case namespace packages are the way to go. I don't know if anyone else had this in mind though. | |
[And as an aside on namespace packages in general, I'm aware that there's been discussion in scikits about not using them anymore, but I think that's misguided, especially in anticipation of PEP 382/402, either of which will make them easier to deal with once accepted.] | |
<author>eteq</author> | |
Actually, the intent as specified in the vision and coding guidelines is specifically *not* to have astropy be a namespace package. The idea is that the astropy package contains all the core functionality, and everything else uses the "awastropy" or no top-level namespace (just the package name directly), until it's merged into the astropy core. | |
So this pull request is about the layout of the core package, and the astropy/project-template package is the one that is the template for affiliated packages. (it will reflect this design fairly closely so as to make merging easier, but will have slightly different names and include examples and such) | |
Once this layout is finalized, we will implement the affiliated package index, and it is those affiliated packages that will allow for the separate optional installable you have in mind. | |
While I see the advantages of the namespace package idea, they're a pain to use given the problems that PEP 382/402 were designed to fix. 382/402 are great, but we can't use them while still maintaining 2.x compatibility (they only apply to 3.2 and 3.3). So at least for the near term, this scheme is more straightforward. | |
<author>embray</author> | |
I guess my problem is that it hasn't really been defined what makes up the "core functionality" required for other Astronomy packages. By some definitions that could be a lot of stuff. But I'm fine with saying "no namespace packages" for now, and it can be worried about if "being too big" actually becomes a real problem. | |
<author>eteq</author> | |
You have a good point to keep in mind - we should definitely have this option on the radar for the future as we merge other affiliated packages in and be sure to at least allow for a change to a namespace package if "too big" becomes a problem. If nothing else, hopefully by then everyone will have moved to 3.x and we can safely use the 382/402 enhancements. | |
<author>astrofrog</author> | |
This looks good to me. Obviously the layout might still change slightly in future, but I think we should merge this for now. | |
</issue> | |
<issue> | |
<author>embray</author> | |
Coding standards super() encouraged | |
<author>embray</author> | |
Here are my updates to the Coding Standards document explaining why use of super() ought to be encouraged rather than discouraged. It also adds a requirement that all new classes must be new-style classes. | |
It also adds .swp files to the .gitignore, otherwise I'd go crazy :) | |
<author>eteq</author> | |
Looks great to me... Thanks! | |
It might also make sense to add (either in the same bullet point or one right next to it) a statement discouraging multiple inheritance unless there's a good reason for it, as we discussed on-list (with a mixin example for "when it's ok"). If you want to add that in this or another pull request, that's fine, or I can try writing one when I get around to it. Just let me know which. | |
<author>embray</author> | |
You're right--I thought of putting in something like that, but I wasn't sure where it would fit. But I'll figure something out and add it to this. | |
<author>embray</author> | |
Updated with a multiple inheritance example. It's pretty abstract, but I didn't want the example to implement any specific functionality that would distract from the main point of how that mixin works. | |
<author>eteq</author> | |
The guideline looks great aside from the small suggestion I made inline. | |
Regarding the example, I definitely like the idea there, but I think it would be better to use MutableCollection instead of DictMixin. We are specifically saying 2.6 compatibility is required, so we should encourage people to make use of the 2.6 collections instead of the older styles. I think the only change you need to make is replace ``keys`` with ``__iter__``, and add ``__len__`` and ``__delitem__`` methods, and you can point out that before 2.6 the same thing was done with DictMixin. | |
If it isn't too much trouble, it would also be good to have the example method be implemented rather than have all the functionality be "..." - I personally tend to get more out of examples that are runnable, even if the functionality is completely trivial (e.g. something that ends up just like a regular dict but prints messages or something). | |
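A minimal, runnable sketch of the kind of MutableMapping-based class being suggested here (the class name and the message-printing behaviour are purely illustrative, not from the actual pull request):

```python
try:
    from collections.abc import MutableMapping   # Python 3.3+
except ImportError:
    from collections import MutableMapping        # Python 2.6+

class VerboseMapping(MutableMapping):
    """Acts like a dict but reports every access; implementing the abstract
    methods listed above (__getitem__, __setitem__, __delitem__, __iter__,
    __len__) is all MutableMapping needs to supply the rest of the dict API."""

    def __init__(self, *args, **kwargs):
        self._data = dict(*args, **kwargs)

    def __getitem__(self, key):
        print('getting %r' % (key,))
        return self._data[key]

    def __setitem__(self, key, value):
        print('setting %r = %r' % (key, value))
        self._data[key] = value

    def __delitem__(self, key):
        del self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)
```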
<author>embray</author> | |
I'll take out the background information on the C3 algorithm altogether--you're right that it's just distracting. I'll leave in the link which contains much of that same information anyways. | |
I also agree on changing DictMixin to MutableMapping, though I'm a bit iffy on filling out the rest of the example. The problem is that most of the use cases for using this in a multiple-inheritance context tend to be non-trivial. Your suggestion of having it print messages, for example, can be achieved simply by subclassing dict; no multiple inheritance necessary. Maybe I'll try to come up with a different example, or even remove the example altogether and suggest "You don't need to use this unless you know you need to use this", though that feels like a bit of a cop-out. | |
<author>eteq</author> | |
Should I merge this now, then? Or are you still thinking you might want to write a slightly different mixin example? | |
<author>embray</author> | |
No, my preference at this point is to not have an example at all. As far as I'm concerned it's ready to merge. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Minor formatting and bug fixes | |
<author>astrofrog</author> | |
Since I need to use some of the elements from the core distribution for the template package (e.g. setup.py) I just thought I'd tidy up a few things and make it PEP8 compliant before copying it over. I found a few small bugs which I've fixed. | |
</issue> | |
<issue> | |
<author>embray</author> | |
Improve version.py generation | |
<author>embray</author> | |
I've been working on my own version.py generation for STScI's packages--it works pretty similarly, but with a couple differences: | |
* I also started out making it an extension of the build_py command, but I ended up changing it to regenerate the file on every setup.py run--this ensures that an updated version.py file is created for commands such as 'develop' and 'sdist'. It doesn't seem to add any noticeable time to running setup.py. | |
* The version.py uses the SVN revision info getter to automatically update its SVN info on import *if* the package is installed in develop mode. That way, when installed in develop mode the revision is always up to date. I would want the same for git. It doesn't do this if the package is not installed in develop mode, so it doesn't hurt import time otherwise. The code for doing that is just part of the version.py template. | |
I'm open to other ideas for exactly how to implement this, but I would want to add these capabilities. | |
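A very rough sketch of what such a generated version.py might contain - the develop-mode check here (looking for a .git directory next to the source) is only a stand-in for whatever detection the real template uses:

```python
# -- this file would be rewritten by setup.py on every run --
version = '0.0.0dev'
release = False

def _get_live_revision():
    """When running from a source checkout (develop mode), ask the VCS for
    the current revision; otherwise keep whatever was frozen at build time."""
    import os
    import subprocess
    srcdir = os.path.abspath(os.path.dirname(__file__))
    if not os.path.isdir(os.path.join(srcdir, os.pardir, '.git')):
        return None  # regular install: do not touch the frozen revision
    out = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'],
                           cwd=srcdir, stdout=subprocess.PIPE).communicate()[0]
    return out.strip()

revision = _get_live_revision() or 'frozen-at-build-time'
```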
<author>embray</author> | |
I should add--in my version of this I just put the version.py directly into the source (rather than in the build/ dir as a build_py extension would do). Then I just use svn:ignore (or in this case .gitignore) on it. | |
<author>embray</author> | |
Issue #12 illustrates what I'm talking about here (has GitHub added the ability to attach a pull request to an existing issue yet?) | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Remove Numpy's fork of the matplotlib plot_directive from our tree. | |
<author>mdboom</author> | |
Remove Numpy's fork of the matplotlib plot_directive from our tree. Use matplotlib's plot_directive if it is available, otherwise warn about the lack of plots in the output. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Add a basic building and package guide | |
<author>mdboom</author> | |
Add a basic building and package guide based on the discussion had at AstroPy Meeting Oct 13, 2011. | |
</issue> | |
<issue> | |
<author>taldcroft</author> | |
Ordered dict | |
<author>taldcroft</author> | |
Add utils/odict.py which is an ordered dictionary class. This is the Python 2.7 collections/odict code with a few trivial changes for Python 2.6 compatibility. See https://github.com/taldcroft/odict for the changes. | |
<author>taldcroft</author> | |
There were problems with git history in the merge. Closing this pull request and branch and starting over with "odict" branch. | |
</issue> | |
<issue> | |
<author>embray</author> | |
Fix setup version | |
<author>embray</author> | |
This request illustrates my proposed solution to issue #8. There's plenty of room here for massaging, but I wanted to put something up as a starting point. | |
This fix has 3 advantages over the current situation: | |
* version.py is updated for commands besides build, such as sdist | |
* the base version string is kept in the setup.py, which is the first place where most people and tools will look for it | |
* the revision number is automatically updated on each astropy import if the package is installed in develop mode | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Pywcs | |
<author>mdboom</author> | |
Initial import of pywcs as astropy.wcs. | |
This is ready for review, but probably not yet ready to commit. | |
What works: | |
- everything in the new namespace | |
- nose tests | |
- documentation build | |
What is yet to be done: | |
- convert the tests to py.test and include in whatever framework is yet to be developed | |
- add the pywcs backward compatibility layer | |
<author>mdboom</author> | |
Also -- the docstrings need to be converted to Numpydoc format. | |
<author>mdboom</author> | |
This is now ready for merging. AFAIK, it meets all of the guidelines. | |
It does not work with Python 3.x (setup.py in master doesn't currently work, and I think that's better fixed elsewhere). | |
I don't expect a full review -- I'm sure we'll find other problems over time. | |
However, of particular interest to @iguananaut and @eteq are the changes to the setup.py script. It walks the astropy tree looking for setup_package.py scripts and imports them. Then it looks for special functions within each that allow customizing C extensions, package_data and data_files. If you can think of a better way, let me know. It also checks for the existence of Numpy itself (rather than letting setuptools do it), since Numpy is needed during the astropy.wcs build -- if you don't check for it you get an ImportError on Numpy which isn't as helpful for the end user. | |
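A simplified sketch of the discovery mechanism being described (the hook names `get_extensions` and `get_package_data` are illustrative; see the actual setup.py for the real interface):

```python
import os
import imp  # the import machinery of the era; importlib would be used today

def iter_setup_packages(srcdir='astropy'):
    """Walk the source tree and import every setup_package.py found."""
    for dirpath, dirnames, filenames in os.walk(srcdir):
        if 'setup_package.py' in filenames:
            name = dirpath.replace(os.sep, '.') + '.setup_package'
            yield imp.load_source(name,
                                  os.path.join(dirpath, 'setup_package.py'))

extensions = []
package_data = {}
for setuppkg in iter_setup_packages():
    # Each setup_package.py may define optional hooks for C extensions
    # and package data; only the hooks that actually exist are called.
    if hasattr(setuppkg, 'get_extensions'):
        extensions.extend(setuppkg.get_extensions())
    if hasattr(setuppkg, 'get_package_data'):
        package_data.update(setuppkg.get_package_data())
```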
<author>embray</author> | |
On the setup.py stuff: Looks fine to me, though it's already getting to the point where it would be worth moving parts of it into the setuputils module we discussed. I'd say go ahead and commit the changes now as is, and we can move stuff later (I'm thinking now maybe we should call it setup_helper.py for symmetry with version_helper.py). | |
<author>embray</author> | |
Oh, I see you already created setuputils.py. That's fine then--just more things can go into it I think. And I'm still for renaming it to setup_helper (or helpers?). | |
<author>astrofrog</author> | |
I have not reviewed all the code, but I installed it and ran ``astropy.test()`` and all the tests run fine. | |
The api.rst file still mentions the old testing framework (including nose). To run specifically the wcs tests, the preferred way is now: | |
``` | |
import astropy | |
astropy.test('wcs') | |
``` | |
Otherwise it looks good to me. | |
<author>astrofrog</author> | |
Ok, I'm fine with this now! I'll let @eteq and/or @perrygreenfield review as well before we merge. | |
<author>eteq</author> | |
We may want to re-order how the documentation works, but that can be changed later when more of the documentation is in place. Also, when I run the docs, I see | |
``` | |
/Users/erik/src/astropy/docs/wcs/references.rst | |
``` | |
Should that worry me? | |
<author>astrofrog</author> | |
I also get: | |
checking consistency... [snip]/astropy/docs/wcs/references.rst:: WARNING: document isn't included in any toctree | |
It looks like ``references.rst`` is included in many files, but just not in the toctree. I think we can ignore this since it makes sense it's not included? | |
<author>mdboom</author> | |
The "references.rst" not included in the doctree is annoying, but I can't find a better solution. "references.rst" contains a number of external hyperlinks that are referenced throughout the wcs documentation that are too cumbersome to repeat everywhere (they would be particularly annoying in docstrings). But there's not real "content" in references.rst. We could put these in the "conf.py" rst_epilog variable, I suppose, but I'm not a hugh fan of the modularity of that. | |
<author>eteq</author> | |
Hmm... I don't think it's a good idea to let warnings that we know are "ok" slip into the doc build, because then people will start ignoring legitimate warnings that arise because they mis-typed something (and it's easy for sphinx to go crazy with warnings if you don't keep on top of it!). I don't know if there's a way to suppress that warning? Alternatively, there's the `rst_epilog` feature in conf.py - we could put everything in a global reference file and load it from conf.py ... but I can do that separately if we judge that a good idea. | |
<author>eteq</author> | |
Oh, and I also prefer avoiding ``setuputils.py`` as a name to prevent confusion with the package by that name. ``setup_helpers.py`` seems fine. | |
<author>astrofrog</author> | |
I figured out a solution for the warning. Just rename ``references.rst`` to ``references.txt`` and update the include statements appropriately. | |
I also agree with using the name ``setup_helpers.py`` instead of ``setuputils.py`` | |
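For reference, the rename-and-include approach mentioned above amounts to keeping the shared link targets in a plain text file and pulling them in where needed, roughly:

```
.. in each wcs documentation page; because references.txt is not an .rst
   file, Sphinx no longer treats it as a standalone document, so the
   "not included in any toctree" warning goes away

.. include:: references.txt
```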
<author>mdboom</author> | |
Thanks for the solution (to rename references.rst to references.txt). I think that's a good one. | |
Also -- I will rename setuputils.py to setup_helpers.py. | |
<author>astrofrog</author> | |
Looks good to me now. | |
<author>eteq</author> | |
Same here. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Testing guidelines | |
<author>astrofrog</author> | |
The updated testing guidelines. Some details need to be figured out before we can merge, but just wanted to put this up here for initial review. | |
</issue> | |
<issue> | |
<author>taldcroft</author> | |
Add OrderedDict class via utils/odict.py for Python 2.6 usage. | |
<author>taldcroft</author> | |
Add utils/odict.py which is an ordered dictionary class. This is the Python 2.7 collections/odict code with a few trivial changes for Python 2.6 compatibility. See https://github.com/taldcroft/odict for the changes. | |
</issue> | |
<issue> | |
<author>taldcroft</author> | |
Add OrderedDict class via utils/odict.py for Python 2.6 usage. | |
<author>taldcroft</author> | |
Add utils/odict.py which is an ordered dictionary class. This is the Python 2.7 collections/odict code with a few trivial changes for Python 2.6 compatibility. See https://github.com/taldcroft/odict for the changes. | |
</issue> | |
<issue> | |
<author>crawfordsm</author> | |
Adding a license which is adopted from scipy with the minor update of changing the copyright to astropy developers | |
<author>crawfordsm</author> | |
Adding a license which is adopted from scipy with the minor update of changing the copyright to astropy developers | |
<author>astrofrog</author> | |
Thanks! You need to remove the mention of Enthought from the license file. Also, do we want the main AstroPy license in the root of the package? (i.e. is ``licenses/`` aimed at licenses of components of AstroPy?) | |
<author>crawfordsm</author> | |
Agreed on both points. I'll remove any mention of Enthought from the LICENSE and add it to the main root directory as well. | |
<author>eteq</author> | |
Just to be on the safe side, we figured it'd be better to not have the version with Enthought anywhere in the git tree, as someone might take that as us changing some earlier copyright. So I took the standard BSD 3-clause (which is identical to the scipy one) and committed it as f98dd36f4604cedc5d713ebc7df3e6fa298cc55c, with references only to the astropy team and the like. | |
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
Pytest standalone | |
<author>jiffyclub</author> | |
Implements astropy.test() functionality. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Added options to astropy.test() for passing to py.test | |
<author>astrofrog</author> | |
Added options to astropy.test() for passing to py.test (including arbitrary arguments, plugins, and more specific verbose and pastebin options) | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Constants | |
<author>astrofrog</author> | |
**Note:** this module is not ready for merging. I am opening this pull request so that we can review the current code and iterate on it first (with the ultimate goal of merging). | |
A few notes: | |
* I decided to always use ``_`` for subscripts and to use lowercase and uppercase as is usually conventional - but we can also switch to all lowercase if you think that would be best | |
* I decided to spell out sun, jup, and earth to avoid a possible conflict between electron mass and earth mass (let me know if you have a better idea!). | |
* The S.I. constants re-use the values from the cgs module so that we only need to change constants in one place. | |
* There are no tests yet, as I'm not sure what tests would consist of in the case of constants. Maybe we can check for example that a certain dimensionless combination of constants in cgs is equal to that in S.I. (which would test the c.g.s to S.I conversion). | |
* I need to double check the constants values, and we'll need several other people to vet them, since this is obviously very important to get right. | |
<author>astrofrog</author> | |
Closed because of a mistake in the commits included | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Constants | |
<author>astrofrog</author> | |
**Note:** this module is not ready for merging. I am opening this pull request so that we can review the current code and iterate on it first (with the ultimate goal of merging). | |
A few notes: | |
* I decided to always use ``_`` for subscripts and to use lowercase and uppercase as is usually conventional - but we can also switch to all lowercase if you think that would be best | |
* I decided to spell out sun, jup, and earth to avoid a possible conflict between electron mass and earth mass (let me know if you have a better idea!). | |
* The S.I. constants re-use the values from the cgs module so that we only need to change constants in one place. | |
* There are no tests yet, as I'm not sure what tests would consist of in the case of constants. Maybe we can check for example that a certain dimensionless combination of constants in cgs is equal to that in S.I. (which would test the c.g.s to S.I conversion). | |
* I need to double check the constants values, and we'll need several other people to vet them, since this is obviously very important to get right. | |
* I need to add a documentation page | |
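One possible shape for the kind of consistency test mentioned in the list above - the constant names assume the ``cgs`` and ``si`` submodules from this pull request, and the tolerances are arbitrary:

```python
from astropy.constants import cgs, si

def test_speed_of_light_conversion():
    # c is stored in m/s in SI and cm/s in cgs, so the ratio must be 100.
    assert abs(cgs.c / si.c - 100.0) < 1e-6

def test_hc_over_k_conversion():
    # h*c/k_B has dimensions of length * temperature, so the cgs value
    # should again be exactly 100 times the SI value; a slip in any one
    # of the three conversions would show up here.
    assert abs((cgs.h * cgs.c / cgs.k_B) / (si.h * si.c / si.k_B) - 100.0) < 1e-6
```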
<author>phn</author> | |
Hello, | |
I have just a few comments; the full code is at https://github.com/phn/astropy/blob/constants-test/astropy/constants/__init__.py . | |
At some point it will be useful to know the source of each constant - for example CODATA, the Particle Data Group's compilation, or astronomy references. I thought perhaps we could have simple number-like constants that carry such information with them. | |
I came up with this simple sub-class of namedtuple that acts like a constant number but with fields carrying meta-data. The number-like behavior is obtained by simply adding methods such as ``__add__`` and so on. | |
The definition of a constant looks like this: | |
```python | |
wien_freq = AstroConst(val=5.8789254e10, err=0.0000053e10, | |
                       unit="Hz K^-1", | |
                       name="Wien frequency displacement law constant", | |
                       notes="F_max / T = wien_freq.", | |
                       src=_refs['nist_codata']) | |
``` | |
The _refs dictionary stores the references. One can do computations, for example, | |
``flux_max = wien_freq * T``. | |
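The sub-class itself needs little more than the namedtuple fields plus the arithmetic hooks; a rough sketch (field names follow the example above, and only a couple of the hooks are shown):

```python
from collections import namedtuple

_fields = namedtuple('_fields', 'val err unit name notes src')

class AstroConst(_fields):
    """Behaves like its value in arithmetic but carries metadata fields."""

    def __float__(self):
        return float(self.val)

    def __add__(self, other):
        return self.val + other
    __radd__ = __add__

    def __mul__(self, other):
        return self.val * other
    __rmul__ = __mul__

# e.g. flux_max = wien_freq * T returns a plain number, while
# wien_freq.src still gives the reference the value came from.
```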
Is this an over-kill? | |
I think that as we add more astronomy constants, the references will become very important. I believe that one of the reasons the IAU has discontinued(?) issuing constants is because there are many references to choose the values from, and the one to use is not obvious. For example, the distance of the Sun from the Galactic center. | |
Conversion methods can even be added and used, for example, as | |
``boltzmann_constant.convert("cgs")`` | |
or even | |
``boltzmann_constant.cgs`` | |
The conversion factors can be stored, at the module level, as nested dictionaries, with physical quantity as the main key. | |
For example, | |
``` | |
...... | |
......, | |
'energy': { | |
    # Conversion factors from | |
    # http://physics.nist.gov/cuu/Constants/energy.html | |
    'j': (1.0, None), | |
    'ergs': (1e7, None), | |
    'ev': (6.24150934e+18, 0.00000014e+18), | |
    'mev': (6.24150934e+24, 0.00000014e+24), | |
    'kg': (1.112650056e-17, None), | |
    # Inverse meter = Joules / (h * c). | |
    'im': (5.03411701e+24, 0.00000022e+24), | |
    'icm': (5.03411701e+22, 0.00000022e+22), | |
    # Hertz = Joules / h. | |
    'hz': (1.509190311e+33, 0.000000067e+33), | |
    # Kelvin = Joules / k; k = Boltzmann constant | |
    'k': (7.2429716e+22, 0.0000066e+22), | |
    # Atomic mass unit = Joules / (atomic_mass_unit * c^2) | |
    'u': (6.70053585e+9, 0.00000030e+9), | |
    # Hartree = Joules / (Rydberg_infinity * h * c) | |
    'eh': (2.29371248e+17, 0.00000010e+17), | |
    'info': " j: Joules\n ergs: ergs\n ev: electron-volt\n" | |
            " mev: mega electron-volt\n kg: kilogram\n" | |
            " im: inverse meter\n icm: inverse centi-meter\n" | |
            " hz: Hertz\n k: Kelvin\n u: atomic mass unit\n" | |
            " eh: Hartree\n" | |
}, | |
...., | |
``` | |
Users can access these conversion factors directly or as methods of constants. | |
Thanks, | |
Prasanth | |
<author>eteq</author> | |
The pull request looks good, except for one thing: ``__init__.py`` imports from both cgs and si, and because the constants all have the same name, the si import will overwrite the cgs import. I think we want the default to be cgs, so you could fix that by just killing the ``from .si import *`` | |
<author>eteq</author> | |
Oh, and my preference would be to have all the variables not have underscores, so as not to appear like functions (although that's mostly an opinion thing). Also, while technically they should all be capitalized because they're constants, I think here we break that rule, because there are clear conventions for capitalization that are used when these constants are actually written down, and it would be too confusing to a lot of people to violate those conventions. (To be clear, you've already done that, but I see that Prasanth's version instead uses all caps in some places.) | |
<author>eteq</author> | |
Prasanth, | |
First of all, your approach here is quite clever... and not necessarily overkill, but perhaps premature. The main reason why I say that is that there is a ``units`` package in the works that we will definitely want to apply to the constants - exactly how it will be implemented is not completely decided, but it definitely needs to be compatible with the constants module. I would strongly recommend you contact Perry Greenfield about the units module, though (he's the one that's coordinating that effort), because I think you've already implemented here some of the stuff that's supposed to go there! | |
At the astropy meeting it was a more-or-less unanimous agreement that the interim solution would be to use this approach where the constants are stored in the typical unit systems that are encountered, at least until the units system is worked out. That way, when someone imports units and types ``from astropy.constants.cgs import pc`` it's immediately obvious in the code that they're getting it in cgs as opposed to SI or whatever. | |
On the other hand, I definitely like the way the approach you've described here stores the reference information and extra notes, though - you're definitely right that it would be good to keep that available programmatically. One possible approach might be to use your class as you've defined it to store the information, but remove the units part for now (until the larger unit scheme is worked out) and define the constants in the si.py and cgs.py scheme as Thomas has done. If you want to try this, I recommend creating your own pull request with the understanding that we will merge either this one or that one once we decide which is best. | |
Also, we definitely *don't* want to include all the parameters for solar system objects in the ``constants`` module - the only things that should be here are actual constants. M_sun/L_sun, M_earth, and M_jup are special cases here because they are used a lot in the community as though they are constants (generally as units). The solar system information should definitely live somewhere, but the more logical place is wherever the module that does things like calculating orbits ends up. | |
A few stylistic comments I'd like to add about your branch, though (mostly for your information): | |
* Implementation shouldn't go in the __init__.py file - only imports and docstrings. All the actual code should go in other files inside the module. So basically you would just move your __init__.py to something like astroconstant.py and then have ``from .astroconstant import *`` in the __init__.py file | |
* in the `__repr__` function you do ``s = Base.__repr__(self)`` - you should instead use ``s = super(Base,self).__repr__()``, as the coding standards say to always use super unless there's a specific reason not to. | |
* Also, see my comment above regarding the naming scheme - I think it's important that we use the same names and capitalizations that astronomers are used to seeing when defining constants. | |
<author>phn</author> | |
Hello, | |
Thanks for all the comments. | |
My idea of a ``constants`` module was more like a soft copy of "Astrophysical Quantities"!! So more than ``constants``, I had ``quantities`` in mind. I know that this is perhaps too much to ask, and is perhaps impractical. | |
The current version was a personal project to collect astronomical quantities, and hence the information on physical properties of planets. The orbits of planets would, of course, be part of, say, the coordinates module. But the orbital period of Jupiter, as specified by the Solar System Dynamics group, can be in ``constants`` (or ``quantities``). | |
As an interim solution the current idea of splitting constants into cgs and si is the way to go. But in the "long term", we may run into problems, where the split may not be obvious: is the magnitude system cgs? Here also, I am assuming that the constants module will have quantities such as say, the magnitude of Vega in the Johnson filter, or the B-V color of Sun, or perhaps even the current estimates of the solar abundances according to paper X. | |
If such quantities are deemed not to be included in constants, then of course the current split is more appropriate. I will start a branch in my fork, with my version of the ``AstroConst`` class and values defined in separate cgs and si modules. I will remove the unit conversion code, but I think the "unit" field can be retained as a meta-data field. | |
For the naming scheme, I am for using underscores, since they make it easier to read the name. Also, is ``M_sun`` the mass of the Sun or is it the bolometric absolute magnitude of the Sun? And is ``year`` a Julian year or a Besselian tropical year? | |
Since we are still in the exploration stage, I will get started with a reasonably consistent naming scheme, and then we can make changes. So I will start with ``sun_mass`` and ``proton_mass``, but these can later be changed to ``M_sun`` or ``m_p`` if they are more appropriate. | |
The branch was named "constants-test" specifically because it doesn't follow most of the AstroPy coding conventions! Thanks for pointing out the usage of ``super()``. | |
Thanks, | |
Prasanth | |
<author>eteq</author> | |
Ah, now that I see what you had in mind I understand your implementation better. | |
I see your point that it makes sense to have a unified object to store this information (including reference info and the like). So while the class is a good idea, and we should probably encourage its use if we end up adopting this solution, my main point was that many of these quantities themselves should actually be defined in whatever module is most appropriate to them. Things like e.g. magnitude of Vega do *not* belong in `constants` - that should live in a module where photometric processing is performed. `constants` is for quantities that don't have a specifically obvious domain. | |
<author>eteq</author> | |
Oh, and `M_sun` is solar mass (at least that's what the comment above it says). | |
`year` looks to me to be the tropical year in seconds (is that right, Thomas?) I think `year` probably does not belong in constants though, as it would lead to a fair amount of confusion because of precisely these ambiguities - the time module (not in yet but being worked on) can take care of that stuff. | |
<author>astrofrog</author> | |
Erik, I'm not sure I understand your first comment about the imports in ``__init__.py``. At the moment, I'm not doing ``from .cgs import *`` (I am doing ``from . import cgs``), so none of the variables are being overwritten. Could you point me to the file/line you are referring to? | |
<author>astrofrog</author> | |
I do like the idea of keeping some metadata in the constants, and have a few comments about that: | |
* Why redefine all the possible operations on constants rather than just subclassing ``float``? For example: | |
``` | |
class Constant(float): | |
    def __init__(self, *args, **kwargs): | |
        self.system = "cgs" | |
        self.source = "CODATA" | |
        float.__init__(self, *args, **kwargs) | |
``` | |
works fine and acts like a float. You can also overload ``__repr__`` and ``__str__`` to display the meta-data. | |
* I don't like the name ``AstroConstant`` or variants of that because the Boltzmann constant for example is not specifically an astronomy constant. Just using something like ``Constant`` or ``PhysicalConstant`` (to differentiate from programming constants) would be better in my opinion. | |
* A compromise to including units before a proper unit conversion solution exists would be to use e.g. ``system`` as in my above example, and not say e.g. ``m/s`` but ``S.I.``. Once we have the units module implemented, we can add a ``units`` field to the above object. | |
* I think that even if/once constants are defined with such a class, we still want to have them importable from ``constants.si`` and ``constants.cgs`` because it's still useful to have them pre-defined in several systems, so I don't see that API as being temporary. | |
<author>eteq</author> | |
Sorry, you're absolutely right - I saw what you wrote and somehow mentally assigned ``from .cgs import *`` ... I think at one point we had discussed doing that and hence I was expecting to see that. Your approach here I think is a good one and is fine - sorry about that! | |
Also, a suggestion for the ``__init__.py`` docstring: | |
```python | |
""" | |
The constants module stores physical and astronomical constants. | |
These constants are currently accessible in two unit systems: cgs or SI. | |
These are stored as submodules (`~astropy.constants.cgs` and | |
`~astropy.constants.si`) of the constants module. Thus, the constants should | |
be accessed as e.g. ``from constants.cgs import c`` or | |
``from constants.si import c`` | |
Notes | |
-------- | |
Many of these constants make use of the 2010 CODATA recommended values [cod]_ . | |
References | |
---------------- | |
.. [cod] http://www.codata.org/ | |
""" | |
``` | |
<author>astrofrog</author> | |
Thanks for the docstring! I'll include that. What should we do for now regarding classes - should we use floats, a 'thin' wrapper around floats like what I showed above, or a more complete class like what Prasanth is suggesting? | |
<author>eteq</author> | |
Perhaps you can try doing the thin wrapper approach? I think that seems a bit cleaner to me, but it sounds like Prasanth is going to change his approach based on the above comments, and we can just see which seems to work better? | |
<author>phn</author> | |
Hello, | |
As both of you say, the simplest thing to do right now is to have two separate modules with SI and CGS values. In fact, we should just stick to a simple list of constants as Tom has done now. | |
I accept that as an interim solution, since we don't have a consistent way of working with units. And additionally many of the "constants" can be defined in separate modules like time, coords, photometry, and then simply have references to them in the ``constants`` module. | |
In a module it will be easier to see which system is being used by looking at the import line, but this will be hard while working at the terminal. So in my opinion having classes, with built-in conversions, will be more appropriate in the long term than having separate modules. We could store instances of classes, with the appropriate units and values, in the submodules constants.si and constants.cgs, to maintain backward compatibility. Also, we can import the one we want without having to perform an additional conversion step. | |
My point in asking about ``year`` was not the nature of the value, but of the naming scheme: ``year`` v/s ``julian_year`` and ``besselian_year``. | |
Within the two module scheme, I am for storing reference values in constants.si and then performing the conversion in constants.cgs. | |
I went with namedtuples because that was the first thing that came to mind: constants => immutable => tuple! | |
For me, most of these are issues that can be settled one way or the other; I can live with either one. But there may be users, even if they are not developers, who may have strong opinions on these. | |
Prasanth | |
<author>embray</author> | |
I only just noticed this pull request and the ensuing discussion. I don't have too much more to add at the moment other than a +1 for using a wrapper approach. It doesn't necessarily need to be limited to floats either, though I'm not sure I can think of any examples where a float wouldn't be the most appropriate type anyways. | |
<author>astrofrog</author> | |
Before I modify the pull request to include the thin wrapper we are talking about, I just wanted to mention the scipy constants module: http://docs.scipy.org/doc/scipy/reference/constants.html - do we want to repeat constants that are already in scipy? do we want to use a system that is the same as theirs? | |
<author>eteq</author> | |
I think if we plan to implement the thin wrapper with source information, it makes sense to do it all internally consistently and not use the scipy constants module, given that some of those values don't have source information. Also, given that we are not requiring scipy, it might be useful to include them here on their own. | |
Also, a lot of the constants in the scipy module aren't relevant to astronomy... although it certainly makes sense to have a ``See Also`` section that points to scipy.constants. | |
Oh, but we probably *should* use the `numpy.pi` and `numpy.e` values, though. | |
<author>hamogu</author> | |
About distributing constants of some sort throughout `astropy`: While I agree that the mass of the electron and the bolometric magnitude of the sun will usually not appear in the same formula it would be nice to have a way to import all constants in astropy in one go - it helps to "see what is there". This could be done eventually by including statements like `from photometry.constants import *` in `astropy.constants` or maybe `astropy.constants.photometry = astropy.photometry.constants`. | |
<author>astrofrog</author> | |
I've implemented the ``Constant`` class. Since the CODATA 2010 values are in SI, I decided for simplicity that the constants should actually be defined in S.I. and converted into cgs (trust me, it makes checking the values a lot easier...). Do you think the ``Constant`` class is an acceptable solution? You can do: | |
``` | |
In [4]: k_B | |
Out[4]: <Constant: name='Boltzmann constant' value=1.3806488e-23 error=1.3e-29 system='SI' origin='CODATA 2010' > | |
In [5]: print k_B | |
Name = Boltzmann constant | |
Value = 1.3806488e-23 | |
Error = 1.3e-29 | |
System = SI | |
Origin = CODATA 2010 | |
``` | |
We can still discuss things like capitalization, underscores, etc., but otherwise are you happy with this system? | |
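A minimal sketch of a float subclass that would produce output like the above (the attribute names just mirror the repr shown; the real class in the pull request may differ):

```python
class Constant(float):
    def __new__(cls, value, name, error, system, origin):
        # float is immutable, so the value has to be set in __new__.
        self = float.__new__(cls, value)
        self.name = name
        self.error = error
        self.system = system
        self.origin = origin
        return self

    def __repr__(self):
        return ("<Constant: name=%r value=%r error=%r system=%r origin=%r >"
                % (self.name, float(self), self.error, self.system, self.origin))

    def __str__(self):
        return ("Name = %s\nValue = %r\nError = %r\nSystem = %s\nOrigin = %s"
                % (self.name, float(self), self.error, self.system, self.origin))

k_B = Constant(1.3806488e-23, 'Boltzmann constant', 1.3e-29, 'SI', 'CODATA 2010')
```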
<author>astrofrog</author> | |
Also, once we have a proper units module in Astropy, we can add a ``units`` attribute that would allow 'intelligent' unit conversion. I could actually already add the units, and have them just not be 'intelligent'. Thoughts? | |
<author>astrofrog</author> | |
Also, here's a dilemma: is 'e' the electron charge, or ``numpy.e``? (in astronomy, I'd say the former) | |
<author>hamogu</author> | |
Why do we need `np.e` or `np.pi` at all? Since `numpy` is required, everyone can use the numerical values from there. I suggest limiting `astropy.constants` to physical and astronomical constants. | |
<author>eteq</author> | |
@astrofrog - I would say add 'dumb' units now, as long as it won't take too much time - but make sure to put a disclaimer in the docstring that this is for informational purposes and will be changed in the future. | |
@hamogu - That's a good point... I was thinking people might expect pi, in particular, to be in a ``constants`` module, but it should be pretty obvious that they can find it either in `numpy` or in the standard library `math`, so perhaps it is better not to include it here. So I'd say that means `astropy.constants.e` should be the electron charge. | |
<author>eteq</author> | |
Oh, and maybe `system` should be `unitsystem` , just for clarity. Or perhaps we should ditch that attribute completely, if you implement `units` instead. | |
Also, one extra suggestion: we should set up a scheme that automatically adds a list of the constants into the docstring, so we don't have to rely on people updating the docs whenever they add a new constant. (I can do this later if you'd rather not do it now, though.) | |
<author>astrofrog</author> | |
@eteq - I guess adding dumb units is fine because if someone does c in cgs times k in SI for example, it will give a float anyway, not a Constant that would need to figure out units. So it's up to the user to figure out the units of the result. I'll get rid of system too, since in some cases it doesn't even apply (e.g. Avogadro's number). | |
Also agree about getting rid of pi! | |
<author>astrofrog</author> | |
I've made the changes above. Any other suggestions/requests? | |
I know this is a bit tedious, but given that the values listed here are pretty critical, could someone check over the values of the constants and uncertainties in `si.py` and the conversions in `cgs.py`? The values of the constants are taken from http://physics.nist.gov/cuu/Constants/ and Allen's Astrophysical Quantities 4th edition. | |
<author>astrofrog</author> | |
The code now includes docstrings written by @eteq, including a list of available constants. One thing I should mention is that of course there are other constants to include in future, but the current list should demonstrate how additional constants should be added. | |
<author>hamogu</author> | |
`si.py` - I have checked all the CODATA (I don't have an Allen at home): | |
* unit of `G` should be `km^3/m/s^2` | |
* `stef-boltz` is often called sigma. Sigma is used for a lot of things in astronomy. I am OK with the current name, I just wanted to bring this up. | |
`cgs.py` - I checked all conversions: | |
* unit of `G` is `.../s^2`, but value is OK | |
* `stef-boltz` should have a factor 1e3 not 1e5, because it has W->erg and (m->cm)^2 | |
* `e` - I always feel uncomfortable with the charge in any variant of the cgs system (there are multiple possible definitions of electrostatic or electromagnetic units in competing cgs systems), but I think it is OK as it is now. | |
<author>astrofrog</author> | |
@hamogu - thanks for checking this and for spotting the /s -> /s^2 problem (I presume you meant m^3/kg/s^2 for G). I'm fine with using sigma if other people agree. @eteq and @phn, do you also agree? | |
I'm also uncomfortable with the statC unit, but can't do much about that! | |
<author>eteq</author> | |
I'm not sure I'm comfortable with `sigma` given all the different meanings it has... but perhaps `sigma_sb` or something like that? | |
Also, it's good to have them in there for now, but maybe we should put a disclaimer in `au`, `pc`, and `kpc` along the lines of ``warning - these may change or disappear when the units framework is implemented`` given that they're generally thought of more as units than physical quantities... | |
<author>hamogu</author> | |
@eteq - Why should they all disappear? `au` is a unit as well as a constant, so it should be usable in both places. `au` can be defined in `astropy.units`, but it would be good if I could still call it as `astropy.constants.au`. | |
<author>eteq</author> | |
@hamogu - I agree that might be a good approach, but until the unit scheme is implemented, I don't think we want to *guarantee* that this will be the case, given that the design might end up putting this information somewhere else. In that case, we certainly don't want two (potentially) different values of `au` in two different places. | |
That is, you're probably right that it's better for them to stay there, but I'm just suggesting we warn anyone who's going to use this that it *might* change when the units are implemented. | |
<author>hamogu</author> | |
@eteq - I don't want all values in this file, I just suggest that they can be referenced via `astropy.constants.au`. They can sit in any other file, as long as we do `astropy.constants.au = anyotherfile.au`. As a user I don't care in which file the value is actually defined. This would be analogous to many functions in `scipy`, e.g. `scipy.sum()` is not defined in `scipy`, it is just imported from `numpy`, but as a user I don't care, I can write either `scipy.sum` or `numpy.sum` and get the same result (because it is the same function called by two different names). | |
In the same way `astropy.constants` can be a one-stop shop for constants including `au`, even if `au` is defined in some other file. | |
That said, there is nothing wrong with a disclaimer at this stage. Also there might be good reasons not to have `au` in here. That's not up to me to decide, I merely wanted to put forward a humble suggestion. | |
<author>eteq</author> | |
@hamogu - I agree this is a very reasonable scheme (although with the caveat that it's actually `astropy.constants.cgs.au` or `astropy.constants.si.au` that you want :) - perhaps the warning should instead just say something like "don't count on this being a Constants object", as it may end up a "Units" object or similar in the future.
<author>astrofrog</author> | |
Alternatively, cgs.au can remain a constant (which it is) and be used by the unit conversion system? Even when the units system is implemented, it will still be true that cgs.au is a constant in cgs units :-) | |
<author>astrofrog</author> | |
You could argue that M_sun, M_earth etc. are units too and if you check, that's basically all of the 'astronomical' constants. I'd suggest that we keep these and au, kpc, pc, etc. in this module though, and have the units module use these values if necessary. Anyway, I guess this is something that can be decided later, and I would be for just putting a deprecation warning if things do change (not pre-emptively, in case we don't actually deprecate them in the end). | |
<author>astrofrog</author> | |
Last comment - the nice thing about keeping au, kpc, M_sun, etc. in constants is that with this object we can keep track of the origin of the values (because there are *many* definitions of these values, and origin is really important). | |
<author>eteq</author> | |
Right, I was just thinking that the `Units` object that may or may not appear might have a different way of representing this. But I'm fine with either a warning now or just later giving a DeprecationWarning if people use it.
<author>eteq</author> | |
I just checked over all the Allen's values and saw two (related) typos that I commented on inline. | |
One final thought: for all the values that currently have errors of "0", perhaps the errors should be changed to 0.5 of the last significant digit? E.g. if the value is "1.8987e27", the error would become "0.00005e27"?
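(For concreteness, a rough illustration of that rule - the helper name and use of `decimal` here are purely illustrative, not part of the proposed change:)
```python
from decimal import Decimal

def implied_uncertainty(value_str):
    """Half a unit in the last quoted digit, e.g. '1.8987e27' -> 5e22 (= 0.00005e27)."""
    exponent = Decimal(value_str).as_tuple().exponent
    return 0.5 * float(Decimal(1).scaleb(exponent))

print(implied_uncertainty('1.8987e27'))  # 5e+22
```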
<author>astrofrog</author> | |
Thanks for the comments - I fixed the units as @hamogu pointed out, I changed the Stefan-Boltzmann constant to ``sigma_sb``, and I set the uncertainties to 0.5 times the position of the last significant digit as @eteq suggested, except for the speed of light. Even though this might not be the 'true' uncertainty, it is better than saying the uncertainty is zero. | |
<author>eteq</author> | |
Looks good to me as it stands. | |
@phn - were you planning on implementing something in a branch of your own still, or should we go ahead and merge this? | |
<author>phn</author> | |
Hello,
> @phn - were you planning on implementing something in a branch of your own still, or should we go ahead and merge this?
I don't have anything to add on to the current material, since at this stage a thin wrapper is more than enough.
Prasanth
<author>eteq</author> | |
@phn - sorry, didn't see your earlier comment until now: I don't think ``None`` or ``'exact'`` or similar is a good idea for uncertainties, because most people that write any kind of generic code will expect uncertainty to be a number. So None or a string will cause that code to fail. As it stands now, ``0`` indicates ``'exact'``, while for those with no uncertainty listed, the implied uncertainty is at the level of the last decimal point (that's what I always learned is what you assume when there's no uncertainty, anyway...). | |
I'm going to merge this to make it easier for some upcoming work, but if you disagree with this approach still, feel free to continue commenting and @astrofrog or I can change it directly in master. | |
<author>phn</author> | |
Hello, | |
On Fri, Oct 28, 2011 at 3:19 AM, Erik Tollerud < | |
reply@reply.github.com>wrote: | |
> @phn - sorry, didn't see your earlier comment until now: I don't think | |
> ``None`` or ``'exact'`` or similar is a good idea for uncertainties, because | |
> most people that write any kind of generic code will expect uncertainty to | |
> be a number. So None or a string will cause that code to fail. As it | |
> stands now, ``0`` indicates ``'exact'``, while for those with no uncertainty | |
> listed, the implied uncertainty is at the level of the last decimal point | |
> (that's what I always learned is what you assume when there's no | |
> uncertainty, anyway...). | |
> | |
> I agree with you on the usage aspect. But then the information begins to | |
diverge from what is in the reference. I would like people to be aware that | |
the reference does not give any information on uncertainties. | |
> I'm going to merge this to make it easier for some upcoming work, but if | |
> you disagree with this approach still, feel free to continue commenting and | |
> @astrofrog or I can change it directly in master. | |
> | |
> Comment, I shall! Hopefully, I can submit code and not just keep commenting | |
:) | |
Prasanth | |
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
Test helper | |
<author>jiffyclub</author> | |
I modified tests/helper.py so that it automatically loads the pytest module from the script in extern/pytest.py in the event that it can't import pytest directly. | |
When developers are writing tests they can import pytest like so (they'll have to get the relative part right for their module): | |
from ..tests.helper import pytest | |
This will give them an installed pytest if it exists, or extern/pytest otherwise. | |
This change also allowed me to remove the pytest.main function wrapper from helper.py. Now the best thing to do is directly call pytest.main after importing pytest from tests.helper.
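For reference, the fallback is essentially just the following (a minimal sketch, assuming the bundled copy lives at ``astropy/extern/pytest.py``; the real helper may do more):
```python
# astropy/tests/helper.py (sketch)
try:
    import pytest                  # use an installed py.test if available
except ImportError:
    from ..extern import pytest    # otherwise fall back to the bundled copy

# and in a test module elsewhere in the package:
#   from ..tests.helper import pytest
#
#   def test_example():
#       with pytest.raises(ValueError):
#           int('not a number')
```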
<author>astrofrog</author> | |
Thanks! | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Fix issues installing AstroPy with Python 3.x | |
<author>astrofrog</author> | |
The following commits fix the installation issues in Python 3.x while still working in Python 2.6 and 2.7. 22 tests are failing in Python 3.2, but at least the package installs, so maybe we can merge this and then fix the package-specific issues after? | |
<author>astrofrog</author> | |
Are there any other issues that need to be fixed before this can be merged? | |
<author>eteq</author> | |
I'm getting a bunch of errors in the tests that seem to be due to the OrderedDict tests... these seem to mostly have to do with some changes in sequence behavior and various similar quirks. We probably don't care, though, because we shouldn't be using the backported odict in 3.2 anyway. Perhaps we check for version > 2.6 and just have those tests be no-ops in that case? Or maybe there's a way to tell py.test to do this...
<author>astrofrog</author> | |
I was getting some errors both from wcs and odict, but I figured that for now the main issue is to be able to have the package install, then we can try and fix the tests after (I'll ask Tom A. what he thinks re: the failing odict tests since he implemented most of it into astropy). In the meantime, is there any reason not to merge this (apart from the ``distutils.log.info`` issue above)?
<author>eteq</author> | |
I think the content looks fine. py.test has a builtin way to skip tests just like this, though (see http://pytest.org/latest/skipping.html#marking-a-test-function-to-be-skipped) - I'll submit a pull request to your branch implementing that just for the odict stuff. So just hold off for a bit so I can do that and you can have a look. | |
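Roughly what I have in mind (a sketch only; the import path and test body are illustrative):
```python
import sys
import pytest

# the backported OrderedDict is only needed on Python 2.6, so skip its tests
# on anything newer (the module path below is illustrative)
@pytest.mark.skipif(sys.version_info >= (2, 7),
                    reason="backported odict is only used on Python 2.6")
def test_odict_keeps_insertion_order():
    from astropy.utils.odict import OrderedDict
    d = OrderedDict([('a', 1), ('b', 2), ('c', 3)])
    assert list(d.keys()) == ['a', 'b', 'c']
```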
<author>mdboom</author> | |
I've submitted a pull request to this to fix the astropy.wcs tests. | |
<author>astrofrog</author> | |
With the latest commits, AstroPy installs correctly on Python 2.6, 2.7, and 3.2, and no tests fail. The odict tests are skipped on 2.7 and 3.2. Ok to merge?
</issue> | |
<issue> | |
<author>mdboom</author> | |
astropy != AstroPy ??? | |
<author>mdboom</author> | |
Is there a good reason for the distutils/setuptools/egg name being different from the Python package name? It's currently defined as "AstroPy" in the setup.py. | |
Maybe this doesn't matter (sorry to be pedantic), but I find this confusing for the following reasons: | |
1) I use a tool that cleans my virtualenv by deleting packages by name -- if its name on disk is different from how it's imported I have to remember which is which | |
2) "pip install AstroPy" someday will confuse people | |
3) tarballs and installers built with distutils won't match the name of the git repository | |
<author>phn</author> | |
Hello, | |
The command ``pip install astropy`` will work even if the module is named "AstroPy". | |
Prasanth | |
<author>eteq</author> | |
I've seen this in other packages, and it doesn't seem to present too many problems... As Prasanth says various tools seem to generally be able to figure it out. | |
But having said that, there's no really good reason for the capitalized form, either. So I see your point that it makes sense to change the name to astropy so that it's all the same regardless.
As long as there are no objections, I'll just go ahead and change the name in `setup.py` from ``AstroPy`` to ``astropy`` | |
<author>eteq</author> | |
Fixed in commit cb58b739f8b7b9cc4dfe1eacaf936d3dcc236bb5 | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Move pywcs' old CHANGELOG into astropy as history.rst | |
<author>mdboom</author> | |
<author>astrofrog</author> | |
Looks good to me. | |
<author>eteq</author> | |
Looks fine to me... except that I'm now seeing this error when I build the docs | |
``` | |
[u'Wcsprm.dateavg', u':module: astropy.wcs', u'', u'``string``', u'', u'Representative mid-point of the date of observation in ISO format,', u'``yyyy-mm-ddThh:mm:ss``.', u'', u'.. seealso::', u'', u' `~astropy.wcs.Wcsprm.dateobs`', u'``string``', u'', u'Start of the date of observation in ISO format,', u'``yyyy-mm-ddThh:mm:ss``.', u'', u'.. seealso::', u'', u' `~astropy.wcs.Wcsprm.dateavg`']:11: (WARNING/2) Explicit markup ends without a blank line; unexpected unindent. | |
``` | |
I'm a bit mystified at what made this appear, because I built the docs without any problems after the last pull request (I thought...), and this doesn't seem to have any history-related part. I think it is complaining about there not being a newline between the first seealso and the next ``string``. But you may as well fix this while you're at it.
<author>mdboom</author> | |
I'm not seeing this with current master (1a758b1a041c). I'm on Sphinx 1.0.1, docutils 0.7, numpydoc 1.6.1. | |
This does point out to me, however, that this is still using the Sphinx/docutils ".. seealso::" rather than numpydoc's "See also". I'll make those changes and add them to this pull request.
<author>astrofrog</author> | |
I'm using Sphinx 1.1 and I'm not seeing any errors. | |
<author>astrofrog</author> | |
This is not really a big issue, but at the moment the word `cd` in the history page links to the Python standard library docs, due to the intersphinx extension. Is there a way to avoid this? (just curious as we'll probably encounter this issue elsewhere too) | |
<author>mdboom</author> | |
I think we'll have to fully qualify these using the `~astropy.wcs.Wcsprm.cd` syntax. I'll try to find and replace these. | |
<author>astrofrog</author> | |
Looks good! | |
<author>eteq</author> | |
Switching to the numpydoc-style See Also section fixed the error, so regardless, all is now good.
</issue> | |
<issue> | |
<author>phn</author> | |
sdist doesn't include wcs header files and scripts directory. | |
<author>phn</author> | |
Hello, | |
I tried Python 2.6 and Python 2.7. Running ``python setup.py install`` in the ``astropy`` repo doesn't create any problems, since all the files are accessible. But this is a problem when trying to install from an ``sdist``, for example via automated testing software like ``tox``.
Bug in setuptools/distutils? | |
Using ``recursive-include astropy/wcs/src *.h`` in ``MANIFEST.in`` works. | |
If ``MANIFEST.in`` is specified then only the files mentioned in it gets copied; setuptools/distutils does not use its default file finding algorithm in this case. | |
Additional comments (with what I think are the reasons): | |
The ``scripts`` directory doesn't get copied, since ``setup.py`` removes the only file, ``README.rst``, in this directory.
The contents of ``AstroPy.egg-info/`` don't get cleared/updated between two runs of ``setup.py``. For example, say we run ``setup.py sdist`` using the ``MANIFEST.in`` mentioned above. Now remove the line from MANIFEST.in, and run ``setup.py`` again. This time the header files will still get included, even though they are not specified in MANIFEST.in. I guess that this is because the header files get listed in ``AstroPy.egg-info/SOURCES.txt`` the first time, and the list gets reused. This leads to major confusion: I just lost 1.5 hours. Deleting ``AstroPy.egg-info`` restores the previous behaviour.
If ``AstroPy.egg-info`` is deleted without deleting ``version.py`` AND ``version.pyc``, then the ``import astropy`` line in ``setup.py`` complains that the ``AstroPy`` distribution doesn't exist. The ``astropy/__init__.py`` tries to run ``version.py``, which uses ``AstroPy``, causing the error.
Prasanth | |
<author>eteq</author> | |
There are still some revisions going on in setup.py and a few other layout places that I'd like to wait to settle down before addressing this... once those are done, I'll update the MANIFEST.in. | |
The distribution doesn't exist error should be fixed by @27dcbdc6ef57fdbf6feeac77d14ad3c45642b185, though. | |
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
Big Data Marker | |
<author>jiffyclub</author> | |
This pull request implements a scheme for handling tests which should not be run by default. `astropy.tests.helper` defines a marker called `big_data` which should be used to mark any test which uses online data. `astropy.test()` will skip any test marked with `@big_data` unless the `big_data` keyword is set to True. (This adds the `--runbigdata` flag to `args` in `helper.run_tests`.) | |
If tests are run by using `py.test` at the command line in a ``tests`` folder then the decorators will be ignored because `py.test` will not see the hooks defined in `helper.py`. I don't think this should be a problem, though. | |
In the future we can use the same setup for marking tests as slow, or otherwise non-default. | |
In a tangential change, I changed the `pastebin` option to `run_tests` so that it only accepts `'failed'` and `'all'` as acceptable values (in addition to the default `None`). Before you could specify `pastebin=True` which was the same as `pastebin='failed'` and I didn't like having two options do the same thing. I modified the docstring to make things clear. | |
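To make the mechanics concrete, the pieces fit together roughly like this (a sketch; the exact hook and option wiring in `helper.py` may differ):
```python
import pytest

# --- registered from helper.py / conftest.py (sketch) ---
def pytest_addoption(parser):
    parser.addoption('--runbigdata', action='store_true', default=False,
                     help='run tests that download data from the web')

def pytest_runtest_setup(item):
    # skip anything marked big_data unless the flag was passed
    if 'big_data' in item.keywords and not item.config.getoption('--runbigdata'):
        pytest.skip('need --runbigdata (or astropy.test(big_data=True)) to run')

# --- in a test module (sketch) ---
@pytest.mark.big_data
def test_fetch_remote_catalog():
    pass  # ...download and check something here...
```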
<author>astrofrog</author> | |
Looks great! I wonder whether we should use the term remotedata rather than bigdata? At this stage there is nothing to say that some small files won't end up online too. @eteq, what do you think? | |
<author>eteq</author> | |
I agree remotedata is better - I see it more as an aid for people who aren't currently connected to the internet. | |
<author>jiffyclub</author> | |
remote_data sounds better, I'll put up an update soon. | |
<author>jiffyclub</author> | |
That was some exciting rebasing, but this should be ready with the remote_data changes. | |
<author>jiffyclub</author> | |
I've modified the test_skip_remote_data so that it will pass if called from the command line or if `remote_data=True`, but will fail if it is not skipped and `remote_data=False`. Also, the failure has a helpful message. | |
<author>eteq</author> | |
I think this is probably ready to go, but one thing I'm not clear on: By default, do remote_data tests run, or not run? | |
<author>eteq</author> | |
Oh, although it would be good to update the testing guidelines to include information on this along with the rest of the pull request... I think the testing guideline pull request is almost ready to go, so perhaps you can (re)base it on that once it gets pulled into master? | |
<author>jiffyclub</author> | |
`@remote_data` tests are not run by `astropy.test()` unless it is explicitly told to run them with `remote_data=True`. | |
If you're planning to merge Thomas' testing-guidelines branch soon I'll rebase once that's done and update the docs. | |
<author>astrofrog</author> | |
@jiffyclub - we've now merged the testing-guidelines branch, so you can go ahead and update the docs with this pull request. Thanks! | |
<author>eteq</author> | |
In that case, perhaps we should merge this now and have you update the docs later? | |
<author>jiffyclub</author> | |
I've just rebased on the astropy master so this should be an easy merge if you want to go for it. And I think it's ready to be merged. Either way I'll get to the docs as soon as I can. | |
<author>astrofrog</author> | |
I'm fine with us merging this and adding the docs later. | |
<author>eteq</author> | |
@jiffyclub Great - once you get back you can issue another pull request to update the guidelines then. Have a good vacation! | |
</issue> | |
<issue> | |
<author>embray</author> | |
Makes some of the documentation titles more consistently cased | |
<author>embray</author> | |
This is really pedantic and low-priority. But I noticed while working on porting the pyfits docs that some of the existing documentation wasn't consistent about title casing. | |
I don't necessarily think that all headings must use Title Case. It should just be consistent across a heading level. This attempts to make all headings that appear at the same level use the same casing. | |
This also inserted line breaks in one document that had long single-line paragraphs. | |
<author>astrofrog</author> | |
Looks good to me, thanks! | |
</issue> | |
<issue> | |
<author>phn</author> | |
nose dependency | |
<author>phn</author> | |
Hello, | |
I got this error while running tests: | |
========================================= ERRORS ========================================= | |
___________________ ERROR collecting astropy/wcs/tests/test_wcsprm.py ____________________ | |
astropy/wcs/tests/test_wcsprm.py:6: in <module> | |
> from nose.tools import raises | |
E ImportError: No module named nose.tools | |
========================== 851 passed, 1 error in 4.73 seconds =========================== | |
Should nose be added as a dependency? | |
Thanks, | |
Prasanth | |
<author>mdboom</author> | |
Thanks for catching this. All the rest of us probably have nose installed, so we didn't see it. Nose should not be a dependency -- we are using py.test instead. I will rewrite these tests to not require nose. (Expect a pull request shortly.)
<author>astrofrog</author> | |
Looks good, thanks! | |
</issue> | |
<issue> | |
<author>embray</author> | |
Add a 'test' command to setup.py | |
<author>embray</author> | |
Adds a 'test' command to the setup.py (distribute comes with a test command but it doesn't work with py.test, so this replaces it outright). This also supports all the existing options to `astropy.test()` such as `module` and `verbose`. | |
With this it's possible to run all the tests with just `setup.py test` even if astropy hasn't been installed or built yet. If necessary it will first build extension modules and make them importable. | |
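The shape of the command is roughly the following (a bare-bones sketch; the option names and the exact `astropy.test()` signature are assumptions, not the actual code):
```python
import sys
from distutils.cmd import Command

class AstropyTest(Command):
    description = 'run the astropy test suite using py.test'
    user_options = [('module=', 'm', 'restrict tests to a single subpackage'),
                    ('verbose-results', 'V', 'verbose output from the tests')]

    def initialize_options(self):
        self.module = None
        self.verbose_results = False

    def finalize_options(self):
        pass

    def run(self):
        # build the extension modules first, then make the fresh build importable
        self.run_command('build')
        build_cmd = self.get_finalized_command('build')
        sys.path.insert(0, build_cmd.build_lib)
        import astropy
        sys.exit(astropy.test(self.module, verbose=self.verbose_results))

# hooked up in setup.py with:  setup(..., cmdclass={'test': AstropyTest})
```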
<author>astrofrog</author> | |
Works for me! Just made a couple of small comments in the source code, but once those are fixed it seems ready to me. | |
<author>phn</author> | |
Using Python 3.2, I get the following error: | |
Traceback (most recent call last): | |
File "setup.py", line 71, in <module> | |
extensions.extend(package.get_extensions(BUILD)) | |
File "/home/phn/code/astropy-forks/iguananaut/astropy/wcs/setup_package.py", line 169, in get_extensions | |
generate_c_docstrings() | |
File "/home/phn/code/astropy-forks/iguananaut/astropy/wcs/setup_package.py", line 96, in generate_c_docstrings | |
docstrings[key] = docstrings[key].encode('utf8').lstrip() + '\0' | |
TypeError: can't concat bytes to str | |
<author>mdboom</author> | |
@phn: I think I introduced this bug. I'll fix it in master (since it really exists there) and maybe @iguananaut can rebase this branch on top of it?
<author>embray</author> | |
This leads me to another question, and I forget whether or not this has been decided and documented anywhere: Are we officially supporting Python 3 *currently*? My thinking was that we were coding with Python 3 in mind (requiring Python 2.6 and using as many Py3-isms as possible) but not necessarily officially supporting Python 3 yet. | |
If we are dedicating ourselves to Python 3 support now that's fine too. But I'll want to get astropy up and running with tox (with documentation on how to set that up) in order to facilitate cross-version testing. The only thing that makes this tricky is that tox uses pip to install dependencies when setting up its virtual environments. And numpy currently does not play well with pip (I've submitted patches to both numpy *and* pip to fix this issue since both are partially to blame, but those patches aren't available in any current release of those projects). | |
<author>astrofrog</author> | |
I think we are going with Python 3.x compatibility *now* as much as possible (see http://astropy.org/development/codeguide.html#interface-and-dependencies). | |
<author>embray</author> | |
In that case I'll make sure this works on Python 3 before suggesting that it be pulled. | |
<author>embray</author> | |
After giving it a try, this is working fine currently on 3.2 (haven't tried 3.1). That is in part, of course, since none of the current code needs to be 2to3'd. I don't think that will be the case forever (vo.table, pyfits) so there still needs to be a solution for when we get to that point. But at least it works for now. | |
<author>eteq</author> | |
This pull request now conflicts quite a bit from some of the recently merged changes... perhaps you should merge with the new master and then rebase? It's not obvious to me what the best solution for the merge conflicts is... | |
<author>embray</author> | |
Hrm, I haven't pulled from master in a while. I'll give it a look tomorrow. | |
<author>embray</author> | |
Okay, I rebased this on master--seems good now. Sorry for all the whitespace changes--my editor is configured to strip extra whitespace from blank lines. | |
<author>astrofrog</author> | |
Looks good to me! By the way, you can add ?w=0 to a github URL to hide whitespace diffs: | |
https://github.com/astropy/astropy/pull/30/files?w=0 | |
Useful! | |
</issue> | |
<issue> | |
<author>cdeil</author> | |
wcs build error on Mac OS X Lion | |
<author>cdeil</author> | |
`python setup.py install --user` | |
results in an error while trying to install the `astropy.wcs._wcs` extension (see below). | |
Is this a known problem with llvm-gcc-4.2 or one specific to the wcs version included in astropy? | |
``` | |
In file included from /Users/deil/git/astropy/astropy/wcs/src/wcslib/C/wcs.c:44: | |
/Users/deil/git/astropy/astropy/wcs/src/wcslib/C/wcsutil.h:227: warning: function declaration isn't a prototype | |
/Developer/usr/bin/llvm-gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -pipe -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DECHO -DWCSTRIG_MACRO -DASTROPY_WCS_BUILD -D_GNU_SOURCE -DWCSVERSION=4.8.2 -DNDEBUG -UDEBUG -I/Users/deil/Library/Python/2.7/lib/python/site-packages/numpy/core/include -I/Users/deil/git/astropy/astropy/wcs/src/wcslib/C -I/Users/deil/git/astropy/astropy/wcs/src -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c /Users/deil/git/astropy/astropy/wcs/src/wcslib/C/wcserr.c -o build/temp.macosx-10.7-x86_64-2.7/Users/deil/git/astropy/astropy/wcs/src/wcslib/C/wcserr.o | |
/Users/deil/git/astropy/astropy/wcs/src/wcslib/C/wcserr.c:140: internal compiler error: Segmentation fault: 11 | |
Please submit a full bug report, | |
with preprocessed source if appropriate. | |
See <URL:http://developer.apple.com/bugreporter> for instructions. | |
{standard input}:0:End-of-File not at end of a line | |
{standard input}:116:End-of-File not at end of a line | |
{standard input}:unknown:Partial line at end of file ignored | |
{standard input}:unknown:Undefined local symbol L_.str | |
{standard input}:unknown:Undefined local symbol L_.str1 | |
{standard input}:unknown:Undefined local symbol L_.str2 | |
error: command '/Developer/usr/bin/llvm-gcc-4.2' failed with exit status 1 | |
``` | |
<author>jiffyclub</author> | |
I'm not getting this same error, but also have not been able to compile the wcs extensions on Lion. | |
``` | |
running build_ext | |
building 'astropy.wcs._wcs' extension | |
C compiler: gcc -DNDEBUG -g -O3 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.5.sdk -O -ansi | |
compile options: '-DECHO -DWCSTRIG_MACRO -DASTROPY_WCS_BUILD -D_GNU_SOURCE -DWCSVERSION=4.8.2 -DNDEBUG -UDEBUG -I/Library/Frameworks/EPD64.framework/Versions/7.1/lib/python2.7/site-packages/numpy/core/include -I/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C -I/Users/jiffyclub/astropy/astropy/wcs/src -I/Library/Frameworks/EPD64.framework/Versions/7.1/include/python2.7 -c' | |
gcc: /Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c:36:19: error: stdio.h: No such file or directory | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c:37:20: error: stdlib.h: No such file or directory | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c:38:20: error: string.h: No such file or directory | |
In file included from /Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c:41: | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcsprintf.h:123: error: expected ')' before '*' token | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c: In function 'wcserr_set': | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c:75: warning: incompatible implicit declaration of built-in function 'calloc' | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c: In function 'wcserr_copy': | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c:104: warning: incompatible implicit declaration of built-in function 'memset' | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c:110: warning: incompatible implicit declaration of built-in function 'memcpy' | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c:36:19: error: stdio.h: No such file or directory | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c:37:20: error: stdlib.h: No such file or directory | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c:38:20: error: string.h: No such file or directory | |
In file included from /Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c:41: | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcsprintf.h:123: error: expected ')' before '*' token | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c: In function 'wcserr_set': | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c:75: warning: incompatible implicit declaration of built-in function 'calloc' | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c: In function 'wcserr_copy': | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c:104: warning: incompatible implicit declaration of built-in function 'memset' | |
/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c:110: warning: incompatible implicit declaration of built-in function 'memcpy' | |
error: Command "gcc -DNDEBUG -g -O3 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.5.sdk -O -ansi -DECHO -DWCSTRIG_MACRO -DASTROPY_WCS_BUILD -D_GNU_SOURCE -DWCSVERSION=4.8.2 -DNDEBUG -UDEBUG -I/Library/Frameworks/EPD64.framework/Versions/7.1/lib/python2.7/site-packages/numpy/core/include -I/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C -I/Users/jiffyclub/astropy/astropy/wcs/src -I/Library/Frameworks/EPD64.framework/Versions/7.1/include/python2.7 -c /Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.c -o build/temp.macosx-10.5-x86_64-2.7/Users/jiffyclub/astropy/astropy/wcs/src/wcslib/C/wcserr.o" failed with exit status 1 | |
``` | |
This error is caused by the fact that on Lion there is no /Developer/SDKs/MacOSX10.5.sdk. I'm not sure why the 10.5 SDK is being grabbed, but it could be because of my Python version. If I try to compile with the 10.6 or 10.7 SDKs I get the same error as cdeil.
<author>jiffyclub</author> | |
It looks like this has been a common issue for projects on Lion. On Lion the compilers default to llvm-gcc: | |
``` | |
>:gcc --version | |
i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00) | |
Copyright (C) 2007 Free Software Foundation, Inc. | |
``` | |
Here's a thread discussing this for another project: http://web.archiveorange.com/archive/v/yj6N4VBSrwkOaZ5VhQtw | |
Some bit of wcserr.c does not agree with this compiler, maybe the ellipsis on line 64? | |
The regular gcc compilers are still available on Lion as `/usr/bin/gcc-4.2` and `/usr/bin/g++-4.2`, so by setting CC and CXX to these, those of us on Lion may be able to get wcs compiled. It would be nice to have it work with the default compilers, though.
<author>cdeil</author> | |
Yes, it works with the system GCC and also with the system CLANG on Lion. | |
So it's not really a problem now, although I think with the next XCode no GCC will be shipped anymore. | |
Maybe this is a bug in `i686-apple-darwin11-llvm-gcc-4.2`, or something in `wcserr.c` is not quite conformant with the C standard?
<author>jiffyclub</author> | |
With the EPD I also found it necessary to set `CFLAGS = "-isysroot /Developer/SDKs/MacOSX10.7.sdk"` because the EPD is compiled with the 10.5 SDK which is not available on Lion. We should probably fix this as well, since chances are fairly high that people will be trying to install astropy with the EPD and Lion. | |
<author>mdboom</author> | |
I'm trying to get access to a Lion machine to reproduce this and get to the bottom of it. (For what it's worth, it builds fine with llvm-clang on Linux.)
The ellipsis is standard C -- I'm not sure if there's a way around it, but I'll see if I can come up with anything. | |
@cdeil: When you say it works with the system CLANG, how are you setting that up? The default compiler is llvm-gcc which *is* clang with a gcc argument-handling compatibility layer, so I'm surprised one works and the other doesn't. | |
<author>mdboom</author> | |
Ugh -- of course I'm confused. clang != llvm-gcc, so I suppose it's not surprising that one works and the other doesn't. | |
<author>cdeil</author> | |
This is how I invoked clang and the build worked: | |
`export CC=CLANG; python setup.py build` | |
<author>jiffyclub</author> | |
I can help provide a Lion machine if you have trouble getting hold of one. | |
<author>mdboom</author> | |
It seems like a legitimate compiler bug. I have filed a report with Apple as issue #10370713. | |
I have a patch to fix this -- setup.py detects the broken compiler version and if found, uses clang instead. See pull request #40. | |
</issue> | |
<issue> | |
<author>embray</author> | |
More setup.py/version fixes | |
<author>embray</author> | |
These are two fixes that resulted from an exchange I had with @eteq earlier. | |
The first is to get rid of a `BUILD` variable in setup.py that was an artifact of PyWCS's setup.py. Instead it uses `astropy.version.release`, which has in effect the same semantics (theoretically one might want to do a 'release' build while still under development, but in that remote case the developer can manually remove 'dev' from the version).
The second is another fix to the version.py template. It's now much simpler than before and does not rely on any obscure (and unreliable) machinery in pkg_resources. Instead it just checks if astropy is being imported from within a git working copy, and if so it updates the version string if necessary. If astropy is not being imported from within a git working dir then the version is left "frozen". | |
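In other words, the template boils down to something like this (an illustrative sketch of the mechanism described, not the actual template; the frozen string and the git call are made up):
```python
import os
import subprocess

_FROZEN_VERSION = '0.0.dev'  # filled in by setup.py at build/install time

def _get_version():
    srcdir = os.path.dirname(os.path.abspath(__file__))
    if not os.path.isdir(os.path.join(srcdir, os.pardir, '.git')):
        # not a git working copy (i.e. an installed astropy): stay frozen
        return _FROZEN_VERSION
    try:
        p = subprocess.Popen(['git', 'rev-list', '--count', 'HEAD'],
                             cwd=srcdir, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        out, _ = p.communicate()
    except OSError:
        return _FROZEN_VERSION
    if p.returncode != 0:
        return _FROZEN_VERSION
    # refresh the dev version string with the current revision count
    return '%s-r%s' % (_FROZEN_VERSION, out.decode('ascii').strip())

version = _get_version()
```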
<author>mdboom</author> | |
On Mon, Oct 24, 2011 at 3:31 PM, Erik Bray wrote:
> The first is to get rid of a `BUILD` variable in setup.py that was an artifact of PyWCS's setup.py. Instead it uses `astropy.version.release` which has in effect the same semantics (theoretically one might want to do a 'release' build while still under development, but in that remote case the developer may manually remove 'dev' from the version).
This flag is used to turn on and off debugging asserts (range checking and the like). It's quite common that the developer (i.e. me) wants to have them on to verify correctness and then off for benchmarking. It actually has little to do with whether it's a tagged release or not. In fact, a user may want to build in debug mode from a released tarball. So I think it needs to be completely disconnected from astropy.version.release. Ideally, it would be a distutils build parameter, I just haven't gotten around to implementing it in that way.
<author>embray</author> | |
In that case we could add a debug variable and a `--debug` command line switch for the build command. I'm certainly not against that. I guess as implemented it seemed redundant since it was using 'release' as one of the possible values. I know that in its original context it wasn't redundant at all. | |
I'll go ahead and add that here. | |
<author>eteq</author> | |
+1 for the idea of a debug option. Perhaps we should even call it ``--cdebug`` to make it clear that it's for debugging in a C/Cython context? | |
I'm also glad to be getting rid of the distribution stuff in version.py - it was causing a variety of strange problems for me.
<author>embray</author> | |
Right, like I said in my e-mail the approach I was using isn't even reliable anymore (It used to be, but apparently hasn't been for a long time). So better to dispense with it. | |
As for the --debug option, the build and build_ext commands already have a `--debug` command line option (that's only used by build_ext currently). The patch I'm working on adds a `debug` variable to `version.py` so that an easily accessible record is left in place as to whether or not the last build was a "debug" build. This is still used primarily by C extensions but it could theoretically be used by pure Python code too. | |
Another nice advantage of my patch (which adds a custom build command) is that it *rebuilds* the C code if the "debug" status changes from the last build. The normal build command doesn't do this (you have to manually rm -rf build/) since it has no way of knowing what type of build was last performed. | |
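Schematically, the rebuild-on-change part works along these lines (a sketch; the marker file here stands in for however the real patch records the previous build's debug setting):
```python
import os
from distutils.command.build import build as _build

class build(_build):
    """Force a full rebuild whenever the --debug setting differs from the
    setting used for the previous build (illustrative sketch only)."""

    def run(self):
        marker = os.path.join(self.build_base, 'last_build_was_debug')
        last_was_debug = os.path.exists(marker)
        if bool(self.debug) != last_was_debug:
            # the debug flag changed, so don't let distutils reuse old objects
            self.force = 1
        _build.run(self)
        # record this build's debug setting for the next invocation
        if not os.path.isdir(self.build_base):
            os.makedirs(self.build_base)
        if self.debug:
            open(marker, 'w').close()
        elif os.path.exists(marker):
            os.remove(marker)
```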
<author>mdboom</author> | |
@iguananaut: That sounds like it perfectly meets astropy.wcs' needs. Thanks. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Install sphinx configuration and extensions | |
<author>mdboom</author> | |
It's important to keep the settings used to build the core astropy docs and the affiliated packages in sync to ease the inclusion of affiliated packages in the core later on, and to ensure a consistent look across these packages. Also, distributing bugfixes in the extensions, or updating them when Numpy does, is much easier to do in one place. | |
This suggestion came about in response to how the affiliated package template layout was proposed here: | |
https://github.com/astropy/package-template/pull/1 | |
<author>embray</author> | |
+1 | |
<author>astrofrog</author> | |
I initially was worried that this would require astropy to be installed to build the docs, but this will be necessary anyway once we use things like autosummary and automodule. | |
Looks good to me! | |
<author>eteq</author> | |
Other than that one small bit about the rst_epilog, this looks great to me. | |
</issue> | |
<issue> | |
<author>embray</author> | |
Two bug fixes to pull request #32 | |
<author>embray</author> | |
Two minor bugs that came in with pull request #32. | |
<author>astrofrog</author> | |
Thanks! | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Config package and data downloaders | |
<author>eteq</author> | |
This pull request implements the layout of the `config` package, and also implements functionality for remote download/caching of data files and configuration file utility functions.
The remote data aspect of this is ready to go (including tests). One choice I ended up making that could be changed if desired: as it stands right now, data included in the source distribution should live in the astropy/data directory (or subdirectories), rather than in data directories scattered around in each package. If that's strongly desired, it can probably be implemented, although it may require fiddling with setup.py. I shied away from that because it'll be confusing for future developers - specifically, how does the directory structure of the package map onto the directory structure in the data directory? It could be done multiple ways, but the simplest thing is to just put them all in one place.
As for the configuration files, there are a few questions to be addressed: | |
* Should this simple interface (``get_config('subpackagename')``) be it? I could implement a scheme where package authors could just define constants at the top of the modules and have them automatically get passed out to the configuration files (or a new version adopted or whatever), but I wasn't sure what everyone else wanted, so I started with this basic interface (which will be useful in more complicated cases, anyway). | |
* Right now, the configuration files and remote data cache are in $HOME/.astropy ... given that affiliated packages will want to use this stuff, should we use the ``.astropy`` name? Right now, inside that directory there will be directories with names like ``astropy`` for the astropy core, ``afpkg1`` for the first affiliated package, and so on... You specify which one you want with the second argument to ``get_config``. This could change to have separate main directories for each affiliated package, but I think this is easier to deal with, as it doesn't clutter up the user's home directory.
* In the docstring for ``get_config`` I'm pointing to the configobj web site for definitions of the ConfigObj object and its interface. Should I pull in a copy of the ConfigObj documentation and include it in the sphinx docs (just as a separate html file included in the source dist)?
Once this has been decided on and the pull request completed, I will write up documentation describing how to use these modules in the sphinx docs and issue a new pull request. | |
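To make the interface above concrete, typical usage would look roughly like this (a sketch; the import location and option names are made up, and I'm assuming ``get_config`` hands back a ConfigObj as described):
```python
from astropy.config import get_config   # import location assumed

cfg = get_config('io.fits')             # ConfigObj for the io.fits subpackage
use_memmap = cfg.get('memmap', True)    # read an option, with a default
cfg['memmap'] = False                   # change it in memory...
cfg.write()                             # ...and write it back under ~/.astropy
```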
<author>astrofrog</author> | |
This seems to include commits by @mdboom that have already been merged in - maybe a rebase is necessary against the most recent master? | |
<author>mdboom</author> | |
See my low-level comments above. | |
Here's some comments pertaining to the pull request discussion: | |
- I would really like to see subpackage-specific data live within the subpackages. It makes it clear which code cares about what data. astropy.wcs already has some for its tests, and it just loads them using paths relative to the source code (using __file__); since they are small local files, there is no special caching to do, etc. They are installed using the get_data_files() hook in setup_package.py. I'm not sure why this approach isn't sufficient for small, local files where all of the caching/hashing infrastructure etc. isn't necessary. If there are good reasons to put all data under "data/", then perhaps by convention we could have a parallel directory structure there, where wcs's data lives under "data/wcs"?
- Configuration and cache files are separate and shouldn't be commingled. Configuration is something I would want to synchronize between machines, back up, and basically guard with my life. Cache data is ephemeral and opaque -- I don't care what's in it or if it disappears as long as it can be recreated. (As alluded to in my line comments, this is why LSB recommends putting configuration under ~/.local and cache under ~/.cache). To that end, I like the suggestion of configuration all being under the same root folder, including for affiliated packages, as long as everything is organized by subpackage underneath that. For caching, I think we should have a single bucket of stuff for anything using the astropy caching framework. No need to divvy that up at all.
- I think it's sufficient to link to (but not include) the ConfigObj docs. Should we document this as a blanket policy wrt external libraries? | |
<author>eteq</author> | |
Those updates are just a rebase against master to get rid of the extraneous commits. | |
<author>eteq</author> | |
@mdboom - Regarding the general comments(in the same order): | |
* I think it's important to have a one-stop-shop *function* to get data file names, because otherwise people will end up implementing slightly incompatible ways of organizing their data files, even if we ask them not to... and they will have to use tricks like looking at the __file__ variable to figure out the directory and such, and that sort of thing is error-prone if we ever have to refactor anything. Remember that this same infrastructure will be used by affiliated packages, and we want to reduce the complications involved in merging in those packages.
Having said that, it's certainly possible to implement the function something like ``get_data_filename('wcs/filename')``, and have that intelligently go looking in the wcs source directory for a "data" directory. But that's a bit confusing if you don't know there's magic being used. The parallel structure in the data directory is cleaner, as you suggest. Note that that is supported in the current version - all you have to do is move the ``astropy/wcs/data`` directory into ``data/wcs``. | |
* I agree with this sentiment, so I'm not sure what you're saying needs to be changed here... currently, if I assume your home directory is ``/home/mdboom``, the cached data goes in ``/home/mdboom/.astropy/datacache``, and all configuration goes in ``/home/mdboom/.astropy/astropy``, so the cache and config info *are* in separate directories. I'm avoiding doing something like putting it in ~/.local or ~/.cache because we want to stay cross-platform, and e.g. Windows is laid out differently (hence the ridiculous code for `get_config_dir` that looks for something like a home directory). Or are you suggesting that the configuration be in ``/home/mdboom/.astropy/config/astropy`` or something like that (along with affiliated package config)? I could certainly do that quite easily.
* You're probably right that we should say, in general, we don't want to copy over external docs. The only reason I say maybe for ConfigObj is that it's not obvious how to use it without them, and the docs are all one single html file, so it's pretty straightforward to include them. I don't have a strong opinion on this, though - I could go either way.
<author>astrofrog</author> | |
One quick question - do we really expect to have so many configuration options that we'll need to split them up into multiple files, one for each sub-package? If not, then we can have a single file with e.g.
[wcs] | |
origin = 0 | |
[io.fits] | |
memmap = True | |
[util.odict] | |
option = True | |
That would be easier than having to create a config file for each sub-package, which could be a pain. Maybe you were already thinking about having this? | |
If we went with that, then we could even have just a file called ``.astropyrc`` and a directory called ``.astropy.cache``. If you foresee the need for other astropy-related files that are not cache files then you could have ``.astropy/astropyrc`` (consistent with matplotlib) and ``.astropy/cache`` or ``.astropy.cache`` for the cache. Just some ideas :-) | |
<author>hamogu</author> | |
+1 for @astrofrog | |
Having all options in one file makes it so much easier to back up, to synchronize, or to give my options to colleagues new to astropy. I don't mind if that single file needs a few hundred lines, because I could just use a text editor to modify it. And, if the options are named sensibly, it's often easier to edit a text file than to search through the documentation to find functions like `astropy.util.odict.set_option(option=True)` (to stay with @astrofrog's example).
If you think `memmap = True` is too short, `.astropyrc` could contain comment lines:: | |
[io.fits] | |
# Should memory mapping be used to speed up fits file reading? [True, False] | |
memmap = True | |
The fitting package `sherpa` is one example for a larger package which has a single `.sherpa.rc` file structured in this fashion. | |
<author>mdboom</author> | |
@eteq: In response to the general comments: | |
- Sure. That seems fine. | |
- I'm suggesting that cache goes into `~/.cache/astropy` and configuration goes into `~/.local/astropy`. The idea is that *all* of my configuration, not just from astropy, would go in `~/.local`. Mac OS X has a similar standard: caches are meant to go in `~/Library/Caches`, and native apps usually put configuration in `~/Library/Preferences`. I know these are platform-specific differences, but they are very useful differences when managing multiple systems, etc.
- Sure. | |
To @astrofrog 's comment: I don't know if being consistent with how matplotlib handles configuration should be a goal :) It's been on my TODO list for some time to make it more like I described above. My suggestion (to put the files in the modern platform-specific locations for configuration and cache) is orthogonal to whether we decide to have a single file for configuration or a directory with multiple files. | |
<author>eteq</author> | |
@mdboom regarding the directory structure: My point was that there's no straightforward way to do this in a cross-platform way. I suppose we could look to see if the system is unix-like and put it in ``~/.local`` and ``~/.cache`` in those cases, use ``~/Library/Caches`` if it's OS X, and something else specific to Windows, but I think that will be confusing to users, because it's not a consistent standard. It also isn't in line with IPython or matplotlib, which I think our users are more likely to be familiar with... So while I see the value of the separate directory structures, I'm wondering if we should be bowing to the practical fact that it may be confusing and not terribly portable. It'd be good to get other people's opinions on this, though - I am not terribly attached to either solution, I just want to do whatever will be the least confusing.
<author>eteq</author> | |
@astrofrog and @hamogu : I had conversations with a couple of other people who favored the multiple-file scheme, as that then allows you to skip having sections entirely for smaller subpackages. Again, I don't really have a terribly strong opinion here, so one or many files is fine.
Two complications to consider with a single-file scheme, though: First, affiliated packages that get merged will have to switch to using the astropy config file, so settings from affiliated packages will have to get copied over. This may not be an issue, but it is something to be aware of. Second, it's a bit more complicated to enforce the sectioning in ConfigObj if it's not at a per-file level. To add a section in ConfigObj, you do ``configobj['io.fits'] = {}``... and the person accessing the config file will have to be sure to do something like this, which is a bit awkward. Alternatively, I could have it automatically always pick out the right section, but then it's a little more confusing because you have to do ``sec.parent.write()`` or similar to write the file. Again, not a major problem, but a potential source of confusion that isn't present for the multiple-file setup, where the onus is on the writer of the package to actually look up ConfigObj before they do anything else.
<author>astrofrog</author> | |
@eteq - ok, I don't have a strong preference for single file vs multiple file. By the way, do your comments mean that affiliated packages will be allowed (and should be encouraged) to place their configuration files in the astropy configuration directory? (to avoid having to move the config once the package is merged) If so, how should we deal with name changes? (e.g. specutils might become part of astropy.spectrum) | |
@mdboom - I don't really have a strong preference for where the cache should go, but I think we want to keep the configuration files in the same location for all systems (``~/.astropy`` for example) rather than having it in ``~/Library/Preferences/`` on mac for example (I associate that directory with plist files, not unix-style configuration files). | |
<author>eteq</author> | |
@astrofrog - you make a good point with the name change bit... that's actually a good reason to use a single file for astropy. So perhaps it should be that the core astropy package configuration would be in ``~/.astropy/astropy.cfg``, and affiliated packages would go in ``~/.astropy/affilpkgname.cfg``? Then if the name changes in the switch to astropy, there's no confusion because it ends up in a different file, ``astropy.cfg``, anyway.
@mdboom - just to be clear, were you saying you thought single-file (e.g. like matplotlib) is a good idea, a bad idea, or you don't have a strong preference? | |
<author>astrofrog</author> | |
@eteq - maybe we could even do ``~/.astropy/astropy.cfg`` for the core package and ``~/.astropy/affil/affilpkgname.cfg`` for affiliated packages (i.e. sandbox affiliated packages into ``~/.astropy/affil``) - I don't have a strong opinion though. | |
<author>mdboom</author> | |
@eteq: Note that the current version of IPython puts its configuration in XDG_CONFIG_HOME on Linux. matplotlib has a bug filed for this change as well, which will probably be fixed for the next major release. IPython doesn't have any cache, of course, only configuration.
If we really want everything under `~/.astropy`, then may I suggest the following: configuration goes under `~/.astropy/config`, and cache under `~/.astropy.cache`. That will let Linux users put symlinks to each of those under `~/.local` and `~/.cache` and have a sane backup and sync strategy. An equally good alternative would be to put configuration in `~/.astropy` and cache in `~/.cache/astropy`. | |
I don't have strong feelings about whether configuration is a single file or not. | |
<author>eteq</author> | |
@mdboom: Hmm... it looks like IPython first looks for XDG_CONFIG_HOME and then uses .config if it doesn't find it. They do have something like a cache in the form of the history database - that's stored in the config dir (although you're right that just because other people do something that doesn't mean we should). There also seems to be an XDG_CACHE_HOME variable that XDG defines... but neither of those variables seems to be defined by default on either OS X or Ubuntu (although the Ubuntu I checked is a bit out-of-date).
I'm leaning towards a variant of your intermediate solution - I'll look for a directory called ``astropy`` in XDG_CONFIG_HOME, then if nothing's there, I'll try ``~/.config/astropy``, and then ``~/.local/astropy``. If none of those exist, if XDG_CONFIG_HOME is defined, it will use that, otherwise it will use ``~/.astropy/config``. similarly, for the cache, it will try XDG_CACHE_HOME and ``~/.cache/astropy``, and if not found, it will either use XDG_CACHE_HOME or ``~/.astropy/cache``. If XDG_CACHE_HOME or XDG_CONFIG_HOME are present, I think the safest thing is to still define the ``~/.astropy`` directory, but symlink the subdirs to the appropriate XDG directories. Does that seem like a good solution that satisfies all needs? | |
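In rough pseudo-Python, the config-side lookup I'm describing would be (a sketch only; the cache side would mirror it with XDG_CACHE_HOME and ``~/.cache/astropy``, and the ``~/.astropy`` symlink finesse is left out here):
```python
import os

def _find_or_create_config_dir():
    home = os.path.expanduser('~')
    xdg = os.environ.get('XDG_CONFIG_HOME')

    # 1. look for an existing astropy config directory in the usual places
    candidates = [os.path.join(xdg, 'astropy')] if xdg else []
    candidates += [os.path.join(home, '.config', 'astropy'),
                   os.path.join(home, '.local', 'astropy')]
    for path in candidates:
        if os.path.isdir(path):
            return path

    # 2. nothing found: create one, preferring XDG_CONFIG_HOME when it is set
    newdir = os.path.join(xdg, 'astropy') if xdg else \
        os.path.join(home, '.astropy', 'config')
    if not os.path.isdir(newdir):
        os.makedirs(newdir)
    return newdir
```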
<author>embray</author> | |
A few other things I'd like to see, none of which I think conflict with anything already done, so I can work on them myself in my own branch, or add issues for them to do later:
* It would be good to have the ability to override any config options via environment variables. These environment variables would follow some standard naming convention like `ASTROPY_SUB_PACKAGE_OPTION`. There could also be some way to register shorter aliases for commonly used options, but I don't have any thoughts yet on the specifics of that. In order to support these environment variables transparently, the easiest approach I see (and the one I usually end up taking anyway) is to provide a simple wrapper around `ConfigObj` objects; I like to use something simple like `Configuration` (a rough sketch follows this list). This is a much simpler object than the `ConfigObj`, so that fewer details of reading/writing the configuration are exposed to subpackage/affiliated package authors. It would still provide a dictionary-like interface and many of the methods of `ConfigObj`, though rather than `write()` it has a `save()` method, which I think is a little more obvious. If a user wants to update the configuration they simply call `config.save()`. This method would prevent saving any values that were overridden by environment variables. Using a wrapper can have other eventual benefits, since it allows us to provide some abstractions around the configuration object that aren't specific to any one backend implementation (not that I would suggest we move away from `ConfigObj` or anything--it's just nice to have additional flexibility).
* We should include a configspec.cfg and require that all options be defined in it, including default values (except in cases where the default needs to be computed, in which case the default should be `None`). If desired, we could make it so that each package _may_ have its own configspec file. This way affiliated packages can add their own config options if they want to use the same configuration machinery. If a config file contains an option not recognized by the configspec, at the very least a warning should be raised, though I don't think it should cause the whole thing to blow up either.
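As a rough sketch of the first point (the class name, environment-variable convention, and `save()` behaviour are just the shape I have in mind, not settled API):
```python
import os

class Configuration(object):
    """Dict-like wrapper over one section of a ConfigObj; environment
    variables of the form ASTROPY_<SECTION>_<OPTION> take precedence."""

    def __init__(self, configobj, section):
        self._cfg = configobj
        self._section = section

    def _envname(self, option):
        name = 'ASTROPY_%s_%s' % (self._section, option)
        return name.replace('.', '_').upper()

    def __getitem__(self, option):
        env = self._envname(option)
        if env in os.environ:
            return os.environ[env]
        return self._cfg[self._section][option]

    def __setitem__(self, option, value):
        self._cfg[self._section][option] = value

    def save(self):
        # environment overrides are deliberately not written back; only the
        # values actually stored in the ConfigObj end up in the file
        self._cfg.write()
```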
<author>mdboom</author> | |
@eteq: The relevant part of the XDG spec is this: | |
"$XDG_CONFIG_HOME defines the base directory relative to which user specific configuration files should be stored. If $XDG_CONFIG_HOME is either not set or empty, a default equal to $HOME/.config should be used." | |
So I think on Linux, where this applies, we should follow that. I don't like the idea of going into `~/.astropy/config` if XDG_CONFIG_HOME is not defined because that goes against the standard, and XDG_CONFIG_HOME is almost never defined. | |
Maybe there's a way to have our cake and eat it too. What if we have all config under `~/.astropy/config/` and all cache under `~/.astropy/cache/` on all platforms. Then there's only one thing to document for all platforms (excepting Windows). We can then add symlinks on Linux from the XDG locations following the XDG spec exactly. That second step is really finesse and somewhat optional. But setting up config and cache in separate directories is really the key to open up those possibilities. (This is close to what you suggested with your symlink idea, just simplified even further). | |
I also second @iguananaut's idea about both an environment variable override and looking in cwd.
<author>eteq</author> | |
@mdboom - The approach you suggest here sounds fine to me... one concern, though: if the point of XDG_CONFIG_HOME is that it should make things easier to back up, then that functionality is broken if you ever use it and don't also backup/symlink ``.astropy`` . Or do you think that's not important? | |
That was my main motivation for suggesting that it first look in XDG_CONFIG_HOME/XDG_CACHE_HOME - if it doesn't find ``$XDG_CONFIG_HOME/astropy``, it will put stuff in ``~/.astropy/config``and do the symlink. | |
<author>eteq</author> | |
@iguananaut : | |
* I see your point about environment variables. I'm a little cautious about them because if the environment variables override the config files, I can guarantee you a variety of people will waste hours trying to figure out why the config file isn't doing what it's supposed to because they unknowingly defined an environment variable. I'll leave it out of this pull request for now, but I'll plan around the option and then you can later issue a pull request so we can have a more focused conversation there. | |
* I was actually thinking that something like a Configuration object is indeed a good idea if we end up using the single-file scheme (which it seems other people here generally prefer), as that will make it easier to ensure that the proper sectioning is followed. | |
* I'm not sure I understand what you mean by a configspec.cfg... you mean a separate file that's tracked independently in the source code a la matplotlibrc? I was instead thinking of implementing a scheme where config parameters could be defined at a package level (roughly sketched below), and when the config file is first generated, the machinery would walk through the packages and write a default config file (with everything initially commented out, like in matplotlibrc). Then config variables all get defined at the source level and there's no need to worry about keeping the source code and cfg file in sync.
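Something along these lines, perhaps -- the class and option names here are entirely hypothetical, just to illustrate "define the options at the source level":
```python
# Hypothetical illustration of declaring config parameters in the source.
class ConfigParameter(object):
    def __init__(self, name, default, description=''):
        self.name = name
        self.default = default
        self.description = description
        self.value = default

    def config_file_lines(self):
        # default config files could be written with everything commented
        # out, matplotlibrc-style
        return ['# %s' % self.description,
                '# %s = %r' % (self.name, self.default)]

# e.g. in a (hypothetical) astropy/somepackage/__init__.py:
DATA_URL = ConfigParameter('data_url', 'http://data.astropy.org/',
                           'Base URL for downloading remote data files.')
```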
<author>embray</author> | |
@eteq: And I see _your_ point about environment variables. Maybe if a value is used from an environment variable there should be a notice in the log so that there's at least some record that can be checked as to where the value is coming from. And like I said, this could be implemented separately from the rest of this pull request so I will write it up in a separate issue. | |
A Configuration wrapper object would also make a multi-file scheme easier, since it could worry about merging the files and such. Either way it's useful to have. | |
As for configspec, ConfigObj supports a configspec file that lists all the sections/options that are valid, and includes validation for each option. However, I do like the idea of defining options at the module level as module-global variables. The ConfigObj validation machinery can be used programmatically as well, so we can get the same functionality whether all options are defined in one file, or if they're defined programmatically in each module. | |
The advantage of the former is that it's easy to see all the options and their allowed values in one place. But if they're defined programmatically there could still be a function to list them all. The only problem I see with that is that to check if a module has options defined, it would have to go through and import each module. So maybe they should only be defined at the subpackage level in `__init__.py` files.
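For what it's worth, a small sketch of driving ConfigObj's validation machinery programmatically (the file names and options here are invented):
```python
# configspec.cfg might contain, e.g.:
#   [utils]
#   data_url = string(default='http://data.astropy.org/')
#   cache_size = integer(min=0, default=100)
from configobj import ConfigObj
from validate import Validator

config = ConfigObj('astropy.cfg', configspec='configspec.cfg')
results = config.validate(Validator(), copy=True)  # fills in defaults
if results is not True:
    print("some options failed validation: %r" % results)
```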
<author>mdboom</author> | |
Just want to put a note here -- once we have this framework in place we have a few places in existing packages that should be updated to use it. | |
Also @eteq: what's the status here -- good time for another review? | |
<author>eteq</author> | |
Yep, definitely. It should be pretty easy based on what I have in mind on the config side. | |
@mdboom - Not ready yet. I'm finishing up the data caching part today, so the stuff there (and the config/cache directory locating) can be reviewed when I send up the commits, but the config side of things I may not get to for another couple of days (sorry it's been taking so long! swamped with science/postdoc apps)
<author>eteq</author> | |
I am closing this pull request and replacing it with a new one because the new version is pretty much different almost everywhere... | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Issue with unit 'HZ' in FITS files (astropy.wcs) | |
<author>astrofrog</author> | |
It seems that astropy.wcs (and more specifically wcslib) has issues with the unit 'HZ'. For example, the following header: | |
SIMPLE = T /Standard FITS | |
BITPIX = -32 /Floating point (32 bit) | |
NAXIS = 4 | |
NAXIS1 = 500 | |
NAXIS2 = 500 | |
NAXIS3 = 120 | |
NAXIS4 = 1 | |
EQUINOX = 2.000000000000E+03 | |
PC001001= 1.000000000000E+00 | |
PC002001= 0.000000000000E+00 | |
PC003001= 0.000000000000E+00 | |
PC004001= 0.000000000000E+00 | |
PC001002= 0.000000000000E+00 | |
PC002002= 1.000000000000E+00 | |
PC003002= 0.000000000000E+00 | |
PC004002= 0.000000000000E+00 | |
PC001003= 0.000000000000E+00 | |
PC002003= 0.000000000000E+00 | |
PC003003= 1.000000000000E+00 | |
PC004003= 0.000000000000E+00 | |
PC001004= 0.000000000000E+00 | |
PC002004= 0.000000000000E+00 | |
PC003004= 0.000000000000E+00 | |
PC004004= 1.000000000000E+00 | |
CTYPE1 = 'RA---SIN' | |
CRVAL1 = 1.000000000000E+01 | |
CDELT1 = -2.000000000000E-05 | |
CRPIX1 = 2.510000000000E+02 | |
CUNIT1 = 'deg ' | |
CTYPE2 = 'DEC--SIN' | |
CRVAL2 = 1.000000000000E+01 | |
CDELT2 = 2.000000000000E-05
CRPIX2 = 2.510000000000E+02 | |
CUNIT2 = 'deg ' | |
CTYPE3 = 'FREQ ' | |
CRVAL3 = 2.000000000000E+11 | |
CDELT3 = 1.000000000000E+05 | |
CRPIX3 = 1.000000000000E+00 | |
CUNIT3 = 'HZ ' | |
CTYPE4 = 'STOKES ' | |
CRVAL4 = 1.000000000000E+00 | |
CDELT4 = 1.000000000000E+00 | |
CRPIX4 = 1.000000000000E+00 | |
CUNIT4 = ' ' | |
PV2_1 = 0.000000000000E+00 | |
PV2_2 = 0.000000000000E+00 | |
parsed with: | |
from astropy.wcs import WCS | |
import pyfits | |
header = pyfits.Header() | |
header.fromTxtFile('header.hdr') | |
wcs = WCS(header) | |
wcs.wcs_sky2pix([[12.,13.,0.3, 0.4]], 0) | |
gives the error: | |
Traceback (most recent call last): | |
File "test.py", line 7, in <module> | |
wcs.wcs_sky2pix([[12.,13.,0.3, 0.4]], 0) | |
File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0dev_r220-py2.7-macosx-10.6-x86_64.egg/astropy/wcs/wcs.py", line 783, in wcs_sky2pix | |
'input', *args, **kwargs) | |
File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0dev_r220-py2.7-macosx-10.6-x86_64.egg/astropy/wcs/wcs.py", line 613, in _array_converter | |
result = func(xy, origin) | |
File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0dev_r220-py2.7-macosx-10.6-x86_64.egg/astropy/wcs/wcs.py", line 782, in <lambda> | |
lambda xy, o: self.wcs.s2p(xy, o)['pixcrd'], | |
astropy.wcs._wcs.InvalidTransformError: ERROR 6 in wcs_units() at line 2009 of file /Users/tom/tmp/astropy_install/astropy/wcs/src/wcslib/C/wcs.c: | |
In CUNIT2 : Invalid symbol in EXPON context in 'HZ'. | |
Does the WCS standard really forbid the uppercase HZ? If so, could you point me to where this is defined, so I can report a bug with the software that produced this file? And do we want to relax astropy.wcs so that it accepts this, since I've found a number of 'real' datasets that use ``HZ``?
<author>mdboom</author> | |
Units in FITS are case-sensitive by definition. Look at "s" (seconds) vs. "S" (siemens).
The relevant FITS standard can be downloaded here: http://www.aanda.org/index.php?option=com_article&access=doi&doi=10.1051/0004-6361/201015362&Itemid=129. See Tables 3 and 4 on pages 9 and 10. I would report this as a bug to the software that produced the file.
I don't think it's a great idea to relax the standard within astropy.wcs -- there is too much potential for breakage and ambiguity. However, wcslib (and therefore astropy.wcs) offers a function to automatically fix commonly found deviations from the standard. Call:
    wcs.wcs.unitfix()
This was not very well documented in the latest release of pywcs, but the astropy and pywcs repositories have good documentation about how units work here: https://github.com/astropy/astropy/blob/master/docs/wcs/api_units.rst (I hope to get a new release of pywcs out soon, so this improved documentation will go online.)
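For reference, a short sketch of the workaround, mirroring the snippet from the issue with the existing ``unitfix`` call applied before transforming (nothing new here beyond the issue itself):
```python
# Illustrative only: read the header quoted above, fix the non-standard
# unit string, then do the transform that previously raised the error.
import pyfits
from astropy.wcs import WCS

header = pyfits.Header()
header.fromTxtFile('header.hdr')   # the header quoted in this issue
wcs = WCS(header)
wcs.wcs.unitfix()                  # rewrites e.g. 'HZ' -> 'Hz'
print(wcs.wcs_sky2pix([[12., 13., 0.3, 0.4]], 0))
```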
<author>mdboom</author> | |
An alternative (though a divergence from wcslib behavior) may be to add a kwarg to the WCS() constructor, ``fix=True``, which when True would call all of the fixers. The user could set this to False if they know they don't want any fixing. I would also like to emit a warning when any fixers are applied, so that deviations from the standard are known and can be reported back to the producing software.
<author>astrofrog</author> | |
Thanks! Using ``wcs.wcs.unitfix()`` works great, but I like the addition of the ``fix=True`` option. Feel free to merge. | |
<author>eteq</author> | |
Looks fine for merging to me as well - @mdboom, I see you recently made a change, though... I'll leave it to you to merge if you're done with changes. | |
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
Test markers | |
<author>jiffyclub</author> | |
I've added a couple lines to the testing guidelines describing the `remote_data` decorator/flag. Also mention the `__init__.py` files in tests directories. | |
I added an `__init__.py` file to the `tests/tests` directory and changed imports in those tests to relative imports.
<author>eteq</author> | |
Looks good to me other than the one inline comment I had. | |
<author>jiffyclub</author> | |
Updated with how to skip `@remote_data` tests from the command line. | |
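For illustration, marking a test this way might look like the following (the import path follows these guidelines; treat the exact names and command-line flag as assumptions rather than a definitive reference):
```python
# A hedged sketch of a remote-data test; it is skipped unless remote-data
# tests are explicitly enabled on the command line (e.g. ``py.test --remote-data``).
from astropy.tests.helper import remote_data

@remote_data
def test_fetch_remote_catalog():
    # would download something from the network here
    pass
```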
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
ImportError in wcs/setup_package.py | |
<author>jiffyclub</author> | |
I am getting this error trying to build astropy: | |
``` | |
astropy 32 >:python setup.py develop | |
Freezing version number to astropy/version.py | |
Traceback (most recent call last): | |
File "setup.py", line 55, in <module> | |
extensions.extend(package.get_extensions()) | |
File "/Users/jiffyclub/astropy/astropy/wcs/setup_package.py", line 169, in get_extensions | |
from astropy.version import debug | |
ImportError: cannot import name debug | |
``` | |
<author>jiffyclub</author> | |
Got it to work after cleaning `.pyc` files out of the directory tree. | |
<author>mdboom</author> | |
I've run into this before with the same solution. I don't know if @iguananaut has any ideas about preventing this kind of thing down the road -- but I suspect it may die down as the setup.py infrastructure settles down.
</issue> | |
<issue> | |
<author>phn</author> | |
Cython extension discovery example: interface to Skycalc. | |
<author>phn</author> | |
Cython extension discovery should be treated the same way as the regular C extension discovery currently implemented: asking the developer explicitly rather than relying on automatic discovery. These commits implement a Cython extension discovery scheme that gives developers control over the extension building process. The scenario implemented here is probably as involved as it can be. This is an example, and not intended for direct merging.
The code adds a new sub-package ``astropy.skypy``. This has two modules ``skyc`` and ``astrom``, both of which are Cython extension modules. The ``skyc`` extension calls functions in an external library Skycalc. The ``astrom`` extension is a standalone extension. | |
The Skycalc library source code is included in ``cextern/skycalc``. The user (the one installing astropy) can set a few configuration options for ``skypy``, which will determine whether astropy will build ``skypy.skyc`` using the source code in ``cextern`` or using a pre-compiled library on the user's machine. Currently the setting is in ``skypy/setup_package.py``. In production this would be in ``~/.astropy/skypy.cfg`` or equivalent.
The developer defines one or both of the ``get_cython_pyx_extensions()`` and ``get_cython_extensions()`` functions in ``skypy/setup_package.py``. The astropy ``setup.py`` file will call the former if the condition ``HAVE_CYTHON and not release`` is True, and the latter if the condition is False. In addition, astropy ``setup.py`` will add the Cython.Distutils ``build_ext`` command to the ``cmdclass`` dictionary if the above condition is satisfied.
In the ``get_cython_pyx_extensions()`` function, the developer will return setuptools ``Extension`` instances that directly use Cython ``pyx`` files. In the ``get_cython_extensions()`` function, the developer will return setuptools ``Extension`` instances that use only the Cython-generated C files. These ``Extension`` instances will be constructed with parameters consistent with the user's library option.
With the current set of files we can perform the following: | |
$ python setup.py develop # Bundle of warnings from skycalc.c. | |
$ ipython | |
>>> import astropy | |
>>> astropy.test("skypy") | |
================================== test session starts ======== | |
platform linux2 -- Python 2.6.6 -- pytest-2.1.3 | |
collected 1 items | |
astropy/skypy/tests/test_simple.py . | |
=============================== 1 passed in 0.02 seconds ====== | |
0 | |
>>> from astropy import skypy | |
>>> skypy.astrom.hello("d") | |
Hello d! | |
>>> skypy.skyc.dow(2451545.0) | |
5.0 | |
We can now clean up, set the ``USE_SIN_FUNC_LIB`` option in ``skypy/setup_package.py`` and re-run the command. This time the library in ``~/lib`` will be used. The header file ``skysub.h`` must also be in ``~/lib``. | |
The library can be created using | |
gcc -c skycalc.c -o skycalc.o | |
ar rcs libskycalc.a skycalc.o | |
Note: ``USE_SIN_FUNC_LIB`` is left over from test code; I forgot to rename it to ``USE_SKYCALC_LIB`` or something similar.
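To make the scheme concrete, a ``skypy/setup_package.py`` under this proposal might look roughly like this (the module paths are taken from the description above, but the code is only a sketch, not the contents of the actual commits):
```python
# Illustrative sketch of the two hooks described above.
from setuptools import Extension

def get_cython_pyx_extensions():
    # Used when Cython is available and this is not a release:
    # build directly from the .pyx sources.
    return [Extension('astropy.skypy.skyc',
                      ['astropy/skypy/skyc.pyx', 'cextern/skycalc/skycalc.c'],
                      include_dirs=['cextern/skycalc'])]

def get_cython_extensions():
    # Used otherwise: build from the Cython-generated C files instead.
    return [Extension('astropy.skypy.skyc',
                      ['astropy/skypy/skyc.c', 'cextern/skycalc/skycalc.c'],
                      include_dirs=['cextern/skycalc'])]
```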
<author>eteq</author> | |
This is definitely along the lines of something we need to be doing. You're absolutely right that the Cython compilation steps need to be fixed. | |
I strongly disagree that Cython should *have* to be explicitly compiled. The reason for C code requiring explicitness is that it has a lot more conflicting choices because it's generally intended for wrapping C-libraries. Cython, by contrast, can be easily used simply as a compiled plug-in to speed up something. So we want to make that as easy as possible by having the setup automatically do the typical use cases (e.g. numerical operations, sometimes with numpy arrays). But you're absolutely right that the *option* to do it explicitly needs to be present, and your solution here makes sense, I think. | |
Also, I'm closing this pull request because you said this is not intended to actually be merged. FYI, pull requests should be used when you actually want something pulled into master, rather than to illustrate an example. This sort of thing is ideal to post on astropy-dev, with a link like https://github.com/phn/astropy/compare/astropy:master...cython-example to get something like the pull request look | |
<author>phn</author> | |
Hello,
> I strongly disagree that Cython should *have* to be explicitly compiled. The reason for C code requiring explicitness is that it has a lot more conflicting choices because it's generally intended for wrapping C-libraries. Cython, by contrast, can be easily used simply as a compiled plug-in to speed up something. So we want to make that as easy as possible by having the setup automatically do the typical use cases (e.g. numerical operations, sometimes with numpy arrays). But you're absolutely right that the *option* to do it explicitly needs to be present, and your solution here makes sense, I think.

I don't think it would be any less or any more difficult for the developer to provide an explicit Extension instance, because creating such an instance is the first thing one does when writing Cython, unless one is doing everything manually.

> Also, I'm closing this pull request because you said this is not intended to actually be merged. FYI, pull requests should be used when you actually want something pulled into master, rather than to illustrate an example. This sort of thing is ideal to post on astropy-dev, with a link like https://github.com/phn/astropy/compare/astropy:master...cython-example to get something like the pull request look

I issued a pull request because I wasn't aware of this feature!

Thanks,
Prasanth
<author>eteq</author> | |
> I don't think it would be any less or any more difficult for the developer to provide an explicit Extension instance, because creating such an instance is the first thing one does when writing Cython, unless one is doing everything manually.
I foresee plenty of people who want to write C-speed code but don't want to have to deal with the ``setup.py`` stuff. The point is to make the barrier to entry low (and we're lucky in that we can do that without really sacrificing customizability). I'm working right now on an implementation that hopefully will incorporate what you have in mind here with what I'm saying.
> I issued a pull request because I wasn't aware of this feature! | |
That's what I figured, so it's totally fine - just wanted to mention so you'd know about the alternative in the future. | |
<author>eteq</author> | |
Oh, and by the way, your example here is a perfect test as an affiliated package once we have that all up and running, given that it has a not-so-straightforward approach for setup. If we can get the automated scripts to install this correctly, they should be able to handle just about anything Cython can throw their way! | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Fix OS-X Lion build | |
<author>mdboom</author> | |
If the default compiler ends up being the broken one from XCode-4.2, switch to using clang instead | |
<author>mdboom</author> | |
Closes #31. | |
<author>mdboom</author> | |
@iguananaut: You may want to have a look at this as it pertains to building. I refactored setup_helpers.py a bit to reuse the distutils option finding code you had for "--debug" and reuse it for "--compiler" as well. | |
<author>embray</author> | |
Looks fine to me. Thanks! | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Setup cleanup for C/Cython extensions | |
<author>eteq</author> | |
This pull request (in part motivated by @phn 's pull request https://github.com/astropy/astropy/pull/39/) does a variety of fixes and clarifications in setup.py, mostly dealing with C/Cython extensions. It also updates the docs to reflect this scheme. | |
Key features/questions: | |
* As before, the ``get_extensions`` function in a subpackage's ``setup_package.py`` is used to return Extension objects.
* The package source files are also searched for .pyx files (Cython files), and if any are found that are not present in the Extension objects, they are added as new Extensions with the default numpy c-imports included (a rough sketch of this discovery is below). This will hopefully facilitate easy development of Cython extensions when needed, simply by adding a ``.pyx`` file and importing its name from somewhere else.
* Cython is used as long as the source is not in "release" mode and Cython is installed. Otherwise, it tries to use any .c file with the appropriate name in the same place as the .pyx file.
* Even if they aren't building anything in C/Cython, subpackages can still install data files and package data files (although the latter may change depending on how the config subpackage is eventually decided).
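A rough sketch of what the .pyx auto-discovery amounts to (illustrative only -- the helper name is made up and this is not the actual setup.py code):
```python
# Any .pyx file without a matching Extension gets a default Extension
# created for it, with the numpy include directory added.
import os
from glob import glob

import numpy
from setuptools import Extension

def default_cython_extensions(package_dir, package_name):
    extensions = []
    for pyx in glob(os.path.join(package_dir, '*.pyx')):
        module = package_name + '.' + os.path.splitext(os.path.basename(pyx))[0]
        extensions.append(Extension(module, [pyx],
                                    include_dirs=[numpy.get_include()]))
    return extensions
```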
</issue> | |
<issue> | |
<author>phn</author> | |
subprocess.check_output doesn't exist | |
<author>phn</author> | |
In line 204 of setup_helpers.py: | |
output = subprocess.check_output(c.compiler + ['--version']) | |
leads to ``AttributeError: 'module' object has no attribute 'check_output'``. | |
Is this ``subprocess.check_call()``? | |
<author>mdboom</author> | |
Ah. I hadn't noticed this is Python 2.7 only. I will replace it with something that works on 2.6 as well. Expect a pull request shortly. | |
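For reference, a Python 2.6-compatible stand-in could look something like this (a sketch only; not necessarily what the actual fix in the pull request does):
```python
# Minimal replacement for subprocess.check_output on Python 2.6.
import subprocess

def check_output(cmd):
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    out, _ = p.communicate()
    if p.returncode != 0:
        raise subprocess.CalledProcessError(p.returncode, cmd)
    return out
```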
<author>eteq</author> | |
@mdboom - Should this issue be closed (that is, did the pull request fix it)? I checked earlier, and it appears that if you put "closes #42" or "fixes #42" in the merge commit message (for the pull requests), it will close the issue, even if you don't reference it directly in the actual commits that are part of the pull request. | |
<author>mdboom</author> | |
Interesting. I did put "Closes #42" in the commit, but it didn't seem to close this. Anyway, yes, I consider it resolved. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Compiler warnings | |
<author>mdboom</author> | |
Fixes some compiler warnings on Mac OS-X Lion. | |
<author>eteq</author> | |
Looks fine to me. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Don't use subprocess.check_output | |
<author>mdboom</author> | |
Don't use subprocess.check_output -- it is not available on Python 2.6. Closes #42. | |
<author>eteq</author> | |
FYI, for something this straightforward, I'd say it's fine to commit directly into the repo.
</issue> | |
<issue> | |
<author>embray</author> | |
Fix the raises() decorator to support functions that take arguments | |
<author>embray</author> | |
Fix the raises() decorator to support functions that take arguments (including methods, parametrized tests, etc). | |
I encountered this while porting the PyFITS tests which use test classes. | |
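For context, the idea (a sketch of the technique only, not the actual astropy implementation) is for the wrapper to forward whatever arguments py.test hands the test function, so methods and parametrized tests work too:
```python
# Sketch of a raises() decorator that passes arguments through to the
# wrapped test function.
import functools
import pytest

def raises(exception_class):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            with pytest.raises(exception_class):
                return func(*args, **kwargs)
        return wrapper
    return decorator

class TestExample(object):
    @raises(ValueError)
    def test_bad_literal(self):
        int('not a number')
```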
</issue> | |
<issue> | |
<author>mdboom</author> | |
VO | |
<author>mdboom</author> | |
This is an initial import of vo.table into astropy as astropy.io.vo. | |
This includes only the VOTABLE file format handling code. The VO networking (conesearch, ssa, image search etc.) stuff is still kind of half-baked and not ready -- that will live on a branch on my fork for a while. | |
There are some things left to do, but they are perhaps best left for after merging: | |
- There are lots of general-purpose utilities here that should be moved into astropy.utils. | |
- There are some local data files which should use the new data loading framework when that lands on master. | |
- It has an optional dependency on "xmllint" for schema verification. We should add this to the forthcoming "check for optional dependencies" tool. At the moment, an exception is raised when calling the single function that depends upon "xmllint". The tests will pass without "xmllint" installed, but the output of the tests will not be verified against the schema. | |
Some points: | |
- This includes all of expat verbatim in cextern. It does not build with a system copy of expat because, at least on Ubuntu and Fedora, the system expat is built with UTF-8 strings rather than UCS2/4 strings, which makes the conversion to Python Unicode strings much slower. In benchmarking I found the bundled build gives about a 15% speedup. The Python expat wrappers perform even worse because they do a lot of unnecessary work. So, somehow, I suppose it should be documented that even though we are including expat in cextern, we don't want to create the option for the user to build against a system expat.
<author>astrofrog</author> | |
It looks like ``astropy/io/vo/src`` is missing? | |
i686-apple-darwin10-gcc-4.2.1: astropy/io/vo/src/iterparse.c: No such file or directory | |
i686-apple-darwin10-gcc-4.2.1: no input files | |
error: command '/usr/bin/gcc-4.2' failed with exit status 1 | |
<author>mdboom</author> | |
Yes, indeed. I was foiled by the "*.c" in .gitignore again ;) | |
<author>astrofrog</author> | |
Ok, I get further, but I cannot build this branch on MacOS 10.6 with Python 2.7: | |
/usr/bin/gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -pipe -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DHAVE_EXPAT_CONFIG_H=1 -DBYTEORDER=1234 -DHAVE_UNISTD_H -Iastropy/io/vo/src -Icextern/expat/lib -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c astropy/io/vo/src/iterparse.c -o build/temp.macosx-10.6-x86_64-2.7/astropy/io/vo/src/iterparse.o | |
astropy/io/vo/src/iterparse.c:1506: warning: ‘_state’ defined but not used | |
/usr/bin/gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -pipe -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DHAVE_EXPAT_CONFIG_H=1 -DBYTEORDER=1234 -DHAVE_UNISTD_H -Iastropy/io/vo/src -Icextern/expat/lib -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c cextern/expat/lib/xmlparse.c -o build/temp.macosx-10.6-x86_64-2.7/cextern/expat/lib/xmlparse.o | |
cextern/expat/lib/xmlparse.c:20:26: error: expat_config.h: No such file or directory | |
cextern/expat/lib/xmlparse.c:81:2: error: #error memmove does not exist on this platform, nor is a substitute available | |
cextern/expat/lib/xmlparse.c: In function ‘XML_Parse’: | |
cextern/expat/lib/xmlparse.c:1491: warning: enumeration value ‘XML_FINISHED’ not handled in switch | |
error: command '/usr/bin/gcc-4.2' failed with exit status 1 | |
<author>mdboom</author> | |
@astrofrog: Sorry. Again, it was a missing file. I just tested with a fresh clone and it seems to be working now, and passing all tests. | |
<author>astrofrog</author> | |
It installs fine now and all tests run successfully. | |
<author>astrofrog</author> | |
The documentation has some mentions of the networking capability (e.g. ``Note The network functionality in the conesearch, ssa and image modules is experimental and subject to change.``) - maybe this should be removed for now? | |
<author>astrofrog</author> | |
Minor: on the main documentation page, the headings for the WCS and VO stuff aren't consistent (``astropy.wcs Documentation`` vs ``vo.table documentation``). Also, are you planning on including the vo history?
<author>mdboom</author> | |
Good catch. Yes. I think we should just remove those for now. | |
<author>mdboom</author> | |
Will fix the documentation headers to match wcs. | |
I wasn't planning on including history -- after all the moves etc. it's not clear it would be directly useful. The SVN history will live on indefinitely as a resource if we're ever in a bind.
<author>eteq</author> | |
About the included expat: I didn't entirely understand your last comment there - you're saying we don't want to let the user build against the system expat... is this just for the performance reasons? I think we should still allow the system expat (or even the python builtin expat) to be used if the user desires, as long as we make it clear that it will be slower. I can certainly imagine systems where something goes wrong with the astropy expat builds, but the user says "screw it, I'm fine with a 15% slowdown because I don't want to bother figuring out how to compile astropy." Is there any reason why these can't be left as options? (For now, feel free to just put it in as a constant or something in ``setup_package.py`` - we can update it after we have the extension cfg system working). | |
<author>eteq</author> | |
Oh, also, just so we're clear, the cextern expat is only statically linked to iterparser, right? So an expat library file is never actually generated? | |
<author>mdboom</author> | |
I should have been more clear about the expat. Expat has a number of #defines that change how it is built, and different Linux distros build in different ways. The one that matters here is whether it returns utf-8 or ucs4 to the caller. To handle the system expat, we'd have to handle and test both cases, and also do extra work if it is built for utf-8. That's where the 15% comes in. vo.table also has a fallback to pure Python standard library for the XML parsing, but that is about 3x slower. So, IMO, it's preferable to build our own expat -- it's very simple to build -- and use that. And we have a pure Python fallback which could be selected using our forthcoming build configuration system in case building expat as part of astropy is a problem for some users (it seems very unlikely as it's very portable C code). | |
To answer your question -- no standalone expat library is built, it simply builds the 4 required C files as part of the iterparse extension. | |
<author>eteq</author> | |
Ah, that all makes sense re: expat. It's a bit annoying that we have to include it, but I see your point for why it's useful. We definitely want the option of a fallback to the python builtin version, but we can deal with that later (as you say).
<author>mdboom</author> | |
@eteq: Just to clarify -- the pure Python fallback already exists and is tested as part of the unit tests. The only thing missing is an option for a user to choose not to build the C extension. | |
<author>eteq</author> | |
I couldn't get github to show me all of the diffs, I think because this pull request has so many files in it. So the stuff above is inline comments that github isn't showing you the context for. | |
<author>eteq</author> | |
One other thing: when I run the tests on this, it modifies the file ``astropy/io/vo/tests/data/test_through_binary.xml``, which is under git version control. I don't think we want the tests modifying anything under version control - maybe either have git forget it, or if it has a role in the test before the test is run, perhaps write to a temporary file? | |
<author>mdboom</author> | |
Good catch about `test_through_binary.xml`. That should be in the tmp dir and was never intended to be checked in. | |
<author>eteq</author> | |
Does anyone know a way to make the diffs visible? I want to be able to look at the diffs for some of the files at the end of the list, but they're all hidden (I think because the page would be too big otherwise).
<author>mdboom</author> | |
Yeah -- it's annoying. You can of course look at the diff offline, but then there's no code review ;) I could take expat out of here for now, but then we'll have a branch that doesn't actually "work". | |
<author>mdboom</author> | |
@eteq: I think this latest set of commits addresses everything you raised. | |
<author>eteq</author> | |
So is this good to go? I don't see anything else that needs changes that can't be post-merge - @astrofrog and @iguananaut?
Oh, and @mdboom - when it does get merged, can you create issues for the things we mentioned as "todo after merge"? Thanks. | |
<author>astrofrog</author> | |
@eteq: it seems fine to me (disclaimer: I haven't checked it all line by line). | |
<author>embray</author> | |
Yeah, I think it's the sort of thing where it's going to be best to go ahead and get things merged in order to start tweaking them more. In particular we need to work on centralizing some of the Py3K hacks. | |
<author>mdboom</author> | |
Ok. Merging now... Will add some issues for various TODO list items as @eteq suggests. | |
</issue> | |
<issue> | |
<author>embray</author> | |
Add PyFITS as astropy.io.fits | |
<author>embray</author> | |
This branch isn't quite ready for merging, for a number of reasons. But since it will happen eventually I figured I would open the pull request. Now is a good time to do so since this branch has astropy.io.fits fully functioning, at least in that all the tests run/pass. | |
To give some background on this pull request, this branch is based off of a different branch (https://github.com/iguananaut/astropy/tree/fits) which contains the entire history from PyFITS's SVN repository. However, for inclusion into Astropy all the previous SVN history (up to the last revision merged into Astropy from PyFITS, which was r1202) was squashed into a single commit and rebased on top of Astropy's master. I then applied my additional changes from the fits branch that get astropy.io.fits fully functioning on top of that. Sorry if that makes no sense--if anyone's really interested I can try drawing a graph of what happened.
Some things I need to do before this is merged: | |
* Bring up to date with the latest changes from PyFITS's trunk. There are also some changes in development branches that are coming down the line pretty soon and I would like to get those merged in too as some of them are fairly important. | |
* The documentation still needs work--I've yet to clean up most of it. | |
And some other open questions that need discussing: | |
* PyFITS includes a module called py3kcompat (now at astropy.io.fits.py3kcompat) containing various hacks to support Python 2/3 cross-compatibility. A lot of these hacks may or may not be desirable to have in Astropy at all. Others, I think would be useful. Most of the hacks are actually monkey-patches to Numpy that fix a few bugs it has in Python 3. At any rate I'd appreciate if this module were looked at and discussed. | |
* Likewise, the pyfits.util/astropy.io.fits.util module contains a number of generally useful utility functions and classes. Some of them are FITS-specific, but most, I believe, are not. It might be worth moving a lot of it into someplace under astropy.utils, but the details are up in the air. I also need to see if there are any redundancies between pyfits and pywcs where utility functions/python 3 hacks are concerned. | |
<author>astrofrog</author> | |
I know this is not ready for merging, but just wanted to confirm that this builds and the tests run on MacOS 10.6 with Python 2.7. | |
<author>astrofrog</author> | |
I noticed a few absolute imports, e.g. in ``test_hdulist.py`` | |
<author>mdboom</author> | |
Skimmed through the general utilities and Python 3 stuff. There is some redundancy there with stuff in vo.table, less so with pywcs (which has a lot less Python code). My only comment would be to consider if it made sense to separate out the generic Python 3 compatibility stuff from the numpy-specific stuff -- but it's not a huge deal. I think it's probably easiest to tackle the cross-subproject refactoring and redundant code removal after merging this pull request. | |
<author>embray</author> | |
@mdboom: Agreed on worrying about refactoring more after things are merged. As for where the numpy-specific fixes go, I'm fine putting it in a separate module. Though it's still worth stressing that these are Python3-specific fixes. | |
@astrofrog: Some of the absolute imports are intentional because I simply didn't like imports that had like four dots in them. And I figured that for tests maybe relative imports aren't necessary since, after all, you're testing the interface rather than using it internally. I can fix it though.
<author>mdboom</author> | |
I've also waffled about relative vs. absolute imports in tests. I think I converted everything to relative in wcs and vo here, but there's definitely an argument to be made for using absolute in tests. Probably a minor point. | |
This brings up one thing that also appears to be missing -- the "legacy" pyfits compatibility shim. That would be a great thing to have and test with existing code, as I think it will be important to the adoption of astropy. But again, that can be done post this pull request. | |
<author>embray</author> | |
I actually wouldn't be against putting in the pyfits shim in this pull request. Which brings up another question which is what to do with deprecated stuff: My *plan* was to go ahead and remove all deprecated code since this is sort of a "fresh" start. But it might be better to leave it in for the first _n_ releases for the sake of adoption. | |
Right now most of the things marked "deprecated" are just functions/methods that were given new PEP8-compliant names. However, there are some bigger changes incoming on one of my development branches... | |
<author>mdboom</author> | |
My gut feeling is to leave deprecated stuff in with DeprecationWarnings for the first release of Astropy. For the second major release we remove it. (And "import pyfits" would raise a DeprecationWarning, too, obviously). I think it might be easier to say "hey, use astropy, then you have fewer packages to install" rather than "hey, use astropy, make a bunch of changes to your code, and then you have fewer packages to install". The first hit should be free ;) | |
<author>eteq</author> | |
I agree that leaving in deprecated stuff is reasonable. Make sure the deprecation warnings clearly state something like "future versions of astropy will remove these features". Given we have this plan of releasing a "preview" version, that one could perhaps have the compatibility stuff, and it could be removed in the later releases. Or, alternatively, allow an actual "compatibility mode" that you have to manually activate that allows old-style usage. That might be a pain though. We can figure it out later. | |
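To illustrate the sort of shim being discussed (purely a sketch with made-up names, not actual pyfits/astropy code):
```python
# Keep the old-style name around for a release or two, but warn loudly.
import warnings

def write_to(data, filename):
    """New, PEP8-compliant API."""
    print("writing %s" % filename)

def writeTo(data, filename):
    """Deprecated alias retained for compatibility."""
    warnings.warn("writeTo is deprecated; future versions of astropy will "
                  "remove it -- use write_to instead", DeprecationWarning)
    return write_to(data, filename)
```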
<author>eteq</author> | |
And my feeling on the tests is that they should always be relative: I'm pretty sure that if I try to run a test with absolute imports by going into the source directory and typing ``py.test`` at the command line, it runs the *installed* astropy rather than the one that's in that source directory, which is the behavior relative imports are intended to avoid. (I haven't tested this, though, so I'm not entirely sure py.test doesn't just figure it out somehow.) | |
I think it's definitely a good idea to try to be consistent, and if the only reason is an aversion to seeing ``...``, I think consistency should take precedence.
<author>mdboom</author> | |
In most cases (certainly anything with C extensions), going into the source directory to run the tests (with relative imports) is not going to work. Nor will it work under Python 3 for code that requires 2to3. This is perhaps an argument in favor of using absolute imports in tests -- though perhaps it is confusing.
<author>eteq</author> | |
Right now, going into tests and running them always seems to work just fine... for example, I can run all the wcs tests by just cding to the `wcs` directory and doing ``py.test tests``. It won't recompile, that's true, and I'm sure you're right that 2to3 won't work, but the tests do run. I regularly find myself doing this in pure python code, so it seems silly to not allow that use case (it also makes it easier to customize py.test by using shell aliases and the like). | |
It's not that big of a deal - more important I think is that we just agree on a consistent usage and try to stick with it so as to avoid weird, confusing bugs due to sometimes using the local and sometimes using the installed version. | |
<author>migueldvb</author> | |
I've got three test failures in test_image.py using Python 2.7.1 and pytest-2.1.2 on Linux. | |
I guess this must be known already, otherwise I can paste the errors. The output of the test session is: | |
3 failed, 1085 passed, 44 skipped in 7.34 seconds | |
<author>astrofrog</author> | |
I'm also getting three failing tests, with Python 2.7.2 on MacOS 10.6.8. I'm going to paste the failures here for the record: | |
=================================== FAILURES =================================== | |
_____________________ TestImageFunctions.test_verification _____________________ | |
self = <astropy.io.fits.tests.test_image.TestImageFunctions object at 0x10462ba50> | |
capsys = <_pytest.capture.CaptureFuncarg instance at 0x1051883f8> | |
def test_verification(self, capsys): | |
# verification | |
c = fits.Card.fromstring('abc= a6') | |
c.verify() | |
out, err = capsys.readouterr() | |
> assert ('Card image is not FITS standard (equal sign not at column 8).' | |
in err) | |
E assert 'Card image is not FITS standard (equal sign not at column 8).' in u'' | |
Library/Python/2.7/lib/python/site-packages/astropy-0.0dev_r254-py2.7-macosx-10.6-x86_64.egg/astropy/io/fits/tests/test_image.py:217: AssertionError | |
_________________________ TestImageFunctions.test_fix __________________________ | |
self = <astropy.io.fits.tests.test_image.TestImageFunctions object at 0x1050c8f50> | |
capsys = <_pytest.capture.CaptureFuncarg instance at 0x1050ee5a8> | |
def test_fix(self, capsys): | |
c = fits.Card.fromstring('abc= a6') | |
c.verify('fix') | |
out, err = capsys.readouterr() | |
> assert 'Fixed card to be FITS standard.: ABC' in err | |
E assert 'Fixed card to be FITS standard.: ABC' in u'' | |
Library/Python/2.7/lib/python/site-packages/astropy-0.0dev_r254-py2.7-macosx-10.6-x86_64.egg/astropy/io/fits/tests/test_image.py:226: AssertionError | |
________________ TestImageFunctions.test_verification_on_output ________________ | |
self = <astropy.io.fits.tests.test_image.TestImageFunctions object at 0x10463aa10> | |
capsys = <_pytest.capture.CaptureFuncarg instance at 0x1050eefc8> | |
def test_verification_on_output(self, capsys): | |
# verification on output | |
# make a defect HDUList first | |
x = fits.ImageHDU() | |
hdu = fits.HDUList(x) # HDUList can take a list or one single HDU | |
hdu.verify() | |
out, err = capsys.readouterr() | |
> assert "HDUList's 0th element is not a primary HDU." in err | |
E assert "HDUList's 0th element is not a primary HDU." in u'' | |
Library/Python/2.7/lib/python/site-packages/astropy-0.0dev_r254-py2.7-macosx-10.6-x86_64.egg/astropy/io/fits/tests/test_image.py:471: AssertionError | |
============== 3 failed, 1085 passed, 44 skipped in 15.86 seconds ============== | |
<author>astrofrog</author> | |
This fork of astropy does not build on Python 3.2: | |
Traceback (most recent call last): | |
File "setup.py", line 58, in <module> | |
for package in setup_helpers.iter_setup_packages(): | |
File "/Users/tom/tmp/astropy_erik/astropy/setup_helpers.py", line 80, in iter_setup_packages | |
module = import_module(name) | |
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/importlib/__init__.py", line 124, in import_module | |
return _bootstrap._gcd_import(name[level:], package, level) | |
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/importlib/_bootstrap.py", line 807, in _gcd_import | |
_gcd_import(parent) | |
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/importlib/_bootstrap.py", line 819, in _gcd_import | |
loader.load_module(name) | |
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/importlib/_bootstrap.py", line 436, in load_module | |
return self._load_module(fullname) | |
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/importlib/_bootstrap.py", line 141, in decorated | |
return fxn(self, module, *args, **kwargs) | |
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/importlib/_bootstrap.py", line 342, in _load_module | |
exec(code_object, module.__dict__) | |
File "/Users/tom/tmp/astropy_erik/astropy/io/fits/__init__.py", line 19, in <module> | |
from . import column | |
File "/Users/tom/tmp/astropy_erik/astropy/io/fits/column.py", line 32, in <module> | |
NUMPY2FITS = dict([(val, key) for key, val in FITS2NUMPY.iteritems()]) | |
AttributeError: 'dict' object has no attribute 'iteritems' | |
<author>mdboom</author> | |
@eteq: running "py.tests tests" in "astropy.wcs" doesn't work for me. The import fails because the C extension is not there. Is there something you're doing differently? | |
<author>embray</author> | |
@eteq @mdboom That's partly what `./setup.py test` is for. It will make sure all your C extensions are compiled and importable. That runs just fine out of the source (in Python 2.x). I'm still planning on coming up with a hack to get it to work for 2to3'd code, but haven't worked on it yet. | |
<author>embray</author> | |
@migueldvb @astrofrog The tests that are failing for you are ensuring that specific warning messages are being output. I need to modify the tests to just catch the warnings themselves. But this is still curious--could it be that on your systems the warnings are going to stdout instead of stderr? Or are you somehow running Python with all warnings turned off? | |
<author>astrofrog</author> | |
I don't think I'm running Python with warnings turned off (or at least I've never set it up that way). Let me know if you want me to run any snippet to diagnose the issue. | |
<author>embray</author> | |
@astrofrog Then the only other thing I can think of is that your warnings are somehow going to stdout instead. A simple test you could try, if you're inclined, is:
```python | |
def test(capsys): | |
import warnings | |
warnings.warn('Warning') | |
out, err = capsys.readouterr() | |
capsys.close() | |
print 'out', out | |
print 'err', err | |
``` | |
Put that in a module and run it with py.test and see which output stream the warning went to. But either way, I need to rewrite those tests to capture the warnings instead of relying on the text being in stdout/err. | |
<author>mdboom</author> | |
FWIW -- `test_wcs.py` has a test that does warning capturing. | |
<author>migueldvb</author> | |
I have run that test with py.test with the following session log: | |
========================================== test session starts ========================================== | |
platform linux2 -- Python 2.7.1 -- pytest-2.0.3 | |
collected 1 items | |
test_err.py . | |
======================================= 1 passed in 0.01 seconds ======================================== | |
I have installed the stock python 2.7 package from gentoo's portage tree and did not change any configuration settings, if I remember correctly.
<author>embray</author> | |
@migueldvb Sorry, I forgot to add you should run it with `py.test -s test_err.py` so that it outputs the print statements at the end of the test. | |
<author>migueldvb</author> | |
@iguananaut Thanks for the indication; the warning was sent to stderr, so that seems to be configured correctly.
========================================== test session starts ========================================== | |
platform linux2 -- Python 2.7.1 -- pytest-2.0.3 | |
collected 1 items | |
test_err.py | |
out | |
err /home/miguel/tmp/tests/test_err.py:3: UserWarning: Warning | |
warnings.warn('Warning') | |
. | |
======================================= 1 passed in 0.13 seconds ======================================== | |
<author>eteq</author> | |
@mdboom @iguananaut - Are you using ``python setup.py develop`` to install astropy in developer mode? If you do that, it puts the compiled stuff where it needs to be for the tests to find it. I'm not sure if this was an intended "feature" or just chance that it makes the tests work, but it is convenient... But, again, it doesn't re-build them - @iguananaut is certainly right that the best way to deal with that is the ``setup.py test`` command. But for pure-python packages, I don't see why we wouldn't want this option available.
Is there some practical reason that absolute imports are better in this case (other than aesthetics)? | |
<author>embray</author> | |
I always use develop mode. The fact that it makes compiled modules importable is very much an intended feature--it wouldn't be too useful without it (except for pure Python packages). I wrote the 'test' command to rebuild when necessary, I think. | |
As for imports, I've been thinking about that and am leaning toward making them all relative. If nothing else it seems safer just to be consistent regardless of aesthetic hangups. | |
<author>embray</author> | |
Trying with Python 3. Here again we have the problem of setup.py importing from astropy.io.fits.setup_package and breaking because astropy.io.fits.__init__ imports from several submodules that need to be 2to3'd before they can be imported. | |
<author>eteq</author> | |
Ooh, that is a problem. So 2to3 doesn't get run before setup_package? Any way we can tweak it to go in the correct order? | |
<author>embray</author> | |
Not easily... This question has come up before. I'm poking at a possible workaround... | |
<author>mdboom</author> | |
For VO, I added a flag to __builtins__ to know if we're in the build -- if so, `astropy.io.vo.__init__.py` doesn't import much of anything. It's not terribly elegant, but it works. | |
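Roughly, the trick looks like this (a sketch of the idea only; the flag name is illustrative, and on Python 2 the module is ``__builtin__`` rather than ``builtins``):
```python
# In setup.py, before importing any setup_package modules:
import builtins
builtins._ASTROPY_SETUP_ = True

# ...and in the package __init__.py, skip the heavy imports during the build:
if not getattr(builtins, '_ASTROPY_SETUP_', False):
    import xml.parsers.expat  # stands in for the real submodule imports
```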
<author>embray</author> | |
I'm closing this pull request, in part because I renamed the branch that it's based on, and I don't think GitHub liked that. At this point astropy.io.fits is a bit closer to merging too, so I think a new discussion/review is in order soon. Unless someone beats me to it, the new pull request will be #102. | |
</issue> | |
<issue> | |
<author>embray</author> | |
Support overriding config options via environment variables | |
<author>embray</author> | |
I first wrote this as a comment in #35, but I'm creating a separate issue for this so it doesn't get forgotten. I will volunteer to implement it. This is slightly higher priority now that I'm getting somewhere with PyFITS implementation, and I would like to hook it into the eventual config system. | |
It would be good to have the ability to override any config options via environment variables. These environment variables would follow some standard naming convention like `ASTROPY_SUB_PACKAGE_OPTION`. There could also be some way to register shorter aliases for commonly used options, but I don't have any thoughts yet on the specifics of that.
In order to support these environment variables transparently, the easiest approach I see (which I usually end up doing anyways) is to provide a simple wrapper around `ConfigObj` objects. I like to use something simple like `Configuration`. This is a much simpler object than the `ConfigObj` so that fewer details of reading/writing the configuration are exposed to subpackage/affiliated package authors. It would still provide a dictionary-like interface and many of the methods of `ConfigObj`. Though rather than `write()` it has a `save()` method which I think is a little more obvious. If a user wants to update the configuration they simply call `config.save()`. This method would prevent saving any values that were overridden by environment variables.
Using a wrapper can have other eventual benefits, since it allows us to provide some abstractions around the configuration object that aren't specific to any one backend implementation (not that I would suggest we move away from `ConfigObj` or anything--it's just nice to have additional flexibility).
<author>embray</author> | |
I've started poking at this a bit in iguananaut/astropy/config-envvars, but hit a small snag. From my last commit message: | |
When a config is written to a file I don't want it to save the value that was set by an environment variable. This could be implemented through the save_config function. But if a user calls the .write() method directly on the ConfigObj object it won't work correctly. I could be fine with that as long as we make sure the user always uses our API... | |
Another possibility is that there's a recent patch for ConfigObj that makes it possible to subclass the Section class throughout ConfigObj. This would allow us to use a simple Section subclass to implement this functionality consistently. | |
<author>eteq</author> | |
I don't fully understand the problem... why would the user be calling `ConfigObj.write()`? As the `ConfigurationItem` class is written, they should always either call `ConfigurationItem.save()` on a particular item or `astropy.config.configs.save_config()`. The idea I have here is to make it so the user doesn't even know ConfigObj is present - all they know about is the `astropy.config.configs` stuff. So you could add a method to the `ConfigurationItem` that checks whether the current item setting came from the environment variable or not, and if so, temporarily put back in whatever it's "supposed" to be when saving. Not terribly elegant, but that was part of the reason I thought we wanted an interface where we control all the entry points...
<author>embray</author> | |
@eteq I already have ConfigurationItem doing as you suggested, and that works. So you're right, as long as they only use the API we give them I guess there's no problem. | |
The only place that there is a problem is that get_config() still returns a ConfigObj object. That's where I was running into issues... I don't really mind that it does, though I suppose if we really wanted we could convert it to a normal dict--that would control things even more. | |
<author>eteq</author> | |
Ahh, now I see what you're saying. I think it's good to have access to the actual `ConfgObj` just in case it's needed. But given what you've said here, we should probably add the following to the get_config docstring: | |
``` | |
.. warning:: | |
Calling :meth:`ConfigObj.write` directly may incorrectly save your configuration files. Instead, always use `ConfigurationItem.save` or `save_config` to save configuration settings. | |
``` | |
(note that I'm doing the ``:meth:`` thing for `ConfigObj` because configobj doesn't seem to do intersphinx, so sphinx won't know it's a function and do the () thing without explicit marking)
<author>embray</author> | |
Sounds fine. Also, I usually use :meth:, :func:, and friends even when they could otherwise be inferred, just in case...
<author>eteq</author> | |
Ok, cool. Once you've gotten this updated, maybe convert this issue to a pull request (if you don't know how to do this, see https://github.com/astropy/astropy/pull/98#issuecomment-2988568 or the comment above that one by @mdboom) so we can review and merge that way? | |
<author>embray</author> | |
Cool, so there *is* a way! I don't know why they don't just add this to the UI already--it seems like a commonly requested feature. | |
<author>eteq</author> | |
I've thought the same thing - I mean, it's not really that hard to do it once you dig into the API, but it would be so simple to just add a button... I imagine they'll add it sooner or later. | |
<author>astrofrog</author> | |
Re-scheduling to 1.0 since we should wait until the new APE3 changes are settled. | |
<author>astrofrog</author> | |
Removing the 1.0 milestone since this is low priority and there has been no work on it recently. | |
<author>eteq</author> | |
@mdboom - I was re-visiting some random old issues... do you think this still makes sense in the context of the new config system? Or should we just close this? | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Fix for "python setup.py --help" | |
<author>eteq</author> | |
In the current version, ``python setup.py --help`` leads to a traceback, due to setup_helpers triggering a bug (or perhaps feature?) in distutils. This patch gets around it by simply catching the exception distutils throws and continuing on with the astropy script. | |
This is quite simple, but I'm not entirely sure what this function is used for, so @iguananaut may want to glance at it before I merge it. | |
<author>mdboom</author> | |
Looks good to me. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Failures in tests with Python 3.2 | |
<author>astrofrog</author> | |
As of 7946471129146d93abcbe731171a4a5adbdc3454, two tests have issues with Python 3.2: | |
============================= test session starts ============================== | |
platform darwin -- Python 3.2.2 -- pytest-2.1.2 | |
collected 940 items / 1 errors | |
Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/tests/tests/test_run_tests.py .. | |
Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/tests/tests/test_skip_remote_data.py s | |
Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/utils/tests/test_odict.py sssssssssssssssssssssssssssssssssssssssssss | |
Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/wcs/tests/test_profiling.py.................................. | |
Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/wcs/tests/test_wcs.py ......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................F | |
Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/wcs/tests/test_wcsprm.py ..................................................................................... | |
==================================== ERRORS ==================================== | |
ERROR collecting Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/constants/tests/test_constant.py | |
_pytest.runner:99: in __init__ | |
> ??? | |
_pytest.main:290: in _memocollect | |
> ??? | |
_pytest.main:214: in _memoizedcall | |
> ??? | |
_pytest.main:290: in <lambda> | |
> ??? | |
_pytest.python:175: in collect | |
> ??? | |
_pytest.python:99: in fget | |
> ??? | |
_pytest.python:225: in _getobj | |
> ??? | |
_pytest.main:214: in _memoizedcall | |
> ??? | |
_pytest.python:230: in _importtestmodule | |
> ??? | |
py._path.local:529: in pyimport | |
> ??? | |
Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/constants/__init__.py:29: in <module> | |
> for nm, val in sorted(si.__dict__.iteritems()): | |
E AttributeError: 'dict' object has no attribute 'iteritems' | |
=================================== FAILURES =================================== | |
__________________________________ test_fixes __________________________________ | |
def test_fixes(): | |
""" | |
From github issue #36 | |
""" | |
def run(): | |
header = open(os.path.join(ROOT_DIR, 'data', 'nonstandard_units.hdr'), | |
'rb').read() | |
w = wcs.WCS(header) | |
import warnings | |
with warnings.catch_warnings(record=True) as w: | |
warnings.simplefilter("always") | |
run() | |
> assert len(w) == 3 | |
E assert 4 == 3 | |
E + where 4 = len([<warnings.WarningMessage object at 0x104194650>, <warnings.WarningMessage object at 0x104194690>, <warnings.WarningMessage object at 0x1041946d0>, <warnings.WarningMessage object at 0x104194710>]) | |
Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/wcs/tests/test_wcs.py:238: AssertionError | |
========== 1 failed, 895 passed, 44 skipped, 1 error in 9.86 seconds =========== | |
@eteq and @mdboom, these relate to code you wrote in constants/__init__.py and wcs/tests respectively. | |
<author>eteq</author> | |
The ``constants/__init__.py`` error is weird... in principle the 2to3 script should have converted `iteritems` into `items`, and no fix should have been needed. But anyway, this should fix it, because `items` exists on dictionaries in 3.x anyway (and it doesn't alter the functionality at all).
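For illustration, a minimal self-contained sketch of the kind of one-line change being described (``si_dict`` below is just a stand-in for the ``si.__dict__`` namespace from the traceback):
```
# Stand-in values for the si.__dict__ namespace referenced above.
si_dict = {'c': 299792458.0, 'G': 6.674e-11}

# Python 2 only -- fails on Python 3, where dict has no iteritems():
#     for nm, val in sorted(si_dict.iteritems()): ...

# Portable -- items() exists on both Python 2 and Python 3:
for nm, val in sorted(si_dict.items()):
    print(nm, val)
```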
<author>astrofrog</author> | |
The constants test now passes correctly, and only the WCS test is failing now. | |
<author>mdboom</author> | |
@astrofrog: for the WCS test, you somehow have an extra warning. I'd like to see what it is. In astropy/wcs/tests/test_wcs.py:test_fixes, can you add under ``for item in w:``, ``print item.message``? And paste the messages displayed here? | |
<author>eteq</author> | |
@astrofrog: I think I know why the constants thing happened - if you take a look at http://packages.python.org/distribute/python3.html, it appears we are supposed to have ``use_2to3 = True`` as an argument in the setup function of ``setup.py`` for it to run 2to3 and convert the source code... I can add that in now in master if you like (or you can), if it won't mess up any of this. | |
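For reference, a minimal sketch of where that flag goes (the metadata below is a placeholder, not the real astropy ``setup.py``):
```
from setuptools import setup

setup(
    name='astropy',
    version='0.0.dev',
    packages=['astropy'],
    # Under Python 3, distribute/setuptools runs 2to3 over the sources at
    # build time when this flag is set; under Python 2 it is ignored.
    use_2to3=True,
)
```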
<author>astrofrog</author> | |
Ah, good catch. Feel free to add it to master! | |
<author>astrofrog</author> | |
@mdboom - the warnings are: | |
unclosed file <_io.BufferedReader name='astropy/wcs/tests/data/nonstandard_units.hdr'> | |
'spcfix' made the change 'Unmatched celestial axes'. This FITS header contains non-standard content. | |
'unitfix' made the change 'Applied unit alias 'HZ ' -> 'Hz''. This FITS header contains non-standard content. | |
'celfix' made the change 'Unmatched celestial axes'. This FITS header contains non-standard content. | |
By the way, how does one get output to print during tests? I had to write the warnings to a file to be able to view them. | |
<author>mdboom</author> | |
@astrofrog: In my experience, whatever is written to stdout/stderr gets displayed right after the traceback with each failing test. Of course, I realised I asked you to put the print statement *after* the failing assert, so obviously that line never got run. In any case, see attached fix. | |
<author>astrofrog</author> | |
It looks like all tests now pass in Python 2.6, 2.7, 3.1 and 3.2. Merging! | |
<author>astrofrog</author> | |
Hmm, actually now that I've merged in the changes I'm getting another error in Python 3.1 and 3.2 which I wasn't getting with @mdboom's branch: | |
=================================== FAILURES =================================== | |
__________________________________ test_fixes __________________________________ | |
def test_fixes(): | |
""" | |
From github issue #36 | |
""" | |
def run(): | |
with open(os.path.join(ROOT_DIR, 'data', 'nonstandard_units.hdr'), | |
'rb') as fd: | |
header = fd.read() | |
w = wcs.WCS(header) | |
import warnings | |
with warnings.catch_warnings(record=True) as w: | |
warnings.simplefilter("always") | |
run() | |
assert len(w) == 3 | |
for item in w: | |
assert issubclass(item.category, wcs.FITSFixedWarning) | |
> if 'unitfix' in item.message: | |
E TypeError: argument of type 'FITSFixedWarning' is not iterable | |
astropy/wcs/tests/test_wcs.py:242: TypeError | |
=============== 1 failed, 897 passed, 44 skipped in 6.91 seconds =============== | |
Is this to do with the addition of the 2to3 option? | |
<author>mdboom</author> | |
Ah, I see it now, too. Yes, indeed it seems 2to3 broke it, but it's a simple fix. Try this commit. | |
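For context, a plausible minimal fix along those lines (not necessarily what the referenced commit does): under Python 3, the recorded ``item.message`` is the warning instance rather than a string, so the membership test needs an explicit ``str()``. A self-contained sketch, using a plain ``UserWarning`` in place of ``FITSFixedWarning``:
```
import warnings

with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")
    warnings.warn("'unitfix' made the change ...", UserWarning)

for item in w:
    # On Python 3, "'unitfix' in item.message" raises TypeError because
    # item.message is the warning instance, not a string; converting to
    # str first makes the substring check work on both 2 and 3.
    if 'unitfix' in str(item.message):
        print(item.message)
```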
<author>astrofrog</author> | |
Seems to work, thanks! Merging. | |
<author>astrofrog</author> | |
All tests now pass in master with 2.6, 2.7, 3.1, and 3.2 on MacOS 10.6.8. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Issue with pastebin option in astropy.test() with Python 3.2 | |
<author>astrofrog</author> | |
As of 7946471129146d93abcbe731171a4a5adbdc3454, when using ``astropy.test(pastebin='failed')``, an exception related to XML is raised: | |
============================================================================ Sending information to Paste Service ============================================================================= | |
--------------------------------------------------------------------------- | |
AttributeError Traceback (most recent call last) | |
/Users/tom/<ipython-input-3-144d52ea8b0a> in <module>() | |
----> 1 astropy.test(pastebin='failed') | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/tests/helper.py in run_tests(module, args, plugins, verbose, pastebin, remote_data) | |
121 all_args += ' --remotedata' | |
122 | |
--> 123 return pytest.main(args=all_args, plugins=plugins) | |
124 | |
125 | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.core in main(args, plugins) | |
455 config = hook.pytest_cmdline_parse( | |
456 pluginmanager=_pluginmanager, args=args) | |
--> 457 exitstatus = hook.pytest_cmdline_main(config=config) | |
458 except UsageError: | |
459 e = sys.exc_info()[1] | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.core in __call__(self, **kwargs) | |
409 def __call__(self, **kwargs): | |
410 methods = self.hookrelay._pm.listattr(self.name) | |
--> 411 return self._docall(methods, kwargs) | |
412 | |
413 def pcall(self, plugins, **kwargs): | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.core in _docall(self, methods, kwargs) | |
420 mc = MultiCall(methods, kwargs, firstresult=self.firstresult) | |
421 try: | |
--> 422 res = mc.execute() | |
423 if res: | |
424 self.trace("finish", self.name, "-->", res) | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.core in execute(self) | |
338 method = self.methods.pop() | |
339 kwargs = self.getkwargs(method) | |
--> 340 res = method(**kwargs) | |
341 if res is not None: | |
342 self.results.append(res) | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.main in pytest_cmdline_main(config) | |
88 | |
89 def pytest_cmdline_main(config): | |
---> 90 return wrap_session(config, _main) | |
91 | |
92 def _main(config, session): | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.main in wrap_session(config, doit) | |
82 if initstate >= 2: | |
83 config.hook.pytest_sessionfinish(session=session, | |
---> 84 exitstatus=session.exitstatus) | |
85 if initstate >= 1: | |
86 config.pluginmanager.do_unconfigure(config) | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.core in __call__(self, **kwargs) | |
409 def __call__(self, **kwargs): | |
410 methods = self.hookrelay._pm.listattr(self.name) | |
--> 411 return self._docall(methods, kwargs) | |
412 | |
413 def pcall(self, plugins, **kwargs): | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.core in _docall(self, methods, kwargs) | |
420 mc = MultiCall(methods, kwargs, firstresult=self.firstresult) | |
421 try: | |
--> 422 res = mc.execute() | |
423 if res: | |
424 self.trace("finish", self.name, "-->", res) | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.core in execute(self) | |
338 method = self.methods.pop() | |
339 kwargs = self.getkwargs(method) | |
--> 340 res = method(**kwargs) | |
341 if res is not None: | |
342 self.results.append(res) | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.terminal in pytest_sessionfinish(self, exitstatus, __multicall__) | |
316 self.summary_errors() | |
317 self.summary_failures() | |
--> 318 self.config.hook.pytest_terminal_summary(terminalreporter=self) | |
319 if exitstatus == 2: | |
320 self._report_keyboardinterrupt() | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.core in __call__(self, **kwargs) | |
409 def __call__(self, **kwargs): | |
410 methods = self.hookrelay._pm.listattr(self.name) | |
--> 411 return self._docall(methods, kwargs) | |
412 | |
413 def pcall(self, plugins, **kwargs): | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.core in _docall(self, methods, kwargs) | |
420 mc = MultiCall(methods, kwargs, firstresult=self.firstresult) | |
421 try: | |
--> 422 res = mc.execute() | |
423 if res: | |
424 self.trace("finish", self.name, "-->", res) | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.core in execute(self) | |
338 method = self.methods.pop() | |
339 kwargs = self.getkwargs(method) | |
--> 340 res = method(**kwargs) | |
341 if res is not None: | |
342 self.results.append(res) | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.pastebin in pytest_terminal_summary(terminalreporter) | |
49 if tr.config.option.debug: | |
50 terminalreporter.write_line("xmlrpcurl: %s" %(url.xmlrpc,)) | |
---> 51 serverproxy = getproxy() | |
52 for rep in terminalreporter.stats.get('failed'): | |
53 try: | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/_pytest.pastebin in getproxy() | |
39 | |
40 def getproxy(): | |
---> 41 return py.std.xmlrpclib.ServerProxy(url.xmlrpc).pastes | |
42 | |
43 def pytest_terminal_summary(terminalreporter): | |
/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.0dev_r255-py3.2-macosx-10.6-x86_64.egg/astropy/extern/pytest.py/py._std in __getattr__(self, name) | |
13 m = __import__(name) | |
14 except ImportError: | |
---> 15 raise AttributeError("py.std: could not import %s" % name) | |
16 return m | |
17 | |
AttributeError: py.std: could not import xmlrpclib | |
@jiffyclub - Is this fixable, or do we need to report this as a bug in py.test? | |
<author>jiffyclub</author> | |
My guess is a bug, but I'm not an expert in Python 3. Could you try it with an installed py.test, instead of the astropy packaged py.test, and see whether the same thing happens? | |
<author>embray</author> | |
This would be a py.test bug I think. In Python 3 xmlrpclib became just xmlrpc. So py.test should be doing a different import for Python 3. | |
<author>jiffyclub</author> | |
I've filed an issue with py.test on this: https://bitbucket.org/hpk42/pytest/issue/87/pastebin-fails-under-python-3 | |
<author>eteq</author> | |
Is this something we can monkeypatch to get around, or should we just close this for now? (or wait until py.test fixes it?)
<author>jiffyclub</author> | |
It may be possible to fix this in our packaged py.test by getting and fixing the source code, then regenerating the stand-alone script. I will give that a shot and see what happens. If I get it working I can also give the fix back to py.test. | |
<author>astrofrog</author> | |
@eteq - I think we should definitely keep this ticket open until the issue is fixed either by us or by the py.test developers. | |
@jiffyclub - great! | |
<author>jiffyclub</author> | |
This issue has been fixed in the py.test trunk but I don't know when the next official release will be. I haven't tested it yet with our py.test standalone script due to problems getting the latest astropy built on my computer. Once I've verified that the standalone script is working in both Python 2 and Python 3 I'll open a pull request with the new py.test script and we can close this one, unless we want to leave it open until pastebin also works in a py.test official release. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Revision number in package name | |
<author>astrofrog</author> | |
This has been discussed a bit before but I just wanted to open a ticket to discuss this formally. The inclusion of a 'revision number' in the package name is causing site-packages to get cluttered up: | |
astropy-0.0dev_r153-py2.7-macosx-10.6-x86_64.egg/ | |
astropy-0.0dev_r220-py2.7-macosx-10.6-x86_64.egg/ | |
astropy-0.0dev_r221-py2.7-macosx-10.6-x86_64.egg/ | |
astropy-0.0dev_r227-py2.7-macosx-10.6-x86_64.egg/ | |
astropy-0.0dev_r255-py2.7-macosx-10.6-x86_64.egg/ | |
astropy-0.0dev_r256-py2.7-macosx-10.6-x86_64.egg/ | |
astropy-0.0dev_r277-py2.7-macosx-10.6-x86_64.egg/ | |
astropy-0.0dev_r957-py2.7-macosx-10.6-x86_64.egg/ | |
and I keep having to clear this out manually. I don't think the revision number should be part of the package name, especially since it is not even a real revision number. If one imagines a branch of astropy that is 10 commits ahead of master which is on r100, then if I install that branch I will get r110. However, if a new commit is made on master, and I install that, then the revision number will be r101 which will appear older than my branch. Anyway, all this to say that I think that in the above example there should only be one package installed, which is: | |
astropy-0.0dev-py2.7-macosx-10.6-x86_64.egg/ | |
In the scenario I described above, this gets overwritten by whatever the latest 0.0 developer version is. I think that the git hash could still be stored in astropy, e.g. ``astropy.__githash__``.
<author>embray</author> | |
I'd say use `setup.py develop` instead. The .egg-link file doesn't have a version attached to it, so it doesn't add any clutter. For testing installed versions use a virtualenv. | |
<author>eteq</author> | |
I agree with @iguananaut - that also makes testing easier because ``setup.py develop`` compiles the C extensions in-place. | |
The problem with not including the revision number is that sometimes Python gets confused and refuses to install a newer version unless you force it or clear out older versions. Maybe this isn't a problem anymore with distribute, but I know I had tons of problems with this when testing with easy_install, and the revision numbers bypass those problems.
<author>eteq</author> | |
@astrofrog - should I close this, or do we want to gather more opinions on the topic via the list, or what? | |
<author>astrofrog</author> | |
I still think that in the long term, revision numbers alone might be confusing. What if someone tells you they have an issue with r123 - how do you know what the actual git commit is? There are two ways around this issue that I can see: | |
* Use a timestamp-style counter instead, e.g. ``0.1.0dev-20111115125049`` would be equivalent to ``703ed97ad55fb4272cdf900d299ef8e827ce1178``, and given that timestamp, I can easily find the commit. This also helps with branches: two branches with the same number of commits since branching will still have different timestamps.
* Add the first 6 characters of the git hash *after* the revision number: ``0.1.0dev-r121-703ed9``, which preserves alphabetical order while also letting us see what the git version is without requiring an extra ``__githash__`` variable or something like that.
If you want to keep revision numbers, shouldn't we have padding zeros? I.e. on some systems, will ``r2`` not come after ``r121``? | |
<author>astrofrog</author> | |
My second suggestion is similar to the one described here: http://stackoverflow.com/questions/514188/how-would-you-include-the-current-commit-id-in-a-git-projects-files/514668#514668 | |
Maybe we could do version number, followed by revision number, but without r (which reminds me of svn), then followed by part of the git hash? For example: ``0.1.0dev-121-703ed9`` (or ``0.1.0dev-000121-703ed9`` if leading zeros are needed). | |
<author>embray</author> | |
An easier solution than all this would be to just have the version.py template contain a copy of the full git hash. I vastly prefer having a short SVN-like "revision number" in the version string if any at all. But since we can put anything we want in version.py it couldn't hurt to add a `git_commit_hash` variable to version.py. | |
<author>astrofrog</author> | |
@iguananaut - regarding the revision number, should we include padding zeros? Also, I'd have a preference for removing the ``r``, but that's just a personal preference. | |
<author>embray</author> | |
I'd be fine with dropping the -r. Actually, [PEP-386](http://www.python.org/dev/peps/pep-0386/) wants the format `.dev<revision>` so maybe we should actually update our version strings to follow that template. This means putting a '.' before 'dev' (something I've never been fond of, but not for any rational reason), and dropping the '-r'. | |
I don't see why padding zeros. PEP-386 doesn't require them, and most shells will sort just fine without them. | |
<author>eteq</author> | |
@astrofrog - To clarify, the method of comparing version numbers that's important here is the one used by distribute/setuptools. The easiest way to test this is: | |
>>> from pkg_resources import parse_version | |
>>> parse_version('1.2dev-r1023')>parse_version('1.2dev-r123') | |
True | |
>>> parse_version('1.2')>parse_version('1.2dev-r1023') | |
True | |
So the zero-padding is not needed because setuptools' algorithm understands what we mean by this syntax. Note that distutils2/packaging will use a similar but slightly different scheme in the NormalizedVersion class: http://www.python.org/dev/peps/pep-0386/ | |
<author>eteq</author> | |
I do see your point about including the git revision number, though. I like @iguananaut's idea of putting it in the frozen version.py as a special variable - that way it's easy to ask for it if needed. I guess I'm fine either way with adding it to the version number or not, but it definitely has to be e.g. "dev-r123-abd434d" or similar, otherwise the setuptools algorithm will break (and it also makes it rather more easily human-readable) | |
<author>eteq</author> | |
@iguananaut - it appears PEP386 is compatible with setuptools, so it does make sense to switch to that scheme if we're going to be changing it, anyway | |
<author>astrofrog</author> | |
+1 to following PEP386 and +1 to having a variable e.g. ``__githash__`` alongside ``__version__``! | |
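For concreteness, a minimal sketch of what the frozen ``version.py`` might then expose (the names and values here are illustrative, not the actual template):
```
# astropy/version.py -- illustrative sketch only; this file would be
# generated at build time from the git state.
version = '0.1.0.dev121'   # PEP 386 style: <release>.dev<revision count>
githash = '703ed97ad55fb4272cdf900d299ef8e827ce1178'
release = False
```
The package ``__init__.py`` could then simply re-export these as ``__version__`` and ``__githash__``.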
<author>eteq</author> | |
@astrofrog,@iguananaut - I think this implements everything requested here. We should definitely not merge this until we've sent an announcement through astropy-dev that this will screw up the version numbers for current installations, though. And @astrofrog, you might check to make sure this doesn't do something weird for the affiliated packages...
<author>eteq</author> | |
I think I addressed these comments and rebased against master - I'll put out an announcement on astropy-dev and we can give it a day or two before merging? | |
<author>eteq</author> | |
This has hopefully had time for people to process, so I am merging now. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Running tests using py.test command-line script | |
<author>astrofrog</author> | |
I've tried running the tests using the command-line py.test script with version 4ddb3d71e2dfa557939bb99de4922151eb3bb06d of the core astropy repository: | |
git clone git://github.com/astropy/astropy.git astropy_testing | |
cd astropy_testing | |
cd astropy | |
py.test | |
If I do this, then 809 tests fail. I just wanted to check whether this is ok, or whether this should skip tests that can not be run without compiling (i.e. should it be mostly skipped tests rather than failures). | |
<author>embray</author> | |
I didn't even know there was a py.test command line script. I'd lean toward not using it at all: nosetests has the same problem, which is why nose includes the `setup.py nosetests` command: That's the only way it can find out about extension modules and make sure they're importable. | |
It's conceivable that there's a way to write a plugin that would make this work. But I'd lean toward "don't do that". | |
<author>mdboom</author> | |
This will also fail with code that requires 2to3. Between code that requires C extensions and code requiring 2to3, I wonder if there will be much that can be usefully tested this way, and whether it's worth the effort to declare tests as skipped. Personally, I'd rather encourage a single way to test that relies on a built and working tree, rather than supporting two different approaches, one of which can't really be expected to work. (i.e. saying we support this increases our testing burden). | |
Another reason this approach won't work later is that it chops off the front of the namespace. When I was testing this on the vo branch, there is now "astropy.io", so "import io" actually imports "astropy.io" rather than the stdlib "io", which obviously doesn't work too well. | |
Personally, I think `python setup.py test` is great and should be the encouraged usage -- it performs all of the necessary steps to build and then test. (Though I just noticed it doesn't do 2to3 -- we probably will want to fix that at some point, too. It builds C extensions inplace and then runs that tree. A better approach might be to build normally and then run tests in the build tree). | |
That's just my 2c. | |
<author>embray</author> | |
I definitely plan on fixing the test command to do 2to3. It's a little tricky because it requires messing with the path, but I know it can be done. I just wish tools like nose and py.test had built in support for 2to3. You'd think it's a common enough use case that there'd be a fix for it by now. Hmmm... | |
<author>mdboom</author> | |
FWIW, here's what I do: | |
``` | |
python3.2 setup.py build | |
py.test-3.2 build/lib.linux-x86_64-3.2 | |
``` | |
It should be easy enough to modify the test command to do the equivalent. The nice thing about this is it tests exactly what will be installed -- if it works when installed, it should work here without special cases to handle C extensions, 2to3 etc. | |
<author>embray</author> | |
That shouldn't be too bad then, especially since the 'test' command, as written, runs the tests in a separate process anyways. | |
<author>eteq</author> | |
One important thing I just discovered: currently build is *not* running 2to3 as far as I can tell - we need to add in the ``use_2to3 = True`` keyword in the call to ``setup()``. If I do that, 2to3 seems to run fine, but it *doesn't* use the 2to3'ed version when I run ``python setup.py test``. The docs for distribute (http://packages.python.org/distribute/python3.html) seem to indicate that's supposed to work, though... so maybe there's some way to adjust our test command to inherit from distribute in a way that takes care of this?
<author>eteq</author> | |
Oh, but as I said in some pull requests, I think the command line script is a very good thing for packages that don't have any C code (or when you're not running in py 3). It makes it easy to run the tests for a given package by cd'ing into the directory and doing ``py.test tests``. I think we want to leave this option available, even if we don't generally recommend its use.
<author>embray</author> | |
There's nothing gained from subclassing the built-in test command since it is not at all compatible with py.test. Adding in support for 2to3 can be done in more or less the same way it does though. | |
I'm surprised we didn't already have use_2to3 in setup.py--I could've sworn I'd seen it there before, but maybe I just added it on one of my own branches. | |
<author>embray</author> | |
On second look, I suppose I could subclass distribute's test command just so I don't have to rewrite its `with_project_on_sys_path()` method. Though everything else would have to be overridden. | |
<author>eteq</author> | |
Well, if it's easier to leave things alone and just figure out a way to convince the ``test`` command to use the version built for 3.x, that's fine too. I don't think there's much point in trying to be too compatible with distutils given how confusing it already is. As long as 2to3 is run before the tests are run (and they get run on the 2to3ed code), the route there doesn't seem too important. | |
<author>eteq</author> | |
Oh, and I added use_2to3 in master in commit cece59d07a07d76919a695cbb7a89dd26659e9fe
<author>eteq</author> | |
So if you do ``setup.py test`` and then ``py.test``, or ``setup.py develop`` and then ``py.test``, do the failures go away? If so, I think we can close this as "expected behavior". Or alternatively, we could add a note in the testing guidelines about this (e.g. something like "if you want to use the command line py.test, be sure to use developer mode").
<author>eteq</author> | |
@astrofrog - sorry, I was unclear in the last comment - it was directed at you. Assuming that works, we can probably close this? | |
<author>astrofrog</author> | |
@eteq: if I do ``python setup.py test``, all tests pass. If I then do ``py.test`` in the root directory all tests pass but I get 13 errors that are related to collecting tests in ``build``: | |
ERROR collecting build/lib.macosx-10.6-x86_64-2.7/astropy/constants/tests/test_constant.py | |
import file mismatch: | |
imported module 'astropy.constants.tests.test_constant' has this __file__ attribute: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/astropy/constants/tests/test_constant.py | |
which is not the same as the test file we want to collect: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/build/lib.macosx-10.6-x86_64-2.7/astropy/constants/tests/test_constant.py | |
HINT: use a unique basename for your test file modules | |
ERROR collecting build/lib.macosx-10.6-x86_64-2.7/astropy/io/vo/tests/converter_test.py | |
import file mismatch: | |
imported module 'astropy.io.vo.tests.converter_test' has this __file__ attribute: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/astropy/io/vo/tests/converter_test.py | |
which is not the same as the test file we want to collect: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/build/lib.macosx-10.6-x86_64-2.7/astropy/io/vo/tests/converter_test.py | |
HINT: use a unique basename for your test file modules | |
ERROR collecting build/lib.macosx-10.6-x86_64-2.7/astropy/io/vo/tests/exception_test.py | |
import file mismatch: | |
imported module 'astropy.io.vo.tests.exception_test' has this __file__ attribute: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/astropy/io/vo/tests/exception_test.py | |
which is not the same as the test file we want to collect: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/build/lib.macosx-10.6-x86_64-2.7/astropy/io/vo/tests/exception_test.py | |
HINT: use a unique basename for your test file modules | |
ERROR collecting build/lib.macosx-10.6-x86_64-2.7/astropy/io/vo/tests/tree_test.py | |
import file mismatch: | |
imported module 'astropy.io.vo.tests.tree_test' has this __file__ attribute: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/astropy/io/vo/tests/tree_test.py | |
which is not the same as the test file we want to collect: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/build/lib.macosx-10.6-x86_64-2.7/astropy/io/vo/tests/tree_test.py | |
HINT: use a unique basename for your test file modules | |
ERROR collecting build/lib.macosx-10.6-x86_64-2.7/astropy/io/vo/tests/ucd_test.py | |
import file mismatch: | |
imported module 'astropy.io.vo.tests.ucd_test' has this __file__ attribute: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/astropy/io/vo/tests/ucd_test.py | |
which is not the same as the test file we want to collect: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/build/lib.macosx-10.6-x86_64-2.7/astropy/io/vo/tests/ucd_test.py | |
HINT: use a unique basename for your test file modules | |
ERROR collecting build/lib.macosx-10.6-x86_64-2.7/astropy/io/vo/tests/util_test.py | |
import file mismatch: | |
imported module 'astropy.io.vo.tests.util_test' has this __file__ attribute: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/astropy/io/vo/tests/util_test.py | |
which is not the same as the test file we want to collect: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/build/lib.macosx-10.6-x86_64-2.7/astropy/io/vo/tests/util_test.py | |
HINT: use a unique basename for your test file modules | |
ERROR collecting build/lib.macosx-10.6-x86_64-2.7/astropy/io/vo/tests/vo_test.py | |
import file mismatch: | |
imported module 'astropy.io.vo.tests.vo_test' has this __file__ attribute: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/astropy/io/vo/tests/vo_test.py | |
which is not the same as the test file we want to collect: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/build/lib.macosx-10.6-x86_64-2.7/astropy/io/vo/tests/vo_test.py | |
HINT: use a unique basename for your test file modules | |
ERROR collecting build/lib.macosx-10.6-x86_64-2.7/astropy/tests/tests/test_run_tests.py | |
import file mismatch: | |
imported module 'astropy.tests.tests.test_run_tests' has this __file__ attribute: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/astropy/tests/tests/test_run_tests.py | |
which is not the same as the test file we want to collect: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/build/lib.macosx-10.6-x86_64-2.7/astropy/tests/tests/test_run_tests.py | |
HINT: use a unique basename for your test file modules | |
ERROR collecting build/lib.macosx-10.6-x86_64-2.7/astropy/tests/tests/test_skip_remote_data.py | |
import file mismatch: | |
imported module 'astropy.tests.tests.test_skip_remote_data' has this __file__ attribute: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/astropy/tests/tests/test_skip_remote_data.py | |
which is not the same as the test file we want to collect: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/build/lib.macosx-10.6-x86_64-2.7/astropy/tests/tests/test_skip_remote_data.py | |
HINT: use a unique basename for your test file modules | |
ERROR collecting build/lib.macosx-10.6-x86_64-2.7/astropy/utils/tests/test_odict.py | |
import file mismatch: | |
imported module 'astropy.utils.tests.test_odict' has this __file__ attribute: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/astropy/utils/tests/test_odict.py | |
which is not the same as the test file we want to collect: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/build/lib.macosx-10.6-x86_64-2.7/astropy/utils/tests/test_odict.py | |
HINT: use a unique basename for your test file modules | |
ERROR collecting build/lib.macosx-10.6-x86_64-2.7/astropy/wcs/tests/test_profiling.py | |
import file mismatch: | |
imported module 'astropy.wcs.tests.test_profiling' has this __file__ attribute: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/astropy/wcs/tests/test_profiling.py | |
which is not the same as the test file we want to collect: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/build/lib.macosx-10.6-x86_64-2.7/astropy/wcs/tests/test_profiling.py | |
HINT: use a unique basename for your test file modules | |
ERROR collecting build/lib.macosx-10.6-x86_64-2.7/astropy/wcs/tests/test_wcs.py | |
import file mismatch: | |
imported module 'astropy.wcs.tests.test_wcs' has this __file__ attribute: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/astropy/wcs/tests/test_wcs.py | |
which is not the same as the test file we want to collect: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/build/lib.macosx-10.6-x86_64-2.7/astropy/wcs/tests/test_wcs.py | |
HINT: use a unique basename for your test file modules | |
ERROR collecting build/lib.macosx-10.6-x86_64-2.7/astropy/wcs/tests/test_wcsprm.py | |
import file mismatch: | |
imported module 'astropy.wcs.tests.test_wcsprm' has this __file__ attribute: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/astropy/wcs/tests/test_wcsprm.py | |
which is not the same as the test file we want to collect: | |
/Users/tom/tmp/astropy_testing/astropy/astropy_testing_new/build/lib.macosx-10.6-x86_64-2.7/astropy/wcs/tests/test_wcsprm.py | |
HINT: use a unique basename for your test file modules | |
If I go into ``astropy`` and run ``py.test``, all tests pass. | |
If I re-clone and do ``python setup.py develop`` then ``py.test`` in the root then all tests pass. | |
<author>eteq</author> | |
There are two solutions to this: | |
* Tell anyone who uses py.test at the command line to call it as ``py.test astropy`` | |
* Add a line in setup.cfg that tells py.test not to search for tests in ``build`` (sketched below). I tested this just now and it prevents the erroring tests you report here.
My preference is for option 2 - if that's ok with you, I'll make this change in master and close this issue. | |
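For reference, a sketch of the kind of ``setup.cfg`` entry meant here, using py.test's ini-file options (the exact directory list is an assumption):
```
[pytest]
norecursedirs = build docs/_build
```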
<author>astrofrog</author> | |
I like option 2 as well. I think we should also make sure that we explicitly say somewhere in the docs that one cannot simply run py.test in the root directory before setup. | |
<author>eteq</author> | |
I added some documentation about this in 92c4a25b36b2ff6d39b4004f8b734879b5767a3f - @jiffyclub, you may want to take note of these changes and include them in the changes you're going to make to the testing docs with the pep8 addition. | |
</issue> | |
<issue> | |
<author>wkerzendorf</author> | |
no module named version in setup.py | |
<author>wkerzendorf</author> | |
Hi Guys, | |
I fetched from upstream (just now) and it seemed to have started out in the right direction, but then met an unfortunate end: | |
python setup.py develop | |
Downloading http://pypi.python.org/packages/source/d/distribute/distribute-0.6.19.tar.gz | |
Extracting in /var/folders/zc/cbprg2bj1l50c3y6sfzf64wm0000gn/T/tmpXuKWeL | |
Now working in /var/folders/zc/cbprg2bj1l50c3y6sfzf64wm0000gn/T/tmpXuKWeL/distribute-0.6.19 | |
Building a Distribute egg in /Users/wkerzend/scripts/python/astropy | |
/Users/wkerzend/scripts/python/astropy/distribute-0.6.19-py2.7.egg | |
Traceback (most recent call last): | |
File "setup.py", line 13, in <module> | |
import astropy | |
File "/Users/wkerzend/scripts/python/astropy/astropy/__init__.py", line 3, in <module> | |
from astropy.version import version as __version__ | |
ImportError: No module named version | |
I have never tried ``python setup.py develop`` before, but I heard it somehow sets the package path so I can continue developing while my astropy stays in place.
Cheers | |
Wolfgang | |
<author>eteq</author> | |
Are you sure you're actually on the upstream version? The error you're indicating here was present in a much older version but was fixed a while back (unless it has returned...) - and it looks like your fork is about a month old... if you do ``git show HEAD``, what does it give you?
<author>wkerzendorf</author> | |
You are right! But I did do a ``git fetch upstream``. Hmm...
Merge: fe1481a 782d606 | |
Author: Erik Tollerud <erik.tollerud@gmail.com> | |
Date: Fri Oct 7 11:18:58 2011 -0700 | |
Merge pull request #7 from astrofrog/clean-layout | |
Tidies up package to be PEP8 compliant and fixes a few typos. | |
<author>eteq</author> | |
``git fetch upstream`` doesn't update the local source code - it just retrieves it from the server for later use. You want to do ``git pull upstream master`` (or ``git fetch upstream`` followed by ``git merge upstream/master`` - basically the same thing). After that, if you want it to be reflected in your fork on GitHub, do ``git push origin master`` (or maybe just ``git push``, depending on how you set things up).
<author>wkerzendorf</author> | |
Yes, I've got that worked out now.
Thanks! | |
</issue> | |
<issue> | |
<author>wkerzendorf</author> | |
AstroTime implementation first draft | |
<author>wkerzendorf</author> | |
Hello Guys, | |
With all the suggestions onboard I have started to do the astrotime implementation. | |
So here's what works: | |
from astropy import astrotime | |
import datetime | |
mytime = astrotime.AstroTime.from_date_gregorian(datetime.datetime(2005, 8, 25, 12, 0, 0))
mytime.to_jd() | |
some number comes out. | |
I have looked at SOFA and the Astronomical Almanac and used their algorithms, to make sure they are right.
I have also added tests to the astrotime directory and they check out with pytest. | |
There's also something awry with the documentation, but that's because I'm new to sphinx.
Ah, very important: I've thought about what Tom A. and I think Prasanth mentioned: speed. Yes, I completely agree that our current object-based implementation is not optimized for speed, and I have thought about other implementations. I think a solution for this problem is to say: please use np.float64 MJD as a standard if you need the speed, and use this class to convert when speed and memory footprint are not crucial. I think we should put this in the documentation.
What do you guys think? | |
Cheers | |
Wolfgang | |
P.S.: I have also given you my whole astropy, because I don't know how to mark specific files for pull-requests. | |
Please advise! | |
<author>eteq</author> | |
You seem to have included ``distribute-0.6.19.tar.gz`` and ``.gitignore.orig`` as part of this pull request, which I think you did not intend. You'll want to remove these two files and then combine the commits using ``git rebase --interactive``. Then, when you push up to your branch, you'll want to use ``git push -f``, because it will involve re-writing history.
(As an FYI, it might be useful for you to put this in a separate branch on your repository, rather than doing your work from master. That generally makes it easier for you to organize your contributions). | |
<author>wkerzendorf</author> | |
There's a fundamental reason that I included the __version__ string: I copied the __init__.py from somewhere else ;-)
We can take out whatever is unnecessary.
<author>wkerzendorf</author> | |
Yes, I also meant to leave these files out. So you suggest using different branches, I think I'll read up on that. | |
<author>eteq</author> | |
Oh, and you should add the license boilerplate ``# Licensed under a 3-clause BSD style license - see LICENSE.rst`` to the top of every file that isn't blank. | |
<author>eteq</author> | |
What I've been doing is use my own master just to track the upstream master, and do most real work from branches. At the command line, you change to whatever branch you want to start from (typically master), and then you can do ``git checkout -b branchname``. Then you do ``git push origin branchname`` and it will update your github account with the new branch. After that, if you want, you can do ``git branch --set-upstream origin/branchname`` and then just ``git push`` and ``git pull`` will work. | |
The IPython people actually recommend completely removing your forked master, because you can always do ``git checkout upstream/master`` to get the upstream version... so that's another approach.
If you want to switch to doing this from another branch, just let me know and I'll close this one so you can instead open a different pull request. | |
<author>wkerzendorf</author> | |
Working on it... Give me some time - Should I close this or can I change the pull request on the fly? | |
Cheers | |
W | |
<author>eteq</author> | |
If you make changes to your master branch, they will immediately show up here. But if you want to switch to a different branch, we'll have to close this one and open a new one. Once you decide which, I'll give you more feedback on the content itself.
<author>wkerzendorf</author> | |
So I'll close this pull request and open another pull request with an astrotime branch | |
</issue> | |
<issue> | |
<author>embray</author> | |
Test with 2to3 | |
<author>embray</author> | |
Adds support for running the tests in Python 3 after they've been run through 2to3. I tried doing this through a subclass of setuptools' builtin test command as discussed in another thread, but it just wasn't working out as I'd hoped. Rather than fight with it, it was faster to keep the code mostly as it is: then adding 2to3 support was simply a matter of ensuring that 2to3 is called first, and that the tests are run out of the 2to3'd build. | |
As a bonus, I added a test that tests that the tests are 2to3'd :) This was necessary since none of the tests currently in master actually require 2to3. | |
<author>eteq</author> | |
One oddity I've noticed here: Before this, ``python setup.py develop`` followed by ``python setup.py test`` works fine, and the only thing I see in `build` is the `build/tmp...` directory. After this change, doing the exact same thing populates `build/lib...` with a bunch of files. Presumably this is because the build command now gets run. I don't really care too much whether or not there's files in build, but I just wanted to make sure this is expected behavior. | |
<author>embray</author> | |
Yes, this would be expected. On Python 2 it's not necessary to do the build_py command at all, and in fact I could just take it out. But for 2to3 it is necessary. I just figure leave it in even for Python 2--there's no harm in it really, and it's more consistent that way. | |
</issue> | |
<issue> | |
<author>wkerzendorf</author> | |
Astrotime | |
<author>wkerzendorf</author> | |
Hello Guys, | |
With all the suggestions onboard I have started to do the astrotime implementation. | |
So here's what works: | |
from astropy import astrotime | |
import datetime | |
mytime = astrotime.AstroTime.from_date_gregorian(datetime.datetime(2005, 8, 25, 12, 0, 0))
mytime.to_jd() | |
some number comes out. | |
I have looked at SOFA and the Astronomical Almanac and used their algorithms, to make sure they are right.
I have also added tests to the astrotime directory and they check out with pytest. | |
There's also something awry with the documentation, but that's because I'm new to sphinx.
Ah, very important: I've thought about what Tom A. and I think Prasanth mentioned: speed. Yes, I completely agree that our current object-based implementation is not optimized for speed, and I have thought about other implementations. I think a solution for this problem is to say: please use np.float64 MJD as a standard if you need the speed, and use this class to convert when speed and memory footprint are not crucial. I think we should put this in the documentation.
What do you guys think? | |
Cheers, | |
Wolfgang | |
<author>eteq</author> | |
I see you still have distribute-0.6.19.tar.gz in your history... this is a bit troubling because it's a binary file, and it would be better to keep that sort of thing out of the history. This would normally be easily fixed with a rebase, but because you pulled in the master branch in the middle of your commits, it needs some complex git-fu to fix. That means it's probably best dealt with by whoever merges it into the astropy repo - so if that's not me, whoever it is, remember to clear out the distribute-0.6.19.tar.gz stuff from the history.
<author>eteq</author> | |
Having looked at it a bit more, I'm not sure it's possible to cleanse distribute from the history - because you pulled in a revision from so far back, a lot of strange things are happening, and it's proving impossible to rebase. It might actually be better if you reset this branch to just have all the modifications you made in a single commit. You can do this with ``git reset --soft upstream/master`` (make sure you've done ``git fetch upstream`` before this, though). That will reset your git status to the current master, but leave all the local changes you have in place (and also staged for commit) - then you can just do ``git commit -m "whatever"`` and it will leave behind a single commit with these changes. Then do ``git push -f origin`` and it should update the pull request here to only have that one commit.
<author>mdboom</author> | |
On the speed question: what you might want to do is to allow AstroTime to store a vector of times and have all of the operations work on those. Then the user could pass in two numpy arrays (rather than just two scalars) and run all of the conversions vectorized on those. Then if needing to convert a number of times at once the overhead should be much less. The interface wouldn't really change if the user passes in scalars. | |
<author>astrofrog</author> | |
I agree with @mdboom, and this is a point I raised regarding coordinate conversion too during a conversation with @eteq. I believe that these containers should allow for n-dimensional numpy arrays as well as scalars, as it's *much* more efficient to have a single AstroTime object with 10^6 times than 10^6 AstroTime objects. | |
<author>wkerzendorf</author> | |
@astrofrog, @mdboom, So that raises a question: Should all coordinates (including time) be able to store multiple sets of coordinates? | |
Regardless of storing multiple coordinates in an array object: I maintain it would be good to suggest plain MJD np.float64 if you need speed (and don't mind the loss of accuracy), rather than storing them as astrotime objects (which will always be slower and have a higher memory footprint).
<author>wkerzendorf</author> | |
@eteq I'll try the black art of the git reset ;-) | |
<author>wkerzendorf</author> | |
@mdboom: I know that pytest can run pep8 compliance tests (that's what their webpage says). How can I initiate that for my astrotime?
<author>mdboom</author> | |
@wkerzendorf: Don't know about using pep8 within py.test. Personally, I installed the pep8 script from PyPI and then I run that at the commandline. It also integrates with flymake in emacs to give me pep8 warnings on the fly -- whatever editor you use may also have that feature. | |
<author>mdboom</author> | |
I think we should strive to support n-dimensional arrays in any coordinate class we create. | |
I think @astrofrog's point was that if we support n-dimensional arrays, the overhead of creating lots of class instances goes away. There is still overhead using this two-number representation here and there may be cases where the user wants to use a single float instead, but that overhead has to do with those numbers, not Python class overhead. We may want to have another class similar to this whose internal representation is a single float64 per value and have functions to convert between the two. | |
<author>wkerzendorf</author> | |
@mdboom: http://pypi.python.org/pypi/pytest-pep8. Maybe this would be a nice thing to integrate and request to run before people submit (including me ;-) )? | |
<author>wkerzendorf</author> | |
@mdboom, @astrofrog: I'm personally not sold on the n-dimensional idea... yet (and I'm searching feverishly for good counter-arguments). I agree with both of you that it should either take only a scalar or be able to take an N-dimensional array (and not only a 1D array).
My current idea was: for simple time coordinate conversion just use this astrotime class, but for n-dimensional stuff just use an array of mjd floats (but this would result in the loss of precision) and then use AstroTime as required, but that might not be as practical. | |
So my last caveat for this idea is the implementation: should the user be allowed to just put in a scalar or n-dim array and have the program figure out what it is and deal with storage? Or should there be specific class functions, say from_jd_array or from_date_gregorian_array, that deal with this issue?
As I like clean APIs (which I think we should strive for more rigorously in core, than in affil) I am more for the second option, but am happy to be convinced otherwise. | |
<author>mdboom</author> | |
I say: why limit ourselves? Vectorizing code is often a little extra work, true, but the payoffs are great. | |
I think the user should be allowed to put in a scalar or n-dim array for any of the functions and get the same out. Look at most of how numpy's math functions work for guidance. (e.g. `np.sin(0)` and `np.sin([0, 1])`). | |
I think it's fine to have this 2-number representation and a pure floating point representation for time, but they should both have classes, both handle ndarrays and scalars and be easily convertible. They should behave the same way (with the exception of the underlying representation and the accuracy differences that fall out of that). | |
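Purely as a hypothetical sketch of that split (class names are borrowed from the naming discussion further down in this thread; the behaviour is invented for illustration and is not a proposed API):
```
import numpy as np

class TimeBinom(object):
    """Two-number representation: integer day plus fractional day."""
    def __init__(self, jd_int, jd_frac):
        self.jd_int = np.asarray(jd_int, dtype=np.int64)
        self.jd_frac = np.asarray(jd_frac, dtype=np.float64)

    def to_float(self):
        # Collapsing into one float64 is compact but loses precision.
        return TimeFloat(self.jd_int + self.jd_frac)

class TimeFloat(object):
    """Single-float64 representation: smaller, faster, less precise."""
    def __init__(self, jd):
        self.jd = np.asarray(jd, dtype=np.float64)

    def to_binom(self):
        day = np.floor(self.jd)
        return TimeBinom(day.astype(np.int64), self.jd - day)

# Both accept scalars or ndarrays alike and are easily convertible:
t = TimeBinom([2455197, 2455198], [0.25, 0.5])
print(t.to_float().jd)   # -> [ 2455197.25  2455198.5 ]
```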
<author>wkerzendorf</author> | |
@mdboom: You're right: numpy provides this functionality, so why shouldn't we? I'll copy numpy's implementation (then at least I can blame someone else ;-) ).
I think you misunderstood me: the MJD float is not represented in a class (it's just a normal numpy float); I suggest it as a sort of coding guideline. If one needs to convert to a different time format, one can use AstroTime.
<author>wkerzendorf</author> | |
@mdboom: On a totally unrelated question: do you know how they do this in numpy? Do they check for a scalar with np.isscalar and put it in an array? I just checked np.sin, but I think that's written in C. What is the proper numpy implementation to do this?
<author>mdboom</author> | |
You shouldn't have to worry about it: as long as the code is written to handle numpy arrays, it should also work on scalars. Most of your code looks like it should work as-is (having not actually tried it). For example, np.int64() will convert to either a scalar or an array depending on input. I would write a test that uses arrays as inputs, and see what happens.
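To make the scalar-versus-array point concrete, here is a toy conversion written against plain numpy (this uses the standard Fliegel-Van Flandern Julian Day Number formula; the function name is hypothetical and this is not the AstroTime algorithm):
```
import numpy as np

def gregorian_to_jd(year, month, day):
    """Julian Date at 00:00 UT for the given Gregorian calendar date(s)."""
    year = np.asarray(year)
    month = np.asarray(month)
    day = np.asarray(day)
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    jdn = day + (153 * m + 2) // 5 + 365 * y + y // 4 - y // 100 + y // 400 - 32045
    return jdn - 0.5

# The same code path handles a scalar...
print(gregorian_to_jd(2000, 1, 1))                      # 2451544.5
# ...and arrays, with no special-casing inside the function.
print(gregorian_to_jd([2000, 2005], [1, 8], [1, 25]))   # [ 2451544.5  2453607.5]
```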
<author>wkerzendorf</author> | |
Ah before I forget: logging is not implemented yet! | |
<author>eteq</author> | |
I definitely think it is *not* a good idea to recommend people just use numpy mjd arrays as the default time. If we "recommend" that, then some people will use MJD, some will use regular JD, and some will use epoch, so you always have to include all possibilities when you write code. I've ended up doing this with my own science codes more than I like to admit and that's with only one person - imagine what it'd be like if there's no standard! | |
So if we're going to bother having a time class, I agree with @mdboom and @astrofrog that it at least has to have the option of being array-ified.
<author>eteq</author> | |
@wkerzendorf - looks like the reset worked perfectly - thanks! In general, try to avoid merging from master into a feature branch unless you have to (``git rebase master`` often works better).
And did you get the pep8 checker in py.test working? http://pypi.python.org/pypi/pep8/0.6.1 also works, at least until we get the py.test plugin integrated in. | |
Oh, and you said you had docstring issues - I will point out inline specific fixes below, but I suggest you at least glance over http://astropy.org/development/docguide.html for the form we're looking for. This will hopefully be much easier once I have a chance to fix up the docs to autogenerate docstring pages and actually integrate everything into the astropy docs. I also plan to update the docs with easy-to-grab examples of these docstrings. This is my next task after getting config up and running. | |
<author>astrofrog</author> | |
Just a small question regarding the naming of the class - do we really want to call it ``AstroTime`` as opposed to just e.g. ``Time``? It's not like the definition of time is really astronomy-specific. Since, if it were called Time, it would still live in astropy.time, it's clear that if you are importing from there it's an astronomy-related class, and if one were worried about conflicts, one could use ``from astropy.time import Time as AstroTime``. I think for the majority of users, ``Time`` would be sufficient. Any thoughts?
<author>astrofrog</author> | |
Just to play devil's advocate to be sure this is the way we want it implemented: | |
* Do we want to be able to use an instance of AstroTime in mathematical equations, and if so, how do we expect it to behave? | |
* Do we really want a generic AstroTime rather than say a JulianDate class? (which would make it more explicit what it is, and allow it to be used in equations, see above point) | |
* Would overloading the existing datetime object be another possibility, adding methods such as to_jd, to_mjd, etc.? (not necessarily as keen on that, but just wanted to bring it up) | |
<author>eteq</author> | |
As an FYI, I've put up another pull request (#59) describing an alternative approach to this using python longs, and re-using a lot of the AstroTime internals there to make an array-based AstroTime work better. I had been working on this for a while, not knowing that @wkerzendorf was working on it... at any rate, we'll have to decide which of the two to merge, but I think it's probably a good idea to try to combine some pieces of each, as they both have quite different as-implemented functionality. | |
<author>eteq</author> | |
@astrofrog - I see your point - perhaps `Time` does make more sense (fewer letters to type, after all!) | |
<author>astrofrog</author> | |
Just to make it clear where I stand re: my previous points, I think the API I would favor is: | |
from astropy.time import JulianDate | |
I'm not a fan of ``astropy.astrotime`` and I prefer JulianDate, which is really what the class is defining, rather than ``AstroTime``. | |
<author>astrofrog</author> | |
Also, one can then define classes such as ``ModifiedJulianDate`` that get used if someone was to run ``jd.to_mjd()`` | |
<author>mdboom</author> | |
@astrofrog: wrt naming: It's looking like we have some alternative proposals for how to represent time, with different tradeoffs. Given that there may be no clear winner, I think the class names should reflect the representation, e.g. `TimeFloat`, `TimeBinom`, `TimeLong`, etc. If one is a reasonable default, we could alias `Time` to it.
EDIT: I wrote this before your subsequent comments came through. I think what you are suggesting for naming -- `JulianDate` etc. -- is good, however we may need `JulianDateFloat` etc. | |
<author>astrofrog</author> | |
@mdboom: I agree with that. My main request is that we drop the name Astro from the class and sub-module names, which I don't think are necessary. | |
<author>wkerzendorf</author> | |
So I guess there's a lot to comment on: | |
First of all, usage of floats: I do believe that most of the time we can just use the astrotime class (I'm going to refer to it as astrotime to make it clear what I mean, rather than to say I want astrotime to be the name). But I think there are situations where, for memory footprint or speed reasons, people won't want to do so. That is where my suggestion comes in that we should have a recommendation in place.
The only problem I have with Time is that it is dangerously close to a built-in (one uppercase letter). In times of tab completion on an ipython console I think it might be better to have a longer word that makes it explicitely clear what it is. My 2 cents on that issue. | |
The suggestion of different classes for different coordinates: I think it's easier to not have to type check every time we get it as an argument to instantiate a spectrum for example. The implementation of datetime in Python also uses only datetime with conversions to ctime. But I guess we should talk about that one. | |
In addition I have put up a wiki to discuss the different time keeping options. | |
Anyway, I guess that's enough to get started.
<author>mdboom</author> | |
@astrofrog: I think arithmetic operations on these things would be great, and a natural result of using classes etc. They would be particularly handy in the int/float representation since those operations are not direct. | |
<author>wkerzendorf</author> | |
Yes I was planning to implement those. | |
<author>astrofrog</author> | |
@wkerzendorf: then maybe we could use ``JulianDate`` and things like ``JulianDateFloat`` etc. like @mdboom is suggesting. That way we avoid namespace conflicts (that come with ``Time``) and it's a more 'truthful' description of the contents of the class. Since people will naturally expect the units to be days it can also be used in arithmetic operations with non-date objects. | |
Maybe we could also pick one to be the 'default' precision and call it ``JulianDate``? | |
<author>wkerzendorf</author> | |
@astrofrog: Sure sounds good, we can change this. I think the name is somewhat important to the end user to make sure it is used right, so it might be wise to brainstorm on that a bit more (especially with the idea of using arithmetic with non-date objects). | |
@eteq: what happened to your idea of using only a long, as nanoseconds from the big bang? I think that one has some advantages, maybe.
<author>astrofrog</author> | |
@wkerzendorf - not sure nanoseconds since the big bang is possible, since we don't know how long ago the big bang occurred to nanosecond accuracy ;-)
<author>wkerzendorf</author> | |
@astrofrog: that's not right, it was 4000 years ago. Just add up all the page numbers in the bible and multiply by 42 ;-) | |
I meant nanoseconds as long with a specific zeropoint. | |
<author>wkerzendorf</author> | |
To avoid duplicating effort: should I write a hook for the classes to be able to take any arrays and make sure it's stored nicely, which we can copy around? I would then not worry about the storage format.
<author>wkerzendorf</author> | |
I just reviewed all the options in the shower ;-) | |
Interim summary:
Things we probably agree on:
* This thing should work with nd-arrays (@eteq: that doesn't preclude use of python base types, only their array arithmetic).
* It should have properties like .jd or .date_gregorian (the .to_* functions can live side by side with these, afaik).
Things that are still work in progress:
Name:
* Should it have 'time' in it, to indicate a generic time class?
* Should it *not* be 'time' or close to it, to avoid conflict with the built-in?
* Should it be JulianDate or similar, to indicate the storage format?
* Should it indicate array use or not?
Data storage formats:
I think from an implementation point of view there are two main concerns: precision, and whether array arithmetic works.
Things that work with array arithmetic (if we care):
* int + float
* int + int (days and seconds within the day)
* float + float (SOFA style)
* float
* int
* etc.
Things that don't work with array arithmetic:
* long + float
* long (nanoseconds from a zeropoint)
* decimal (we haven't really thought about this, but it does have arbitrary precision, right?)
* etc.
Error: we haven't talked about it, but it should probably be included again as .error. Something to consider when choosing formats.
I hope that is a fair summary - please tell me if I forgot anything or got things wrong. We should probably put this in the wiki (https://github.com/astropy/astropy/wiki/astropy.time). I need to give a talk soon, but I can put it in there afterwards.
<author>eteq</author> | |
@astrofrog @wkerzendorf - I don't really like the ``JulianDate`` scheme for the *primary* class because, in my mind, that's one particular representation of time, and the point (at least to me) of this object is to provide a generic time interface into which a specific representation can be plugged. That is, no one should ever look directly at the ``Time`` class except perhaps to set precision or whatever - instead, they use the ``Time`` interface, and one of the implementations might be ``JulianDateTwoPart`` or something. But the user should only use the latter to make a new time (or perhaps a factory function).
I'm also not too concerned about ``astropy.time.Time`` being confused with ``datetime.time`` - the latter's interface looks totally different from anything we're talking about, so it's not too likely to be confusing.
<author>eteq</author> | |
Actually, when I think about it, perhaps what we really want is an abstract base class like ``TimeBase`` that implements whatever can easily be expressed in standard operator notation, and provides abstract methods/properties for things like jd, mjd, and datetime objects. Then I think we can get by with two subclasses: one that uses longs and has arbitrarily good precision, and one that's more like this pull request/the array classes I wrote. | |
Perhaps we can even get away with something even simpler like a single floating-point JD representation - that would be much easier to implement and is still fine for anything around the current era to microsecond precision. (see also my comments in the #59 pull request about single float vs int/float combo) | |
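A minimal sketch of the kind of base-class arrangement being described here (the names ``TimeBase``, ``TimeFloat``, and the exact property set are illustrative placeholders, not a proposed final API):
```python
import datetime

class TimeBase(object):
    """An instant in time; concrete subclasses choose the internal storage."""

    @property
    def jd(self):
        # Each representation must be able to express itself as a Julian Date.
        raise NotImplementedError

    @property
    def mjd(self):
        # MJD is defined in terms of JD, so it can live in the base class.
        return self.jd - 2400000.5

    @property
    def datetime(self):
        # Rough Gregorian conversion via the Unix epoch (JD 2440587.5);
        # good enough for an illustration, not for real use.
        return (datetime.datetime(1970, 1, 1) +
                datetime.timedelta(days=self.jd - 2440587.5))

class TimeFloat(TimeBase):
    """The simplest backend: a single float Julian Date."""

    def __init__(self, jd):
        self._jd = float(jd)

    @property
    def jd(self):
        return self._jd
```
Higher-precision backends (longs, two-part floats, etc.) would then subclass the same base and only have to supply their own storage and ``jd``.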
<author>eteq</author> | |
Also, do we want to move this discussion to astropy-dev? I'm fine with there or here, but it might be more inclusive to solicit opinions there... | |
<author>wkerzendorf</author> | |
That is a very good idea to move it to astropy-dev. | |
We should also consolidate our effort, before moving to astropy-dev and pull a solution in so people can play around with it. | |
if you guys are all of the opinion that it's not a bad idea to call this thing `time` then let's go with that. | |
I am happy to pick the single float implementation for now (and I think we should just decide on one - if it's bad we can change it). I think the only thing I would prefer is that we use numpy datatypes. | |
What's the next step for consolidation? | |
<author>wkerzendorf</author> | |
As Erik suggested, I'll also have the discussion for #59 here:
There are two things in #59 that I stumbled across: | |
First, do we really need a `timedelta`? A delta can just be expressed as a time object, right? But maybe I'm overlooking something.
Second, in the `__init__` function I think we should allow only one possible input form (the format it's stored in, not a string or anything else). This, I think, is a design choice that will affect not only time but probably a few other things as well.
So here's my reasoning:
1. A strict implementation makes sure there are no ambiguities when parsing. This might not be obvious for time, but for coordinates the string notation ra='22:55:88.0' can mean two different things: 22 hours or 22 degrees. I think in the core we should in general strive for very clear-cut behavior. In computer science this strict way of handling only one input is often recommended under the name 'separation of concerns'.
2. One could argue it's more convenient not to have to type from_XXX, but in ipython I don't think that holds: when I type .from_ I can already see what formats it takes.
3. This tab completion might not be as easy when writing programs (depending on your editor). But since code is more often read than written, I think it is a good design choice to be able to see how the program actually works (e.g. from from_jd(x) I actually know that x is a float).
What do you guys think? | |
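A compact sketch of the 'one canonical input in the constructor, explicit alternate constructors' pattern being argued for here (class and method names are illustrative only):
```python
import datetime

class Time(object):
    """Stores exactly one canonical representation; other inputs are explicit."""

    def __init__(self, jd):
        # The constructor accepts only the internal storage format (a float
        # Julian Date), so there is no guessing about what the input means.
        self.jd = float(jd)

    @classmethod
    def from_mjd(cls, mjd):
        return cls(mjd + 2400000.5)

    @classmethod
    def from_datetime(cls, dt):
        # JD of the Unix epoch (1970-01-01T00:00) is 2440587.5.
        delta = dt - datetime.datetime(1970, 1, 1)
        return cls(2440587.5 + delta.total_seconds() / 86400.0)

# Reading the call site then makes the input format explicit:
t = Time.from_mjd(55867.0)
```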
<author>astrofrog</author> | |
@wkerzendorf - the difference of two ``Time`` objects cannot be a ``Time`` object, because my understanding of the ``Time`` object is that it is an *absolute* time. If we only cared about relative times, why even bother with a ``Time`` object at all (after all, we don't have or plan to have a Distance object). | |
<author>wkerzendorf</author> | |
@astrofrog: well, all time objects are relative; the time object is relative to ~4000 BC. My point is that they encode exactly the same information. I'm just wondering if we need a timedelta object.
<author>astrofrog</author> | |
@wkerzendorf - I guess what I mean is that I thought the point of the time object was that they would all have a fixed reference, so they are absolute in that sense. They are not a value as such (since that depends on the reference), but a point in time. My impression is that 'absolute' times are needed for coordinate or ephemeris-related things. Otherwise, why not just use a thin wrapper for float with time units? (like we did for the constants). In a sense, I don't think ``timedelta`` needs to exist, but that's another matter. I just meant that in any case the difference of two ``Time`` objects is definitely *not* a ``Time`` object. | |
<author>wkerzendorf</author> | |
@astrofrog: I do understand what you mean. Anyway, like you I don't think `timedelta` needs to exist, but I also don't care that much.
Unfortunately it has gotten quiet around our time discussion. As I said before, I think it would be good to hash out a light version of astrotime that we agree on (and just pick a format), then present it to the community and ask: is this good or bad?
I wonder if a 20-minute online meeting with just the participants of the `time` pull requests would be quicker and more productive than further discussion on this forum - just making something that the community can play around with and give us feedback on.
What do people think about that? | |
<author>eteq</author> | |
@astrofrog @wkerzendorf - in my mind a `Time` object represents a fixed instant in time - the internal representation may be fixed to some absolute standard, but the point of having a base object is that it represents the abstract concept of an instant in time while abstracting the details of how it is internally stored. A difference of two times, on the other hand, is a fundamentally different thing because it has different operations. i.e. it makes no sense to add two instants of time together (because there's no absolute reference necessarily), but it *does* make sense to add two deltaT's. I see @astrofrog's point that it could be represented as a float or something, but the whole reason I implemented the complicated scheme with the longs was because that allows for arbitrarily good precision, and you lose that if the standard deltaT is a fixed-precision float. | |
@wkerzendorf: +1 to a live meeting... it may be challenging to schedule given that I believe the 4 people on this pull request are at GMT+1, +5, +8, and +11, but I guess that demonstrates the importance of having a consistent time object :)
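A small sketch of the distinction being drawn here - the point is only which operations return which kind of object, not the storage (all names are illustrative):
```python
class TimeDelta(object):
    """An interval between two instants."""

    def __init__(self, days):
        self.days = days

    def __add__(self, other):
        # interval + interval is meaningful and gives another interval
        return TimeDelta(self.days + other.days)

class Time(object):
    """An instant in time (just a float JD here, for brevity)."""

    def __init__(self, jd):
        self.jd = jd

    def __sub__(self, other):
        # instant - instant gives an interval, not another instant
        return TimeDelta(self.jd - other.jd)

    def __add__(self, other):
        # instant + interval gives a shifted instant;
        # instant + instant is deliberately left undefined
        if not isinstance(other, TimeDelta):
            return NotImplemented
        return Time(self.jd + other.days)
```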
<author>wkerzendorf</author> | |
For the live meeting I can try to schedule one for early next week (maybe not using our current time implementation ;-) )? | |
What do @mdboom and @astrofrog think about this? I think if we try to keep it to less than 30 minutes it can be relatively productive. | |
<author>mdboom</author> | |
A quick phone call or Google+ hangout (if we're all able to do that) is probably a good idea. Maybe we should use doodle.com to set up the time? | |
<author>wkerzendorf</author> | |
Hey guys, let's try a Google+ hangout, since we can all do video there.
So I had a look at the times. I think 1pm for Toronto (and I guess Baltimore is the same timezone) would be ideal as it still has sensible times for everyone else (http://www.timeanddate.com/worldclock/meetingtime.html?iso=20111116&p1=250&p2=137&p3=37). | |
Here's the doodle with timezone support: http://www.doodle.com/2crhhmeewwevk3ay | |
<author>phn</author> | |
Hello, | |
Since the idea of using two numbers is just (?) to prevent loss of precision, why not just use [mpmath](http://code.google.com/p/mpmath/) ? | |
I don't know if this will improve or worsen Numpy usage. | |
Prasanth | |
<author>astrofrog</author> | |
So why not use the built-in Decimal type? | |
>>> from decimal import Decimal | |
>>> a = Decimal(34451.1241204909184198409184109238120830219812421412124) | |
>>> a | |
Decimal('34451.1241204909165389835834503173828125') | |
The only downside is loss of speed if we have to construct Numpy object arrays, but otherwise this takes care of arbitrary precision arithmetic. Of course, the default can still be to simply use `float`, but a high precision mode could use the Decimal class. | |
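One caveat worth noting about the example above: constructing a ``Decimal`` from a float literal first rounds the value to binary double precision, which is exactly what the output shows. To keep all the digits, the value has to reach ``Decimal`` as a string (or as integer parts), e.g.:
>>> from decimal import Decimal
>>> Decimal('34451.1241204909184198409184109238120830219812421412124')
Decimal('34451.1241204909184198409184109238120830219812421412124')
For arithmetic, ``decimal.getcontext().prec`` controls how many significant digits are kept (the default is 28).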
<author>wkerzendorf</author> | |
Decimal sounds like an interesting idea; however, it is in the non-array category. Anyway, are you guys up for the meeting (Google+) on Monday or Tuesday (11am east coast, 8am west coast and 5pm central European time)? I hope that time also works for you, Prasanth.
I think it would be good to hash out some initial time object to present to the community. | |
<author>eteq</author> | |
The reason I was using `long` instead of `Decimal` at the time is that I didn't know Decimal existed... So I agree that it seems an *easier* solution than the long approach. However, I just did some tests, and Decimal operations seem to be roughly 25x slower than long-based math wrapped inside a class. This might not be important if we don't intend to use it where performance on large datasets matters (we're talking 1 us vs 25 us per operation), but it's something to consider.
@prasanth - As for using mpmath, I think we want to avoid requiring an external library for something as fundamental as a Time class. | |
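For anyone who wants to check numbers like these on their own machine, a rough (and purely illustrative) comparison can be done with ``timeit`` - note this times bare longs against Decimals, without the class wrapper mentioned above, so the ratio will differ:
```python
import timeit

setup = """
from decimal import Decimal
a_long, b_long = 10**30 + 12345, 10**30 + 67890
a_dec, b_dec = Decimal(10**30 + 12345), Decimal(10**30 + 67890)
"""

# Each call returns the total time in seconds for `number` executions.
print(timeit.timeit('b_long - a_long', setup=setup, number=100000))
print(timeit.timeit('b_dec - a_dec', setup=setup, number=100000))
```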
<author>eteq</author> | |
Also, 8am west coast time would be quite tricky for me - I've put up a doodle poll with the times that seem likely at http://www.doodle.com/bp9e53i9g5d4ce78 @wkerzendorf, @mdboom, and @astrofrog, could you fill that out? @prasanth, feel free to do so as well if you want to participate.
<author>wkerzendorf</author> | |
I guess Monday 11.30am east coast time is too soon. Should we say Wednesday 11.30 east coast time?
<author>wkerzendorf</author> | |
After a quick live discussion we came up with the following first draft to present to the community:
* a scalar class, with Decimal for precision
* we keep the name time
* seconds since J2000.0
* calendar_date, mjd conversions
This pull request will be closed, and a new implementation written with Erik will be opened soon.
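To make that draft concrete, here is a rough sketch of what it amounts to (the class name, attribute names, and everything beyond "Decimal seconds since J2000.0" are illustrative; JD of the J2000.0 epoch is 2451545.0):
```python
from decimal import Decimal

# JD of the J2000.0 epoch (2000 January 1, 12:00 TT) and seconds per day.
JD_J2000 = Decimal('2451545.0')
SECONDS_PER_DAY = Decimal('86400')

class Time(object):
    """Scalar time stored as Decimal seconds since J2000.0."""

    def __init__(self, seconds_since_j2000):
        # Pass a string or an int to keep the full precision of the input.
        self._sec = Decimal(seconds_since_j2000)

    @property
    def jd(self):
        return JD_J2000 + self._sec / SECONDS_PER_DAY

    @property
    def mjd(self):
        return self.jd - Decimal('2400000.5')

# One day (86400 s) after J2000.0:
t = Time('86400')
print(t.jd)   # 2451546.0
print(t.mjd)  # 51545.5
```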
</issue> | |
<issue> | |
<author>mdboom</author> | |
Include PEP8 plugin in test framework | |
<author>mdboom</author> | |
It was suggested by @wkerzendorf that we may want to have a convenient way to test pep8 compliance within our test framework. There is a py.test plugin for this here: | |
http://pypi.python.org/pypi/pytest-pep8 | |
It does seem configurable, so we can turn off some of the more annoying false-positive warnings if necessary.
<author>embray</author> | |
+1 | |
<author>eteq</author> | |
@jiffyclub - do you think you can take a crack at this (in particular, adding it as an option to the test function)? Also useful might be http://pypi.python.org/pypi/pytest-cov which runs coverage tests. | |
<author>jiffyclub</author> | |
I'll see what can be done as soon as I get a chance. | |
<author>jiffyclub</author> | |
The py.test pep8 checker just uses the pep8 package: http://pypi.python.org/pypi/pep8, so getting this into astropy would mean packaging the py.test pep8 plugin, and the pep8 package itself. It looks like they are both single file modules, so that's not bad, but there could be import issues since the py.test plugin imports pep8. The packaged plugin could theoretically be modified to import a local copy of pep8. This seems a little redundant when you can just use the pep8 script itself. | |
Similarly, the py.test coverage plugin requires the coverage package: http://pypi.python.org/pypi/coverage, which is not a simple, single file module. I don't think it would be in any way practical to include this. | |
So, given that these py.test plugins are not self-contained do we want to go ahead and try to include them (pep8, anyway)? | |
<author>mdboom</author> | |
Oops. Didn't mean to push the close button ;) | |
<author>embray</author> | |
Yeah, github makes it way too easy to do that accidentally. There should at least be a confirmation dialog or something. | |
<author>eteq</author> | |
Hmm... I see four approaches, then: | |
1. Include arguments for the pep8 and coverage plugins, but have the test script try to run them, and if that fails emit a warning along the lines of "you don't have the pytest-pep8 plugin. Run ``pip install pytest-pep8`` to make this work".
2. Package pep8 and pytest-pep8 in extern, along with pytest-coverage, but just have the coverage part fail if the coverage package is not installed (perhaps with a message like the one in option 1).
3. Package pep8 and pytest-pep8 in extern and forget about coverage completely.
4. Just forget about the whole thing | |
I have a slight preference for 1, but 2 is ok also. | |
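A sketch of what option 1 could look like inside the test helper (the function name and message text are illustrative):
```python
def add_pep8_args(pytest_args):
    """Return py.test arguments with PEP8 checking enabled, if the plugin exists."""
    try:
        import pytest_pep8  # we only need to know the plugin is importable
    except ImportError:
        import warnings
        warnings.warn("PEP8 checking requires the pytest-pep8 plugin; "
                      "run `pip install pytest-pep8` to enable it.")
        return list(pytest_args)
    return list(pytest_args) + ['--pep8']
```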
<author>astrofrog</author> | |
I like option 1. It's something that will be useful to such a small handful of users that I don't think we need to clutter up the repository with all the dependencies required for it to work. Checking PEP8 compliance should therefore not run by default when doing ``python setup.py test`` but should have a special switch, e.g. ``python setup.py test --pep8`` or a separate ``python setup.py test-pep8``.
<author>mdboom</author> | |
+1 for option #1. There are two use cases here: 1) developers and 2) users. Group 1 cares about this stuff; group 2 just wants to make sure their build/installation is sane.
<author>wkerzendorf</author> | |
+1 for option 1 as well. I think it would be nice to mention in the development workflow: 'run pep8 before pull-request' | |
<author>astrofrog</author> | |
Can we close this issue since the pull request has been merged? | |
<author>eteq</author> | |
Huh... odd - it was supposed to be closed when I entered a commit message that said "closes #51", but apparently that part didn't make it into the message. But anyway, yes, it's supposed to be closed.
</issue> | |
<issue> | |
<author>eteq</author> | |
long-based approach to astrotime | |
<author>eteq</author> | |
This pull request is an approach to implementing the AstroTime package that's somewhat different from #57. The key idea here is that the time is internally represented as a python long in the `AstroTime` class. This allows essentially arbitrary user-selected precision. It also includes an `ArrayAstroTime` subclass that re-uses most of the `AstroTime` methods, but holds *ndarrays* of times instead of single times per object. The price is the loss of arbitrarily high precision, though (it takes the same approach as @wkerzendorf of using a float64 and int64 together).
A few things that are not yet working: | |
* datetime/gregorian calendar conversions. These are straightforward to do, but are a bit tricky to do in an array-safe way, so I'm holding off on this until we decide which is the approach to take (vs #57). | |
* using DeltaAstroTime from an ArrayAstroTime. DeltaAstroTime loses some information because it converts to the AstroTime representation - this can be fixed/adjusted, but will require a bit more thought (again, to be done if this approach is deemed worthy). | |
<author>astrofrog</author> | |
I don't think we want to start differentiating between AstroTime (or Time) and ArrayAstroTime (or ArrayTime). If we do that, we're going to have to do the same for all time/coordinate classes, e.g. ArrayGalacticCoordinates as well as GalacticCoordinates. I think the class should be the same for scalars and arrays, and let the magic happen inside.
<author>mdboom</author> | |
If we include the discussion in #57, we now have 3 different time representations proposed: | |
1. int/float pair | |
2. float | |
3. Python arbitrary long + precision value | |
I hate to see a proliferation of these things, but I see how they all have their use cases. As a guiding principle, we should have a unified interface for all of these classes, with efficient conversions between all of them and Gregorian etc. My worry is that (3) is the odd man out -- it can't handle ndarrays. What if it used object arrays? In playing around just now, it seems like it might work *enough* -- the basic arithmetic operations work and are vectorized. Things like np.sin etc., don't, but maybe we don't need that. | |
My point is -- it would be nice to be able to say: here's 3 different time classes that have different precision, range and performance characteristics, but they share the same interface and are convertible to one another so that you can easy switch them out as needed. (And heck, let's run the same set of core tests on all of them to ensure they remain interface compatible). | |
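For concreteness, the object-array behavior referred to above looks like this (a quick illustration):
```python
import numpy as np

# Python longs in an object array: no overflow, and basic arithmetic
# still works elementwise.
a = np.array([10**30, 10**30 + 1], dtype=object)
print(a - (10**30 - 5))   # -> [5 6], still an object array
```
Ufuncs such as ``np.sin``, on the other hand, fail on such an array, because the object loop simply looks for a ``sin`` method on each element and plain Python integers don't have one.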
<author>eteq</author> | |
I don't really see the point of both 1 and 2... 2 seems like the simplest to implement, but 1 is a lot better precision-wise - but if we opt for one of them and make sure it's fully numpy-ified, there's little reason to have both. | |
And I agree that it should be possible to vectorize 3 in a reasonable way, it just takes a bit more thought, and I didn't want to waste the time in case we don't actually want to really implement it. | |
I think it probably makes sense to continue this discussion on #57, though (or perhaps somewhere else), at least until we have a plan of action that we're sure doesn't involve closing this one. | |
<author>eteq</author> | |
As with #57, I'm closing this pull request, as we discussed how to reconcile these approaches and move forward, and we'll implement this in a separate pull request. | |
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
Option for PEP8 checking | |
<author>jiffyclub</author> | |
Add PEP8 checking option, using the pytest-pep8 plugin, to astropy test helpers as outlined in #58. | |
I've added a ``pep8`` option to ``helper.run_tests``. Turning it on disables regular tests and turns on PEP8 tests. It also checks for the existence of the ``pytest_pep8`` module and raises a helpful error if it can't be imported. It can also be run with ``python setup.py test --pep8``. | |
If we like this I can add a similar thing for coverage. | |
<author>eteq</author> | |
Hmm... So I'm not sure if this is my fault or what, but a common use case would be checking a single file/module for pep8 compliance. Let's say I want to test `astropy/version_helper.py` for pep8 compliance. I tried ``python setup.py test -m version_helper --pep8``, and I got the following traceback:
``` | |
running test | |
running build_py | |
running build_ext | |
Traceback (most recent call last): | |
File "<string>", line 1, in <module> | |
File "astropy/tests/helper.py", line 102, in run_tests | |
raise ValueError('Module not found: {0}'.format(module)) | |
ValueError: Module not found: version_helper | |
``` | |
Whereas if I do ``py.test astropy/version_helper.py --pep8`` I see exactly what I expect (the pep8 output) | |
<author>jiffyclub</author> | |
You are getting that error because when you specify a module with `-m`, or directly in `astropy.test`, it's actually looking for a matching directory (e.g. `astropy/io/fits`), so it won't match to a file. | |
I'm not sure why testing PEP8 compliance with `python setup.py test -m name_of_file --pep8` would be a preferred method compared to either `pep8 name_of_file` or `py.test name_of_file --pep8`. They have the same installation requirements and the latter require less typing and less time since there's no build. | |
<author>eteq</author> | |
Ah, I get it now. The main reason it would be useful to have this is the reason for the ``python setup.py test`` option in the first place: if you're writing code that interacts via cython, or that gets converted via 2to3 (for py 3.x), the setup has to run every time you change the file. I see your point that when that's not necessary just calling the pep8 script is easier... but this possibility might also be useful for the coverage module. Is it going to be a major pain to make it also look for files? | |
Also, part of the reason it confused me was because it says "Module not found", when it really means "Package not found", if you're only looking for directories - so perhaps update that in this pull request? (I see why you said this, as I've also found the module/directory distinction confusing... but it was recently clarified to me that package means directory with an ``__init__.py`` and module is a ``whatever.py`` file). | |
<author>jiffyclub</author> | |
Noted on the package thing, I can change that. | |
There are a couple tricky things about making it look for files: one is that there may be multiple files that match your string. Another is what should be done if your match string also matches a directory? | |
There's also an issue with py.test itself. The natural way to do file selection is with py.test's `-k` option, but I learned today that, as far as I can tell, you can only have one of those. So it's not possible to both specify a matching string using `-k` and turn off regular testing with `-k pep8`. When you're testing a single file you probably don't need to turn off regular testing, but if you're running PEP8 checks on an entire package you may not want the regular tests to run as well. Though maybe we don't care about that and I could take out the `-k pep8`. Then you could target a specific file by running `python setup.py test --args='-k version_helper' --pep8`. | |
Another thing I could do is add a `select` argument to `helper.run_tests` that takes a string and adds it to the arguments as the `-k` option so that it's a little easier to do this kind of selection. Then you could do something like `python setup.py test --select=version_helper --pep8`. Again, I'd have to take out the `-k pep8` built into things right now, but as long as we don't mind regular test reports mixed into the PEP8 test reports that's a possibility. | |
I haven't yet fiddled with the coverage plugin so I have no idea what implications that might have. | |
<author>eteq</author> | |
Hmm... can we do both? That is, if it uses the select method, ``-k pep8`` is not done, but otherwise keep it? That then allows the use case I had in mind with select (in that case ``-k pep8`` is unimportant because it's targeting a single file anyway), while still allowing checks on a whole package or something with the old way. Maybe even just call it ``singlefile`` or something instead of ``select``? That might also be useful in coverage (although I also haven't really looked at how coverage works).
<author>jiffyclub</author> | |
That's definitely a possibility. I wouldn't call the keyword `singlefile`, though, because `-k` is not guaranteed to select a single file. Though if we did go with `singlefile` we could graft it onto the path to astropy and specify it that way. I'll try that. | |
Another thought I had was to just make a wholly separate `pep8` test function and setup.py command. Then we could reserve the `test` function for running actual tests and not load it up with special cases and too many keywords arguments. I haven't thought this all the way through, but it could be another option. | |
<author>jiffyclub</author> | |
So I think this is a pretty good solution. I added a `test_path` keyword to `helper.run_tests` as an alternative to `package` so that files and directories can be specified as paths. You can pass it a single file or a directory name, so long as whatever you give can be os.path.joined with the astropy root and anything specified with `package`. Now you can do `python setup.py test -t astropy/version_helper.py --pep8` and that'll work. | |
This also has the added benefit of leaving the `-k` option free (unless you're doing PEP8 testing) so you could potentially do `test_path='path/to/a/test.py' args='-k test_name'` to select a single test out of a single file. | |
<author>eteq</author> | |
That looks perfectly fine to me. A separate ``pep8`` command would also have been fine (on reflection, perhaps that actually makes more sense... but now that you've done the work for the ``test`` command, this is fine).
Can you update the testing docs to describe these options? Also, in the coding guidelines, where we mention the pep8 utility, can you change that to instead describe how to do this with the test command? | |
<author>jiffyclub</author> | |
The `test_path` keyword seems like such a useful feature that I think the `test` command should have it anyway. And now it's easy to just turn on PEP8 checking with `--pep8` and nothing else. I could certainly be convinced to make a separate function, though, and I'll keep it in mind when future changes come up.
<author>eteq</author> | |
Ok, sounds good. Are you going to have a chance to update the docs soon, or should I merge this now and you can issue a separate pull request later? | |
<author>jiffyclub</author> | |
I should get to it in the next day or so. | |
<author>eteq</author> | |
Aside from those two little things, the rest of this looks good, so once you adjust those, I'll merge. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Integrate Python 3 compatibility functions | |
<author>mdboom</author> | |
The Python 3 compatibility hacks in wcs, vo and fits should be merged. This will also form the basis for use by future code. | |
Since io.fits aka pyfits probably has the largest amount of this kind of stuff, it probably makes sense to do this after fits is merged in. | |
<author>hamogu</author> | |
Going over old issues. Is this really still open? Most modules use `six` by now... | |
<author>mdboom</author> | |
I think it is probably no longer the case, but @embray should comment about `fits`... | |
<author>embray</author> | |
Wow this is ancient. But actually yeah there's still more to be done here as there are different subpackages of Astropy all still with their own little Python 3 compatibility hacks (aside from six--mostly relating to strings) that could probably be integrated and simplified. | |
Basically, this is a code cleanup reminder. A big "TODO" | |
<author>astrofrog</author> | |
Would be nice to do, but not critical for 1.0 so removing milestone. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Integrate utility functions | |
<author>mdboom</author> | |
The generic utility functions that are currently spread across a number of new packages should be merged together and moved into `astropy.utils`. | |
<author>eteq</author> | |
One thing to note: this is one of the packages where it's definitely crucial to use autosummary or similar to generate the documentation. It's fine to do what we're doing here for now, I suppose, but with the understanding that it should be removed in favor of something autogenerated, at least on the API side. | |
<author>eteq</author> | |
Oh and another item: As it stands right now, `astropy/utils/__init__.py` imports OrderedDict at its package level, but none of the things you just added are in there. To be consistent, we should probably either remove OrderedDict from the root level (which may involve changing other code if it's needed somewhere), or import the useful functions/classes from this pull request into the root of the utils package.
<author>embray</author> | |
Cool--when this gets merged I can also start adding utils from pyfits. | |
<author>mdboom</author> | |
I'm going to go ahead and merge this, even though we haven't resolved what gets imported into the top of the `astropy.utils` namespace. That can be resolved later, but I think it's more important to have these utilities out there and getting used. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Optional dependency checker tool | |
<author>mdboom</author> | |
At the Boston meeting, it was discussed that there would be a tool to check for optional dependencies, list the versions that were found and, where possible, install them. I'm assigning @eteq, because I believe he already has a similar tool in Astropysics. | |
<author>mdboom</author> | |
`astropy.io.vo` has an optional dependency on `xmllint` (through the shell -- it is a C command line application). | |
<author>eteq</author> | |
Yep, this is third on my to-do list after the config and a docs cleanup - it'll also tie pretty closely to the affiliated package index (as affiliated packages will also certainly have optional deps). In the meantime, let's use this issue to keep track of optional dependencies.
astropy op deps: | |
* scipy | |
* matplotlib | |
testing op deps: | |
* py.test | |
* pep8 | |
* pytest-pep8 | |
* coverage | |
* pytest-coverage | |
<author>eteq</author> | |
This was originally intended for 0.1, but it looks like it will need to be triaged for 0.2. Note that part of this is ongoing at https://github.com/eteq/astropy/tree/dependencies | |
<author>wkerzendorf</author> | |
@iguananaut: I marked the #476 PR as 0.2 milestone (tentatively). Can this issue then be closed? | |
<author>embray</author> | |
Hm. Let's leave it open for now since #476 is just one possible implementation of this. I'm pretty confident it's what we'll end up going with but I see this as still an "open issue" until some specific solution is merged. | |
<author>astrofrog</author> | |
I'm re-scheduling this as 0.2.1 since I also re-scheduled #476. | |
<author>astrofrog</author> | |
Re-scheduling this for 0.4.0 - I don't think it's important enough to worry about for now | |
<author>eteq</author> | |
Closing a la discussion in #476 | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Return status code from `python setup.py test` | |
<author>mdboom</author> | |
When the tests fail running `python setup.py test`, a non-zero error status code should be returned to the shell. This makes integrating with a CI system such as buildbot easier, as well as writing shell scripts that continue only if the tests pass, etc. | |
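A sketch of the usual way to do this, assuming the ``test`` command ultimately calls ``pytest.main`` (which returns the exit status as an integer); the surrounding command class is omitted:
```python
import sys
import pytest

def run_tests_and_exit():
    # pytest.main() returns 0 when all tests pass and a non-zero code otherwise;
    # exiting with that value is what lets buildbot and shell scripts react.
    status = pytest.main(['astropy'])
    sys.exit(status)
```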
<author>embray</author> | |
Looks good. For some reason I assumed 'astropy.test' would already return a status code...but since it's meant to be run at the Python prompt I guess it wouldn't. | |
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
VO build error on OS X Lion | |
<author>jiffyclub</author> | |
I'm getting the following error trying to build the latest astropy on Lion:
``` | |
astropy 54 >:python setup.py build | |
running build | |
running build_py | |
running build_ext | |
building 'astropy.io.vo.iterparser' extension | |
gcc-4.2 -DNDEBUG -g -O3 -O -ansi -DHAVE_EXPAT_CONFIG_H=1 -DBYTEORDER=1234 -DHAVE_UNISTD_H -Iastropy/io/vo/src -Icextern/expat/lib -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c astropy/io/vo/src/iterparse.c -o build/temp.macosx-10.6-intel-2.7/astropy/io/vo/src/iterparse.o | |
astropy/io/vo/src/iterparse.c:268: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'PyObject' | |
astropy/io/vo/src/iterparse.c: In function 'startElement': | |
astropy/io/vo/src/iterparse.c:416: warning: passing argument 3 of 'PyTuple_SetItem' makes pointer from integer without a cast | |
astropy/io/vo/src/iterparse.c: In function 'endElement': | |
astropy/io/vo/src/iterparse.c:495: warning: passing argument 3 of 'PyTuple_SetItem' makes pointer from integer without a cast | |
error: command 'gcc-4.2' failed with exit status 1 | |
``` | |
I can get that particular command to work if I remove the `-ansi` flag. | |
Also, setup.py does not sense that I have Lion, I have to manually `export CC=gcc-4.2` so that wcs will build. | |
<author>mdboom</author> | |
Does removing "inline" from line 268 fix it? | |
As for the sensing of the compiler, it only tries to change it if an alternative compiler has not been set either through the CC environment variable or on the setup.py commandline. This worked for me on Lion, but it may be that I installed a different version of XCode or something where the defaults are different. I would step through setup_helpers.py adjust_compiler() and see where it's deciding not to change the compiler (is it because it's sensing arguments, or because the compiler version doesn't match what it expects etc.) and from there hopefully we can find a way to get it to work automatically on your system also. | |
<author>jiffyclub</author> | |
Yes, removing the `inline` works. Unfortunately, that's not the only source file with `inline`. | |
<author>mdboom</author> | |
It might be helpful to see what `cc --version` gives. Is it `i686-apple-darwin11-llvm-gcc-4.2`? | |
<author>jiffyclub</author> | |
``` | |
(python2)astropy 64 >:cc --version | |
i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00) | |
``` | |
But: | |
``` | |
(python2)astropy 68 >:echo $CC | |
gcc-4.2 | |
(python2)astropy 69 >:$CC --version | |
i686-apple-darwin11-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3) | |
``` | |
<author>jiffyclub</author> | |
This is strange because wcs also has `inline` statements and I used to be able to build with wcs. The only thing that's new is that instead of using EPD I'm now using the python.org binary Mac installers along with virtualenv. | |
<author>mdboom</author> | |
Hmm... I'm a Mac neophyte, so I'm not sure what's going on here. My Lion machine has `gcc`, which is actually just a symlink to `llvm-gcc`. I don't have a `gcc-4.2`. | |
All works when I build with the system Python. After installing the python.org python, it tries to build C files with `cc` (which setup.py nicely changes into clang for me), and then tries to link with `gcc-4.2`, which I don't have. | |
Interestingly, neither python on my machine puts `-ansi` on the commandline. Those arguments are usually hardcoded with the compiler, and distutils tries to build extensions in the same way as the interpreter was built. That's all fine and good, but it makes it hard to write C99, which all of this code is. I think we might be able to do something like: | |
``` | |
#if __STDC_VERSION__ >= 199901L
#define INLINE inline | |
#else | |
#define INLINE | |
#endif | |
``` | |
and then replace all usage of `inline` with `INLINE`. Or write some distutils hack to remove the `-ansi` from the commandline. | |
<author>jiffyclub</author> | |
I deleted my python.org binary installs and instead went with a homebrew (http://mxcl.github.com/homebrew/) source install. Build works fine under Python 2.7 and the compiler detection even works. Under Python 3.2 I had to manually set `export CC=/usr/bin/clang` but after that Astropy built fine. | |
The problem I was experiencing was likely that setup.py was grabbing a `-ansi` flag from my Python builds and trying to build Astropy with that flag, which wasn't going to work. | |
As we get closer to a release we're definitely going to have to start making sure that Astropy can build out of the box on a variety of common Python installs, and this demonstrates that we shouldn't be surprised to run into problems when we do that.
<author>astrofrog</author> | |
Should we really close this ticket, since the python.org binary installation is likely to be quite a common set-up on MacOS 10.7? | |
<author>mdboom</author> | |
I think we should have a list of which specific Python installations on OS-X we support and test those whenever possible. I'm not a Mac guy, so I leave that up to the rest of you to make the list... ;) [BTW -- I'm also investigating whether it would be possible to add our own Mac slaves to the ShiningPanda Jenkins instance so we could start getting some automated Mac testing going] | |
<author>jiffyclub</author> | |
Now that I don't have any open py.test issues I will experiment with getting a build working with the python.org binaries, and once I'm working on that I'll open a new, more specific ticket. | |
As far as which OSX/Python versions we'll test, we should probably bring that up on the mailing list, but I would suggest something like EPD and Python.org binaries on OS 10.5-10.7. Python.org has binaries that work all the way back to 10.3 but I'd be shocked if we can find Macs with anything earlier than 10.5 (Leopard). | |
<author>astrofrog</author> | |
I use the MacPorts Python installation, and will add astropy to MacPorts once it's released, so that will be implicitly supported on the MacPorts side. | |
<author>jiffyclub</author> | |
Out of curiosity, what's the advantage to having Astropy on MacPorts if it will already be on PyPI? | |
<author>jiffyclub</author> | |
Well I don't know what to think, I just reinstalled the python.org 2.7 binary and I was able to build Astropy okay. The gcc flags looked totally different this time. Maybe they've been picked up from my other python install? I might need to test this in a totally new user account... | |
<author>astrofrog</author> | |
@jiffyclub - to make it possible for people to type ``sudo port py27-astropy`` and have it install all dependencies automatically, and ensure it uses the correct compiler. It's similar to the fact we might want to try and include it in package managers on Linux - of course you can get most python packages from pypi, but people who use MacPorts like to be able to install things through that ecosystem. Regardless of whether I add it to MacPorts, I want it to build correctly against the MacPorts Python. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
docs/update-version | |
<author>mdboom</author> | |
Automatically set the version of the documentation to the version of astropy used to build them. | |
<author>astrofrog</author> | |
Looks good to me! | |
<author>eteq</author> | |
This was actually the original scheme I had in mind, but someone (unfortunately, I don't remember who) complained because this means the docs get re-built every time there's a new git commit (because the `release` variable changes). | |
Having said that, I actually prefer it this way (the way this pull request will make it). So I'm happy with it as long as there isn't someone else strongly objecting. | |
<author>astrofrog</author> | |
One possibility is to always strip out the revision number for the docs, though it does make it harder to know exactly which developer version of the docs are being shown. | |
<author>mdboom</author> | |
I wasn't aware of this objection. Leaving out the revision number is certainly a possible compromise. The shiningpanda build clears out everything each time anyway, so it doesn't really matter how we do this with respect to that. However, I can see how this would be annoying for developers. Maybe there's some way we could have two modes -- one for the CI system (which would put a full revision number, and maybe even a commit hash, in the docs), and one for everyone else which would just put the base version (1.0dev or something).
<author>embray</author> | |
I don't mind that the docs have to be rebuilt each commit.
<author>eteq</author> | |
Yeah, to be honest I often do a rebuild anyway because I want to be sure bugs aren't creeping in from partial builds. So I'm fine with it too. | |
If others aren't, though, I know what @mdboom suggests is possible with readthedocs - there they set a flag to indicate the docs are built in readthedocs instead of as a local build. | |
<author>mdboom</author> | |
So is the consensus to just merge this as-is -- and if we get annoyed by the full doc rebuilds down the road (e.g. if the docs start to take a long time to build) then we deal with it then? | |
<author>astrofrog</author> | |
That sounds like a good way to proceed. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Style and PEP8 fixes | |
<author>mdboom</author> | |
This just includes various style and PEP8 fixes including moving to using decorator syntax for properties. | |
<author>astrofrog</author> | |
I had a quick look at the code, and this seems fine. No tests fail. | |
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
New copy of py.test standalone script with Python 3 compatible pastebin. | |
<author>jiffyclub</author> | |
I've regenerated the py.test standalone script from the most recent py.test trunk on BitBucket to fix #51. | |
I've verified that the pastebin option works with Python 3: | |
http://paste.pocoo.org/show/508490/ | |
http://paste.pocoo.org/show/508491/ | |
<author>eteq</author> | |
Ok, great - I'll give this a day or so to make sure no one has a problem using a development version of py.test, and assuming not, I'll merge this and close issue #51. | |
<author>mdboom</author> | |
I can confirm that "python setup.py test" works for me in Python 2.7 and 3.2 on Ubuntu 11.10. | |
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
UnicodeErrors in vo_test with Python 3.2 | |
<author>jiffyclub</author> | |
See http://paste.pocoo.org/show/508490/ and http://paste.pocoo.org/show/508491/. | |
<author>mdboom</author> | |
Can you check this patch out and let me know if it resolves the issue? (I did not check on Mac OS-X). | |
<author>astrofrog</author> | |
All tests in this branch work fine in MacOS X with Python 3.1 and 3.2, but @jiffyclub should check on his computer as I'm not sure if the tests actually failed prior to this patch on Mac. | |
<author>astrofrog</author> | |
Regarding my previous comment, I just tried running the tests on the previous commit at 8cb3ac1cbdff631b6985b85bea877a891816346e and the tests all passed too, so this must have been a MacOS 10.7 or Linux-specific issue. | |
<author>jiffyclub</author> | |
Confirmed, no failures with 633116ceaa14b42209ff8df582c47c273017e0c5. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Bump minimum Numpy version to 1.4 | |
<author>mdboom</author> | |
1.3 has some problems with byteswapping complex numbers that make supporting them in `io.vo` almost impossible. I know @iguananaut had mentioned that 1.3 support for the upcoming `io.fits` is also annoying. Should we just bite the bullet and drop 1.3?
<author>astrofrog</author> | |
I think this makes sense. I've had issues with 1.3 in the past for structured arrays, and these are going to be used in ``astropy.table``. | |
<author>embray</author> | |
The only issue I can see is that the last Ubuntu LTS release ships with 1.3. But the next LTS release is 12.04...which is probably sooner than we're likely to see significant Astropy adoption. So I don't see it as a likely issue... | |
<author>astrofrog</author> | |
What about having a setup.py flag to force installation regardless of dependencies? That way, ``python setup.py install`` would raise an Exception saying that e.g. Numpy is too old, but if you really want to install it, you can do e.g. ``python setup.py install --force``. | |
<author>embray</author> | |
I don't really care either way, but that would be easy to add if others think it's a good idea. | |
<author>astrofrog</author> | |
In any case I agree with not supporting 1.3 by default. | |
<author>eteq</author> | |
I vaguely prefer just requiring 1.4 because that'll be simpler/more standard, but I'm fine with either solution.
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Fixed PEP8 and whitespace in setup.py | |
<author>astrofrog</author> | |
This is minor, but since I'm including this in the template package, I thought it would be best if it started off PEP8 compliant. A small note on comments - while this is not mentioned by PEP8, I have a strong preference for comments to be written as: | |
# Indicates that ... | |
rather than | |
#indicates that | |
so I've fixed the few occurrences in setup.py :-) | |
If no one objects, I'll merge in a few hours. | |
<author>mdboom</author> | |
When you do, be sure to watch the PEP8 violations plot decrease: | |
https://jenkins.shiningpanda.com/astropy/job/astropy/violations/? | |
:) | |
<author>astrofrog</author> | |
@mdboom: unfortunately, it looks like setup.py is not being tested for PEP8 compliance? | |
<author>mdboom</author> | |
Indeed. Will have to fix that up. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Update py.test to stable 2.2 | |
<author>astrofrog</author> | |
@jiffyclub - could you update the py.test bundle to stable 2.2? This will allow parametrized tests using decorators. | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Config pkg, take 2 | |
<author>eteq</author> | |
This is a re-work of the remote data and configuration package heavily modified based on feedback in pull request #35 . The important major changes: | |
* Both the cache and config directories now use ~/.astropy/config and ~/.astropy/cache, and the XDG_CONFIG_HOME and XDG_CACHE_HOME environment variables can be used to override these. | |
* The remote data-fetching scheme now works completely on hashes internally - no need to worry about local data names | |
* Data provided in the source code tree can now be accessed using the `get_data` function as though it were in the local `data` directory. | |
* The config system is much enhanced and is now built around a ConfigurationItem object - see the documentation section I wrote that summarizes the typical use case, or look at the top of `data.py` in this pull request. | |
Things that are missing but can be easily added later: | |
* Code to auto-generate the configuration files when astropy is installed (although see the `astropy.config.configs._generate_all_config_items` function - that basically just has to be plugged into the setup)
* A way to lookup environment variables in place of the configuration files (#48) | |
* Better documentation for the data system | |
<author>mdboom</author> | |
This looks great. | |
As you point out in the pull request description, this does need some way to write out all of the configuration options to a file when first starting up without a configuration file. Note this can't happen exclusively in the `setup` process, because the user building and installing astropy may be different from the user running it. `setup.py` could generate a config template (which implies importing *everything*, so that may need to use some of the same tricks as `setup.py test`) and install that. The template is then copied to users' config directories on startup if the file is missing. This approach avoids needing to import everything on startup if the user doesn't have a config file.
An alternate (though less automated approach) is the one matplotlib takes where this template file is maintained by hand. | |
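The copy-on-first-import part of that mechanism is simple enough to sketch (the paths, template location, and config filename here are placeholders, not what the pull request actually implements):
```python
import os
import shutil

def ensure_user_config(template_path, config_dir, filename='astropy.cfg'):
    """Copy the installed template into the user's config dir if it is missing."""
    config_file = os.path.join(config_dir, filename)
    if not os.path.exists(config_file):
        if not os.path.isdir(config_dir):
            os.makedirs(config_dir)
        shutil.copy(template_path, config_file)
    return config_file
```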
<author>eteq</author> | |
@mdboom - Thanks for the detailed feedback! Everything that's not commented on above should be fixed in the version I pushed up just now. | |
Regarding the auto-writing of the first file: I'm very much against the by-hand approach, because I'm certain there will be lots of problems with people forgetting to do so. I tried to write this system here so that the output config file is still pretty nice looking with the necessary comments and so on. | |
So yeah, I think the best approach is to do something like ``setup.py test`` does - my thought was that this could be a new setup.py command, allowing people to wipe their current config and start over if they provide the appropriate option, but otherwise only creating a new one if it isn't there already. Then we could set it up so that the ``setup.py install`` and ``setup.py develop`` commands run that command (in the non-overwriting form) when they finish everything else.
<author>eteq</author> | |
@astrofrog,@iguananaut - you two both had a variety of comments on the last pull request, so I'll wait for you two to either give feedback or say it looks good before merging anything. | |
@hamogu, @phn - you may also be interested, as you both had a few comments in the old pull request. | |
<author>eteq</author> | |
These are notes mostly for myself, as I will make issues for each of the not-yet-completed items after this pull request is done:
* Provide some sort of progress feedback when downloading remote data | |
* implement data.astropy.org server | |
* ?move the function that locates its caller's module to utils?
* ?unified file-locking scheme? | |
<author>astrofrog</author> | |
I haven't read everything in detail, but it seems fine to me. In any case, I'm sure we will need to tweak things once we start actually making use of the config utilities. | |
<author>mdboom</author> | |
@eteq: As for writing the defaults -- I think I wasn't clear. `setup.py` can not be responsible for writing the file to the user's home directory. The person running `setup.py` may be a sysadmin installing on a shared server, which is then run by users in multiple locations. These users probably don't have access to `setup.py` or at best won't know where to look for it. | |
What I meant to say is that `setup.py build` (why make it an additional step?) would create the template file, and install that along with astropy. Some mechanism on importing astropy *once installed* would check the user's home directory, and if finding no config file, copy the template there. We can also provide an easy-to-find function to reset all configuration files, too. | |
A simpler solution would be to not generate and install a template file, but simply generate it when needed. This has the downside that all of astropy needs to be imported upon first usage -- maybe not a big deal at present, though. | |
<author>mdboom</author> | |
Looks good to me. Once this is merged, there are some places I'd like to start using it :) | |
<author>eteq</author> | |
@mdboom - ah, I get what you mean now. Given the range of options there, I'll make sure to put it up as an issue and there can be a later pull request to deal with how to actually organize the generation. | |
<author>embray</author> | |
The only problem I see with this is that autogenerating a config file means every module has to be imported once to make sure all config options are found. I don't think there's a way around that though. Honestly, I'm not against including a template config file (or files) in the source code--if a new option is added it would just have to be added to the template. There could even be code to generate it. | |
On the other hand, as long as the generation only has to happen once that's fine too. There should be a function (if there isn't already) to update a user's config file with new options while preserving existing settings. | |
<author>eteq</author> | |
All the remaining changes seem to be targets for later pull requests, so I'll merge this now. Thanks for all the helpful feedback! | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Fix how unique names are handled | |
<author>mdboom</author> | |
Resolves a number of issues handling FIELD names and IDs. If a name is the same as the ID (which is also the case when either one is not provided), only that single string will be used as a column name on the Numpy side. This removes the need to generate bogus-yet-unique column titles just for the sake of Numpy. | |
This fixes an issue reported by @astrofrog. This involves a pretty esoteric part of the VOTable spec -- but it is all a result of the spec not requiring unique IDs for fields. | |
<author>eteq</author> | |
Looks fine to me - @astrofrog, you might want to look over this given that you apparently noticed it, but feel free to merge as far as I'm concerned. | |
<author>astrofrog</author> | |
This looks good, and works for me! I will let @mdboom merge. | |
<author>astrofrog</author> | |
@mdboom - would it be possible to backport this fix to the ``vo`` module, so that I can tell people using ATpy to update to the latest vo svn version? (I don't want to start recommending astropy until we have a preliminary release). | |
<author>mdboom</author> | |
Sure. The fix can be backported. It also looks like atpy might be doing the wrong thing in `votable.py`. It should always use ID rather than name to address columns. ID is always guaranteed to exist and be unique (by the votable code, not by the votable spec). name, given how it's defined in the spec, is not. | |
<author>astrofrog</author> | |
Thanks for spotting that - I changed to using ID and everything works fine now, so no real need to backport this. | |
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
Updated the packaged py.test to the latest release version 2.2. | |
<author>jiffyclub</author> | |
See #72. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Check that py.test 2.2.0 or later is being used for external installation | |
<author>astrofrog</author> | |
This implements something similar to what Mark suggested on the astropy-dev mailing list. It's just a suggestion, but at least we now have some code to discuss. This would mean that we can safely make use of modern features of py.test, and it only affects people with old system installations. One possible modification would be to raise an ImportError, which would then cause the built-in version to be imported. The only thing I'm not sure about is what happens if someone runs the tests with py.test 2.1 on the command line, and the bundled one gets imported in tests/helper.py...
<author>eteq</author> | |
I strongly favor the ImportError scheme - it should definitely fall back on the bundled version if the user has too old a version of py.test to run the tests. But it should also print a warning message. So I'd say something like:
``` | |
from warnings import warn

class VersionWarning(Warning):
    pass

try:
    if <wrong version>:
        msg = "py.test 2.2.0 or later is required"
        warn(msg, VersionWarning)
        raise ImportError(msg)
except ImportError:
    ...
``` | |
And then, as you say, the ImportError will be immediately caught, but a user can put in a warning filter if they really want to catch this and have it crap out. | |
On a related point, I think it's not a good idea to have a VersionError like that inside the try block - it should be visible to the outside world so someone else can make use of it in a try...except block if they want. | |
<author>eteq</author> | |
I issued a pull request astrofrog/astropy#8 implementing what I've described here... I also ran it with py.test version 2.1 and everything worked... but which test is it that should fail if I'm using version 2.1 instead of 2.2? Or are there no tests like that yet? (If not, maybe you should add one to tests/tests in this pull request so we can check it here?) | |
<author>jiffyclub</author> | |
As Mark mentioned on the mailing list, we can just add `minversion = 2.2` to `setup.cfg` and py.test will perform this check for us. See http://pytest.org/latest/customize.html. | |
We may still want this pull request to switch to the built-in version, but regardless putting the `minversion` in `setup.cfg` would catch and report the problem if the user is running py.test directly or if for some reason this code fails. I'll open up a new pull request with the `minversion` added to `setup.cfg`. | |
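For reference, the sort of entry being described would look something like this in `setup.cfg` (the `[pytest]` section name is the one the py.test docs of the time used):
```
[pytest]
minversion = 2.2
```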
<author>eteq</author> | |
@astrofrog, can you merge my pull request to this? It's sounding like this is a better approach than minversion (at least for now). I think this is probably ready to go pending that pull request. | |
<author>astrofrog</author> | |
@eteq: done - had to rebase and resolve conflicts | |
<author>eteq</author> | |
I was about to merge this, but when I tried it, it seems that if I do ``py.test`` directly at the command line with 2.1 installed, there's no indication that the version is out-of-date. Is this the behavior we want? We could add a plugin that just checks for the version and complains if it's not 2.2... | |
<author>jiffyclub</author> | |
So I've been trying to do this and what I've actually found is that this check of `pytest.__version__` followed by raising an ImportError and importing from `extern_pytest` isn't actually giving us the new version. Add this test to `tests/tests/test_run_tests.py` or a new test file and it will error if you have py.test 2.1 installed, even though the version warning is raised: | |
```python | |
# this test will cause an error if the py.test running it is | |
# less than version 2.2 | |
# copied from http://pytest.org/latest/example/parametrize.html | |
@helper.pytest.mark.parametrize(("input", "expected"), [
    ("3+5", 8),
    ("2+4", 6),
    ("6*7", 42)])
def test_eval(input, expected):
    assert eval(input) == expected
``` | |
Uninstall your py.test 2.1 and this will work fine. | |
This is also why the `minversion` business wasn't working. It was actually the old version running. I experimented with defaulting straight to the bundled py.test and if I do that putting `minversion = 2.2` in `setup.cfg` works just fine, even if I have py.test 2.1 installed. | |
<author>jiffyclub</author> | |
I've also noticed that on my system py.test prints the message `platform darwin -- Python 2.7.2 -- pytest-2.1.3` even after astropy has issued the warning that it's switching to the bundled pytest. So clearly the switch isn't quite happening, even though the right code is clearly being executed. I played a lot with things like `del pytest` and `del sys.modules['pytest']` to clear the system pytest before importing the bundled pytest but nothing worked. | |
I've coded up my preferred alternative in #110. | |
<author>eteq</author> | |
Closing this because it is solved by #110 | |
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
Put py.test plugins in root level conftest.py file | |
<author>jiffyclub</author> | |
One way to have the same py.test behavior for `astropy.test` and `py.test` at the command line is to place our plugins in a `conftest.py` file. py.test searches for a `conftest.py` in its invoking directory or anywhere directly above that directory and uses any hooks it finds in the file. This works both when calling `py.test` directly at the command line or when invoking it using `pytest.main` as the test helper does. This pull request is a working sample of how that would look. | |
Looking at the py.test documentation (http://pytest.org/latest/plugins.html#conftest-py-local-per-directory-plugins) it even looks like py.test will discover more than one `conftest.py` file and respect them all, though who knows in what order. This means it would be possible to have a local level `conftest.py` that defines some local test setup hooks and the one at the root level should still be found and used. I haven't tested this, though. | |
<author>eteq</author> | |
Oh, that's pretty neat... My only concern is that this leads to a moderately confusing layout because test stuff is in two different places. So maybe the better thing to do is to leave ``astropy/tests/helper.py`` as is, and have ``astropy/conftest.py`` simply say something like ``from .tests.helper import pytest_addoption, pytest_runtest_setup`` ? | |
<author>eteq</author> | |
To clarify, the main reason why layout matters here is that affiliated packages will probably want to use the astropy test utilities. | |
<author>embray</author> | |
+1 to keeping stuff in tests.helper; but also +1 for reading the py.test docs and figuring out that this is possible! That would solve most of the problems (though I'm still for telling *users* to not use py.test). | |
<author>eteq</author> | |
@iguananaut - agreed we want to recommend users use `astropy.test()` (or maybe ``setup.py test`` in some cases) - I'm thinking about developers/CI services or similar here. | |
<author>jiffyclub</author> | |
Just importing the hooks into conftest.py also seems to work. | |
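A minimal sketch of what that re-exporting `conftest.py` would look like (hook names taken from the suggestion above):
```python
# conftest.py -- py.test finds this file and picks up any hooks defined
# (or simply imported) in it, so the real implementations can stay in
# astropy/tests/helper.py.
from astropy.tests.helper import pytest_addoption, pytest_runtest_setup
```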
<author>eteq</author> | |
Hmm... actually, maybe this should be moved all the way to the package root... I was testing it just now, and the `--remotedata` option appears if I run py.test from inside the astropy package directory, but *not* if I run it from the root of the source distribution. If I move the file to the root (and change it from ``from .tests.helper ...`` to ``from astropy.tests.helper ...``), then py.test run from the root of the package makes use of all the plugins (which I think is what we want, right?).
<author>eteq</author> | |
Oh and one other thing I noticed: if I use the py.test script, the remotedata switch is ``--remotedata``, while for ``python setup.py test``, it's ``--remote-data`` (or ``-R``). I mildly prefer the latter, but either way is fine as long as they are both consistent with each other. | |
<author>jiffyclub</author> | |
Ah yeah, it hadn't occurred to me to put it at the very root (since all the tests are under astropy, right?), but either way should work fine.
<author>jiffyclub</author> | |
I am getting a py.test error when I try to change the option from `--remotedata` to `--remote-data` so I will have to investigate that. | |
<author>jiffyclub</author> | |
Figured out the error, so this is good to go. These are all minor changes, so I can split this into multiple pull requests if that'd be preferable.
<author>eteq</author> | |
Nah, these are all related, so I'll just go ahead and merge them all in this request. Thanks! | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
VO compatibility package does not include sub-modules | |
<author>astrofrog</author> | |
The VO compatibility package does not include all the sub-modules: | |
    >>> from vo.table import parse
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: No module named table
<author>mdboom</author> | |
I've attached what seems to be a solution. This is all new "packaging-fu" to me, so make sure I didn't miss a simpler solution. | |
I wanted to add a test; however, any sort of test for this would fail in the context of `python setup.py test`, because it requires installation (or at least testing against the `build` directory).
<author>mdboom</author> | |
@iguananaut, @eteq: Is this the best way to do this? Seems a little "magical" to me. | |
<author>eteq</author> | |
Yeah, for some of this packaging/setup stuff, I think there's no obvious way to implement tests with the unittest frameworks, unfortunately (there's some hints at http://pytest.org/latest/monkeypatch.html but I don't think any of those are of use here). | |
I think your solution works here, but one oddity I've noticed: if I do ``setup.py install`` it all works fine, but ``setup.py develop`` results in ``ImportError: No module named vo``. This isn't a big deal to me, but it's definitely a thing to be aware of - I guess that means ``develop`` mode doesn't necessarily do the right thing (or anything) with data files.
<author>eteq</author> | |
There is a completely different approach available, now that I think about it: we could re-arrange things in setup.py so that you can add in a "vo" to the `packages` argument of `setup()`, and use the `package_dir` argument to map that to the vo/compat dir. That's probably more standard, in some sense, but it would also probably overwrite any version of `vo` the user may already have installed... do we want that? | |
It would also require adding a few more tricks to setup.py and setup_helpers.py to gather up these special cases... It's not too hard, but I think it would be wise not to mention it in the docs if we do, as we want to generally discourage this unless there's some very good reason.
<author>mdboom</author> | |
One way to make this testable is to test built code rather than testing in place -- this is what the python3 code path of `setup.py test` already does. It's always seemed a bit problematic to me to test something other than the build products, but maybe that's just from years of writing C. | |
@eteq: as you point out, there's probably other ways to install this stuff by refactoring setup.py a bit, but either way we're overwriting any version of vo the user may have already installed. The current approach (using data files) wouldn't overwrite any eggs, but it still shadows any vo package the user may have. That does raise the deeper question -- do we want to avoid overwriting already installed legacy packages? I would say no -- we should instead strive to provide a newer, possibly bugfixed, hopefully backward-compatible layer. But it's maybe worth hashing that out.
Also -- TODO: the same approach as this for wcs/pywcs. | |
<author>eteq</author> | |
Yeah, I'm not sure what the "right" thing is to do here compatibility-wise. I think probably this is really only going to be a concern with `vo`,`pywcs`,and `pyfits` ... so if you and @iguananaut decide on which one you like better (overwriting the old one or not), I'm fine either way. Generally my philosophy is you shouldn't force unexpected changes on other people, but I can see it both ways here... | |
<author>mdboom</author> | |
Here's a totally different way to handle it. The creation of legacy "shims" is now handled by a function `add_legacy_alias` in `setup_helpers.py`.
At build time, it tries to import the "legacy" namespace; if it finds it and it doesn't look like the astropy shim [1] (and is therefore presumably the "real deal"), it prints a warning and doesn't build or install the shim. The warning reads:
``` | |
------------------------------------------------------------ | |
The legacy module 'pywcs' was found. | |
To install astropy's compatibility layer for this instead, | |
uninstall 'pywcs' and then reinstall astropy. | |
------------------------------------------------------------ | |
``` | |
[1] The astropy alias shims define the attribute "_is_astropy_legacy_alias". | |
This is sufficiently magical that it probably deserves some scrutiny. | |
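Roughly, the check being described amounts to something like this (a sketch, not the actual `setup_helpers.py` code; only the `_is_astropy_legacy_alias` attribute comes from the description above):
```python
def should_install_legacy_shim(name):
    """Return True if it is safe to install astropy's shim for `name`."""
    try:
        legacy = __import__(name)
    except ImportError:
        # nothing installed under that name, so the shim won't shadow anything
        return True
    # astropy's own shims mark themselves; a real legacy package won't
    return getattr(legacy, '_is_astropy_legacy_alias', False)
```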
<author>astrofrog</author> | |
I was actually going to raise the issue that on my system, I don't want the astropy aliases set up, as I'd rather just use the last stable version of the packages in question, so I like the approach that @mdboom is suggesting. | |
<author>eteq</author> | |
+1 for this approach, as well. Sorry if this ends up spamming people's notifications - for some reason the in-line comments I made before got randomly offset to the wrong part of the diff, and I had to re-submit them in the correct place. | |
One additional thought: the `get_data_files` function seems dangerous and unnecessary at face value because it condones essentially putting files anywhere you want (and I can almost guarantee people will confuse it with `get_package_data`). I had originally planned on removing it in the setup re-work I did, but I realized it was necessary for this legacy shimming stuff. Given that it's not needed for that now, I would advocate removing it completely (unless there's some need for it that we foresee that's not apparent to me right now).
<author>embray</author> | |
Kind of agree on data_files. The Python packaging gestapo consider data_files evil anyways. | |
In fact, packaging/distutils2 is going to do away with both data_files and package_data, and replace them with a concept called "resources". The resources system allows pretty fine-grained control over where different types of resource files are installed in different installation contexts/platforms. It also provides an API for accessing those data files from within the code regardless of where they're physically located. It works pretty well...but obviously isn't too relevant at the moment. | |
One point about data_files, however, is that it's the only way to organize resource files outside of the python package itself. pysynphot uses this fairly legitimately--at its top level it has a 'data' directory that has 'generic' and 'wavecat' subdirectories, making it easier to organize the data. However, the data needs to be installed in a flat layout that's compatible with SYNPHOT.
However, I have been working off and on on a way that pysynphot can work around this without having to rely on data_files (and so that it can be installed in 'develop' mode) so once I get that fully working I hope this will become a moot point... | |
<author>embray</author> | |
Also, I'm not making any comment about whether or not pysynphot, or parts of it, will be part of Astropy. Just pointing to it as an example of why one might want to use data_files instead of package_data. | |
<author>mdboom</author> | |
I'm fine with removing `data_files`, as long as a way is found to install the C header files of `astropy.wcs`. STScI has code that links against pywcs at the C level and needs access to these header files. | |
<author>eteq</author> | |
Oh, hadn't thought of that... where are you putting the headers now? We could replace `get_data_files` with something like `get_headers`... and just implement that (which is fairly sensible). | |
<author>mdboom</author> | |
It adds this to data_files: | |
``` | |
('astropy/wcs/include', glob.glob('astropy/wcs/src/*.h')) | |
``` | |
In other words, they get installed to `astropy/wcs/include`, but they live in `astropy/wcs/src` in the source tree. It's possible, I suppose to move them into `astropy/wcs/include` in the source tree and then it could probably be handled with `package_data`. I'll submit a pull request for this. | |
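In `package_data` terms that would look roughly like this (a sketch, assuming the headers are moved under `astropy/wcs/include` in the source tree):
```python
# passed to setup(); the headers are then installed inside the package itself
package_data = {'astropy.wcs': ['include/*.h']}
```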
<author>mdboom</author> | |
Ah -- I'll do this as part of this pull request -- I can't remove `get_data_files` without `get_legacy_alias` and friends. | |
<author>mdboom</author> | |
I think this is probably ready to merge now. What say the rest of you? | |
<author>astrofrog</author> | |
This works for me (it skips the installation of the vo and pywcs because I have them already installed). | |
<author>eteq</author> | |
Perhaps you should just put a sentence or two in the documentation to explain how to get C-headers to be installed? | |
Other than that, looks great to me - feel free to merge! | |
<author>mdboom</author> | |
@eteq: Ok. Added documentation paragraph. Merging. | |
</issue> | |
<issue> | |
<author>fgrollier</author> | |
A set of minor tweaks and fixes for constant.py | |
<author>fgrollier</author> | |
This request introduces some minor (mostly cosmetic) changes to the Constant class. | |
But, most importantly, this is my first attempt on the full git workflow, and is more a request for comments if I did anything wrong or whatever. So comments are highly welcomed :) | |
<author>fgrollier</author> | |
As a side note, the float type does not have a `__init__` method, so that `float.__init__()` in fact calls `object.__init__()`, which does nothing. This means that the line `float.__init__(self)` is useless and may be safely removed. But there may be arguments against that (although I can't think of one just now), so I didn't include this in the pull request. | |
<author>eteq</author> | |
Thanks! Two comments: | |
1. I see you did each of these changes as a separate commit - commits that specific are definitely good, so I don't want to discourage you from doing this, but it's not the case that you *have* to be that fine-grained with the commits - that is, if you'd done this all in one commit and just said something like "minor fixes in constant.py", that would still have been ok. To reiterate, though, what you've done here is the better thing to do - it's just that we're willing to be a bit more lax and not always *require* this many commits :)
2. This is not something you changed - I only noticed it now because I was looking over the diffs. But given that you have this pull request going anyway: all the places that say something like ``float.__init__(...)`` should be changed to ``super(float,self).__init__(...)`` and similarly for ``float.__repr__``... something similar may be possible for ``__new__`` as well, but I'm not quite as sure given that it's a class method... | |
<author>eteq</author> | |
Oh, and with the super in place, it is still a good idea to leave the ``__init__`` in, because then more complicated class hierarchies can still work (although that probably won't ever happen in this case).
<author>fgrollier</author> | |
@eteq: thanks for commenting. And yes, using `super()` is a good idea, I committed a new item for this. | |
<author>embray</author> | |
For `__new__` the correct super-based idiom is `super(Foo, cls).__new__(cls, ...)`; I've always found that a little confusing, but it makes sense if you think about it.
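For illustration, a minimal sketch of the idiom on a float subclass (the `Constant` signature here is made up, not the actual astropy class):
```python
class Constant(float):
    def __new__(cls, value, name):
        # __new__ receives the class and must pass it on explicitly
        return super(Constant, cls).__new__(cls, value)

    def __init__(self, value, name):
        # float has no __init__ of its own, so this ends up in object.__init__
        super(Constant, self).__init__()
        self.name = name
```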
<author>eteq</author> | |
Huh... I don't quite follow why that works for ``__new__``... I was thinking ``super(Constant, cls).__new__(...)`` was the right thing... but that doesn't work, and as @fgrollier is showing in this patch, what you said here is definitely the right thing.
At any rate, the pull requests looks good right now, so I'm merging it. | |
<author>embray</author> | |
@eteq If you're curious I think I can explain, but I won't clutter the comments here. | |
</issue> | |
<issue> | |
<author>cdeil</author> | |
Docstrings must be at the top to be picked up by help() | |
<author>cdeil</author> | |
```python | |
>>> import astropy | |
>>> help(astropy) | |
>>> astropy? # in ipython | |
``` | |
includes the license info in the name and doesn't pick up the docstring. | |
The same problem is e.g. present in ```astropy.config```, whereas e.g. ```astropy.wcs``` or ```wcs.constants``` work fine. | |
I didn't check the other modules. | |
The solution is to always put the docstring at the top right after the license info. | |
<author>eteq</author> | |
Thanks - I noticed this, too when I was working on the config package pull request (#73), and meant to fix it in master, but apparently did it in the pull request instead. As far as I can tell I fixed all of these, so I'm closing this issue, but if I missed any, let me know by re-opening this one or starting a new pull request, and I can fix it directly in master. | |
<author>mdboom</author> | |
It would be nice to write a check for this along with our PEP8 checking. Something along the lines of "import every module and make sure its docstring is not zero-length". I'll look into it. | |
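Something along those lines could be as simple as this sketch (not actual astropy test code; it assumes every sub-module is importable without side effects):
```python
import importlib
import pkgutil

import astropy

def test_module_docstrings():
    # walk every astropy sub-module and check that it carries a docstring
    for importer, name, ispkg in pkgutil.walk_packages(astropy.__path__,
                                                       prefix='astropy.'):
        module = importlib.import_module(name)
        assert module.__doc__, "missing docstring in {0}".format(name)
```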
</issue> | |
<issue> | |
<author>cdeil</author> | |
Read the Docs search not working? | |
<author>cdeil</author> | |
The Read the Docs search field in the sidebar does seem pretty useless, e.g. there are no results for 'astropy' or 'wcs': | |
http://readthedocs.org/docs/astropy/en/latest | |
It is possible to search the docs sphinx-style by explicitly going to | |
http://readthedocs.org/docs/astropy/en/latest/search.html | |
but I think it would be nice (and most users would expect) to have a search field with this functionality in the sidebar, as is the case with most other sphinx-generated documentation outside Read the Docs.
Note that this is not an astropy issue, I looked at a few other projects and there it is the same. | |
I think this issue is unrelated, but I'm not sure: | |
https://github.com/rtfd/readthedocs.org/issues/138 | |
If someone else thinks it would be nice to have the usual sphinx search for the project in the sidebar I can investigate further if this is possible with Read the Docs or make a feature request with them. | |
<author>eteq</author> | |
Huh... yeah, this seems to be a RTD problem, so it'd be great if you issued a bug report there and passed it along here so we can keep an eye on it. There may be some workaround, too - we'll leave this bug report open until either we figure out a workaround or RTD fixes it on their end.
(Note that the IRC room #readthedocs seems to have some of their devs on at least sometimes... you might ask there, too) | |
<author>cdeil</author> | |
See https://github.com/rtfd/readthedocs.org/issues/141 | |
<author>astrofrog</author> | |
It looks like it's working now, so closing this issue. Feel free to re-open if you are still having issues. | |
</issue> | |
<issue> | |
<author>cdeil</author> | |
astropy.org shows outdated sphinx docs | |
<author>cdeil</author> | |
At the moment astropy.org shows an old version of the sphinx docs. | |
Would it be helpful if I made a very small page that simply points to the up-to-date Read the Docs page and Github, using the IPython layout?
Or is someone already working on a web page for astropy.org? | |
http://ipython.org/ | |
https://github.com/ipython/ipython-website | |
<author>eteq</author> | |
Yep, the plan is to switch to a landing page for astropy.org and have it point to the astropy docs, just like ipython or sphinx does - I was going to do this when I get around to it, but feel free to issue a pull request to the github project that actually hosts the web site: https://github.com/astropy/astropy.github.com - then it will automatically show up and be the right domain and everything. | |
I'll leave this issue open until we've done this, as this was definitely in the plans. | |
<author>astrofrog</author> | |
This is now done! | |
</issue> | |
<issue> | |
<author>cdeil</author> | |
wcs.WCS.wcs.name is not set | |
<author>cdeil</author> | |
I tried the two examples here: | |
http://readthedocs.org/docs/astropy/en/latest/wcs/examples.html | |
In both cases I get an empty string for the name: | |
``` | |
# Print out the "name" of the WCS, as defined in the FITS header | |
print w.wcs.name | |
``` | |
Is that normal (then why bother printing it in these examples) or a bug? | |
All astropy tests pass successfully. | |
<author>cdeil</author> | |
The second example seems to produce a FITS header with non-standard content, because if I run the first example on the file, I get this warning: | |
```python | |
/Users/deil/Library/Python/2.7/lib/python/site-packages/astropy-0.0dev_r346-py2.7-macosx-10.7-x86_64.egg/astropy/wcs/wcs.py:279: FITSFixedWarning: 'datfix' made the change 'Success'. This FITS header contains non-standard content. | |
FITSFixedWarning) | |
``` | |
<author>mdboom</author> | |
Indeed -- this is a terrible example, and is the result of copy-and-paste from the other example and drifting of the code over time. I'll replace this with something better. | |
As for your second comment -- I can't reproduce this. Can you send me the FITS file produced so I can compare it against mine? | |
<author>cdeil</author> | |
Sorry, I can't reproduce the FITS warning now. | |
Don't know what happened before; I must have modified the examples somehow.
<author>mdboom</author> | |
No problem. Closing this issue, then. | |
</issue> | |
<issue> | |
<author>phn</author> | |
Add astropy version and some other info to py.test output. | |
<author>phn</author> | |
The code is for adding astropy version and other information to the py.test output header. | |
For example: | |
platform linux2 -- Python 2.6.6 -- pytest-2.2.0 | |
Testing Astropy version 0.0dev-r353. | |
Running tests in: astropy/constants. | |
Using astropy options: remote_data. | |
collected 8 items | |
Information added: version, package being tested, "special" astropy options used. The last is currently "remote_data" and "pep8". "pep8" is a "special option" since it skips all the others. | |
This is accomplished using ``pytest_report_header``. | |
Perhaps there are other built-in ways of determining the astropy version and other information? I thought I'd submit this anyway.
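For reference, the hook itself is tiny; a rough sketch (not necessarily the exact code in this pull request):
```python
from astropy import __version__

def pytest_report_header(config):
    # py.test prepends whatever this hook returns to its output header
    return "Testing Astropy version {0}.".format(__version__)
```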
<author>eteq</author> | |
Thanks - this is a good idea! Other than the small stylistic suggestion I made above, I'm happy with it. | |
<author>embray</author> | |
As for the imports thing, I also remember a GvR proclamation specifically on this, but I don't remember exactly where. However, the relevant lines from PEP8 are: | |
" The preferred way of wrapping long lines is by using Python's implied line | |
continuation inside parentheses, brackets and braces. Long lines can be | |
broken over multiple lines by wrapping expressions in parentheses. These | |
should be used in preference to using a backslash for line continuation." | |
This applies equally to import statements. As for why not to have multiple "from foo import ..." statements for the same module, it's just to avoid multiple invocations of `__import__`. Even if the module is already imported there's a certain overhead to multiple `__import__` calls...albeit not much...
<author>eteq</author> | |
Ok, well, I can't find the thing saying that parentheses are to be avoided for from ... import ... statements, so maybe I imagined it (or it wasn't BDFL-condoned). Parentheses it is, then!
</issue> | |
<issue> | |
<author>embray</author> | |
More setup.py cleanup and Cython fixes | |
<author>embray</author> | |
This moves still more functionality out of setup.py and into setup_helpers.py; this is all stuff that was being repetitively defined in the affiliated package template. The goal here is to bash down setup.py so that most of what's in it is just metadata. | |
This branch actually started out to fix some problems I was having building packages that use Cython on a virtualenv where I don't have Cython installed. This pull request contains a fix related to that as well. I've also created an update to the template package that makes use of these changes which I will submit shortly. | |
<author>astrofrog</author> | |
The ``setup.py`` file looks much cleaner this way. I was also starting to worry that there was too much machinery in the setup.py file, and wasn't happy with the duplication in all affiliated packages. As far as I can tell, this is fine to merge, but maybe @eteq or @mdboom could also have a quick look? | |
<author>eteq</author> | |
Aside from my one suggestion, I think this looks good... except that when I merged in the config pull request (#73), it appears to conflict with this one. I *think* the only change #73 introduced of any relevance was deleting a single comment in `setup.py` that was rendered irrelevant, but @iguananaut, it'd probably be good if you rebased against the current master just to be sure.
<author>eteq</author> | |
@iguananaut - you asked in astropy/package-template#1 what `skip_cython` is for - if you didn't figure it out: | |
This was motivated by @phn mentioning that the code that automatically locates ``*.pyx`` files and adds them as extensions means there's no way to disable compiling a Cython file once it's in the source code. So the idea is that if you make a package with a .pyx file that you *don't* want to be compiled, you specify an extension that includes that file, but has the name "skip_cython" - that tells setup.py that it should not include it as an extension. This keeps anyone from using the package name "skip_cython", but I'm pretty sure we can live with that :)
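As a sketch of that convention (the module path and the way the extension list is declared here are made up for illustration, not astropy's actual hooks):
```python
from distutils.core import Extension

# Giving the extension the special name "skip_cython" signals the automatic
# Cython extension generator to leave this .pyx file uncompiled.
extensions = [Extension('skip_cython',
                        ['astropy/somepkg/optional_speedups.pyx'])]
```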
<author>embray</author> | |
I guess I'm still not clear on why you would want to skip compiling the file in general... but I haven't used Cython in any real projects (only a few simple single module projects). I'm sure there's a good reason though. In any case this patch should preserve that functionality. I understood _how_ it works. Just not what it's for. | |
<author>embray</author> | |
Added some more documentation and rebased on master, so I'm going to go ahead and merge this. | |
<author>phn</author> | |
Hello, | |
@iguananaut | |
I mentioned the "skipping Cython" feature to @eteq, so as to give more flexibility to the developers of affiliated packages. | |
Say I want to try building a package without compiling a particular Cython module/package. The current scheme is somewhat easier than having to remove all the Cython files from the package, test the build, and then add them back. The scheme of compiling all Cython files by default seemed too restrictive.
This should not create any confusion for the end user, since we won't be shipping, or at least not compiling, any Cython code in the release version downloaded by an end user.
Prasanth | |
<author>embray</author> | |
That's fine, though wouldn't it make sense to still at least include the pyx files in the source package, even if they won't be compiled? | |
Also, why this and not, say, an option to build_ext? | |
<author>phn</author> | |
One reason for not including pyx files is to avoid a Cython dependency, and to prevent any errors caused by using an incorrect version of Cython. The C files need to be identical to those in the code repository.
If users decide to compile pyx files, then we would want them to use the same Cython version that the developer(s) use. If they want to look at pyx source, it will be best if they download/clone the code repository, and then use the specified Cython version to compile pyx files. | |
@eteq came up with the current scheme. My guess is that a build_ext option will require more work on the part of the developer? But it is also a possibility. | |
<author>eteq</author> | |
@phn @iguananaut - the .pyx files should be included with the source distribution... the intent was that if it's a released version (i.e. `release` in version.py is True), both .c and .pyx files are included, but ``setup.py`` is only given the .c files that are to be generated as part of the release process. If it's *not* a release, the .pyx files are used if Cython is present, .c files are used if not, and if .c files are missing, the extension is skipped. However, something seems to be messed up in the setup scripts, and I'm not sure it's actually doing that... I suspect someone got confused in one of the times we were moving functionality into setup_helpers... | |
As for the `skip_cython` option, at least as I intended it, it is *not* supposed to be a user option, rather it's for the person who writes the package to control when .pyx files get compiled or not. When someone implements a package, they can provide the 'skip_cython' bit if they have a .pyx file that they have in the source distribution but for whatever reason don't want to be compiled. I didn't really have a specific use case in mind, but it just seemed wrong to me to have the automatic Cython extension generator and give a package writer no way to prevent it from running. | |
<author>eteq</author> | |
Oh, and I do think it's probably wise to give users some level of control over what gets compiled. But I think it makes sense to do this at the same time as we are thinking about how to integrate and register use of external libraries, which we haven't gotten to yet.
<author>embray</author> | |
If I were a developer I wouldn't want to edit my source code and change the name of some extension to "skip_cython" to cause it to not build. It should simply be a build option. To make it easier for users the default should probably be to *not* compile Cython files, but it should be easy to turn that on or off as a build option. | |
We also would still want to make the option available--if we do include the pyx files (which I think we should) a user might, for example, want to try building them with their own Cython version. They shouldn't have to edit any files to make that possible. | |
<author>eteq</author> | |
I see your point re: build option vs skip_cython, but I've never done that before, and given that there were no obvious use cases, this seemed like the simplest solution. I guess once we have a way to toggle extensions on or off, we should use that and remove this scheme completely. As long as the developer gets to choose the "default" (which extensions are installed), it should be fine. | |
Oh, and if you want to take a crack at this @iguananaut, feel free - as I said it's nominally tied in with #63, but this could be implemented *first* and have that built over it. I know that one's assigned to me, but my timescales for getting tasks done are closely tied to the number of postdoc apps due in the near future... | |
Regardless, I'm not sure I agree that the default should be to *not* compile Cython. The point is that the bar for using Cython should be very low, because I certainly foresee cases where people who are not terribly well versed in python packaging/development will want to do some C speedups in astronomy code... so we want that to be as simple as possible.
</issue> | |
<issue> | |
<author>embray</author> | |
Fixes get_git_devstr and generate_version_py to work in other packages... | |
<author>embray</author> | |
... (namely, affiliated packages). The underscore was dropped from their names due to the fact that, while in a sense internal implementation details, we still expect other third-party packages to use them. This came about as a result of the discussion in astropy/package-template#1 | |
<author>astrofrog</author> | |
This looks good, but a minor point: is ``dir`` a sensible keyword/variable name, since it shadows the built-in function? (I'm sure this won't cause issues, just wondering about style guidelines).
<author>astrofrog</author> | |
As far as I can tell, this is ready to merge, but maybe @eteq or @mdboom could have a quick look? | |
<author>embray</author> | |
Well, dir was already there to start with; not sure who originally put it there. I don't really care too much about the fact that it's a builtin--it's not like the `dir()` builtin is going to be used anywhere in that namespace. But I think I will change it to just path and allow it to be a file path as well. | |
<author>eteq</author> | |
Hmm... I'm getting the following traceback when I use this branch and do ``python setup.py --help`` (after the help string, but before the usage string that should appear): | |
``` | |
Traceback (most recent call last):
  File "setup.py", line 33, in <module>
    setup_helpers.get_debug_option())
  File "/home/erik/src/astropy/astropy/version_helper.py", line 176, in generate_version_py
    current_release = version_module.release
AttributeError: 'module' object has no attribute 'release'
``` | |
(I've found out the hard way that ``python setup.py --help`` and ``python setup.py --help-commands`` are good ways to diagnose odd setup problems...) | |
<author>eteq</author> | |
Another thing in the code... the `_frozen_version_py_template` seems to include mention of astropy too, which we may as well fix - the relevant bit is: | |
``` | |
# Autogenerated by astropy's setup.py on {timestamp} | |
from astropy.version_helper import _update_git_devstr | |
version = _update_git_devstr({verstr!r}) | |
major = {major} | |
minor = {minor} | |
bugfix = {bugfix} | |
release = {rel} | |
debug = {debug} | |
``` | |
The first mention is cosmetic (although it would be neat if it said just "Astropy" if the package is astropy and "Astropy-affiliated package foo" if it's package foo). The second is an import that presumably needs to stay astropy... *but* it made me notice something else: `_update_git_devstr` needs to get the right directory for it to work, so its path option needs to be updated appropriately, as well.
<author>eteq</author> | |
Two other tiny things: | |
@astrofrog - The original use of `dir` was my bad - I agree it's bad form and it's good to change it here... I just sometimes forget all the builtins that I might be shadowing (I sometimes find myself trying to name variables `if` and `is` ... at least those give you syntax errors...).
@iguananaut: I'm guessing from your pull request text you don't know about a neat trick in the github comment syntax - to reference a pull request/issue in another project you can just do: ``astropy/package-template#1`` and it figures it out. e.g. astropy/package-template#1 | |
<author>embray</author> | |
It looks like I forgot to push a couple commits, because I could swear I fixed the traceback that @astrofrog mentioned as well as the version.py template. (Though I didn't modify the comment string in the version.py template, so that's a good thing to add). | |
And no, I didn't know the syntax for referencing issues in other repos--thanks! | |
<author>embray</author> | |
This should be mergeable now if everything checks out. | |
<author>astrofrog</author> | |
If I change the code in the template package to: | |
    if not release:
        version += get_git_devstr(show_warning=False, path=os.path.dirname(os.path.abspath(__file__)))
    generate_version_py(version, release, setup_helpers.get_debug_option())
I get the following traceback: | |
    Freezing version number to 0.0dev-r2/version.py
    Traceback (most recent call last):
      File "setup.py", line 39, in <module>
        generate_version_py(version, release, setup_helpers.get_debug_option())
      File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0dev_r448-py2.7-macosx-10.6-x86_64.egg/astropy/version_helper.py", line 202, in generate_version_py
        with open(version_py, 'w') as f:
    IOError: [Errno 2] No such file or directory: '0.0dev-r2/version.py'
Am I calling the updated functions wrong? | |
<author>astrofrog</author> | |
Oops, I had forgotten to pass PACKAGENAME as the first argument to ``generate_version_py``. | |
<author>embray</author> | |
Also `path=os.path.dirname(os.path.abspath(__file__))` is more than necessary. Since I updated `get_git_devstr()` it's sufficient to use just `path=__file__`. It will then handle figuring out the correct directory path. | |
<author>astrofrog</author> | |
Ok, thanks! I'll merge this now since it seems to be working fine. | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Auto-generate config files | |
<author>eteq</author> | |
The config package now interacts with packages to use a single configuration file that it updates as configuration settings are changed programmatically, but it would be good to have it automatically make a new config file populated with all of the default settings. The `config.configs._generate_all_config_items` function does this, but it needs to be hooked into `setup.py` or some other sort of interface that makes this happen in some fairly smart, automatic way.
<author>eteq</author> | |
As an implementation note for how to do this: it might make sense to do the auto-generation when the `config` directory is first created - that could then be integrated into the fix for #196. There should also be a function and/or command-line script to allow users to populate all of their configuration files (with an option to replace all values with defaults).
<author>astrofrog</author> | |
Are we going to try and get this working for 0.1? | |
<author>eteq</author> | |
I was hoping to, but I probably won't get to it until tomorrow right before release... so I can either do that (although there won't be time for a PR review), or we can push it off to 0.2. | |
<author>astrofrog</author> | |
I'm fine with delaying to 0.2, but I wonder whether we should make the configuration section less prominent, i.e. put it after tools and utils in the docs? At the moment it's pretty high up for something that most users will probably not use (same for logging actually). | |
<author>wkerzendorf</author> | |
@iguananaut: I marked the #456 PR as 0.2 milestone (tentatively). Can this issue then be closed? | |
<author>embray</author> | |
@wkerzendorf Ditto to what I wrote on #63. I need to review #456 again but I think it's nearly ready to be merged? If not long since? | |
<author>eteq</author> | |
@iguananaut @wkerzendorf - #456 was superseded by #498 (in my mind, at least). #498 is working fine for me locally on a few different configurations but is failing on travis for mysterious reasons. I think it's almost there, but I (or one of you, if you have any ideas!) need to work out the travis issues before it can be merged. Once it is, I think we can close this and #456, though.
<author>eteq</author> | |
Closed by #498 | |
</issue> | |
<issue> | |
<author>eteq</author> | |
implement data.astropy.org server with version hashes | |
<author>eteq</author> | |
The current `config.data` package expects to be looking in `data.astropy.org` to find remote data files... and such a server does not yet exist. It should carry necessary data that we don't want in the source code, and should also support specific versions of files via `data.astropy.org/hash/395dd6493cc584df1e78b474fb150840`. | |
<author>mdboom</author> | |
Just for clarity: I don't think we want to engineer this as a piece of server software. The minute you have custom server software you really limit your options -- it has to be vetted for security issues, requires more advanced (i.e. more expensive) hosting etc. I think it's going to be really important to keep all of our options open that this is able to work as a bunch of static files on a web server in a more-or-less standard configuration. | |
I think this should be architected as a client side utility that is able to mirror and modify a "directory of stuff", that knows how to create the symlink hash files. This blob of stuff could then be synchronized to a static web server by any number of means (pushing to a code repository, rsync-over-ssh, webdav etc). This way we aren't relying on any custom software or a particular way to host this stuff. | |
<author>eteq</author> | |
I agree there should definitely not be custom server-side software. What I had in mind here is that when someone wants to submit a version-specific file, they submit it to us stating "I want this to be stored version-specific". Then whoever is managing the server adds that file to the data server in `public_html/hash/395dd6493cc584df1e78b474fb150840` or whatever, and we store a file in `public_html` that keeps track of what the original file name was for each hash. (Or perhaps a very simple server-side script can be used to do this automatically as part of the submission process.)
When the submitter wants to use it in the code (presumably this is generally testing code), they use the hash based on what the `astropy.config.data.compute_hash` function gave them for that file, and then everything should work. | |
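As a sketch of that submitter-side workflow (the import path and the URL layout are taken from the description above and may well change):
```python
from astropy.config.data import compute_hash

# compute the hash locally, then reference the file by that hash in test code
file_hash = compute_hash('mydata.fits')
url = 'http://data.astropy.org/hash/' + file_hash
```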
@mdboom - I'm not sure exactly what you're proposing as a client-side utility, though... we definitely don't want the clients to have to mirror everything in the data.astropy.org site (or even data.astropy.org/hash), because plenty of stuff on there won't be needed by everyone. Or am I misunderstanding what you had in mind? | |
<author>mdboom</author> | |
Ah -- I wasn't clear. I was just suggesting that we write a tool for "whoever is managing the server" that knows how to build the directory structure correctly. "whoever is managing the server" may be multiple people, or at least change over time, so it would be nice to codify and automate as much of the process as possible. I assumed that the person managing the server would probably want to mirror all of the data, but most people (even most developers) would not want to or need to. | |
<author>eteq</author> | |
Ah, I gotcha - yes, that could work just as well - probably the best thing to do is first see what the best place is to get hosted space - I believe @perrygreenfield suggested STScI might be willing to host such a server at least for a time, although nothing was confirmed...
<author>astronomeralex</author> | |
I'm going through old issues -- @eteq @mdboom -- can this be closed? | |
<author>astrofrog</author> | |
@astronomeralex - unfortunately not, as we don't actually have a proper server yet (data.astropy.org just points to my web space!) | |
<author>astronomeralex</author> | |
@astrofrog ahh, I see. It turns out a bunch of my friends work for a web-hosting company and I could probably get us a considerable amount of space; shall I inquire?
<author>astrofrog</author> | |
http://data.astropy.org now works but we've not got the infrastructure to do the hash stuff (and not sure we've needed it yet). Not high priority so removing milestone. | |
@astronomeralex - sorry for not replying sooner! In the end we are using a server at STScI so we should be ok in terms of computational resources. | |
<author>astronomeralex</author> | |
@astrofrog -- that's fine. this dropped off my radar. glad everything worked out. | |
<author>pllim</author> | |
I think it is time for me to move http://stsdas.stsci.edu/astrolib/vo_databases/ (for `astropy.vo.client` and `astropy.vo.validator`) to a proper place in `data.astropy.org`. How do I accomplish this? | |
<author>astrofrog</author> | |
@pllim - you can add files to data.astropy.org by adding them to the following repository: | |
https://github.com/astropy/astropy-data | |
Every time a commit is made, the repo is synced to the website. | |
<author>pllim</author> | |
@astrofrog , my data files change every night. They are output files from nightly run of `vo.validator` to validate a selected list of 30 Cone Search providers. Is there no way for direct upload or something? By the time someone merges the PR, the files might be outdated already. | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Implement progress bar for config.data downloads | |
<author>eteq</author> | |
As the title says - #62 has just such a bar, but it needs to be pulled in before this can be done | |
<author>mdboom</author> | |
#62 is merged. | |
<author>mdboom</author> | |
This implements a progress bar (or spinner when the size of the download can't be determined) when downloading remote data files. A new `ProgressBarOrSpinner` class was created to handle this case. | |
<author>eteq</author> | |
Looks like just what I had in mind... | |
One thing that might be good to add in utils.console: a ConfigurationItem specifying the default color. Right now it looks like it uses the "default" terminal color, but it might be nice to have an option to override that with a color that the user can decide on separately from their terminal setup (mostly for people who don't know their terminal well enough to fiddle much with the colors).
Along the same lines, perhaps a color option for ProgressBar that the ProgressBarOrSpinner will pass through? | |
(This is all just icing, so if you don't want to bother with it for now, it's fine as is). | |
<author>eteq</author> | |
One thing to consider about this (probably after #93 is merged), is that we will want an alternative message that goes to the log that just says "download complete" or whatever, so that non-interactive application logs aren't full of entries as the progress bar/spinner updates. | |
<author>mdboom</author> | |
I'm not sure what you mean by a configuration option for default color: the "default" color in ANSI escape code nomenclature means something very specific -- specifically, the default color as set by the terminal emulator. I think that's the color we want to use when the user of the console library doesn't specify a color. | |
And sure -- the color of the ProgressBar should probably be overridable from the outside -- I'll save that for another pull request. | |
As for your concern about the logs being full of ProgressBar updates -- it already contains magic such that if it isn't writing to a tty (a log file or stream would not be a tty) it doesn't ever write out a progress bar. It would only write "Downloading foo... [Done]".
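The "magic" amounts to a tty check along these lines (a sketch, not the actual `utils.console` code):
```python
import sys

def _should_draw_bar(stream=sys.stdout):
    # only draw the interactive progress bar when attached to a real terminal;
    # log files and captured streams just get the final "Done" message
    return hasattr(stream, 'isatty') and stream.isatty()
```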
<author>eteq</author> | |
Ah, I missed that (about the not printing if its not a tty). Excellent. | |
Regarding the color business: I'm suggesting there be a configuration option available to the user that gives them the option of using a specified color instead of letting the console's 'default' determine the color... That is, if the color is set to 'default' in the Spinner/Bar, it would first check the ConfigurationItem, and if that gave a color it would use it, otherwise it would use the console's default. People using astropy might go to the astropy configuration system to alter that instead of going in to mess with their terminal settings (I'm thinking of people I know who can do crazy things with e.g. IRAF colors, but have never learned how to colorize ls output...).
I'm not particularly attached to this though, so if you'd rather not have this extra layer, feel free to just merge this as-is. | |
<author>eteq</author> | |
@mdboom - is this ready to go, or do you want to make further modifications? | |
<author>mdboom</author> | |
I think this is good to go. | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Consistent scheme for file/directory locking | |
<author>eteq</author> | |
We should have a function/class in `utils` that can be used to lock files or directories to enable thread/process-safe file access. @mdboom pointed out http://code.google.com/p/pylockfile/ as a good place to look for the way we want to do this (perhaps even just pull it into extern). | |
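For reference, pylockfile's usage pattern looks roughly like this (the path here is made up; `LockFile` is pylockfile's own class):
```python
from lockfile import LockFile  # pylockfile installs as the `lockfile` module

lock = LockFile('/tmp/astropy-data-cache')  # hypothetical path to protect
with lock:
    # only one process/thread at a time gets past this point
    pass  # the protected file access would go here
```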
Once this is done, these packages (and possibly others I'm not aware of) should be updated to use it: | |
* astropy.config.data | |
* astropy.config.configs (maybe?) | |
* astropy.vo | |
* astropy.io.fits (not yet in master) | |
<author>embray</author> | |
This would also be useful to add to pyfits/astropy.io.fits at some point or another. There's been lots of interest lately in multiprocess updates to FITS files, and currently we just say that the onus is on the user to not mess up their file writing. Of course, if they're being careful to ensure that each process is only writing to one section of the file, and that the file size isn't going to change, they don't really need file locking. But it might be nice to provide as an option... | |
<author>eteq</author> | |
Ok, I've added it to the list above (I'll edit the original post to add anything else that's suggested to use this so we know what to go in and change once it's done) | |
<author>astrofrog</author> | |
Re-assigning to 0.4.0 | |
<author>astrofrog</author> | |
Removing 1.0 milestone since there has been no work on this recently and this is not critical. | |
</issue> | |
<issue> | |
<author>cdeil</author> | |
Broken doc links | |
<author>cdeil</author> | |
There are further broken links in the developer pages because the git man pages at www.kernel.org don't exist any more (as you can see via ```cd doc; make linkcheck```). But I think this part of the documentation is copied & pasted, so it doesn't make sense to change it to point e.g. to
http://schacon.github.com/git/git.html | |
http://gitref.org/ | |
Maybe it would also be a good idea to add | |
```make linkcheck``` | |
to https://jenkins.shiningpanda.com/astropy/job/astropy-doc/ | |
PS: I know this is super-trivial, I just wanted to try the dev instructions once and make my first GitHub pull request. :-) | |
<author>eteq</author> | |
This looks fine to me, but I'm going to leave this to @mdboom as he's the one who set up shiningpanda (so he'll see your suggestion). | |
The stuff pointing to kernel.org could definitely be changed to gitref or whatever, too... yeah, it's based on matthew-brett/gitwash, but it will be diverging more and more as we adopt a workflow. The whole doc organization scheme is due for a fix-up, too, now that we actually have some content.
<author>mdboom</author> | |
Putting `make linkcheck` on the Jenkins build is a great idea (I didn't even know it existed). | |
Can you separate out these commits? 6126f2a is unrelated. | |
<author>mdboom</author> | |
I don't understand why you think it doesn't make sense to change the kernel.org links to schacon.github.com -- those seem to be the canonical online place for the manpages these days. | |
<author>eteq</author> | |
Yeah, It's fine to adjust all the kernel.org stuff to schacon.github.com, as we will be manually modifying those docs more anyway (we won't continue to take revisions directly from the original source). | |
and @cdeil, if you don't know how to adjust commits, you can just do ``git rebase --interactive``. That will bring up a list of commits (one per line), and if you just delete one of them, it will be removed. Then you can do ``git push -f`` (the ``-f`` forces the update even though you've edited history), and it should appear here.
<author>cdeil</author> | |
I removed 6126f2a and updated the links from kernel.org to schacon.github.com, | |
now `make linkcheck` shows that all links work. | |
Although I had one case of http://github.org not responding within the link check timeout, | |
so if you add this to shiningpanda there might be false positives in the doc test | |
and we'll have to increase the timeout parameter a bit: | |
http://sphinx.pocoo.org/latest/config.html?highlight=timeout#confval-linkcheck_timeout | |
@eteq: thanks for the info on modifying the commit history. | |
<author>mdboom</author> | |
Looks good. Merging. | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Config compat fixes | |
<author>eteq</author> | |
This pull request addresses a variety of odd py 3.x compatibility issues from the config module. I pushed a few fixes directly to master that were straightforward, but these are a bit more complicated - in particular, it turns out the standard version of configobj is *not* 3.x compatible. Fortunately, there's a fork of the same version as the stable 2.x release that has been patched for 3.x compatibility. So I'm now including *both* in extern, and extern/configobj now just imports the appropriate modules from the 2.x or 3.x version.
<author>mdboom</author> | |
This looks good to me. I'm going to merge to get the build bots working again. | |
Note for others -- this requires cleaning the build and installation directories in order to get going again (or it will still see the old, broken configobj) | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Implemented logger | |
<author>astrofrog</author> | |
(This is not yet ready for merging) | |
I've implemented a basic logger, and have added some documentation to describe how to use it. I am struggling with writing tests for this, as even though py.test allows one to capture stdout and stderr, a test such as: | |
```python
def test_info(capsys):
    logger.info("This is an info message")
    out, err = capsys.readouterr()
    assert err == "INFO: This is an info message"
```
does not work, even though py.test says the captured output is "INFO: This is an info message". Any ideas? | |
There are a few lines to wrap for the ConfigurationItem descriptions, but I will do this once we are happy with the strings. | |
<author>fgrollier</author> | |
Python modules that are meant to be used as "libraries" should avoid calling the `logging.basicConfig` function, because this prevents end users from using it in "quick-and-dirty" scripts for their own logging purposes.
For the same reason, I think it would be better to define a specific logger (i.e. `logging.getLogger('astropy')`) instead of using the root logger. Packages could then be set up to register sub-loggers (i.e. `astropy.wcs`, `astropy.fits`, ...) so that it is possible to set a different threshold level for each one, which is very useful during development/debugging.
At least this is the way I like to implement this kind of thing :)
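To illustrate the pattern being described here, a minimal sketch of the standard-library `logging` conventions (not the implementation in this pull request):
```python
import logging

# Library code: attach a handler only to the top-level 'astropy' logger,
# leaving the root logger untouched for end-user scripts.
logger = logging.getLogger('astropy')
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s [%(name)s]"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Sub-packages get child loggers; they propagate to 'astropy' by default,
# but each one can have its own threshold set independently.
wcs_log = logging.getLogger('astropy.wcs')
wcs_log.setLevel(logging.DEBUG)

wcs_log.debug("shown, because astropy.wcs is set to DEBUG")
logging.getLogger('astropy.io.fits').info("shown at the inherited INFO level")
```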
<author>embray</author> | |
Right--rather than using the root logger everywhere, it should be possible for each sub-package to get its own logger--affiliated packages should also be able to do this. Something like: | |
```python | |
from astropy.logger import get_logger | |
log = get_logger(__name__) | |
``` | |
should be possible in each module. Then each module can have its own sub-logger that can be further customized as desired. It's still nice to have the `astropy.logger` module, though, to set up some defaults for the main astropy logger.
<author>eteq</author> | |
As an FYI, it should be possible to do just | |
``` | |
from astropy.logger import get_logger | |
log = get_logger() | |
``` | |
if you make use of the `find_current_module` function that PR #96 moved to utils - that's what I'm doing in the config package to have it automatically know where the configuration items are. | |
Not sure about why the tests aren't working, though... I haven't tried any capturing. | |
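For anyone curious what that would look like, here is a rough sketch of the idea; it uses the standard-library `inspect` module as a stand-in for `find_current_module`, and the `get_logger` shown here is purely illustrative, not the actual astropy API:
```python
import inspect
import logging

def get_logger():
    # Name the logger after whatever module called get_logger(), so that
    # `log = get_logger()` in astropy/wcs/wcs.py yields the
    # 'astropy.wcs.wcs' logger without the caller passing anything in.
    caller_frame = inspect.stack()[1][0]
    caller_module = inspect.getmodule(caller_frame)
    name = caller_module.__name__ if caller_module is not None else 'astropy'
    return logging.getLogger(name)
```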
<author>fgrollier</author> | |
On the failing test thing, here are the results of some of my experiments: | |
This little test snippet works correctly: | |
```python | |
import logging | |
import sys | |
logging.basicConfig(format="%(levelname)s: %(message)s", level=logging.INFO) | |
logger = logging.getLogger() | |
def test_info(): | |
assert logger.handlers[0].stream == sys.stderr | |
``` | |
while this one fails (note the 'capsys' funcarg addition): | |
```python | |
import logging | |
import sys | |
logging.basicConfig(format="%(levelname)s: %(message)s", level=logging.INFO) | |
logger = logging.getLogger() | |
def test_info(capsys): | |
assert logger.handlers[0].stream == sys.stderr | |
``` | |
``` | |
def test_info(capsys): | |
> assert logger.handlers[0].stream == sys.stderr | |
E assert <py._io.capture.EncodedFile object at 0xa2e0a0c> == <py._io.capture.TextIO object at 0xa050c2c> | |
E + where <py._io.capture.EncodedFile object at 0xa2e0a0c> = <logging.StreamHandler instance at 0xa409e2c>.stream | |
E + and <py._io.capture.TextIO object at 0xa050c2c> = sys.stderr | |
``` | |
My understanding here is that `stderr` is captured twice: once 'outside' the test function, and once 'inside' the test function if capsys is invoked. The problem seems to be that the targets of these two captures aren't the same, so the `stderr` watched by capsys never sees the `stderr` that the logging module grabbed when its handler was created outside the test.
To support this, this script also passes the test correctly: | |
```python | |
import logging | |
import sys | |
def test_info(capsys): | |
logging.basicConfig(format="%(levelname)s: %(message)s", level=logging.INFO) | |
logger = logging.getLogger() | |
assert logger.handlers[0].stream == sys.stderr | |
``` | |
here the `stderr` logging stream is created inside the test function and is thus correctly handled by `capsys`. | |
Finally, @astrofrog 's test script can be rewritten like this and work correctly: | |
```python | |
def test_info(capsys): | |
logger.handlers[0].stream = sys.stderr | |
logger.info("This is an info message") | |
out, err = capsys.readouterr() | |
assert err == u"INFO: This is an info message\n" | |
``` | |
note that this is not a solution to this problem (at least not a *good* solution) but it may help circumscribe it. | |
I'm too new into py.test to know if it's a bug or a feature we are misusing. | |
<author>eteq</author> | |
For any who are reading this and haven't been looking at the list, there's a discussion on astropy-dev (subject "How should we do warnings/non-fatal problems?") that probably should be sorted out before we finalize anything here... | |
<author>astrofrog</author> | |
Here is an alternative implementation: #183 | |
<author>astrofrog</author> | |
A more advanced implementation is in #192 | |
<author>astrofrog</author> | |
Closing this in favor of #192 which has now been merged. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
astropy.config.data._find_pkg_data_fn() generates tons of ImportWarnings | |
<author>mdboom</author> | |
``` | |
In [1]: import warnings | |
In [2]: warnings.simplefilter('always') | |
In [4]: from astropy.config import get_data_fileobj | |
In [5]: get_data_fileobj("astropy/wcs/tests/data/3d_cd.hdr") | |
astropy/wcs/tests/data | |
/usr/lib/python2.7/pkgutil.py:186: ImportWarning: Not importing directory '/home/mdboom/Work/builds/astropy/astropy/wcs/tests/data': missing __init__.py | |
file, filename, etc = imp.find_module(subname, path) | |
/usr/lib/python2.7/pkgutil.py:186: ImportWarning: Not importing directory '/home/mdboom/python/lib/python2.7/site-packages/astropy-0.0dev_r455-py2.7-linux-x86_64.egg/astropy/wcs/tests/data': missing __init__.py | |
file, filename, etc = imp.find_module(subname, path) | |
/usr/lib/python2.7/pkgutil.py:186: ImportWarning: Not importing directory '/home/mdboom/python/lib/python2.7/site-packages/astropy-0.0dev_r455-py2.7-linux-x86_64.egg/astropy/wcs/tests/data': missing __init__.py | |
file, filename, etc = imp.find_module(subname, path) | |
/usr/lib/python2.7/pkgutil.py:186: ImportWarning: Not importing directory 'astropy/wcs/tests/data': missing __init__.py | |
file, filename, etc = imp.find_module(subname, path) | |
/usr/lib/python2.7/pkgutil.py:186: ImportWarning: Not importing directory '/home/mdboom/python/local/lib/python2.7/site-packages/astropy-0.0dev_r455-py2.7-linux-x86_64.egg/astropy/wcs/tests/data': missing __init__.py | |
file, filename, etc = imp.find_module(subname, path) | |
/usr/lib/python2.7/pkgutil.py:186: ImportWarning: Not importing directory '/home/mdboom/python/lib/python2.7/site-packages/astropy-0.0dev_r455-py2.7-linux-x86_64.egg/astropy/wcs/tests/data': missing __init__.py | |
file, filename, etc = imp.find_module(subname, path) | |
astropy/wcs/tests | |
Out[5]: <open file '/home/mdboom/Work/builds/astropy/astropy/wcs/tests/data/3d_cd.hdr', mode 'rb' at 0x14c2f60> | |
``` | |
It would be great if no warnings were generated. It's not clear to me why anything needs to be imported just to find the data files relative to the source files. | |
<author>eteq</author> | |
You're probably right that it doesn't - this was just the quickest way to implement it, and I wasn't thinking too hard about the import side-effects. It should be straightforward to change this to just search the relevant paths (as per your suggestion earlier), so I'll do that. | |
<author>mdboom</author> | |
Addressed by #104. Closing. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Add astropy.config.get_data_dir()? | |
<author>mdboom</author> | |
There are a few tests in `wcs` that look in a directory, glob its contents, and then load each file to test that it loads correctly.
It would be nice to have an `astropy.config.get_data_dir()` function to do this globbing. Of course, that wouldn't work remotely (or maybe it could be made to work if directory listings are turned on on the web server), but even without remote support it would be helpful to replace all of the `__file__` magic currently being used.
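As a concrete (purely hypothetical) example of what the proposed function would enable - `get_data_dir()` does not exist at this point, and the name and signature are just the ones suggested above - a wcs test could become:
```python
import glob
import os

from astropy.config import get_data_dir  # proposed API, not yet implemented

def test_all_headers_parse():
    # Replace the current __file__ manipulation with a single call that
    # resolves the package data directory, then glob and load each file.
    data_dir = get_data_dir('astropy/wcs/tests/data')
    for filename in glob.glob(os.path.join(data_dir, '*.hdr')):
        with open(filename, 'rb') as f:
            assert f.read()  # stand-in for the real load-and-verify logic
```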
<author>embray</author> | |
+1 to this. I wonder if, rather than having web server directory listings enabled (a can of worms if there ever was one), part of the data server specification should include an API to get file/directory listings. Seems like it should be simple enough...
<author>eteq</author> | |
This should be quite straightforward for local files... I'd say we hold off on doing anything remotely until we have a data.astropy.org server (#88), and then readdress it once we've got a sense of who's hosting it and so on. Definitely something to keep in mind for figuring out who is hosting the server, though. | |
<author>mdboom</author> | |
Addressed by #104. Closing. | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Add tools package, organize tools | |
<author>eteq</author> | |
This pull request cleans up the docstrings for the utils package, and clarifies their relation to the tools package that this pull request also adds, following the discussion on the astropy-dev list.
The basic idea is that `tools` is for astronomy/user-useful functions and classes, while `utils` is more developer-oriented.
This also moves the `find_current_module` function from config to utils, and adds a `sigma_clip` function to tools (partly as an example of what belongs there, but also because that's a very useful tool to have).
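For readers unfamiliar with the term, a sigma-clipping routine just iteratively masks points that lie too far from a central value; a minimal numpy sketch (not the implementation added in this pull request) looks like:
```python
import numpy as np

def sigma_clip_sketch(data, sigma=3.0, iters=5):
    """Iteratively mask points more than `sigma` standard deviations from
    the median; returns a masked array. Illustration only."""
    data = np.asarray(data, dtype=float)
    mask = np.ones(data.shape, dtype=bool)
    for _ in range(iters):
        kept = data[mask]
        center = np.median(kept)
        spread = kept.std()
        new_mask = np.abs(data - center) <= sigma * spread
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return np.ma.MaskedArray(data, mask=~mask)
```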
<author>mdboom</author> | |
Looks good. Probably could use a run through the pep8 checker. | |
<author>eteq</author> | |
Should be pep8 compliant now. | |
<author>mdboom</author> | |
+1 on merging. Will wait in case others want to comment. | |
<author>mdboom</author> | |
Ah, just realised -- the tools stuff should have a home in the docs somewhere (I know that there's a reorg coming, but it's nice to at least ensure the docstrings parse etc.) | |
<author>eteq</author> | |
After a little bit of thought, in part based on @phn's comments on the list, I renamed the `alg` module to `misc`, on the theory that we probably want to avoid too much sub-structure in `tools`. So most things that are just a function or two or a single class should live in `misc`; longer, specific tools should live in their own module inside `tools` but be imported into `astropy.tools`; and anything more complicated than that should live in its own `astropy.whatever` module - the idea being that anything that needs deep structure in `astropy.tools` is probably sufficiently complex that it should have its own package anyway.
I also added a doc section that seems to run fine. | |
<author>phn</author> | |
Shouldn't ``astropy.sphinx`` be ``astropy.utils.sphinx``? | |
<author>eteq</author> | |
@phn - maybe... although I was thinking of `utils` as containing tools that are written *by* Astropy, or at least pulled in as utils from elsewhere, rather than extension and configuration stuff (e.g. the master conf.py that's in astropy/sphinx/conf.py that's used by affiliated packages). | |
(I'd also rather not do it in this pull request, because I think the sphinx stuff is imported and laid out in ways I'm not fully familiar with, so it would be safer to do it as a separate pull request after this one is merged). | |
<author>eteq</author> | |
All right, I'm going to go ahead and merge this given that there's nothing more specifically about this pull request. | |
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
Make tests.helper utilities configurable for affiliated packages
<author>jiffyclub</author> | |
This is one option for making the astropy test helpers usable with affiliated packages, as discussed in astropy/package-template#1. The utilities in `tests.helper` are made generic, and the package-specific setup occurs in a couple of lines in `astropy/__init__.py`, which can easily be duplicated in astropy/package-template.
@astrofrog may want to try checking out f27f696897009e191d5bf05d2daeb1ebfec64dff and running the tests in his package-template fork. | |
<author>eteq</author> | |
+1 for this approach - aside from the comments above, I like it, and I think this should be pretty straightforward to include in the package template.
@astrofrog - one other thing to check if you try the package template with this version: check that calling `py.test` at the root of the template package has all the plugins and does everything it's supposed to (probably the easiest thing is to do ``py.test --help`` and make sure the ``--remote-data`` option is present). | |
<author>jiffyclub</author> | |
@eteq, I think that right now the plugins will not work in the package template. It needs its own `conftest.py` file that imports the plugins from `astropy.tests.helper`. I was planning to add that into the package template later. | |
That is one more thing that needs to be maintained across Astropy and the affiliated packages. One way to make that easier would be to define the plugins in a separate module like `astropy.tests.pytest_plugins` and have conftest.py just be `from astropy.tests.pytest_plugins import *`. | |
<author>eteq</author> | |
@jiffyclub - +1 to the idea of putting all the meat of conftest.py in the `astropy.tests` module and having the astropy version and the affiliated packages have the single-line ``from astropy.tests.pytest_plugins import *`` ... that way affiliated packages could easily customize it with their own stuff later, if they want. | |
Feel free to implement that either in this pull request or a separate one. | |
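Something like the following is what an affiliated package's top-level `conftest.py` could then contain (a sketch, assuming the `astropy.tests.pytest_plugins` module discussed above ends up existing; the extra hook is just an example of package-specific customization):
```python
# conftest.py at the root of a hypothetical affiliated package

# Pull in the shared astropy plugins (e.g. the --remote-data option).
from astropy.tests.pytest_plugins import *

# The package can still layer its own py.test hooks on top:
def pytest_report_header(config):
    return "affiliated-package test suite"
```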
<author>jiffyclub</author> | |
I didn't move anything into `tests/__init__.py` but that's still a possibility if that would be preferable. Otherwise I implemented pretty much everything as we discussed above.
<author>eteq</author> | |
Aside from those small inline comments, this looks great to me and I'm happy to merge it whenever you're ready unless there are any objections... | |
@astrofrog, have you had a chance to check that this works with the template package? | |
<author>astrofrog</author> | |
@eteq and @jiffyclub - if I use the above framework, ``python setup.py test`` now works correctly in the package template, so this is ready to merge! | |
<author>eteq</author> | |
Looks great - merging! | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
WCS build fails on Mac | |
<author>astrofrog</author> | |
I'm not sure when this was introduced, but as of 7a7884fef8747dc90c5dd4abfe76c8dad4da1b8b, astropy does not compile on Mac: | |
/Developer/usr/bin/llvm-gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -pipe -O2 -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DECHO -DWCSTRIG_MACRO -DASTROPY_WCS_BUILD -D_GNU_SOURCE -DWCSVERSION=4.8.2 -DNDEBUG -UDEBUG -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I/Users/tom/tmp/astropy/astropy/wcs/src/wcslib/C -I/Users/tom/tmp/astropy/astropy/wcs/src -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c /Users/tom/tmp/astropy/astropy/wcs/src/wcslib/C/wcserr.c -o build/temp.macosx-10.6-x86_64-2.7/Users/tom/tmp/astropy/astropy/wcs/src/wcslib/C/wcserr.o | |
/Users/tom/tmp/astropy/astropy/wcs/src/wcslib/C/wcserr.c:140: internal compiler error: Segmentation fault | |
Please submit a full bug report, | |
with preprocessed source if appropriate. | |
See <URL:http://developer.apple.com/bugreporter> for instructions. | |
{standard input}:0:End-of-File not at end of a line | |
{standard input}:116:End-of-File not at end of a line | |
{standard input}:unknown:Partial line at end of file ignored | |
{standard input}:unknown:Undefined local symbol L_.str | |
{standard input}:unknown:Undefined local symbol L_.str1 | |
{standard input}:unknown:Undefined local symbol L_.str2 | |
error: command '/Developer/usr/bin/llvm-gcc-4.2' failed with exit status 1 | |
I'm assigning this to @mdboom since the error occurs with compiling the WCS files. | |
<author>jiffyclub</author> | |
This is probably the same as #31. @mdboom's compiler detection doesn't always seem to work. You can manually set the compiler with `export CC=/usr/bin/clang`. | |
<author>astrofrog</author> | |
I'm actually using 10.6 (not 10.7 as in #31), and didn't have any issues until some time in the last few days, so something must have changed in the compiler detection that is causing this issue. | |
<author>mdboom</author> | |
The compiler detection code has not changed for some time and it is still working on Lion. Perhaps there was an XCode update recently? If astropy really is at fault, and I can't reproduce this here, maybe you could try running `git bisect` to determine when it broke? Or at least roll back to sometime last week when you last remember it working. | |
When you do `cc --version` what do you get? If you add this version string (i.e. the first token of that output) to `compiler_mapping` in `setup_helpers.py` does that resolve the issue? We may want to have a blanket black list on all versions of llvm-gcc (which is a half-baked transitional compiler anyway) instead of targeting very specific versions of it as we do now. (We don't really know how widespread this compiler bug is.) | |
<author>astrofrog</author> | |
If I change ``compiler_mapping`` to: | |
```python
compiler_mapping = {
    'i686-apple-darwin10-llvm-gcc-4.2': 'clang',
    'i686-apple-darwin11-llvm-gcc-4.2': 'clang'
}
```
it uses clang to compile, and works fine. Shall I commit that change? | |
<author>mdboom</author> | |
That seems fine -- but I think I'll change it to match on any llvm-gcc-4.2, regardless of darwin version. That should catch more of these cases in the future. Expect a pull request shortly which I'll ask you to test in your environment. | |
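The blanket match could look something like this (just an illustration of the idea, not the actual `setup_helpers.py` code):
```python
import re

# Key the substitution on a pattern rather than on exact version strings,
# so every darwin flavor of llvm-gcc-4.2 gets redirected to clang.
COMPILER_FIXES = [
    (re.compile(r'.*llvm-gcc-4\.2'), 'clang'),
]

def adjust_compiler_sketch(version_string):
    for pattern, replacement in COMPILER_FIXES:
        if pattern.match(version_string):
            return replacement
    return None

assert adjust_compiler_sketch('i686-apple-darwin10-llvm-gcc-4.2') == 'clang'
assert adjust_compiler_sketch('i686-apple-darwin11-llvm-gcc-4.2') == 'clang'
```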
<author>astrofrog</author> | |
By the way, this makes sense - darwin10 is in MacOS 10.6, and darwin11 is in MacOS 10.7, so maybe these are the only two default Mac compilers we need to worry about?
<author>astrofrog</author> | |
@mdboom - ok, thanks! | |
<author>jiffyclub</author> | |
I haven't done any actual experimenting to this end but I've found that the compiler adjustment is highly dependent on the Python I'm using. With EPD or python.org binaries it was never caught and I'd have to set `CC` manually. Right now I have Python 2.7 and Python 3.2 built with the exact same `llvm-gcc` and under Python 2 the compiler gets automatically switched to `clang`, but under Python 3 I have to set `CC` manually to `/usr/bin/clang`. | |
I'll have to see if @mdboom's fix changes things and if not I'll actually try to track down the problem. | |
<author>jiffyclub</author> | |
@mdboom, on a totally separate topic, how do you attach code to an open issue and turn it into a pull request? | |
<author>mdboom</author> | |
@jiffyclub: I have a dorky little script that uses `curl` to call the github API here: http://develop.github.com/p/pulls.html | |
<author>jiffyclub</author> | |
@mdboom I'm getting this error trying to test your branch on Python 3: | |
``` | |
(python3)astropy 33 >:python setup.py test | |
Traceback (most recent call last): | |
File "setup.py", line 24, in <module> | |
setup_helpers.adjust_compiler() | |
File "/Users/jiffyclub/astropy/astropy/setup_helpers.py", line 368, in adjust_compiler | |
if re.match(broken, version): | |
File "/Users/jiffyclub/.virtualenvs/python3/lib/python3.2/re.py", line 153, in match | |
return _compile(pattern, flags).match(string) | |
TypeError: can't use a string pattern on a bytes-like object | |
``` | |
It looks like the `version` object is a bytes object. | |
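For context, the usual shape of this Python 3 problem (and of the fix, sketched here rather than quoting the actual patch) is that the compiler version string comes back from a subprocess as bytes and needs decoding before it can be regex-matched or stored in `os.environ`:
```python
import subprocess

# `cc --version` output is bytes under Python 3; decode it first so that
# re.match() and os.environ (which both want str) are happy.
output = subprocess.check_output(['cc', '--version'])
if isinstance(output, bytes):
    output = output.decode('ascii', 'replace')
version = output.split()[0]
```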
<author>mdboom</author> | |
@jiffyclub: I've pushed what I hope fixes that to this branch. Also rebased to get other Python 3 fixes made today. | |
<author>jiffyclub</author> | |
It's not liking that either: | |
``` | |
(python3)astropy 55 >:python setup.py test | |
Traceback (most recent call last): | |
File "setup.py", line 24, in <module> | |
setup_helpers.adjust_compiler() | |
File "/Users/jiffyclub/astropy/astropy/setup_helpers.py", line 369, in adjust_compiler | |
os.environ['CC'] = fixed | |
File "/Users/jiffyclub/.virtualenvs/python3/lib/python3.2/os.py", line 455, in __setitem__ | |
value = self.encodevalue(value) | |
File "/Users/jiffyclub/.virtualenvs/python3/lib/python3.2/os.py", line 517, in encode | |
raise TypeError("str expected, not %s" % type(value).__name__) | |
TypeError: str expected, not bytes | |
``` | |
I guess the good news is that it is actually trying to change the compiler. | |
<author>mdboom</author> | |
If you remove the 'b' from `b'clang'` does it work? | |
<author>jiffyclub</author> | |
Yes, works on Python 2.7 and 3.2. | |
One funny thing I just noticed is that the compiler gets switched to `clang` for compiling all the `*.o` files, but then `/usr/bin/llvm-gcc` is used to bundle them into `_wcs.o`. Maybe that's supposed to happen? Doesn't seem to have any negative side effects. | |
<author>astrofrog</author> | |
After removing the ``b`` from ``b'clang'``, it works for me in MacOS 10.6 (``darwin10``) with 2.6, 2.7, 3.1, and 3.2. | |
<author>eteq</author> | |
I know I'm a bit behind here, but because the topic came up and I just figured out how to do this... it's pretty easy to attach code to issues *inside python* via the github API w/ the `requests` package (http://docs.python-requests.org/en/latest/index.html). e.g. to attach my version-renumbering branch to issue 52: | |
```python
import requests, json
jdata = json.dumps({'issue': '52', 'head': 'eteq:version-renumbering', 'base': 'master'})
res = requests.post('https://api.github.com/repos/astropy/astropy/pulls', data=jdata, auth=('eteq', '12345'))
print res
```
And if it tells you the return code is 201, you were successful.
(And no, my password is not actually '12345'. That's the kind of combination an idiot would have on his luggage...) | |
<author>mdboom</author> | |
@jiffyclub: wrt a different command used to link -- it's probably fine that it happens that way. `gcc` is designed to be run as either a compiler or a linker depending on what is passed on the commandline. When run as a linker, it merely punts along to the `ld` command. Most Python distros I've seen set both the compiler and linker to the same command, but the `adjust_compiler` stuff here only changes the compiler, since only the compiler is buggy and needs to be changed (at least that I've seen). I think we can ignore this as long as it's working. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
astropy.io.vo: validate | |
<author>mdboom</author> | |
This adds a `validate` method to print out any violations from the VOTable spec for a given file to the console. This also reinstates the validation report generator (in astropy.io.vo.validator.make_validation_report()) which downloads thousands of files, runs them through the validator and xmllint and generates an HTML tree full of reports and statistics about what is failing. | |
As an aside, this also brings some improvements to `astropy.utils.console`. | |
<author>eteq</author> | |
One thing that might be good to include: if `output` is None, have it return a string with the contents of what it would print to the file. I know that could be done with a StringIO, but it's more awkward that way, and I can certainly imagine wanting to use it programmatically where it's displayed in some text box or something that takes a string.
And the `Spinner` class is quite clever... too bad it doesn't work in the ipython qtconsole (that's a limitation of the qtconsole, not your implementation) | |
(Oh, and the failing merge here is because I added the license comment to the top of console.py... so that's easy to fix either via rebasing or it can be just merged and fix the conflict then, but the comment bit should be in) | |
<author>mdboom</author> | |
@eteq: On `validate` returning a string: I think I'd prefer to leave it as it is. It's really convenient in an IPython session, for example, to just say `validate('foo.xml')` and not have to remember to pass `sys.stdout`. If it were to return a string by default, the console prints the string with `\n` etc. and it's not very readable. Instead, how about returning a string when `output='str'`? Or just leave as is -- if one is using it programmatically, the extra two lines to use a StringIO are not a big deal IMHO -- and there are just as many use cases, such as inside a web server, where writing to a stream is the more convenient thing anyway. I understand interactive use and programmatic use are somewhat at odds here.
As for `Spinner` in the qtconsole -- thanks for pointing that out. I see you're the reporter of this IPython bug here: https://github.com/ipython/ipython/issues/629 The `ProgressBar` class will have the same issue of course. I wonder if we can detect the presence of the qtconsole and actually use a GUI spinner and progress bar instead... That would be nice. I'll look into it. | |
Rebased on master to fix the conflicting module comment issue. | |
<author>mdboom</author> | |
I submitted a fix for the ipython qtconsole here: | |
https://github.com/ipython/ipython/pull/1089 | |
EDIT: Just FYI -- I looked into popping up a Qt dialog instead, but that doesn't really make sense in the context of IPython, since multiple consoles (of different types) can attach to the same IPython context. We really need something that will work everywhere, so it seemed to make more sense to just fix the IPython qtconsole. | |
<author>phn</author> | |
I get the following error on Ubuntu 11.04 with Python 2.6, 2.7 and 3.2: | |
python setup.py build (or develop) | |
... | |
... | |
/home/phn/code/astropy-forks/mdboom/astropy/wcs/src/docstrings.c:588:6: | |
error: conflicting types for ‘doc_UnitConverter’ | |
/home/phn/code/astropy-forks/mdboom/astropy/wcs/src/docstrings.h:20:13: | |
note: previous declaration of ‘doc_UnitConverter’ was here | |
error: command 'gcc' failed with exit status 1 | |
<author>mdboom</author> | |
Can you try `git clean -f`? | |
<author>eteq</author> | |
@mdboom - I wasn't suggesting changing the *default* - I just meant put it in as an option - maybe this is just a stylistic thing, but I think of StringIO tricks like that as rather un-pythonic and needless work when the point of the function is to generate a string and put it somewhere. So I think it would be good to have it as an option, as you noted as an alternative. I'm not a huge fan of using 'str', as I could see people confusing that with writing out to a file named 'str'... but if you don't like the idea of passing in ``None`` for this reason, it's fine either way.
Also, why does it return a boolean indicating success? Can't you just catch the exceptions to see if there was a problem?
And nice work with the ipython stuff... you're making me look bad by doing in a day the stuff I said I'd try to fix 6 months ago! :) I agree it doesn't really make sense to try to make a GUI here, though - that opens up a lot of extra concerns I don't think we want to deal with at this stage... | |
One thing to consider (definitely as a separate pull request) is to implement a generic interface for updating about something that's happening, so that we might in the future be able to hook in GUI options. That may be more trouble than its worth, though. | |
<author>phn</author> | |
@mdboom | |
``git clean -f`` solves the problem. | |
<author>mdboom</author> | |
@eteq: I'll add the "if file is None, return string" functionality. Note, however, that the purpose of this function is not to generate a string and put it somewhere -- the purpose is to display at the console or on a webpage a possibly very long list of specification violations, both of which are better suited by a stream. On some multi-gigabyte VOTable files in the wild, the list of spec violations can be ~100k entries long. Using all that RAM to do that is unnecessary. | |
The user can not just catch an exception to see if there was a problem, because then it could only return a single exception. The purpose of this function is to display all of the specification violations in the file, and bailing out after the first wouldn't be as useful. | |
I think the stuff we have in `console.py` provides the interface for what could become a library with GUI support. There's no reason the progress bar/spinner couldn't provide their GUI equivalents in some future incarnation. | |
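A sketch of the "if output is None, return a string" behavior being agreed on here (illustrative names only, not the astropy.io.vo code):
```python
import io

def validate_sketch(source, output=None):
    # When no stream is supplied, collect the report in a StringIO
    # internally and hand back its contents; otherwise stream as before.
    return_as_string = output is None
    if return_as_string:
        output = io.StringIO()
    # ... run the real validation here, writing each violation to `output` ...
    output.write(u"example violation\n")
    if return_as_string:
        return output.getvalue()
```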
<author>eteq</author> | |
I see your point with the ``console.py`` stuff - it might be a bit confusing to use stuff in ``console.py`` when you want GUIs, but we can cross that bridge if/when we come to it... | |
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
Add py.test minimum version to setup.cfg | |
<author>jiffyclub</author> | |
Added `minversion = 2.2` to `[pytest]` section of `setup.cfg` so that py.test checks its own version. | |
<author>mdboom</author> | |
+1 (works for me!) | |
<author>eteq</author> | |
So I just tested it, and it appears that if the system py.test is 2.1, this craps out, regardless of whether or not you invoke py.test from the command line, use `astropy.test`, or ``setup.py test``. Is this what we want? As you mentioned in #76, perhaps we should have it fall back on the astropy version if you use `astropy.test()` or ``setup.py test``?
Also, this appears to be pull request/issue number 100! Maybe you should win a prize or something, @jiffyclub ... | |
<author>jiffyclub</author> | |
I think it would still be worth it to fall back onto the built-in py.test in situations where we can control that. That way the tests can still run even if there is an older py.test installed somewhere on the system where the user can't control it. (I have this problem at work.) | |
<author>eteq</author> | |
Do you want to implement that here and have us close #76, or what? @astrofrog, any preferences? | |
<author>jiffyclub</author> | |
I think #76 is entirely compatible with this so if you like that implementation go ahead and merge it. | |
The only competing alternative I'd suggest would be to have astropy always default to the built-in py.test and only try to import a system py.test if the user sets some config option. It's another way of always ensuring the tests can run but it's not really any different from #76 so I don't care either way. Maybe some people prefer defaulting to a system py.test as in #76. | |
<author>jiffyclub</author> | |
I'm going to recommend that we not go with this for now. I just did a test on my work computer where there is a py.test 2.1 install that I have no control over. I took @eteq's branch of #76 and added the `minversion = 2.2` to `setup.cfg`. `setup.py test` successfully switched to the bundled py.test, but then py.test quit because it had detected my system's py.test 2.1. I've no idea what would happen if I didn't have py.test at all. So, pending changes to py.test so that it checks the version of the running py.test, I don't think `minversion` is a good idea.
<author>jiffyclub</author> | |
I filed a ticket with py.test about this so maybe some day it will be more useful. https://bitbucket.org/hpk42/pytest/issue/98/config_checkversion-checks-installed | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Make `utils.console` configurable | |
<author>mdboom</author> | |
(This is just a placeholder for work that needs to be done after some other things get merged). | |
The `utils.console` module should have color and unicode support switchable by config options. | |
Bonus points for borrowing those settings from ipython if running inside of an ipython session and they haven't been set in the astropy config. | |
<author>fgrollier</author> | |
I think that some care should be taken when colorizing output, especially when used in conjunction with logging (see #93). We can't know in advance what type of handlers the user will attach to the logging system, nor whether they redirect output to a file using pipes. Thus, unconditional use of coloring can lead to heavily cluttered output files.
The way I like to implement this is to force _all_ output to go through the logging mechanism, with a custom-made StreamHandler that can detect whether the stream it is attached to is a tty, and thus smartly decide whether or not to colorize.
This also means that colorizing should occur in the very last step just before writing to a stream (or whatever), and that creating colorized strings should be avoided. | |
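A minimal sketch of the kind of handler being described (generic stdlib logging, not astropy's eventual implementation) - it colorizes only when the attached stream is an interactive terminal:
```python
import logging
import sys

class ColorStreamHandler(logging.StreamHandler):
    """Only emit ANSI color codes when the attached stream is a tty."""
    COLORS = {'WARNING': '\033[33m', 'ERROR': '\033[31m'}
    RESET = '\033[0m'

    def format(self, record):
        text = logging.StreamHandler.format(self, record)
        stream = self.stream
        if hasattr(stream, 'isatty') and stream.isatty():
            color = self.COLORS.get(record.levelname)
            if color:
                text = color + text + self.RESET
        return text

log = logging.getLogger('astropy.example')
log.addHandler(ColorStreamHandler(sys.stdout))
log.warning("colored only when stdout is a terminal")
```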
<author>mdboom</author> | |
@fgrollier: Yes -- I think your comments make sense for logging and are probably more relevant for the discussion under #93. This module and pull request is more concerned with user console output, as well as the lower-layer that produces color used by the logging layer when it is appropriate to do so.
However, I think there are two important points that warrant amending this pull request: | |
1) The new `Color` class, while convenient and obvious, produces strings and makes it hard to turn off coloring based on the stream. This part should probably be reverted to the old approach. | |
2) The feedback-oriented classes (`ProgressBar` and `Spinner`) should probably turn off all of their output if not writing to a real tty. | |
<author>fgrollier</author> | |
You're right, some of these concerns are aimed at #93, but you've correctly summarized what's pertaining to this particular request :) | |
One more (minor) opinion: 'bold' and 'light' colors are the very same thing, and I think that one of these options should be dropped from `color_print`. This will simplify the code just a little bit, but it will also avoid giving the illusion that there are more rendering options than the user will really see.
If there's a real need for more render styles than just 8+8 colors, the 'reverse video' mode (\033[7m / \033[27m) could be used, as it is almost universally implemented (unlike the 'italic' mode, which has already been removed - wisely).
<author>eteq</author> | |
Is this ready for review/merge, or are you still working on it? | |
<author>mdboom</author> | |
I believe it is ready. | |
<author>fgrollier</author> | |
I've finished splitting hairs so I believe this is ready for merging. I'll implement my other ideas myself in a future pull request. | |
</issue> | |
<issue> | |
<author>embray</author> | |
Add PyFITS as astropy.io.fits | |
<author>embray</author> | |
I still need to rebase this on master and clean up a few things. | |
Also need to add support for the new config system--haven't decided whether to do that before or after merge. | |
<author>astrofrog</author> | |
I have three test failures on MacOS 10.6 with Python 2.7: | |
=========================================================== FAILURES =========================================================== | |
______________________________________ TestHeaderFunctions.test_verify_invalid_equal_sign ______________________________________ | |
self = <astropy.io.fits.tests.test_header.TestHeaderFunctions object at 0x103a9fa10> | |
capsys = <_pytest.capture.CaptureFuncarg instance at 0x104146440> | |
def test_verify_invalid_equal_sign(self, capsys): | |
# verification | |
c = fits.Card.fromstring('abc= a6') | |
c.verify() | |
out, err = capsys.readouterr() | |
> assert ('Card image is not FITS standard (equal sign not at column 8)' | |
in err) | |
E assert 'Card image is not FITS standard (equal sign not at column 8)' in u'' | |
astropy/io/fits/tests/test_header.py:335: AssertionError | |
_______________________________________ TestHeaderFunctions.test_fix_invalid_equal_sign ________________________________________ | |
self = <astropy.io.fits.tests.test_header.TestHeaderFunctions object at 0x10417c990> | |
capsys = <_pytest.capture.CaptureFuncarg instance at 0x103f4e878> | |
def test_fix_invalid_equal_sign(self, capsys): | |
c = fits.Card.fromstring('abc= a6') | |
c.verify('fix') | |
fix_text = 'Fixed card to meet the FITS standard: ABC' | |
out, err = capsys.readouterr() | |
> assert fix_text in err | |
E assert 'Fixed card to meet the FITS standard: ABC' in u'' | |
astropy/io/fits/tests/test_header.py:343: AssertionError | |
________________________________________ TestImageFunctions.test_verification_on_output ________________________________________ | |
self = <astropy.io.fits.tests.test_image.TestImageFunctions object at 0x103a85090> | |
capsys = <_pytest.capture.CaptureFuncarg instance at 0x103a1d950> | |
def test_verification_on_output(self, capsys): | |
# verification on output | |
# make a defect HDUList first | |
x = fits.ImageHDU() | |
hdu = fits.HDUList(x) # HDUList can take a list or one single HDU | |
hdu.verify() | |
out, err = capsys.readouterr() | |
> assert "HDUList's 0th element is not a primary HDU." in err | |
E assert "HDUList's 0th element is not a primary HDU." in u'' | |
astropy/io/fits/tests/test_image.py:218: AssertionError | |
====================================== 3 failed, 1310 passed, 48 skipped in 27.43 seconds ====================================== | |
<author>eteq</author> | |
My mild preference would be to integrate the config stuff *before* merging, if you think it won't take too long. | |
<author>embray</author> | |
Those look like the same tests that were failing before...somehow the | |
stderr capture isn't picking up warning messages. I guess I still need to | |
change those tests to use warning capture instead. | |
<author>embray</author> | |
I kind of debated myself on that...but maybe you're right. The precedent we | |
want to set when adding packages is that they already fit into the existing | |
infrastructure. | |
Still need to do something about utility unification though... but I think I'd rather save that for after this merge.
<author>mdboom</author> | |
@iguananaut: I've had similar issues with stderr capture under pytest, but the warning capture technique seems to work fine. | |
I agree about utility unification after the merge. | |
<author>embray</author> | |
Just strange that capturing warnings from stderr works on Linux but not on OSX. Suggests that on OSX Python doesn't send warnings to stderr by default, though a quick test on bond (one of our OSX build machines) suggests otherwise. So I'm not sure what's going on there... | |
<author>astrofrog</author> | |
Just for the record, the tests still fail, but with different errors: | |
===================================================== FAILURES ===================================================== | |
________________________________ TestHeaderFunctions.test_verify_invalid_equal_sign ________________________________ | |
self = <astropy.io.fits.tests.test_header.TestHeaderFunctions object at 0x104185c10> | |
def test_verify_invalid_equal_sign(self): | |
# verification | |
c = fits.Card.fromstring('abc= a6') | |
with warnings.catch_warnings(record=True) as w: | |
c.verify() | |
> assert len(w) == 1 | |
E assert 3 == 1 | |
E + where 3 = len([<warnings.WarningMessage object at 0x104185cd0>, <warnings.WarningMessage object at 0x104185d10>, <warnings.WarningMessage object at 0x104185d50>]) | |
astropy/io/fits/tests/test_header.py:321: AssertionError | |
_________________________________ TestHeaderFunctions.test_fix_invalid_equal_sign __________________________________ | |
self = <astropy.io.fits.tests.test_header.TestHeaderFunctions object at 0x104185e90> | |
def test_fix_invalid_equal_sign(self): | |
c = fits.Card.fromstring('abc= a6') | |
with warnings.catch_warnings(record=True) as w: | |
c.verify('fix') | |
fix_text = 'Fixed card to meet the FITS standard: ABC' | |
> assert len(w) == 1 | |
E assert 3 == 1 | |
E + where 3 = len([<warnings.WarningMessage object at 0x1041b4350>, <warnings.WarningMessage object at 0x1041b41d0>, <warnings.WarningMessage object at 0x1041b4590>]) | |
astropy/io/fits/tests/test_header.py:330: AssertionError | |
__________________________________ TestImageFunctions.test_verification_on_output __________________________________ | |
self = <astropy.io.fits.tests.test_image.TestImageFunctions object at 0x1041b4590> | |
def test_verification_on_output(self): | |
# verification on output | |
# make a defect HDUList first | |
x = fits.ImageHDU() | |
hdu = fits.HDUList(x) # HDUList can take a list or one single HDU | |
with warnings.catch_warnings(record=True) as w: | |
hdu.verify() | |
text = "HDUList's 0th element is not a primary HDU." | |
> assert len(w) == 1 | |
E assert 3 == 1 | |
E + where 3 = len([<warnings.WarningMessage object at 0x103f3dc90>, <warnings.WarningMessage object at 0x103f3dd10>, <warnings.WarningMessage object at 0x103f3dd90>]) | |
(this is with Python 2.7) | |
<author>astrofrog</author> | |
Interestingly, the same tests pass with Python 3.2 | |
<author>mdboom</author> | |
To add another data point: On my Fedora 16 machine at home, all tests pass on both Python 2 and 3, so we're still maybe seeing something OS-X-specific here. | |
<author>mdboom</author> | |
Also -- you maybe already know this -- there seems to be a bad interaction between `_fix_dtype` and `astropy.io.vo`. `vo` builds a dtype as a list and passes that to the Numpy array constructor, but it seems `_fix_dtype` assumes it is a dtype object with a `fields` member. | |
<author>embray</author> | |
In this case it seems like more warnings are popping out than expected. I'll have to try it out on OSX and see what warnings are actually being output. | |
@mdboom I'll have to look into the _fix_dtype thing. I just need to make it more flexible. | |
<author>astrofrog</author> | |
@iguananaut - here are the failures with the warnings printed to stdout: | |
=============================================================== FAILURES =============================================================== | |
__________________________________________ TestHeaderFunctions.test_verify_invalid_equal_sign __________________________________________ | |
self = <astropy.io.fits.tests.test_header.TestHeaderFunctions object at 0x1040d5a50> | |
def test_verify_invalid_equal_sign(self): | |
# verification | |
c = fits.Card.fromstring('abc= a6') | |
with warnings.catch_warnings(record=True) as w: | |
c.verify() | |
for wi in w: | |
print wi.message | |
> assert len(w) == 1 | |
E assert 3 == 1 | |
E + where 3 = len([<warnings.WarningMessage object at 0x1040d5b50>, <warnings.WarningMessage object at 0x1040d5bd0>, <warnings.WarningMessage object at 0x1040d5c50>]) | |
astropy/io/fits/tests/test_header.py:323: AssertionError | |
----------------------------------------------------------- Captured stdout ------------------------------------------------------------ | |
Output verification result: | |
Card image is not FITS standard (equal sign not at column 8). | |
Card image is not FITS standard (unparsable value string: a6). | |
Note: PyFITS uses zero-based indexing. | |
___________________________________________ TestHeaderFunctions.test_fix_invalid_equal_sign ____________________________________________ | |
self = <astropy.io.fits.tests.test_header.TestHeaderFunctions object at 0x1040d5e90> | |
def test_fix_invalid_equal_sign(self): | |
c = fits.Card.fromstring('abc= a6') | |
with warnings.catch_warnings(record=True) as w: | |
c.verify('fix') | |
fix_text = 'Fixed card to meet the FITS standard: ABC' | |
for wi in w: | |
print wi.message | |
> assert len(w) == 1 | |
E assert 3 == 1 | |
E + where 3 = len([<warnings.WarningMessage object at 0x1040d5c10>, <warnings.WarningMessage object at 0x1040d5b10>, <warnings.WarningMessage object at 0x1040d5b90>]) | |
astropy/io/fits/tests/test_header.py:334: AssertionError | |
----------------------------------------------------------- Captured stdout ------------------------------------------------------------ | |
Output verification result: | |
Card image is not FITS standard (equal sign not at column 8). Fixed card to meet the FITS standard: ABC | |
Card image is not FITS standard (unparsable value string: a6). Fixed card to meet the FITS standard: ABC | |
Note: PyFITS uses zero-based indexing. | |
____________________________________________ TestImageFunctions.test_verification_on_output ____________________________________________ | |
self = <astropy.io.fits.tests.test_image.TestImageFunctions object at 0x1040d5b50> | |
def test_verification_on_output(self): | |
# verification on output | |
# make a defect HDUList first | |
x = fits.ImageHDU() | |
hdu = fits.HDUList(x) # HDUList can take a list or one single HDU | |
with warnings.catch_warnings(record=True) as w: | |
hdu.verify() | |
text = "HDUList's 0th element is not a primary HDU." | |
for wi in w: | |
print wi.message | |
> assert len(w) == 1 | |
E assert 3 == 1 | |
E + where 3 = len([<warnings.WarningMessage object at 0x1037e5b10>, <warnings.WarningMessage object at 0x1037e5b90>, <warnings.WarningMessage object at 0x1037e5c50>]) | |
astropy/io/fits/tests/test_image.py:219: AssertionError | |
----------------------------------------------------------- Captured stdout ------------------------------------------------------------ | |
Output verification result: | |
HDUList's 0th element is not a primary HDU. | |
Note: PyFITS uses zero-based indexing. | |
========================================== 3 failed, 1310 passed, 48 skipped in 24.32 seconds ========================================== | |
<author>embray</author> | |
Thanks for the debugging! Looks like it actually makes 3 `warn()` calls for a multi-line warning message. That's dumb--I need to fix that on the pyfits end. | |
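In other words (a schematic illustration, not the actual pyfits code), the fix is to emit the multi-line message as a single warning rather than one warning per line, so the `len(w) == 1` assertions above see one record:
```python
import warnings

lines = [
    "Output verification result:",
    "Card image is not FITS standard (equal sign not at column 8).",
    "Note: PyFITS uses zero-based indexing.",
]

# One warnings.warn() call for the whole block...
warnings.warn("\n".join(lines))

# ...instead of warnings.warn(line) in a loop, which records three entries.
```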
<author>astrofrog</author> | |
All tests pass on Python 2.6, and 2.7 on MacOS 10.6.8. I still get one FITS failure and quite a few VO failures with Python 3.2. I don't seem to get the VO failures with the upstream master though. I'll email you the traceback (too long for pastebin). | |
<author>astrofrog</author> | |
The issue seems to be caused by the fact that `_fix_dtype(dtype)` expects a Numpy dtype, not just a list of fields and types which is also a valid way of representing a dtype when initializing an array. | |
<author>embray</author> | |
@astrofrog Sweet! Yeah, @mdboom pointed out the _fix_dtype thing to me. I've already fixed that in PyFITS and just need to merge it to astropy. What's the other FITS failure you're getting though? | |
<author>embray</author> | |
Nevermind, I see you sent me an e-mail. I'll have to look at it when I get back to work. | |
<author>astrofrog</author> | |
All tests now pass on MacOS 10.7 with all supported Python and Numpy versions. | |
<author>embray</author> | |
Huzzah \o/ Thanks again @astrofrog for setting up that Jenkins build for this branch--very useful. | |
<author>eteq</author> | |
Cool! Does that mean, pending a rebase, this is in principle ready to be merged? I personally haven't looked at it too closely, but I'm curious whether that means we should now? (at least for integration-with-astropy items - I certainly don't intend to review the entire code base :)
<author>embray</author> | |
@eteq I still need to add a handful of config options via the Astropy config system. There are also some environment variables, but I might leave them out for now. I was working on an environment variable patch for the config system a while back, but I got distracted and never got back to that branch... So I don't think I'll hold up fits over that. | |
<author>embray</author> | |
I think this is pretty much ready to go at this point. | |
In the process of giving one last look over the code I stumbled over all the design issues in PyFITS that I still want to fix. But they certainly won't all get fixed immediately so there's no reason to hold up Astropy for them. Though I'll still be continually merging changes from PyFITS, I don't think there's any reason to hold off much longer on merging this. | |
<author>astrofrog</author> | |
I can't review all the code (not sure if anyone can!) but from what I can see, you've done a great job. I looked at the docs a bit, and one thing I think we might want to think about is whether we want to recommend common abbreviations for astropy components. For example, the following is a bit long to type:
>>> import astropy.io.fits | |
>>> hdulist = astropy.io.fits.open('input.fits') | |
but of course we could have 'standard' abbreviations (like numpy has with np), e.g. | |
>>> import astropy.io.fits as fits | |
>>> hdulist = fits.open('input.fits') | |
I'm sure we can deal with that in a future pull request, but just wanted to raise the point for now. | |
<author>embray</author> | |
@astrofrog That's a good point, and something that's crossed my mind too.. I was thinking of adding something in the docs to the tune of "Standard practice is to import the fits module with `from astropy.io import fits`" but I'm not quite sure where to put that.. | |
<author>astrofrog</author> | |
@iguananaut I guess this applies to all components of Astropy, so maybe we should first merge this in, then we can decide on a consistent convention across packages. I like the idea of: | |
from astropy.io import fits | |
from astropy.io import vo | |
from astropy import wcs | |
etc. | |
<author>eteq</author> | |
@astrofrog @iguananaut - I agree that I like the way @astrofrog suggests for doing imports - it seems easier to read, somehow. | |
@iguananaut - just so I understand what you meant in your earlier comment: you're saying that now all the configuration is via the astropy configuration system, and you're hoping to (post-merge) get the environment variable stuff working, or did you mean that you've left some stuff in this branch that still uses environment variables, and will switch that to the configuration system after the environment variable option is put in? | |
<author>eteq</author> | |
A few docs-related comments: | |
* Maybe change the "astropy.io.fits History" document title to "astropy.io.fits pre-Astropy History" or something like that? And perhaps @mdboom should do the same for vo and wcs' histories? Or do you intend to continue updating these even after merging with astropy?
* When I do ``python setup.py build_sphinx`` I get a bunch of documentation warnings, but if I do ``make html`` inside the docs directory, everything builds without complaint. Do any of you see the same behavior, or is this just a consequence of some sort of strange fiddling I've done with my python setup? | |
* I see you added an `astropy.css` file to `docs/_static` - what exactly is that for? And is it strictly necessary to make the pyfits docs look right? | |
<author>embray</author> | |
@eteq Okay, I'm seeing all these warnings with build_sphinx too. No idea why. I had only tested it by running make in the docs directory manually, which works fine for me. | |
I don't want to do anything with the History right now, since it will probably continue to be updated. I'll still be making changes in PyFITS and merging them into Astropy up through the first Astropy release, and probably beyond. It makes sense the way it is, since those changes will continue to originate from PyFITS at least for a while. | |
And yes, astropy.css contains some custom styles that are used in the PyFITS docs as well. There's also some LaTeX bits that I might port over at some point. I haven't tried doing a ps build of the Astropy docs yet... | |
<author>embray</author> | |
I think the problem with the build_sphinx command is that the docs are looking for classes in the `astropy.io.fits` namespace, but nothing is actually imported into that namespace when running setup.py because the package's `__init__.py` doesn't import anything into that namespace if `setup_helpers.is_in_build_mode()`. It's not clear to me why this isn't a problem for the io.vo package too.
<author>mdboom</author> | |
On the `setup.py build_sphinx` problem: io.vo works because it's importable without C extensions and the documentation pulls things from the modules they're defined in rather than their references in `astropy/io/vo/__init__.py`. wcs has the same problem I think we're seeing with fits, though even more pronounced because many of the docstrings only exist in the C extension. | |
In https://github.com/astropy/astropy/issues/117#issuecomment-3354373 we had discussed modifying `build_sphinx` so it would import from the `build/lib.$PLATFORM` tree, not the source tree. That should theoretically resolve this. What's the status on that? | |
<author>mdboom</author> | |
We might want to confirm that `astropy.css` plays well with the readthedocs css. Not worth holding up the pull for...
There's some tweaks in `stsci_sphinxext` that make the Parameters tables take up less horizontal space that I think would also be nice to include in `astropy`... Will try to work up a pull request for that at some point... | |
<author>embray</author> | |
@mdboom The stuff in astropy.css shouldn't have any effect. I defined a custom class for a figure I wanted formatted in a certain way, and that class is only used by the fits docs. | |
As for build_sphinx, running it out of the build/ dir would fix most of the problems. But it would still be necessary to amend `setup_helpers.in_build_mode()` to do something else when build_sphinx is running. I don't really want to change the API docs to use full module names, because all the documentation is designed so that users of the library can get everything they need directly from the `astropy.io.fits` module without going into any of the submodules. So I don't want to confuse the issue. | |
<author>mdboom</author> | |
We could just run `build_sphinx` in a separate process, as we already do with `setup.py test`. | |
Agreed on not changing the API docs -- I'm not sure how that problem is related -- or maybe I just haven't had enough caffeine today ;) | |
<author>embray</author> | |
@mdboom The API docs are a related problem, because if build AND build_sphinx are run as part of the same process, then by the time build_sphinx is run, the `astropy.io.fits`, `astropy.wcs` and other such packages will have *already been imported* but without their namespaces populated since `in_build_mode()` returns True. So when Sphinx goes about collecting objects in a module it can't actually find anything. | |
But running it in a subprocess, as you suggested, would solve that problem. Another possibility would be to have build_sphinx force a reimport of any astropy packages. | |
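The subprocess approach would amount to something like this (a sketch of the idea only, not the eventual `setup_helpers` change):
```python
import subprocess
import sys

# Run the doc build in a fresh interpreter so the half-initialized astropy
# modules imported during the 'build' step don't leak into Sphinx's imports.
retcode = subprocess.call([sys.executable, 'setup.py', 'build_sphinx'])
if retcode != 0:
    raise SystemExit('build_sphinx failed')
```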
<author>eteq</author> | |
@mdboom @iguananaut - Regarding changing the docs to use ``astropy/build`` - I plan to do that once I get #115 working (as there would likely be conflicting changes between those), but #115 has taken a bit longer than I expected. If someone else wants to take a crack at it before I get that done, I could also rebase #115 against that. | |
Regardless, +1 on the subprocess idea; it should do the trick and consistency with ``setup.py test`` is a plus. | |
<author>astrofrog</author> | |
So is the plan to implement the change to the doc-building after this pull request is merged in? If so, is there anything else that needs addressing before we merge? | |
<author>embray</author> | |
Yeah I dunno--if we don't fix the docs first, merging this will break the docs. I don't really care too much right now given that it's only temporary. But that's just me. | |
<author>eteq</author> | |
There are lots of oddities in the way ``build_sphinx`` seems to build the docs that it might already be considered broken anyway, and this just solidifies it... so as far as the docs are concerned, I'd say its fine to merge this if its ready and later add the fix that generates the docs against ``astropy/build`` | |
<author>embray</author> | |
Okay then. By the end of today I will do one last rebase on trunk, and then merge \o/ | |
<author>astrofrog</author> | |
Sounds good! | |
<author>eteq</author> | |
This is a huge merge! Good work, @iguananaut | |
<author>embray</author> | |
@eteq It actually seemed to break GitHub for a while. For at least an hour after the merge it wasn't properly displaying the source tree. I finally gave up and went home, but it seems to be fixed now. | |
<author>embray</author> | |
I'm going home for the day. Not feeling too great so I need to take care of myself. | |
To the remaining Astropy visitors, I'm sorry I didn't get to say bye properly but it was good seeing you, and I'm sure I'll see you around. | |
Erik | |
</issue> | |
<issue> | |
<author>wkerzendorf</author> | |
NDData framework | |
<author>wkerzendorf</author> | |
Implemented the NDData framework as discussed at the workshop. This is to make it easy for other developers to start on specialized nddata objects like images and the like.
<author>mdboom</author> | |
It would probably be worth having a discussion about how this relates to @taldcroft's `Table` class (in progress). I assume that this is for non-tabular data, and the `Table` class is for tabular data, but should one be a subclass of the other? Should `Table` columns be returned as `NDData` objects? | |
<author>wkerzendorf</author> | |
@mdboom: I think we talked about this at the meeting. They are supposed to be two different classes, but for some objects (like spectra) there should be converter functions between one and the other. | |
<author>taldcroft</author> | |
@mdboom: The latest iteration of table.Column subclasses ndarray and I think this is a good thing from a user perspective. At the meeting there was some discussion about keeping things clean (as in the current NDData class definition) vs. subclassing ndarray and getting a mix. Lately I'm preferring the latter (but of course there are good arguments on both sides). I haven't been following the NDData discussions so I don't have a good feel for how it would work as the base for a Table Column but will keep this option in mind. | |
<author>eteq</author> | |
+1 for `Table` being based on arrays and not `NDData`. We definitely want a standard convention for converting to tables, as @wkerzendorf said (probably just a `to_table` method?), but I think that a `Table` is something that won't always necessarily have all the other stuff we said `NDData` normally will have (e.g. something wcs-like, units on dimensions, etc.). | |
We should definitely monitor this as use cases develop, though - we definitely want to avoid the unfortunate example of all the different FITS ways of representing table-like data... | |
<author>eteq</author> | |
I added a few comments to commit wkerzendorf/astropy@ee85feb ... apparently it doesn't automatically pull that into the pull request here (I guess even github can't always read your mind) | |
<author>eteq</author> | |
The new comments I added are on the docstrings... to catch these you might just try adding them quickly to the documentation with a file in the docs directory that looks like: | |
``` | |
NDData package | |
-------------- | |
.. automodule:: astropy.nddata | |
:members: | |
.. automodule:: astropy.nddata.nddata | |
:members: | |
``` | |
Or something like that. We don't necessarily want this in the repo now, as there'll be a docs re-organization that should take care of that stuff automatically pretty soon... but this way at least you can test to be sure the docstrings don't render weirdly. | |
<author>astrofrog</author> | |
+1 on keeping `Table` and `NDData` separate | |
<author>wkerzendorf</author> | |
@eteq You can add the file in the docs once this is merged in (I don't exactly know where it's supposed to go). I have changed the docstrings to less than 100 characters and added the `~numpy.nddata` reference. I think for now it serves as a good starting point.
<author>eteq</author> | |
@wkerzendorf Right, I just meant you should add that while you're testing this, because then you can see if it spits out errors when you build the documentation. And in fact, when I do that now, it's still giving errors (you may have missed my two inline comments above that are necessary to get the docs to parse without errors).
I've submitted a pull request directly to your branch with some modifications that make the documentation build correctly and be a bit more explicit.
<author>wkerzendorf</author> | |
@eteq I've tried to find the pull request on my branch, but I don't see it open @ https://github.com/wkerzendorf/astropy/pulls | |
There are none there. | |
To my shame I have to admit that I didn't try to build the documentation :D (maybe I should learn to do this) | |
<author>crawfordsm</author> | |
Glad I stumbled across this--I had initially put something together, but then held off on it as I was waiting to hear back from Tom, and then it completely fell off my radar. So thanks for actually submitting it. My initial version was essentially the same so I don't have too many comments at this point. The real hold up is waiting for the WCS package to really further develop how the WCS and arrays interact. | |
I think we were pretty agnostic on the error handling. Variance vs. Error really depends on what you are doing most of the time | |
And I agree that NDdata should not include Tables or the other way around, but I can definitely see where an object might inherit from both NDdata and Tables. | |
<author>eteq</author> | |
@wkerzendorf Sorry, I was working on it locally and forgot to push up and actually issue the pull request! At any rate, you should see it now as wkerzendorf/astropy#1. Might I suggest you pull that and then we can have a discussion on it here just so we have it all in one place? | |
<author>eteq</author> | |
I realize a lot of those documentation changes may seem to be overkill here, but I think it's important that the early modules like this be very explicit about how the interfaces work, so that people don't start building on them in ways that are subtly incompatible but appear at first glance to be fine.
At any rate, people may want to give it another quick glance to make sure they're happy with all the additional validation. Basically, these are just slightly stricter requirements for shape-matching and so on, with the associated more pedantic documentation.
<author>astrofrog</author> | |
I've added a few inline comments, most of them minor, but the main one that needs to be addressed is the definition of the mask (should be False=valid, True=invalid). We may already want to allow masks to be either boolean or integer, where False/0 is valid, and anything else is invalid? | |
<author>eteq</author> | |
See my comment above. With the point of more complicated flagging schemes in mind, I agree that it makes sense to switch to False/0 as valid, and bool or integer, and also string should be allowed (maybe *anything* should be allowed, but I think for a mask it's hard to imagine wanting anything other than bool, int, or string).
@wkerzendorf, do you want to implement this, or should I do that in a new pull request?
<author>crawfordsm</author> | |
Definitely in our initial discussion, the idea was to keep the NDData as general as possible. People will certainly have different uses for it and things that we haven't thought of. Obviously, we could generalize it to the point of being useless, but I think staying agnostic on the mask and the error array are definitely the way to go. As such, I would remove the validation on the two of them in NDdata other than they should have the same shape as NDData. | |
<author>crawfordsm</author> | |
I would hope in the future that units are included as part of the WCS, as we might not have just one axis to describe -- i.e. data cubes or data sets. So I would imagine the units in NDData right now could be the name of the units or a list of the names of the units.
<author>astrofrog</author> | |
@crawfordsm - I think the ``.units`` attribute is meant to be the actual units of the pixel values, not the coordinates (the WCS should indeed include the units along the axes). | |
<author>eteq</author> | |
@crawfordsm I see your point that the mask should be left open to any type as long as it can be successfully broadcast onto the data - @wkerzendorf, that just means remove lines 92 and 93. As it stands with that change, they are converted into arrays (although if copy is False, they are references rather than copies) - are you saying you don't think even that should be true?
and @astrofrog and @crawfordsm - I agree that `.units` is supposed to apply only to the `.data`, and the WCS should take care of the coordinates. Should we try to make that more explicit in the docstring?
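For concreteness, here is a minimal sketch of a shape-only mask check along the lines discussed above (a hypothetical helper, not the actual NDData code):
```
import numpy as np

def check_mask(data, mask):
    """Accept any mask dtype; only require that it broadcasts against the data."""
    mask = np.asarray(mask)
    try:
        np.broadcast(data, mask)
    except ValueError:
        raise ValueError("mask shape %s cannot be broadcast to data shape %s"
                         % (mask.shape, np.asarray(data).shape))
    return mask
```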
<author>astrofrog</author> | |
@eteq - it might indeed be worth clarifying in the docstring that the units apply to the pixel values, not the coordinates. | |
<author>wkerzendorf</author> | |
I was busy with graduation and talks, so I just merged @eteq's comments in last time, but didn't have time to review the whole thing (by the way, thanks for that - I should really look into that whole documentation-building business).
@crawfordsm: Sorry, I should have emailed you about doing it. My initial idea was to have a very general nddata that would be very small, but it has been transformed (probably for the better).
I believe the .shape and .dtype is very good as well as the tests. | |
meta: In the description of metadata, it sounds like it has something to do with observations. NDData is very general, so I think meta should just be specified as a dictionary containing metadata (NDData should handle everything from simulation data to observational data).
units: I think the description of units is fine, but if someone wants to add something just post it and I'll add it in.
error: I think it should be an array with a dimension >= the dimension of the data array. I believe you could store multiple values which describe the error somehow (std, kurtosis, ...). I would leave this completely open in NDData and not force the person into std.
mask: similar to error. Leave it open imho.
copy: ambivalent about it. I guess it's not bad as it's sort of an np array.
init-function: There are an awful lot of if-statements in there. When you want to instantiate a couple of million of those, empty or just with an array, it will make things slower. I would not make checks on the data, because that will slow things down.
A quick question: how do I see what the code looks like once it's merged? I just want to see the whole code somewhere on github - is that possible?
<author>eteq</author> | |
@wkerzendorf - As the PR interface is fancy enough to show, I added a new pull request integrating the comments people gave above, including converting mask to any data type. This can of course be modified further, but I'd like to have the fixed version be what we see here. | |
Also, to see what the full merged code looks like, you can look directly at your branch: https://github.com/wkerzendorf/astropy/tree/nddata is what it looks like right now. Or do you mean after it's merged into the *current* master? I don't know of a way to do that on github. What I do is go to master *locally* on my own computer, do ``git merge wkerzendorf/nddata`` or whatever, and then examine what that produces. Note, however, that that merge *will* appear in your master branch afterward if you push it up to github. If that's not what you want, be sure to do ``git reset --hard upstream/master`` to go back to the current astropy master.
<author>eteq</author> | |
@wkerzendorf's comments - | |
meta: I don't quite understand what you're saying here - are you saying you think something is wrong with the description, or that you like it as is and are just clarifying it further? I tried to write that description such that it was as general as possible... | |
error: If we don't specify what the error means eventually, it's totally useless as an interface. I agree that we do *not* want to actually stick with the std definition long-term, and that's why there's that warning that clearly says this meaning *will* change. But I think we should give people something to work with until we hash that out, and practically speaking, when people see "error" they generally take that to mean either "std" or "variance", assuming gaussian statistics. Do others have an opinion on this? | |
mask: agreed on this one - the PR I just issued removes the type-checking, so all that it does is check that the shape matches (e.g. that it can be "broadcast" in the numpy sense). | |
copy: I think this has to be in here, because all of the hardest bugs to catch I've encountered in numpy-related projects have been either due to copying when it shouldn't happening or not copying when it should have. After all, ["explicit is better than implicit"](http://www.python.org/dev/peps/pep-0020/). | |
init: Similarly, tons of bugs come from code where someone passes in a list thinking that it will act as an array, and these if statements prevent that... so I think validation should be in there, at least by default. But I see your point that it could be useful to sometimes skip validation, so I added a `validate` argument in the latest PR that lets the checking be bypassed (although it does the checking by default)
I did some quick tests, and it appears that the difference between including validation or not is about 10 microseconds for 10,000-element arrays. So you would have to initialize literally around a million NDData objects to really notice a difference...
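As a rough illustration of the pattern (hypothetical, not the code from the PR):
```
import numpy as np

def prepare_data(data, validate=True):
    """Convert and check the input by default, but allow the checks to be
    skipped, e.g. when constructing huge numbers of objects in a tight loop."""
    if validate:
        data = np.asarray(data)  # so a plain list can't masquerade as an array
    return data
```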
<author>wkerzendorf</author> | |
@eteq | |
meta: I would just change the description to say 'Dictionary of meta data', without referencing observations. I'll also change the description in the __init__.py to mention theoretical and simulation data as well.
mask: sounds good. | |
error: I thought about that, and yes, if we limit it to variance it makes it easier for us to implement error propagation and so on. But as this is the base-base-class I would leave it open (just making sure they are compatible in the number of dimensions; e.g. if data has shape 10,10 -> error can be 10,10 or 10,10,2 or 10,10,2,5).
copy: agreed | |
init: sounds good. So if we don't validate, it doesn't take those additional 10 microseconds?
Even in the current form I'm fine with a merge; I'll add one thing to the docstring and then I'm done
<author>eteq</author> | |
@wkerzendorf | |
regarding the meta description: Ah... I disagree that just saying "metadata" is enough. "Metadata" means lots of different things to different people, and means *nothing* to some people... So at the very least the sentence before the "e.g." or something similar should stay in. I see your point that we don't want to be overly biased toward one particular set of examples, though - I just thought some concrete cases would help people understand. Feel free to either eliminate the examples, or extend the list with some other cases that you think cover everything we have in mind.
<author>eteq</author> | |
Oh, and regarding error: Sorry if I wasn't clear before... I'm saying the "stddev" meaning is *not* what we will actually have in a final version of this class. That's why I put in a warning to clearly say that it absolutely should not be taken as permanent. It's just something to give people some concrete definition for the first version. If we leave it completely open to have any meaning at all, any subclasses will be completely incompatible because they'll use different definitions of `error`, so having something in there right now makes it clear that we will eventually define something more concrete. | |
And it doesn't actually do any kind of checking right now beyond that the shapes match, as you say. So the code won't stop people from using whatever meaning of error they want (as long as the shape matches) - it's only the docstring that has anything to say about it at all.
<author>eteq</author> | |
@wkerzendorf - Is this ready to go? in your earlier message you said something about altering docstrings... | |
<author>wkerzendorf</author> | |
sorry, I'm on holiday and internet access is sporadic. I'll fix this up in a few days. | |
<author>wkerzendorf</author> | |
I'm done from my side. I guess we can merge soon and then decide about the next steps | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Use config and data systems for io.vo and wcs. Fix some initial bugs in data system. | |
<author>mdboom</author> | |
Uses `config.data.get_data_filename` and `config.get_data_fileobj` in `io.vo` and `wcs` packages where appropriate. This is mainly in the test code. | |
Adds support for globbing the contents of a directory using `config.get_data_filenames` and `config.get_data_fileobjs`. (Addresses #95). | |
Adds a way to get locally-installed data files without importing Python modules along the way. This uses a shortcut where the caller's module is determined using the `find_current_module` trick used in the configuration framework. (Addresses #94). | |
Note there were a lot of trailing whitespace problems in `data.py` so the diff is funky there. Turn off whitespace diffing to make it more obvious what I changed. We should probably all set our editors to remove trailing whitespace if possible. | |
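For readers unfamiliar with the trick, here is a rough stdlib-only sketch of that caller-module lookup (the real `find_current_module` in the configuration framework is more involved):
```
import inspect

def caller_module(depth=1):
    """Return the module ``depth`` frames up the call stack (1 = the immediate caller)."""
    frame = inspect.currentframe()
    for _ in range(depth):
        if frame is None:
            return None
        frame = frame.f_back
    return inspect.getmodule(frame)
```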
<author>eteq</author> | |
See mdboom/astropy#2 for a few additional tests and the adjustment I described above. | |
It might be also a good idea to add something in the config docs (or docstrings) that the '../../data' is the way to get at the astropy/data directory. Or maybe just put it in the astropy/data/README.rst or something. | |
<author>mdboom</author> | |
Rebased against master and responded to comments above. | |
<author>eteq</author> | |
Note that in #108, @iguananaut has another use for the version of `find_current_module` that goes back until it hits a new module. So probably that's a better solution than the private-methods version. Note that #108 has already implemented this, but outside `find_current_module` so either this or that pull request should move that into `find_current_module`, and then they should both use the same syntax in the calling. | |
<author>mdboom</author> | |
Update to use the new `finddiff` option in `find_current_module`. | |
<author>mdboom</author> | |
I think this is probably good to go, now, but @eteq may want to give it one more sanity check. | |
<author>eteq</author> | |
Looks great to me, now - merging! | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Refactored XML code | |
<author>mdboom</author> | |
There probably isn't a whole lot to review here. This basically moves the C code that is not VOTable-specific into `astropy.utils.xml` and leaves the rest in `astropy.io.vo`. | |
At the same time, this corrects the lifetime of file objects through proper use of the with statement. | |
<author>eteq</author> | |
I haven't looked over it terribly closely, but all the tests run fine, and it seems straightforward... so looks good to merge to me unless someone else wants to look over it. | |
</issue> | |
<issue> | |
<author>eteq</author> | |
vo tests all fail in Mac OS X 10.6 | |
<author>eteq</author> | |
When I run the current master (baeb755edc89ecb78ca942ed1d1efa98d7d4f85d) test suite under OS X with python 2.7, all except one (test_too_many_columns is the only passing one) of the tests in astropy/io/vo/tests/vo_test.py fail or error. The failures are in pastebin at http://paste.pocoo.org/show/517880/ through http://paste.pocoo.org/show/517888/ . There are 96 errors, so I'm not going to include them all here, but they all seem fairly similar in where they lead to the one I've pasted below (mainly in that they all end at _convert_to_fd_or_read_function with an IOError). | |
Also, I did a ``git bisect`` to figure out where the error was introduced - it appears to be baeb755edc89ecb78ca942ed1d1efa98d7d4f85d, the xml refactoring commit (not necessarily surprising).
``` | |
________________ ERROR at setup of TestFixups.test_implicit_id _________________ | |
self = <class astropy.io.vo.tests.vo_test.TestFixups at 0x4b1fd50> | |
def setup_class(self): | |
self.table = parse( | |
> join(ROOT_DIR, "regression.xml"), pedantic=False).get_first_table() | |
astropy/io/vo/tests/vo_test.py:179: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
source = '/Users/erik/src/astropy/astropy/io/vo/tests/data/regression.xml' | |
columns = None, invalid = 'exception', pedantic = False, chunk_size = 256 | |
table_number = None, filename = None, _debug_python_based_parser = False | |
def parse(source, columns=None, invalid='exception', pedantic=True, | |
chunk_size=tree.DEFAULT_CHUNK_SIZE, table_number=None, | |
filename=None, | |
_debug_python_based_parser=False): | |
""" | |
Parses a VOTABLE_ xml file (or file-like object), and returns a | |
`~astropy.io.vo.tree.VOTable` object, with nested | |
`~astropy.io.vo.tree.Resource` instances and | |
`~astropy.io.vo.tree.Table` instances. | |
Parameters | |
---------- | |
source : str or readable file-like object | |
Path or file object containing a VOTABLE_ xml file. | |
columns : sequence of str, optional | |
List of field names to include in the output. The default is | |
to include all fields. | |
invalid : str, optional | |
One of the following values: | |
- 'exception': throw an exception when an invalid value is | |
encountered (default) | |
- 'mask': mask out invalid values | |
pedantic : bool, optional | |
When `True`, raise an error when the file violates the spec, | |
otherwise issue a warning. Warnings may be controlled using | |
the standard Python mechanisms. See the `warnings` | |
module in the Python standard library for more information. | |
chunk_size : int, optional | |
The number of rows to read before converting to an array. | |
Higher numbers are likely to be faster, but will consume more | |
memory. | |
table_number : int, optional | |
The number of table in the file to read in. If `None`, all | |
tables will be read. If a number, 0 refers to the first table | |
in the file, and only that numbered table will be parsed and | |
read in. | |
filename : str, optional | |
A filename, URL or other identifier to use in error messages. | |
If *filename* is None and *source* is a string (i.e. a path), | |
then *source* will be used as a filename for error messages. | |
Therefore, *filename* is only required when source is a | |
file-like object. | |
Returns | |
------- | |
votable : `astropy.io.vo.tree.VOTableFile` object | |
See also | |
-------- | |
astropy.io.vo.exceptions : The exceptions this function may raise. | |
""" | |
invalid = invalid.lower() | |
assert invalid in ('exception', 'mask') | |
config = { | |
'columns' : columns, | |
'invalid' : invalid, | |
'pedantic' : pedantic, | |
'chunk_size' : chunk_size, | |
'table_number' : table_number, | |
'filename' : filename} | |
if filename is None and isinstance(source, basestring): | |
config['filename'] = source | |
with iterparser.get_xml_iterator( | |
source, | |
_debug_python_based_parser=_debug_python_based_parser) as iterator: | |
return tree.VOTableFile( | |
> config=config, pos=(1, 1)).parse(iterator, config) | |
def parse_single_table(source, **kwargs): | |
astropy/io/vo/table.py:100: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <contextlib.GeneratorContextManager object at 0x6943cd0>, type = None | |
value = None, traceback = None | |
def __exit__(self, type, value, traceback): | |
if type is None: | |
try: | |
> self.gen.next() | |
/Library/Frameworks/Python.framework/Versions/7.0/lib/python2.7/contextlib.py:24: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
source = '/Users/erik/src/astropy/astropy/io/vo/tests/data/regression.xml' | |
_debug_python_based_parser = False | |
@contextlib.contextmanager | |
def get_xml_iterator(source, _debug_python_based_parser=False): | |
""" | |
Returns an iterator over the elements of an XML file. | |
The iterator doesn't ever build a tree, so it is much more memory | |
and time efficient than the alternative in `cElementTree`. | |
Parameters | |
---------- | |
fd : readable file-like object or read function | |
Returns | |
------- | |
parts : iterator | |
The iterator returns 4-tuples (*start*, *tag*, *data*, *pos*): | |
- *start*: when `True` is a start element event, otherwise | |
an end element event. | |
- *tag*: The name of the element | |
- *data*: Depends on the value of *event*: | |
- if *start* == `True`, data is a dictionary of | |
attributes | |
- if *start* == `False`, data is a string containing | |
the text content of the element | |
- *pos*: Tuple (*line*, *col*) indicating the source of the | |
event. | |
""" | |
with _convert_to_fd_or_read_function(source) as fd: | |
if _debug_python_based_parser: | |
context = _slow_iterparse(fd) | |
else: | |
context = _fast_iterparse(fd) | |
> yield iter(context) | |
def get_xml_encoding(source): | |
astropy/utils/xml/iterparser.py:195: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <contextlib.GeneratorContextManager object at 0x6943750>, type = None | |
value = None, traceback = None | |
def __exit__(self, type, value, traceback): | |
if type is None: | |
try: | |
> self.gen.next() | |
/Library/Frameworks/Python.framework/Versions/7.0/lib/python2.7/contextlib.py:24: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
fd = '/Users/erik/src/astropy/astropy/io/vo/tests/data/regression.xml' | |
@contextlib.contextmanager | |
def _convert_to_fd_or_read_function(fd): | |
""" | |
Returns a function suitable for streaming input, or a file object. | |
This function is only useful if passing off to C code where: | |
- If it's a real file object, we want to use it as a real | |
C file object to avoid the Python overhead. | |
- If it's not a real file object, it's much handier to just | |
have a Python function to call. | |
Parameters | |
---------- | |
fd : object | |
May be: | |
- a file object, in which case it is returned verbatim. | |
- a function that reads from a stream, in which case it is | |
returned verbatim. | |
- a file path, in which case it is opened. If it ends in | |
`.gz`, it is assumed to be a gzipped file, and the | |
:meth:`read` method on the file object is returned. | |
Otherwise, the raw file object is returned. | |
- an object with a :meth:`read` method, in which case that | |
method is returned. | |
Returns | |
------- | |
fd : context-dependent | |
See above. | |
""" | |
if IS_PY3K: | |
if isinstance(fd, io.IOBase): | |
yield fd | |
return | |
else: | |
if isinstance(fd, file): | |
yield fd | |
return | |
if is_callable(fd): | |
yield fd | |
return | |
elif isinstance(fd, basestring): | |
if fd.endswith('.gz'): | |
from ...utils.compat import gzip | |
with gzip.GzipFile(fd, 'rb') as real_fd: | |
yield real_fd.read | |
real_fd.flush() | |
return | |
else: | |
with open(fd, 'rb') as real_fd: | |
yield real_fd | |
> real_fd.flush() | |
E IOError: [Errno 9] Bad file descriptor | |
astropy/utils/xml/iterparser.py:94: IOError | |
________________ ERROR at setup of TestReferences.test_fieldref ________________ | |
self = <class astropy.io.vo.tests.vo_test.TestReferences at 0x4b1fdc0> | |
def setup_class(self): | |
> self.votable = parse(join(ROOT_DIR, "regression.xml"), pedantic=False) | |
astropy/io/vo/tests/vo_test.py:190: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
source = '/Users/erik/src/astropy/astropy/io/vo/tests/data/regression.xml' | |
columns = None, invalid = 'exception', pedantic = False, chunk_size = 256 | |
table_number = None, filename = None, _debug_python_based_parser = False | |
def parse(source, columns=None, invalid='exception', pedantic=True, | |
chunk_size=tree.DEFAULT_CHUNK_SIZE, table_number=None, | |
filename=None, | |
_debug_python_based_parser=False): | |
""" | |
Parses a VOTABLE_ xml file (or file-like object), and returns a | |
`~astropy.io.vo.tree.VOTable` object, with nested | |
`~astropy.io.vo.tree.Resource` instances and | |
`~astropy.io.vo.tree.Table` instances. | |
Parameters | |
---------- | |
source : str or readable file-like object | |
Path or file object containing a VOTABLE_ xml file. | |
columns : sequence of str, optional | |
List of field names to include in the output. The default is | |
to include all fields. | |
invalid : str, optional | |
One of the following values: | |
- 'exception': throw an exception when an invalid value is | |
encountered (default) | |
- 'mask': mask out invalid values | |
pedantic : bool, optional | |
When `True`, raise an error when the file violates the spec, | |
otherwise issue a warning. Warnings may be controlled using | |
the standard Python mechanisms. See the `warnings` | |
module in the Python standard library for more information. | |
chunk_size : int, optional | |
The number of rows to read before converting to an array. | |
Higher numbers are likely to be faster, but will consume more | |
memory. | |
table_number : int, optional | |
The number of table in the file to read in. If `None`, all | |
tables will be read. If a number, 0 refers to the first table | |
in the file, and only that numbered table will be parsed and | |
read in. | |
filename : str, optional | |
A filename, URL or other identifier to use in error messages. | |
If *filename* is None and *source* is a string (i.e. a path), | |
then *source* will be used as a filename for error messages. | |
Therefore, *filename* is only required when source is a | |
file-like object. | |
Returns | |
------- | |
votable : `astropy.io.vo.tree.VOTableFile` object | |
See also | |
-------- | |
astropy.io.vo.exceptions : The exceptions this function may raise. | |
""" | |
invalid = invalid.lower() | |
assert invalid in ('exception', 'mask') | |
config = { | |
'columns' : columns, | |
'invalid' : invalid, | |
'pedantic' : pedantic, | |
'chunk_size' : chunk_size, | |
'table_number' : table_number, | |
'filename' : filename} | |
if filename is None and isinstance(source, basestring): | |
config['filename'] = source | |
with iterparser.get_xml_iterator( | |
source, | |
_debug_python_based_parser=_debug_python_based_parser) as iterator: | |
return tree.VOTableFile( | |
> config=config, pos=(1, 1)).parse(iterator, config) | |
def parse_single_table(source, **kwargs): | |
astropy/io/vo/table.py:100: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <contextlib.GeneratorContextManager object at 0x694aaf0>, type = None | |
value = None, traceback = None | |
def __exit__(self, type, value, traceback): | |
if type is None: | |
try: | |
> self.gen.next() | |
/Library/Frameworks/Python.framework/Versions/7.0/lib/python2.7/contextlib.py:24: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
source = '/Users/erik/src/astropy/astropy/io/vo/tests/data/regression.xml' | |
_debug_python_based_parser = False | |
@contextlib.contextmanager | |
def get_xml_iterator(source, _debug_python_based_parser=False): | |
""" | |
Returns an iterator over the elements of an XML file. | |
The iterator doesn't ever build a tree, so it is much more memory | |
and time efficient than the alternative in `cElementTree`. | |
Parameters | |
---------- | |
fd : readable file-like object or read function | |
Returns | |
------- | |
parts : iterator | |
The iterator returns 4-tuples (*start*, *tag*, *data*, *pos*): | |
- *start*: when `True` is a start element event, otherwise | |
an end element event. | |
- *tag*: The name of the element | |
- *data*: Depends on the value of *event*: | |
- if *start* == `True`, data is a dictionary of | |
attributes | |
- if *start* == `False`, data is a string containing | |
the text content of the element | |
- *pos*: Tuple (*line*, *col*) indicating the source of the | |
event. | |
""" | |
with _convert_to_fd_or_read_function(source) as fd: | |
if _debug_python_based_parser: | |
context = _slow_iterparse(fd) | |
else: | |
context = _fast_iterparse(fd) | |
> yield iter(context) | |
def get_xml_encoding(source): | |
astropy/utils/xml/iterparser.py:195: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <contextlib.GeneratorContextManager object at 0x692b2b0>, type = None | |
value = None, traceback = None | |
def __exit__(self, type, value, traceback): | |
if type is None: | |
try: | |
> self.gen.next() | |
/Library/Frameworks/Python.framework/Versions/7.0/lib/python2.7/contextlib.py:24: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
fd = '/Users/erik/src/astropy/astropy/io/vo/tests/data/regression.xml' | |
@contextlib.contextmanager | |
def _convert_to_fd_or_read_function(fd): | |
""" | |
Returns a function suitable for streaming input, or a file object. | |
This function is only useful if passing off to C code where: | |
- If it's a real file object, we want to use it as a real | |
C file object to avoid the Python overhead. | |
- If it's not a real file object, it's much handier to just | |
have a Python function to call. | |
Parameters | |
---------- | |
fd : object | |
May be: | |
- a file object, in which case it is returned verbatim. | |
- a function that reads from a stream, in which case it is | |
returned verbatim. | |
- a file path, in which case it is opened. If it ends in | |
`.gz`, it is assumed to be a gzipped file, and the | |
:meth:`read` method on the file object is returned. | |
Otherwise, the raw file object is returned. | |
- an object with a :meth:`read` method, in which case that | |
method is returned. | |
Returns | |
------- | |
fd : context-dependent | |
See above. | |
""" | |
if IS_PY3K: | |
if isinstance(fd, io.IOBase): | |
yield fd | |
return | |
else: | |
if isinstance(fd, file): | |
yield fd | |
return | |
if is_callable(fd): | |
yield fd | |
return | |
elif isinstance(fd, basestring): | |
if fd.endswith('.gz'): | |
from ...utils.compat import gzip | |
with gzip.GzipFile(fd, 'rb') as real_fd: | |
yield real_fd.read | |
real_fd.flush() | |
return | |
else: | |
with open(fd, 'rb') as real_fd: | |
yield real_fd | |
> real_fd.flush() | |
E IOError: [Errno 9] Bad file descriptor | |
astropy/utils/xml/iterparser.py:94: IOError | |
``` | |
<author>mdboom</author> | |
In my overzealous flushing (which was very much required *sometimes* when writing files), I added this to read-only files, which of course makes no sense. So best to just remove. | |
Can you try the attached patch and let me know if it works for you? | |
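For reference, a minimal sketch of the fix being described (a simplification; the real `_convert_to_fd_or_read_function` handles several other input types):
```
import contextlib

@contextlib.contextmanager
def open_path_readonly(path):
    with open(path, 'rb') as fd:
        yield fd
    # No fd.flush() here: the file is opened read-only, so there is nothing
    # to flush, and OS X raises "Bad file descriptor" if you try.
```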
<author>eteq</author> | |
That seems to have done the trick - all the test now pass for me - I will merge. | |
So OS X (and maybe BSD more generally?) won't flush read-only files, but Linux will... odd, but c'est la vie.
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Issue with positions outside sky | |
<author>astrofrog</author> | |
The following header defines a transformation such that some pixels in the image fall outside the sky | |
SIMPLE = T / | |
BITPIX = -32 / | |
NAXIS = 2 / | |
NAXIS1 = 2048 / | |
NAXIS2 = 2048 / | |
EXTEND = T / | |
BSCALE = 1.00000000000E+00 / | |
BZERO = 0.00000000000E+00 / | |
CDELT1 = -8.19629704013E-02 / | |
CRPIX1 = 1.02500000000E+03 / | |
CRVAL1 = 79.95701 | |
CTYPE1 = 'RA---SIN' / | |
CDELT2 = 8.19629704013E-02 / | |
CRPIX2 = 1.02500000000E+03 / | |
CRVAL2 = -45.779 | |
CTYPE2 = 'DEC--SIN' / | |
EPOCH = 2.00000000000E+03 / | |
PV2_1 = -0.755124458581295 | |
PV2_2 = 0.209028857410973 | |
When converting pixel positions outside the sky: | |
import pyfits | |
header = pyfits.Header() | |
header.fromTxtFile('header.hdr') | |
from astropy.wcs import WCS | |
wcs = WCS(header) | |
print wcs.wcs_pix2sky([[100.,500.]], 0) # outside sky | |
print wcs.wcs_pix2sky([[200.,200.]], 0) # outside sky | |
print wcs.wcs_pix2sky([[1000.,1000.]], 0) | |
The result is a normal value, and there is no indication that the pixel is outside the sky: | |
[[ 259.95701 -44.221 ]] | |
[[ 259.95701 -44.221 ]] | |
[[ 82.96101414 -47.72314271]] | |
Should the values not be returned e.g. as NaN values to indicate that they are not valid? | |
<author>perrygreenfield</author> | |
That would seem sensible, though I'm curious about what wcslib itself returns in a case like this. Mike?
Perry
<author>perrygreenfield</author> | |
You can access the wcslib layer directly like this:
wcs.wcs.p2s(...)
It's documented here:
http://stsdas.stsci.edu/astrolib/pywcs/api_wcsprm.html#pywcs.Wcsprm.p2s
It returns a "stat" array which is 1 wherever the value is invalid. The higher-level methods currently ignore this, but they could be modified to set all invalid values to NaN, or to return a masked array.
Mike
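A minimal sketch of the NaN option, using only the ``stat`` array (a hypothetical helper, not the actual change):
```
import numpy as np

def mask_invalid(world, stat):
    """Replace coordinate pairs flagged as invalid (stat != 0) with NaN."""
    world = np.array(world, dtype=float, copy=True)
    world[np.asarray(stat, dtype=bool)] = np.nan
    return world

# e.g. mask_invalid([[259.957, -44.221], [82.961, -47.723]], [1, 0])
# -> [[nan, nan], [82.961, -47.723]]
```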
<author>astrofrog</author> | |
I think it would probably be best and fastest to just change the values to NaN by default when the stat array is set to 1 (rather than switching to masked arrays). | |
<author>embray</author> | |
I agree--if the values are changed to NaN then it would be trivial to make a masked array from that if desired. | |
<author>mdboom</author> | |
@astrofrog: Can you confirm this works, and merge if so? I will then backport this to pywcs. | |
<author>astrofrog</author> | |
@mdboom: works great - thanks! | |
</issue> | |
<issue> | |
<author>embray</author> | |
Make get_git_devstr() magically know the best path to use... | |
<author>embray</author> | |
...without manually specifying one (though a manual path is still possible). Also updated to handle OSErrors. And just after I suggested maybe we were done tweaking this machinery for now... But this seems worthwhile to eliminate confusion. I tested this out with the template affiliated package and it works great. | |
<author>eteq</author> | |
@mdboom and I had just been talking about exactly this for use in the data.py module (#104) - we could add an option to `find_current_module()` that makes it go however far back you want in the call stack, and then keep going *further* until it reaches a *different* module. That would suit the need here just fine, right? | |
<author>eteq</author> | |
That is to say, basically we would just pull the extra bit you did here into `find_current_module()` with an optional argument. So you could either adjust `find_current_module` here, and have #104 use what you change here, or #104 could do so, and you can shorten this to use that functionality. | |
<author>embray</author> | |
I'm all for adding the additional functionality to find_current_module. | |
<author>eteq</author> | |
I went ahead and did this, so you should be able to re-write this using the `finddiff` option, assuming #109 goes through. | |
<author>eteq</author> | |
Apparently I missed that you updated this last week for the `finddiff` option. Looks good to me... @astrofrog , have you checked if this works with the package template? We should probably make sure of that before merging... | |
<author>embray</author> | |
I think I only just did that yesterday actually... though github's display is lumping all the commits together with the first one from 12/08 *shrug*
It works for me with the package template, but it would be nice to have some third-party confirmation before merging. | |
<author>astrofrog</author> | |
Works for me with the template package. Once this is merged, I'll update the template package to have ``get_git_devstr(False)`` instead of ``get_git_devstr(False, path=PACKAGENAME)``. | |
<author>eteq</author> | |
Alright, sounds good. Merging! | |
</issue> | |
<issue> | |
<author>eteq</author> | |
find_current_module finddiff option | |
<author>eteq</author> | |
This adjusts the `astropy.utils.misc.find_current_module` function to have a `finddiff` keyword that, if True, searches back in the call stack until a different module is found. This is needed for both PR #108 and PR #104 | |
<author>embray</author> | |
+1 worksforme | |
<author>eteq</author> | |
@mdboom, assuming this works for you, I (or you) can go ahead and merge it to clear the way for #104 and #108...
<author>mdboom</author> | |
Looks good. Merging. | |
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
Use bundled pytest | |
<author>jiffyclub</author> | |
Since #76 doesn't seem to work, this is my next favorite option: `astropy.tests.helper` loads the bundled py.test unless the user has set a config option called `use_system_pytest`. I've also added the `minversion=2.2` option to `setup.cfg` to warn users of the version requirement. I've confirmed that this setup works even with py.test version 2.1 installed on my system.
@eteq, let me know if I've misused the config system. | |
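A rough sketch of the selection logic described above; the import path of the bundled copy and the config lookup are illustrative assumptions, not the actual helper code:
```
USE_SYSTEM_PYTEST = False  # stand-in for the ``use_system_pytest`` config option

def get_pytest():
    if USE_SYSTEM_PYTEST:
        import pytest                      # whatever version is installed system-wide
    else:
        from astropy.tests import pytest   # hypothetical location of the bundled copy
    return pytest
```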
<author>eteq</author> | |
This looks fine... bummer that we can't make py.test be a bit smarter, but this should be fine. | |
So if for some reason someone has to use 2.1, just commenting out the ``minversion=2.2`` and setting use_system_pytest to True will work fine for the non-parametric tests, right (I'm mostly concerned here about automated systems, although maybe that doesn't matter)?
<author>jiffyclub</author> | |
That's right - the only thing stopping anyone from using whatever pytest they want at the command line is the `minversion=2.2`, and if they want `astropy.test` to use the system one they'd set the config option.
I tried to test the config option but I'm not sure how to set it, is that working yet? | |
<author>eteq</author> | |
It seems to be working for me - if you run the following code, it should save the default configuration setting: | |
``` | |
from astropy.tests import helper | |
helper.USE_SYSTEM_PYTEST.save() | |
``` | |
There should then be a ``$HOME/.astropy/config/astropy.cfg`` file, and it should have ``use_system_pytest = False`` along with the description. If you change that to True, and then re-run py.test, it seems to switch over to the correct one. | |
Once Issue #87 is addressed, this file should automatically be populated with all the defaults... but that's not done yet. | |
<author>jiffyclub</author> | |
That works for me too, cool! | |
<author>eteq</author> | |
@astrofrog, are you ok with this solution? If so, I'll merge this and close #76 (or you can) | |
<author>astrofrog</author> | |
Looks good to me! | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Scripts scheme | |
<author>eteq</author> | |
This adds a section to the development documentation that outlines how to write command-line scripts, and also adds a backport of the argparse module that's available in py >=2.7 and >=3.2 so we can use that for all command line scripts. | |
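A minimal, hypothetical example of the kind of script layout this enables (names are illustrative, not the documented convention):
```
import argparse

def main(args=None):
    """Entry point; ``args`` defaults to sys.argv[1:] when None."""
    parser = argparse.ArgumentParser(description="Example command-line script")
    parser.add_argument('filename', help="input file to operate on")
    parser.add_argument('-v', '--verbose', action='store_true',
                        help="print extra information")
    opts = parser.parse_args(args)
    if opts.verbose:
        print("processing %s" % opts.filename)

if __name__ == '__main__':
    main()
```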
<author>mdboom</author> | |
Aside from very minor inline comments, +1 from me. | |
<author>embray</author> | |
Aside from my small comment +1. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Test what will be installed, not the source tree. | |
<author>mdboom</author> | |
I know I've been pushing on this issue for a while -- I think it would be helpful to have a discussion on this topic in one place.
The current implementation of `python setup.py test` builds C extensions "in place" in the source tree and then tests in the source tree. On python3, since 2to3 needs to be run over the source tree, it builds and then tests under the `build` directory. | |
This has a number of problems that I encounter on a regular basis: | |
1) Fundamentally, we're not testing what's being delivered. If a data file was inadvertently not added to `package_data`, a test will still pass even though the file is not being installed and the feature will break for real users. This means running `python setup.py test` and then building a tarball from that directory does not ensure that the tarball will also pass the tests. | |
2) Anything that relies on code generation doesn't work. `2to3` is one example, but we also code-generate the compatibility shims for `pywcs` and `vo.table`. There is currently no way to test these using `python setup.py test`. | |
3) Forcing a rebuild of C extensions requires doing something like `git clean -fd`, which may delete things other than just build products and is therefore potentially dangerous. `rm -rf build` is much safer. | |
4) python2 and 3 are being tested in a different way. This means that tests that pass under 2 may fail under 3, for reasons that have nothing to do with the differences between the languages. (See the missing data file example). | |
So what are the advantages of testing in-place? Besides shaving a few seconds off of copying files, I can't see any. (And that's made moot by the present implementation, which actually first builds under `build` and then in place anyway.) Is there anything others are relying on that would be missed by making this change?
<author>astrofrog</author> | |
This seems reasonable to me! | |
<author>eteq</author> | |
I think the reason for in-place is that it meshes well with the ``setup.py develop`` scheme. That is, if you're in development mode, it doesn't need to re-compile separately for both tests and the develop-installed version. Similarly, direct py.test invocation like ``py.test astropy/wcs`` works if the compiled binaries are in the source tree, but it fails if they're all built in ``build`` (although of course if you're in develop mode, they do work based on the ``develop`` generated binaries). | |
While I work in develop mode and hence I find the current system a bit more convenient, I see your points, also, so I don't really have a strong opinion. But @iguananaut or @jiffyclub may have more insight... | |
<author>embray</author> | |
I haven't tried this out, but as long as `./setup.py test` still does the copy to `build/` automatically (and I believe, from looking at it, that it would) I'm fine with this. | |
<author>mdboom</author> | |
@eteq: This doesn't preclude using `./setup.py develop`. In some ways, I think that's another argument for this change, since if you regularly work in develop mode, you'll want a convenient way to test that everything works in normal "built" or "installed" mode as well, otherwise you may not notice when data files aren't added to `setup.py` etc. | |
@iguananaut: Yes, `./setup.py test` still runs the equivalent of `./setup.py build` first, so there are no additional steps required. | |
<author>embray</author> | |
Works for me. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Add `get_data_file_content` function. | |
<author>mdboom</author> | |
This adds a convenience method for something that was occurring a lot in the code -- where one wants the entire contents of a data file as a bytes object in memory. | |
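Roughly speaking, it is a one-call wrapper along these lines (a sketch only, reusing the `get_data_fileobj` helper mentioned elsewhere in these threads; not the actual implementation):
```
from astropy.config import get_data_fileobj

def get_data_file_content(data_name):
    """Return the entire contents of a data file as a bytes object."""
    fileobj = get_data_fileobj(data_name)
    try:
        return fileobj.read()
    finally:
        fileobj.close()
```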
<author>eteq</author> | |
Should we be worried that this now creates 3 different ways to access the same files? I originally made the first two because the `get_data_fileobj` function had to be there to allow remote access without caching (as I imagine that might be useful in some cases for large files and small computers or whatever). This is a convenience function... and it is indeed convenient, but I worry a little that we are violating "there should be only one obvious way to do it." (Not necessarily objecting, just putting it out there.) | |
Also, probably should add this function into the developer guidelines section of the docs where the other ``get_data_*`` functions are listed. | |
<author>embray</author> | |
I'm fine with this since it serves as a direct replacement to `pkgutil.get_data()`. | |
<author>eteq</author> | |
Oh, and it also occurs to me that something like this should be added, as the `See Also` section is a part of the numpy docstring format. | |
``` | |
See Also | |
-------- | |
get_data_fileobj : returns a file-like object with the data | |
get_data_filename : returns a local name for a file containing the data | |
``` | |
And similar for those two functions | |
<author>embray</author> | |
@eteq That's probably a good compromise on the "more than one way" issue. | |
<author>mdboom</author> | |
Ok. Updated to address the comments here. | |
<author>eteq</author> | |
I was just about to merge this, but when I ran the docs, they failed hard (sphinx raised an exception)... apparently you aren't supposed to put backticks around the name of something when you include it in the ``See Also`` section - if I just remove the backticks from all the ``See Also`` sections the docs build fine, so I fixed that pre-merge and have now pulled it in.
</issue> | |
<issue> | |
<author>mdboom</author> | |
Add volint script | |
<author>mdboom</author> | |
A simple script to validate a VOTable file. Hopefully follows the new script-writing documentation correctly. | |
<author>eteq</author> | |
Looks fine to me! | |
<author>astrofrog</author> | |
Works for me! I tested it on a few VO tables, and it works well. Hopefully this will make it easier for people to realize they are providing invalid VO tables ;-) | |
<author>astrofrog</author> | |
@mdboom - feel free to merge | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Docs/automodapi extension | |
<author>eteq</author> | |
This adds a new sphinx extension that automatically creates "autosummary" style tables based on the public attributes in a package/module. It also adds a "automodapi" sphinx directive that can be used to quickly generate a canonical api style that we would presumably adopt for most astropy packages. | |
To see the extension in action, build the docs from this branch, and look at the "Reference/API" section of the "Configuration system Documentation" section of the documentation. | |
If this goes through, I'll use it to refactor and clean up the documentation to have uniform API sections (except for vo and pywcs, given that they're already in-place and would probably be complicated to adjust). | |
<author>astrofrog</author> | |
I get a couple of errors: | |
/Users/tom/dropbox/code/development/astropy_forks/eteq/docs/development/testguide.rst:43: ERROR: Error in "note" directive: | |
invalid option block. | |
.. note:: | |
This method of running the tests defaults to the version of `py.test` that | |
is bundled with Astropy. To use the locally-installed version, you should | |
either change the `use_system_pytest` configuration option to "True" (see | |
:doc:`../../configs`) or the `py.test` method describe below. | |
/Users/tom/dropbox/code/development/astropy_forks/eteq/docs/development/testguide.rst:138: ERROR: Error in "note" directive: | |
invalid option block. | |
.. note:: | |
This method of running the tests defaults to the version of `py.test` that | |
is bundled with Astropy. To use the locally-installed version, you should | |
either change the `use_system_pytest` configuration option to "True" (see | |
:doc:`../../configs`) or the `py.test` method describe above. | |
There is also a wcs-related error, but I'll report that in a different ticket as it was there before your changes. | |
<author>eteq</author> | |
@astrofrog - that's very odd... I don't see these errors at all. What version of Sphinx do you have? I'm also confused as to how this could possibly trigger errors in ``testguide.rst``, because that's not modified in this pull request... | |
<author>astrofrog</author> | |
Ok, this error does indeed occur in master using Sphinx 1.1.2, so you can ignore it for this pull request. I fixed it directly in master in 5d338b3ce9054ef2d1df135dedf9d8244c96d0dd. | |
<author>eteq</author> | |
@astrofrog - huh, that fix is very strange - I don't understand why this would have fixed your error... especially given that I built these docs with the same version of sphinx (1.1.2) and they built without complaint. Ah well, if it fixed it, it fixed it, and no harm done! | |
<author>astrofrog</author> | |
I'm not sure either, but it seems it interpreted :doc: at the start of the indented line to be an option directive. Maybe I was using a developer version of sphinx. | |
<author>eteq</author> | |
Hmm... well, the latest development version seems to currently be at the 1.1.2 release, so unless you were using an *older* developer version, that should be it. Very strange! But I guess it doesn't matter if it's working for both of us now. | |
One thing I just realized, though - if I build the docs using ``python setup.py build_sphinx``, it places the generated files in a ``_generated`` directory *in the source root*, instead of in ``docs``. I'll have to figure out how to fix this before we can merge this. | |
<author>mdboom</author> | |
Minor point I just noticed while working -- `docs/_generated` should be added to `.gitignore`. | |
<author>eteq</author> | |
@astrofrog and @mdboom | |
I think this is now working the way I had in mind. There are tests, but the coverage isn't that good because a lot of it has to be done in a sphinx context. But if you look at the configuration documentation section, you can see basically what this produces now (there are also a lot more options).
One of the new options is that it is now possible to separate out the documentation for different subpackages/modules of the module being documented. I'll start a broader discussion on the list regarding exactly how often we want to use that, but these extensions now support either way, so it shouldn't affect the PR itself. | |
<author>astrofrog</author> | |
@eteq - I seem to be getting tons of warnings about generated files: https://gist.github.com/2016457 - is this normal? | |
<author>eteq</author> | |
@astrofrog - That should not be happening... It looks like you tested it using ``build_sphinx``, is that correct? I haven't tested that, mainly because I don't really trust ``build_sphinx`` until #171 is implemented. If you instead install astropy (via either ``setup.py install`` or ``setup.py develop``), and then use ``make html`` in the docs directory, do the warnings go away? | |
Also, can you look for the ``_generated`` directory and tell me if the files it's looking for are actually there? One of the big subtleties I had to fix in this PR involves convincing all the ``:toctree: <dir>`` entries to go into the same directory, so if the ``_generated`` directory is in the wrong place, that might explain it.
<author>astrofrog</author> | |
@eteq - ah, I got confused, I thought build_sphinx was the way to go (since the docs require the package to be built). Doing ``build`` then ``make html`` does work better, and produces the tables with the links to the generated API pages. There is still an error in the middle of ``make html`` but it doesn't seem to stop the rest from working:
<autodoc>:0: ERROR: Unexpected indentation. | |
(see full log here: https://gist.github.com/2020724) | |
<author>astrofrog</author> | |
By the way, I've just committed a fix for the wrap before the colon in 'Parameters :' - see 7b530aba5c04c6f1da9bdfc048d02796a4e0d3ac | |
<author>astrofrog</author> | |
@eteq - could you push your branch to staging so I can run the tests? | |
<author>eteq</author> | |
@astrofrog - I just pushed to my staging branch - see https://github.com/eteq/astropy/tree/staging. | |
The ``<autodoc>:0: ERROR: Unexpected indentation.`` is pre-existing - if you go back to master you'll see it's present there as well. I've seen errors like this before, but they are painful to debug because for some reason autodoc sometimes gets very confused when it's used with numpydoc... I generally find them by incrementally removing .rst files until I find which removal fixes it. But I'll do that separately from this pull request given that it's not directly related.
And great that you found a fix for the 'Parameters :' problem! That had been really annoying for some time. | |
<author>eteq</author> | |
@astrofrog , @mdboom , as far as you're concerned, can I merge this? | |
<author>astrofrog</author> | |
I don't have time to review the code in detail, but it looks like it's working fine, and producing nice output! So from my point of view, this is ready to go. | |
<author>mdboom</author> | |
Yep -- I'm in the same camp with @astrofrog. | |
<author>eteq</author> | |
Ok, sounds good - merging! | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Pickle utils | |
<author>eteq</author> | |
This adds a couple of useful functions (and associated tests) to utils.misc. They are basically just helper functions that simplify pickling and unpickling from files (single-function-call instead of having to deal with opening files).
This also adds a documentation section for utils.misc that was previously missing. | |
<author>astrofrog</author> | |
I get three test failures with Python 2.6: | |
http://paste.pocoo.org/show/527244 | |
http://paste.pocoo.org/show/527245 | |
http://paste.pocoo.org/show/527246 | |
<author>astrofrog</author> | |
I get three test failures with Python 3.1 (different reason): | |
http://paste.pocoo.org/show/527247 | |
http://paste.pocoo.org/show/527248 | |
http://paste.pocoo.org/show/527249 | |
<author>astrofrog</author> | |
Just out of interest, is there any reason to *not* use cPickle? In other words, do we really need to give the user that option? | |
<author>astrofrog</author> | |
I also get three failures with Python 3.2, but think it's similar to the failures with 3.1: | |
http://paste.pocoo.org/show/527251 | |
http://paste.pocoo.org/show/527252 | |
http://paste.pocoo.org/show/527253 | |
<author>eteq</author> | |
I don't have an easily-accessible version of python 2.6 at the moment, but the change I just sent up should fix those failures... @astrofrog, can you test it?
As for cPickle, I guess I can't think of any case where I wouldn't want to use it, but I was thinking there might be something I hadn't thought of that other people might care about. I can easily remove it though - do you think that would be better/less-confusing? | |
<author>eteq</author> | |
I don't understand the python 3 error, though. It seems to be in pickle itself... For example, | |
``` | |
>>> import pickle
>>> f=open('tstpk','w')
>>> fp=43.2456 | |
>>> pickle.dump(fp,f) | |
Traceback (most recent call last): | |
File "<stdin>", line 1, in <module> | |
TypeError: must be str, not bytes | |
``` | |
So is this a bug in python 3's pickler, or is there some subtle change to how pickle works that I'm not understanding? | |
<author>astrofrog</author> | |
I think in Python 3, pickle writes bytes rather than str, so you need to open the file with ``wb`` instead of just ``w``.
<author>astrofrog</author> | |
Regarding pickle vs cpickle, I don't actually know, but I was raising the question in case anyone does. It does seem there are a few small differences between the two apart from performance, the biggest one being the ability to define custom pickling for sub-classes (see http://docs.python.org/library/pickle.html?highlight=cpickle#module-cPickle). So I guess for now we should leave the option. | |
<author>astrofrog</author> | |
Regarding the Python 3 error, it looks like it will take a bit more than just changing the file opening - adding ``b`` to the modes causes a new error:
UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0: invalid start byte | |
A couple of other things: | |
* I noticed there are no tests for providing file objects instead of filenames. | |
* It would be useful to add docstrings and comments to the tests to know what is being tested | |
<author>eteq</author> | |
@astrofrog - Adding ``b`` to the file mode for both `fnpickle` and `fnunpickle` seems to do the trick... did you maybe not open the reader in ``b`` mode? At any event, the tests (which I updated as per your suggestions) now pass for me in both 2.7.2 and 3.2.2...
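(For reference, the binary-mode pattern in question boils down to something like this - just a sketch, not the actual ``fnpickle``/``fnunpickle`` implementation:)
```
import pickle

def fnpickle(obj, filename):
    # 'wb' is essential on Python 3, where pickle writes bytes, not str.
    with open(filename, 'wb') as f:
        pickle.dump(obj, f)

def fnunpickle(filename):
    # The reader also needs binary mode, otherwise Python 3 tries to decode
    # the pickle bytes as text and raises UnicodeDecodeError.
    with open(filename, 'rb') as f:
        return pickle.load(f)
```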
<author>eteq</author> | |
One other thing about this - I put this in ``utils`` because it's not astronomy-specific, but it's certainly the sort of thing a user may want to use to do things like pickling data they've created in an interactive session. Is it possible that it belongs better in ``tools``? Although it's not astronomy-specific, which I think is what we mostly said ``tools`` is for...
<author>astrofrog</author> | |
I think this should stay in utils. I think we definitely want to keep the science vs general utility separation to avoid confusion. These are more like convenience functions, but don't really implement any algorithms or functionality that is not in Python already. | |
Tests pass in 2.6.7, 2.7.2, 3.1.4, and 3.2.2, and I am happy with the updated docstrings for the tests, so as far as I can tell, this is ready to merge. | |
<author>mdboom</author> | |
Tests passing for me. | |
<author>eteq</author> | |
I think I've addressed everything - if there are no more comments I'll merge in a day or two. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Error when generating WCS documentation | |
<author>astrofrog</author> | |
When building the latest docs, I get the following error:
Traceback (most recent call last):wcsprm | |
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 321, in import_object | |
__import__(self.modname) | |
ImportError: No module named _wcs | |
/Users/tom/tmp/astropy/docs/wcs/api_wcsprm.rst:14: WARNING: autodoc can't import/find class 'astropy.wcs._wcs.Tabprm', it reported error: "No module named _wcs", please check your spelling and sys.path | |
<author>eteq</author> | |
Odd, I do *not* see this error (either on Ubuntu 11.04 nor OS X 10.6 w/ py2.7) - the docs build without any problem. | |
<author>astrofrog</author> | |
I'm using Sphinx 1.1.2. I just tested with a clean master, and it still occurs. The error occurs because of this autoclass statement: | |
.. autoclass:: astropy.wcs._wcs.Tabprm | |
which expects the built C extension. But even after running ``python setup.py build`` or ``python setup.py install`` I can't get that error to go away, maybe because we now include the ``astropy`` source directory in the tree correctly? (https://github.com/astropy/astropy/blob/master/docs/conf.py#L24) - and that source directory doesn't contain the built c extensions. | |
<author>mdboom</author> | |
Are you installing astropy into a virtualenv, perhaps? The sphinx-build being used to build the docs must be able to import the installed astropy. What I generally do is install sphinx into my virtualenv or add my virtualenv to PYTHONPATH to get around this limitation. | |
<author>astrofrog</author> | |
I'm not using a virtualenv. Just to make sure I understand, the following line in ``conf.py`` should force the source directory to be used instead of the installed ``astropy``: | |
sys.path.insert(0, os.path.abspath('..')) | |
If I change the line to: | |
sys.path.insert(0, os.path.abspath('.')) | |
as it used to be (which was wrong), then the installed version is used and I don't see this error anymore. Do your PYTHONPATH edits override the above setting and place the installed astropy first in the path, in front of the source directory? | |
<author>eteq</author> | |
Are you building the docs by doing ``python setup.py build_sphinx``, or ``make html`` in the docs directory? | |
<author>astrofrog</author> | |
I'm using the latter | |
<author>eteq</author> | |
Maybe try the former? It *should* do exactly the same thing as doing ``python setup.py build`` followed by the ``make html`` thing... but maybe it's getting confused about which directory you're in?
You might also try uninstalling the system-installed version (or ``setup.py develop`` link) and seeing if you can get it built that way - if it builds there but doesn't with the install, then at least we know it's got to do with the installed version instead of the actual building process...
<author>astrofrog</author> | |
If I remove any installed version, clone a clean copy of the master branch in the upstream repository, and run: | |
python setup.py build_sphinx | |
I get: | |
$ python setup.py build_sphinx | |
Freezing version number to astropy/version.py | |
------------------------------------------------------------ | |
The legacy package 'vo' was found. | |
To install astropy's compatibility layer instead, uninstall | |
'vo' and then reinstall astropy. | |
------------------------------------------------------------ | |
------------------------------------------------------------ | |
The legacy package 'pywcs' was found. | |
To install astropy's compatibility layer instead, uninstall | |
'pywcs' and then reinstall astropy. | |
------------------------------------------------------------ | |
running build_sphinx | |
creating docs/_static | |
creating docs/_build | |
creating docs/_build/doctrees | |
creating docs/_build/html | |
Running Sphinx v1.1.2 | |
loading pickled environment... not yet created | |
loading intersphinx inventory from http://docs.python.org/objects.inv... | |
loading intersphinx inventory from http://docs.scipy.org/doc/scipy/reference/objects.inv... | |
loading intersphinx inventory from http://docs.scipy.org/doc/numpy/objects.inv... | |
loading intersphinx inventory from http://matplotlib.sourceforge.net/objects.inv... | |
building [html]: all source files | |
updating environment: 45 added, 0 changed, 0 removed | |
Traceback (most recent call last):wcsprm | |
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 321, in import_object | |
__import__(self.modname) | |
ImportError: No module named _wcs | |
reading sources... [100%] wcs/relax | |
/Users/tom/dropbox/code/development/astropy/docs/wcs/api_wcsprm.rst:14: WARNING: autodoc can't import/find class 'astropy.wcs._wcs.Tabprm', it reported error: "No module named _wcs", please check your spelling and sys.path | |
looking for now-outdated files... none found | |
pickling environment... done | |
checking consistency... done | |
preparing documents... done | |
WARNING: dot command 'dot' cannot be run (needed for graphviz output), check the graphviz_dot setting | |
writing output... [100%] wcs/relax | |
writing additional files... (19 module code pages) _modules/index | |
genindex py-modindex np-modindex search | |
copying images... [100%] development/workflow/pull_button.png | |
copying static files... done | |
dumping search index... done | |
dumping object inventory... done | |
build succeeded, 2 warnings. | |
(note that the error is still there). Am I doing something wrong? | |
<author>eteq</author> | |
I was able to reproduce it by starting a fresh clone, and I think I understand what's happening: when building, the docs inject ``..`` into `sys.path`, as @astrofrog pointed out... but if you start fresh and only do ``python setup.py build`` (or anything that triggers the build), the C extensions get put into ``build/lib.platformstuff/...``, and the docs only see the normal source tree, which does *not* include the built versions.
This was working for me (and presumably @mdboom) because I use ``python setup.py develop``, which puts the compiled code inside the source directory... but we definitely want it to work the way you tried to do it. I think the solution is to inject ``../build/lib.whatever`` into `sys.path` (*after* ``..``), but I'm not sure what the best way is to figure out what the `lib` directory is. Does anyone know if there's some stdlib or setuptools function that can tell you this?
<author>astrofrog</author> | |
I think adding the `lib` directory from `build` will be a bit messy, as the build directory can contain several lib directories (from different python versions). The cleanest way is probably just to *not* inject anything into ``sys.path``, and to just require ``astropy`` be installed in order to build the docs. One could always consider adding ``..`` to ``sys.path``, but at the end, rather than at the start, so that it is only used as a last resort, and would not include compiled C extensions.
<author>eteq</author> | |
Well, my thought was that there's always only one ``build/lib.whatever`` directory for any given python version/platform... and the information is all in the `platform` stdlib module, so it shouldn't be that hard to make sure that *only* the appropriate ``build/lib`` directory is added. I was just wondering if anyone knew of a "standard" way to do it (perhaps @iguananaut or @mdboom?). It might just be hidden somewhere in distutils, in which case we might as well use it.
I, at least, find it frustrating to use the as-installed version for building the doc. I know some projects do this, but I sometimes want to build docs for a different version than the one I have installed (or even for a package I don't necessarily want to install at all), which is then impossible. I've also built the wrong docs many times by not doing ``setup.py install`` first -- that's particularly annoying if you're doing ``make html`` because you have to always go back to the source dir or have two consoles open. If we put ``..`` first, followed by ``build/lib.whatever``, I think that's the least error-prone way. | |
<author>eteq</author> | |
I did a few tests after finding `distutils.util.get_platform`... see eteq/astropy@4bedaf8a2e518271dd29f78becb25f1576ad9a60 - that does what I was suggesting above... | |
However, it reveals another problem: because we do ``import astropy`` in ``setup.py``, it still imports the in-source version if I do ``python setup.py build_sphinx``, although it now works perfectly if you do ``make html`` inside the docs directory. We could add some extra bits to make setup.py use the ``build`` version of astropy (if it's present) to get around this, or play around with imports in the ``build_sphinx`` command (which is already customized for us). Alternatively, we could do as @astrofrog suggests, and put ``..`` (and perhaps the `get_platform` thing I did in the above commit) at the *end*... | |
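Roughly speaking, the commit does something along these lines in ``conf.py`` (a sketch of the idea, not the literal diff):
```
import os
import sys
from distutils.util import get_platform

# Add the platform-specific build directory (e.g. build/lib.linux-x86_64-2.7)
# after '..', so the compiled C extensions can be found when building the docs.
builddir = os.path.join('..', 'build',
                        'lib.{0}-{1}'.format(get_platform(), sys.version[:3]))
if os.path.isdir(builddir):
    sys.path.insert(1, os.path.abspath(builddir))
```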
<author>phn</author> | |
I was wondering whether using something like Tox is better than attempting to add too many modifications to sys.path and other variables. | |
One issue with Tox is that it creates the "sdist" archive and then tries to build the entire package, after copying the "sdist" to an isolated virtualenv. This means that testing only the documentation build may not be possible. | |
Tox also has a ``commands`` option that can run arbitrary commands. So, for example one can say | |
commands=python -c "import astropy; astropy.test()" | |
to run tests using ``py.test`` shipped with astropy. | |
I am not an expert, so can't think of potential problems. | |
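For reference, a minimal ``tox.ini`` along those lines might look something like this (purely illustrative):
```
[tox]
envlist = py27, py32

[testenv]
commands = python -c "import astropy; astropy.test()"
```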
<author>taldcroft</author> | |
I haven't tested with astropy just now, but in the past I got around this problem with | |
python setup.py build_ext --inplace | |
That builds the compiled extensions within the source dir. | |
<author>mdboom</author> | |
@phn: I haven't spent much time with Tox, but building docs without first building the code is impossible by design anyway, so I wouldn't worry about being able to build docs separately from anything else.
@eteq: It would be nice to simplify this, but there's a problem with being too magical here since we don't (currently) control how and where Sphinx was installed and building in place is such a problem with multiple branches of astropy or multiple versions of python. | |
The doc build works by running a Makefile which calls sphinx-build in the shell. This sphinx-build may or may not be the same Python that was invoked by `python setup.py build_sphinx`. So one could build astropy in place and still fail because Sphinx expects the C extensions to be compiled for a different version of Python. | |
I think I fall in with @astrofrog's suggestion that you kind of need to know what you're doing and we shouldn't inject anything into `sys.path`. The docs should just clearly say "install astropy, and make sure it is importable from the same environment as Sphinx is installed in".
However, if we want something more automated, there may be a way to leverage setuptools and ensure Sphinx is installed in the current Python environment, then set the SPHINXBUILD environment variable to point to that specific installation of the `sphinx-build` script, then add `build/platform` to the path (don't rely on `--inplace`, it's too broken when using multiple versions of Python), and then use the makefile to build the docs.
<author>mdboom</author> | |
Scratch my comment about `build_sphinx` failing if using a different python interpreter from the one where sphinx is installed. `build_sphinx` doesn't use the Makefile -- I've been seeing problems for an unrelated reason. I suppose I'm ok with adding `build/platform` to `sys.path` in that case -- and we can do it from our custom `build_sphinx` command and leave the raw `make html` invocation alone. | |
<author>eteq</author> | |
@taldcroft - ``build_ext --inplace`` I think does the same thing as ``develop`` in terms of building - so that works, but I don't think we want to do that if we want to try to be using the standard as-built version. | |
@mdboom - just so I understand, in your second post you're suggesting have it *first* try the installed version, and then falling back on ``build/platform``, or the other way around? | |
<author>astrofrog</author> | |
I agree with @mdboom that we can do the magic for the ``build_sphinx`` command, and that the ``make html`` should just use the installed version (so we should remove the ``sys.path`` command from ``conf.py``). | |
<author>mdboom</author> | |
@eteq: I wasn't clear... but I was suggesting that `build_sphinx` would use the version in `build/platform` and that `make html` would do nothing magical (i.e. just use whatever Python paths the user may have already set up). | |
<author>eteq</author> | |
Ah, now that I get it, that sounds like a good plan to me as well. | |
I can go ahead and do this as part of #115, or we can have a separate pull request to address this and I can rebase #115 against that - either way is fine with me. | |
<author>eteq</author> | |
This should be closed when #171 is closed. | |
<author>astrofrog</author> | |
Now that #187 has been merged in, I think this issue can be closed. I no longer see the WCS error I originally reported in this issue. @eteq - can you confirm this can be closed? | |
<author>eteq</author> | |
Yep, this is now fixed (at least, it's no longer the docs' fault if wcs doesn't build! :) | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Wcs/windows compilation | |
<author>mdboom</author> | |
This is just some minor fixes to make WCS compile under Microsoft Visual Studio 9.0 and 10.0, ported from pywcs. | |
<author>eteq</author> | |
This seems to be based on #115 ... is that intentional? | |
<author>mdboom</author> | |
@eteq: Oops. Not intentional. Should be fixed now. | |
<author>eteq</author> | |
Seems fine to me, although I can't test it due to a lack of Visual Studio... | |
</issue> | |
<issue> | |
<author>eteq</author> | |
3 vo tests failing on py 3.2.1 | |
<author>eteq</author> | |
I ran the tests with python 3.2.1 on Mac OS X 10.6, and 3 of the VO-related tests failed. Shining Panda seems fine, so this might be Mac specific (or something peculiar to me). URLs for tracebacks are below. The last time I tried in python 3 these weren't happening, but that was probably several weeks ago... | |
http://paste.pocoo.org/show/530200 | |
http://paste.pocoo.org/show/530201 | |
http://paste.pocoo.org/show/530202 | |
<author>mdboom</author> | |
What version of numpy are you running? It looks like it could be somehow related. | |
<author>astrofrog</author> | |
Strange, I can't reproduce this with Python 3.2.2 on 10.6 and using 88ffc87af9c4d994165d8bd2d219a78c9ec5e47d | |
<author>astrofrog</author> | |
I'm using Numpy 1.6.1 | |
<author>eteq</author> | |
I'm also on numpy 1.6.1... | |
But I am on Python 3.2.1 rather than 3.2.2... could that be it? Is ShiningPanda on 3.2.2 or 3.2.1?
<author>mdboom</author> | |
Sorry I haven't really dug into this. Is this still a live issue? | |
<author>eteq</author> | |
I am still seeing it, and some new ones... what I should probably do is try to find some time to update to 3.2.2 and see if that fixes it. (for the record, I do *not* see this error with 3.2.2 on Ubuntu 11.10). | |
<author>mdboom</author> | |
I wonder if `git bisect` would be helpful here? | |
<author>mdboom</author> | |
FWIW: python 3.2.1 on Fedora 16 works fine, so I don't think it's that dimension that is the problem. | |
<author>eteq</author> | |
``git bisect`` was a good thought, but in trying to go back to a commit where it worked, I couldn't find one that worked at all... so I suspect this is some sort of subtle change in my python 3 configuration (or my memory was incorrect that it worked before) | |
I also wiped and re-installed python 3 for kicks, and the failures are still happening, although I'm encountering a different problem that seems to have something to do with pytest not playing nice with unicode... | |
@astrofrog, are you using the macports 3.2.2, or did you install it by hand or with some other tool? | |
<author>astrofrog</author> | |
@eteq - I'm using the MacPorts Python 3.2.2 | |
<author>eteq</author> | |
Well this is very strange - I'm also on the macports version. @astrofrog, are you on 64-bit python, or 32-bit? (It doesn't seem like that should matter, but I'm grasping at straws here). | |
@mdboom, I can't help but think that http://paste.pocoo.org/show/530200 is the key here - it seems like ``table.array.dtype`` is only reporting *one* of the two names it's supposed to have (e.g. only "string_test" instead of both "string test" and "string_test"). Am I correct in assuming that dtype syntax simply means it should accept both of those as valid names? | |
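(For reference, a quick illustration of a field carrying both a title and a name - the values are just made up to mirror the traceback:)
```
>>> import numpy as np
>>> dt = np.dtype([(('string test', 'string_test'), 'S10')])
>>> dt.names  # only the *name* shows up here...
('string_test',)
>>> sorted(dt.fields.keys())  # ...but both the title and the name map to the field
['string test', 'string_test']
```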
<author>astrofrog</author> | |
I'm using the 64-bit macports Python. Do you have any config files in your home directory that could affect things? Have you tried running it as a different user with a clean home directory? | |
<author>astrofrog</author> | |
Maybe you should try and create a virtualenv (with --no-site-packages) and install a fresh version of Numpy from source into the virtualenv to see if it also causes the issue. | |
Also, what do you get for the following: | |
$ port installed py32-numpy | |
Warning: port definitions are more than two weeks old, consider using selfupdate | |
The following ports are currently installed: | |
py32-numpy @1.6.1_1 (active) | |
are you using 1.6.1_1, or a different macports revision? (the _1) | |
<author>mdboom</author> | |
This seems to be due to having `pyfits` installed. I'm now finally able to reproduce and will dig down deeper to see what the faulty interaction may be. | |
<author>mdboom</author> | |
The `_fix_dtype` code in pyfits doesn't handle `titles` in the dtype, so they get thrown out. This was masked by a recent change in `pyfits` that doesn't attempt to fix the dtype if the dtype passed in is not an actual dtype object. However, that masking was then nullified by the change in #133, which makes vo pass in its dtype as a dtype.
I think the real solution here is to fix `_fix_dtype`... I should have a solution to run by @iguananaut shortly. | |
In the meantime, we could revert #133, but long term it shouldn't matter. | |
<author>mdboom</author> | |
Here's the patch to pyfits: | |
https://trac.assembla.com/pyfits/ticket/110 | |
<author>eteq</author> | |
So what do you think we should do for astropy - just mark it as an xfail if the pyfits version is below whenever this fix gets put in? Or is there some reasonable way to patch vo? | |
<author>mdboom</author> | |
If this were python 2.x, I'd be worried, but as it stands, I don't think a lot of people are using pyfits on python 3.x. We could override the pyfits monkeypatch with a working one, I suppose.
I'm hoping pyfits will be integrated in astropy reasonably soon, and then we can ignore this whole issue, however, and just use the internal io.fits.
@iguananaut: any thoughts? | |
<author>astrofrog</author> | |
@eteq and @mdboom: I was worrying about the same thing, and reached the same conclusion - that we probably want astropy.io.fits merged in before the first astropy release, so the PyFITS version issue would be moot by then. | |
<author>eteq</author> | |
Ok, that's a good point - unless @iguananaut has anything further to add, I'll just make a change directly in master to mark this as an expected fail if using py 3.x and the pyfits version is <= 3.0.4.
Should I close this issue in that commit, or should we leave it open until we're sure it's actually passing once the pyfits patch gets in? | |
<author>embray</author> | |
I don't know if I'm too late to this, but I guess you can say expected fail. I think the io.fits package will be ready to merge soonish though. I just need to fix a couple more things and integrate with the astropy config system. | |
<author>eteq</author> | |
Commit 7cc6f4e9eac6361f40a34dbe4818a67d56e0eb3b makes these xfail if you're on py 3.x and pyfits <= 3.0.4 is installed. I'll leave this issue open until the next pyfits release (or pyfits is integrated into astropy - whichever comes first). | |
<author>astrofrog</author> | |
According to Jenkins, it looks like the commit broke vo_test for all Python versions? | |
<author>eteq</author> | |
There was a typo in that commit that I missed because I have pyfits installed (and Jenkins of course does not) - I've fixed it in ef025c6553f979b7e0397bd2a463309362c28c11 and the Jenkins test now look to be passing. | |
<author>eteq</author> | |
I just tested this with the just-released pyfits 3.0.5, and the tests now all pass, so I'm closing this issue. | |
<author>astrofrog</author> | |
I'm reopening this issue, because the tests now fail when PyFITS 2.4.0 is installed. The version string for PyFITS 2.4.0 is actually ``2.4.0exported``, so: | |
_pyfits_vers = tuple([int(i) for i in pyfits.__version__.split('.')]) | |
fails. One possible workaround is to use distutils.version to compare versions properly: | |
In [1]: from distutils import version | |
In [2]: version.LooseVersion('2.4.0exported') <= version.LooseVersion('3.0.4') | |
Out[2]: True | |
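So the check could look something like this (a sketch of the safer comparison, not necessarily how it should actually be written):
```
from distutils import version
import pyfits

_pyfits_too_old = (version.LooseVersion(pyfits.__version__) <=
                   version.LooseVersion('3.0.4'))
```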
By the way, I have Jenkins set up with 74 different Numpy/PyFITS combinations (and that's not all the stable PyFITS releases) to try and spot issues like this. Can't wait for PyFITS to be in astropy to not have to worry about this anymore! | |
<author>embray</author> | |
> Can't wait for PyFITS to be in astropy to not have to worry about this anymore! | |
That said, I don't have too much left to do on that. Maybe I should just aim to get that done this week, and drop this issue? | |
<author>astrofrog</author> | |
@iguananaut - That would be great! | |
<author>mdboom</author> | |
@astrofrog: I made a fix for the version comparison yesterday. Are you sure you have the latest master? The commit was 64e7e8664 | |
<author>astrofrog</author> | |
@mdboom - oops, I was testing someone's fork that was rebased prior to 64e7e86. All tests pass with all Python/Numpy/PyFITS versions!
</issue> | |
<issue> | |
<author>mdboom</author> | |
wcs/updgrade-wcslib-4.8.4 | |
<author>mdboom</author> | |
This upgrades our internal wcslib to 4.8.4. Adds new functionality to fix CD matrices (cdfix()). Also better informational messages from wcsprm.fix().
<author>eteq</author> | |
tests pass for me, so I'm fine with merging. | |
<author>astrofrog</author> | |
Tests pass for me on Mac OS 10.6 with Python 2.6.7, 2.7.2, 3.1.4, and 3.2.2 | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Add basic installation documentation. | |
<author>mdboom</author> | |
@perry pointed out to me that we don't have any sort of basic documentation about building and installing astropy. While it mostly follows the standard distutils model, and there are bits of this information scattered around the docs, I still think it's useful to have really basic information all in one place.
I think we should push to merge sooner rather than later -- someone coming cold to the project would still appreciate this even though it's incomplete. | |
I've tried to keep this as generic as possible and not get too down in the weeds. I chose to link to, rather than explain, how to install the third-party dependencies for example, since there's so many ways one may wish to do that. Of course, down the road we may want to write platform-specific guides to getting everything going for complete Python beginners, but I'm not sure that belongs here. That's, of course, open to discussion. | |
<author>astrofrog</author> | |
This looks good! | |
<author>eteq</author> | |
This is definitely a good idea to include... once #115 gets merged in, I plan to re-organize the documentation and polish some of the developer sections anyway, so the exact location and way it fits in to the rest of the docs is probably not too important right now. | |
<author>eteq</author> | |
Aside from the comment I just made and the one before about ``easy_install``, I think this is fine, so feel free to merge once you've addressed those however you think makes sense.
<author>mdboom</author> | |
Ok. I've added a commit to address these concerns. I think it is ready now. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Properly verify the `unit` attribute in VOTable files | |
<author>mdboom</author> | |
This performs verification on unit attributes. It doesn't really parse them and know anything about them, but once we have a units framework in astropy, it would be useful to be able to convert to and from the format used by VOTABLE files (and this is hopefully a starting point for that). | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
What to do when ~/.astropy is not writeable | |
<author>astrofrog</author> | |
While trying to run the astropy build on a mac with Jenkins to automate testing with different Python versions, I ran into an issue. The tests are run using the anonymous user, and when Jenkins tries to build astropy, it needs to be able to create the .astropy directory in the home directory. However, in this case, it tries to access ``/var/root``, which is of course not writeable: | |
Started by user anonymous | |
Checkout:workspace / /Users/Shared/Jenkins/Home/jobs/astropy-python2.7/workspace - hudson.remoting.LocalChannel@611c4041 | |
Using strategy: Default | |
Last Built Revision: Revision 48af8d6b12f8b2964e037ee7137ad91fa298d5b7 (origin/HEAD, origin/master) | |
Checkout:workspace / /Users/Shared/Jenkins/Home/jobs/astropy-python2.7/workspace - hudson.remoting.LocalChannel@611c4041 | |
Fetching changes from 1 remote Git repository | |
Fetching upstream changes from git://github.com/astropy/astropy.git | |
Seen branch in repository origin/HEAD | |
Seen branch in repository origin/master | |
Commencing build of Revision 48af8d6b12f8b2964e037ee7137ad91fa298d5b7 (origin/HEAD, origin/master) | |
Checking out Revision 48af8d6b12f8b2964e037ee7137ad91fa298d5b7 (origin/HEAD, origin/master) | |
Warning : There are multiple branch changesets here | |
[workspace] $ /bin/sh -xe /var/folders/zz/zzzivhrRnAmviuee+++++E++++2/-Tmp-/hudson8835089690851720710.sh | |
+ /opt/local/bin/python2.7 setup.py build | |
Traceback (most recent call last): | |
File "setup.py", line 11, in <module> | |
import astropy | |
File "/Users/Shared/Jenkins/Home/jobs/astropy-python2.7/workspace/astropy/__init__.py", line 22, in <module> | |
from .tests.helper import TestRunner | |
File "/Users/Shared/Jenkins/Home/jobs/astropy-python2.7/workspace/astropy/tests/__init__.py", line 8, in <module> | |
from . import helper | |
File "/Users/Shared/Jenkins/Home/jobs/astropy-python2.7/workspace/astropy/tests/helper.py", line 16, in <module> | |
from ..config import ConfigurationItem | |
File "/Users/Shared/Jenkins/Home/jobs/astropy-python2.7/workspace/astropy/config/__init__.py", line 11, in <module> | |
from .data import * | |
File "/Users/Shared/Jenkins/Home/jobs/astropy-python2.7/workspace/astropy/config/data.py", line 15, in <module> | |
'dataurl','http://data.astropy.org/','URL for astropy remote data site.') | |
File "/Users/Shared/Jenkins/Home/jobs/astropy-python2.7/workspace/astropy/config/configs.py", line 135, in __init__ | |
self() | |
File "/Users/Shared/Jenkins/Home/jobs/astropy-python2.7/workspace/astropy/config/configs.py", line 264, in __call__ | |
sec = get_config(self.module) | |
File "/Users/Shared/Jenkins/Home/jobs/astropy-python2.7/workspace/astropy/config/configs.py", line 342, in get_config | |
cfgfn = join(get_config_dir(),rootname+'.cfg') | |
File "/Users/Shared/Jenkins/Home/jobs/astropy-python2.7/workspace/astropy/config/paths.py", line 109, in get_config_dir | |
return path.abspath(_find_or_create_astropy_dir('config',linkto)) | |
File "/Users/Shared/Jenkins/Home/jobs/astropy-python2.7/workspace/astropy/config/paths.py", line 154, in _find_or_create_astropy_dir | |
mkdir(innerdir) | |
OSError: [Errno 13] Permission denied: '/var/root/.astropy' | |
Build step 'Execute shell' marked build as failure | |
Finished: FAILURE | |
Of course, I can fix this by doing: | |
HOME=/Users/Shared/Jenkins python setup.py build | |
but this got me thinking that maybe at least one (but not necessarily all) of the following should probably happen:
* ``~/.astropy`` should be created on ``install``, not ``build`` (in my view, ``build`` should only affect the local directory, while ``install`` can place files outside the tree) | |
* there should be a way to disable creating ``~/.astropy`` on build for this kind of situation | |
* if ``~/.astropy`` cannot be created, a warning should be raised, but this should not be fatal | |
@eteq, @mdboom, and @iguananaut - since you've been involved in the ``~/.astropy`` stuff I was wondering whether you have any thoughts regarding how this should behave? | |
<author>mdboom</author> | |
~/.astropy needs to be created upon first import... it should not be part of setup.py at all. Think of the use case of installing in a central location on a multi-user machine. It should resolve this situation, too, unless I'm missing something.
<author>eteq</author> | |
The trouble is deciding when "first import" is - after all, there's an ``import astropy`` in ``setup.py``. The way it works now is that it *is* created at first import, if it isn't found already... it's just that that first import is somewhere in the setup script (I'm not sure actually why it's doing it in ``build``, but it doesn't necessarily surprise me).
My idea was that for multi-user machines, it will just always look for whatever the correct home is for the current user who's actually running that python process... as it stands right now, any ``.astropy`` directory created by the root user at install will *not* be used when some other user imports astropy - it will just create a new ``/home/whateveruser/.astropy`` directory. | |
So this could be changed to check whether it's in setup, and if so, not do any of the creating. I think that will break some of the tests, though... Alternatively, we could just have it fail gracefully if it can't access the .astropy directory, and just assume all defaults for the configuration and only fail if someone tries to use the cache stuff (which of course requires a writeable directory).
<author>astrofrog</author> | |
I think that whatever we do, if ``~/.astropy`` is not writeable, astropy should still import, but with a warning. I don't think an exception should be raised (astropy can still function with the default config). | |
<author>eteq</author> | |
Yep, that definitely makes sense for config files... But what about accessing the cache? One thing that *could* be done might be to use the `NamedTemporaryFile` mechanism as a fallback if the cache can't be written to (remember that we sometimes need a file name), but to make that work we wouldn't be able to delete the temporary files through astropy (the user would have to know to clear their temp dir). Or we could just have it throw an exception whenever someone asks for an actual filename... | |
<author>mdboom</author> | |
@eteq: Maybe we could have an atexit handler that removes the temporary cache directory if one was created. | |
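Something like this, perhaps (a rough sketch combining the tempfile fallback and the atexit cleanup - not actual astropy code):
```
import atexit
import shutil
import tempfile

_temp_cache_dir = None

def _fallback_cache_dir():
    """Use a throwaway directory when ~/.astropy isn't writeable."""
    global _temp_cache_dir
    if _temp_cache_dir is None:
        _temp_cache_dir = tempfile.mkdtemp(prefix='astropy-cache-')
        # Remove the temporary cache again when the interpreter exits.
        atexit.register(shutil.rmtree, _temp_cache_dir, True)
    return _temp_cache_dir
```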
<author>mdboom</author> | |
@eteq: I think what you propose 3 comments up here seems reasonable. | |
<author>embray</author> | |
Why are the tests relying on ~/.astropy in the first place? | |
I would say that `import astropy` from setup.py shouldn't rely on ~/.astropy at all--I thought we had a global variable for "in setup.py". The tests should use some temp dir in place of ~/.astropy, right? I've been a little out of the loop the last few weeks though, so I'm not sure if there's a problem with that... | |
<author>eteq</author> | |
The tests rely on ``~/.astropy`` because that's how the real system works - and if the tests are supposed to make sure the correct astropy directory can be found, they need to actually go through all the motions. Remember that locating the "home" directory can be surprisingly non-trivial on some platforms... I suppose the test *could* be made to only use a temporary directory, but to me it seems like that's not actually testing the right functionality. And anyway, this is independent of that because one could imagine a scenario where the home directory is for some reason inaccessible in *normal* use instead of just when the tests are being run (say the ~/.astropy directory is a symlink to a network drive or something).
<author>astrofrog</author> | |
I still think that in any case, related to the original ticket, astropy should never fail to import if ``~/.astropy`` is not accessible, whether in ``setup.py`` or afterwards. I don't want the import to fail if as you said, the home directory is temporarily unavailable for e.g. network reasons. | |
<author>eteq</author> | |
Agreed - and the code I just attached does exactly that - if ``~/.astropy`` is unavailable, `ConfigurationItem`s will just use their default (unless set in user code), and remote data downloads will go to a temporary file. In both cases, warnings are given that the astropy directory is inaccessible. In the latter case, the temporary file is also included in the warning so the user can go and delete it later on if they so desire.
<author>astrofrog</author> | |
I guess what I'm trying to say is that if ~/.astropy cannot be written/read, it is not *crucial* to astropy and should not raise an exception. In that case, the default config would be used, and no caching would take place. | |
<author>astrofrog</author> | |
@eteq - ok, that sounds sensible. I sent my last comment before seeing yours. | |
<author>eteq</author> | |
Ok, sounds good - I'll give it a few days for review, and assuming there aren't any big problems I'll merge and close the issue. | |
<author>embray</author> | |
When the tests do use `~/.astropy` they don't disturb anything in there though, do they? Wouldn't want that once people are actually using this and have files and configurations in there that shouldn't be disturbed. | |
<author>mdboom</author> | |
I understand the argument that it is hard to test the configuration system itself without actually accessing the configuration files. However, I think, as @iguananaut suggested, we shouldn't touch those files during setup. I wouldn't expect `setup.py build` to create anything outside of the build directory, and Linux packaging systems, for example, generally enforce this policy. | |
There is the other issue that we don't want user's settings to impact the tests themselves. We had this issue in matplotlib. Imagine a setting that turns warnings into errors for example (one such setting already exists in `astropy.io.vo`). What matplotlib does there is use all default settings during test runs, and write tests that explicitly change some config values in order to test the effect of those config values. It should be possible to set a flag in `conftest.py` that would prevent the loading of configuration from files. | |
As for testing configuration itself, it should be possible to set `$HOME` to somewhere under `tmp`, check that it creates files there correctly etc. without touching the real home directory and without having settings that impact other tests in the test suite. | |
<author>embray</author> | |
I agree with @mdboom's suggestion of setting $HOME to a tmp directory for the tests. There could (and should) still be a test or tests for the function that finds the user's $HOME (the result of such a test would depend entirely on which platform it's run on, but that's fine too). | |
<author>eteq</author> | |
@iguananaut and @mdboom - I see your point here - and pytest makes this pretty easy, actually, with the `monkeypatch` funcarg - if you look at the tests I added in this pull request, both new functions have a signature that includes `monkeypatch`, and then at the beginning I do ``monkeypatch.setenv('XDG_CONFIG_HOME', 'foo')``. That sets the environment variable for that test, and then reverts it to what it should be for remaining tests. Should we do that, or instead just use the test command or a py.test plugin to set $HOME before *any* of the tests run?
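i.e. something like this (the test name and scratch values are made up, just to show the pattern):
```
import os

def test_with_scratch_home(tmpdir, monkeypatch):
    # Point the locations the config code consults at a scratch directory;
    # monkeypatch restores the real values once the test finishes.
    monkeypatch.setenv('HOME', str(tmpdir))
    monkeypatch.setenv('XDG_CONFIG_HOME', str(tmpdir))
    assert os.environ['HOME'] == str(tmpdir)
```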
Also, perhaps one of you should create this as a separate issue/pull request (or I can - I just want to make sure you aren't attached to it being in this particular PR)? I think we want this pull request regardless, and while related, I think having an inaccessible .astropy fall back to defaults and having the tests run separately are slightly different issues. | |
<author>eteq</author> | |
@astrofrog - can you confirm that with the changes in this PR, the original issue is actually fixed? | |
<author>astrofrog</author> | |
It seems to work fine, but I'm getting two test failures:
http://paste.pocoo.org/show/539457 | |
http://paste.pocoo.org/show/539458 | |
The first one is expected, but is the second one normal? | |
<author>eteq</author> | |
The second one is happening because if it can't access the astropy directory, it leaves the `ConfigObj` `filename` as None (the default). So it's also expected behavior if the configuration directory is inaccessible. | |
So what do you think is the best way to deal with this? I could just add extra clauses to ignore OSErrors that include "permission denied" for the first one, and the second I could just check for None and have it do nothing if it's None. But that could potentially mask future bugs, I suppose. Or I could just merge this as-is, as the suggestions of @mdboom and @iguananaut would make this problem irrelevant for you once we implemented that (a temporary directory for the config files, that is).
<author>astrofrog</author> | |
Ok, I agree that the second failed test is not a big issue, so as far as I'm concerned, this is fine to merge for now. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Passing python setup.py test options to py.test | |
<author>astrofrog</author> | |
py.test has some useful options, for example to output the tests in JUnit XML format (useful e.g. for Jenkins): | |
py.test -v --junitxml junit.xml | |
Since the primary way of running tests in astropy is via: | |
python setup.py test | |
I was wondering whether we should pass any options that are not already caught in ``astropy_test`` directly to ``py.test``? Alternatively, could we hardcode the junit option above?
@jiffyclub, @eteq, @mdboom, @iguananaut - any thoughts? | |
<author>eteq</author> | |
Right now you can do this using the "--args/-a" option to the test command - e.g. ``python setup.py test -a "-v --junitxml junit.xml"`` (although in that form, apparently it actually puts the junit.xml file in build/lib.platform/junit.xml). | |
We could replace that with a scheme where the extra ``test`` options get passed into py.test, but then there's no way to pass in any py.test option that collides with an option for ``test``. | |
<author>astrofrog</author> | |
Thanks, I wasn't aware of the ``-a`` flag. I tried your suggestion, and the junit.xml file ends up in the build directory (``./build/lib.macosx-10.6-x86_64-2.7/junit.xml``). I should probably open a separate ticket for that though. | |
<author>jiffyclub</author> | |
Can you specify the output file location with an absolute path and have it end up where you want? `setup.py test` executes the py.test command in the build directory so that's where it's going to save the file. | |
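For example, giving an absolute output path (the path here is just illustrative) should put the file where you want it:
python setup.py test -a "-v --junitxml /abs/path/to/junit.xml"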
<author>astrofrog</author> | |
Thanks, that makes sense! I'm going to close this issue for now - in future it might be worth having junitxml being a proper argument for ``python setup.py test``, but it's low priority. | |
<author>embray</author> | |
I'm actually thinking of reopening this, since I think it's a little buggy that relative paths end up in the build directory. This is understandable, since the test command cds there, but maybe it should convert any path-like arguments to absolute paths first?
<author>astrofrog</author> | |
Good point. I guess it would be hard to do this for options in -a but maybe we can hard code some more options (e.g. ``--junitxml``, ``--confcutdir``), then convert all options where we expect paths to absolute paths? | |
<author>embray</author> | |
True--we'd probably want a hard-coded list of options that are expected to be paths before we go blithely converting arguments. Considering that the junitxml option is used for output to Jenkins we could hard code that option. I don't know what confcutdir does, but there's no harm in adding it if it's something you use. | |
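Something along these lines, maybe (a sketch only - the option list is illustrative, not exhaustive):
```
import os

PATH_OPTIONS = ('--junitxml', '--confcutdir')

def absolutize_path_args(args):
    """Convert the values of known path-taking options to absolute paths."""
    fixed = []
    expect_path = False
    for arg in args:
        if expect_path:
            arg = os.path.abspath(arg)
            expect_path = False
        elif arg in PATH_OPTIONS:
            expect_path = True
        elif any(arg.startswith(opt + '=') for opt in PATH_OPTIONS):
            opt, _, value = arg.partition('=')
            arg = opt + '=' + os.path.abspath(value)
        fixed.append(arg)
    return fixed
```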
</issue> | |
<issue> | |
<author>mdboom</author> | |
Windows compatibility | |
<author>mdboom</author> | |
All tests pass on Windows 7, Python 2.7 (from python.org), Numpy 1.6.1, with the MSVC (Visual Studio Express) 10.0 compiler. | |
A couple of gotchas here: | |
configobj expects filenames or file handles opened in binary mode. The weird `LocalPath` objects that py.test gives when you use its `tmpdir` functionality open the files in text mode so things break on Windows. Best to avoid those and pass filenames instead. | |
Filenames can not be simultaneously opened for reading and writing on Windows, so the `test_checksum` test needed to be modified. | |
File handles can not be passed to the C level and used directly, like they can on Unix, so on Windows it now passes a `read` method to the C extension, and the C extension calls it through the Python/C API to get bytes from the file. | |
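Regarding the `LocalPath`/configobj point above, in practice a test just does something like this (illustrative only):
```
from configobj import ConfigObj

def test_roundtrip(tmpdir):
    # Pass a plain filename string rather than the py.test LocalPath object;
    # LocalPath opens files in text mode, which breaks configobj on Windows.
    filename = str(tmpdir.join('sample.cfg'))
    cfg = ConfigObj(filename)
    cfg['answer'] = '42'
    cfg.write()
    assert ConfigObj(filename)['answer'] == '42'
```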
<author>astrofrog</author> | |
I can't test on Windows, but while all tests pass on 2.6 and 2.7 (in MacOS 10.6) I'm getting failures with Python 3.1 and 3.2: | |
http://paste.pocoo.org/show/532443 | |
http://paste.pocoo.org/show/532444 | |
http://paste.pocoo.org/show/532445 | |
<author>mdboom</author> | |
Thanks. Forgot to test with Python 3. This should work now. (Python 3 on Windows isn't working, but Python 3 everywhere else should be working again). | |
<author>astrofrog</author> | |
All tests now pass on 2.6, 2.7, 3.1, and 3.2 (on MacOS 10.6). I don't have any Windows machines available, but if it's working for you, then I suggest you just merge, and iterate if there are still issues on specific Windows installations. | |
</issue> | |
<issue> | |
<author>perrygreenfield</author> | |
initial units commit | |
<author>perrygreenfield</author> | |
The simple version of units to demonstrate the design approach | |
<author>eteq</author> | |
Two minor organizational comments: | |
* In other places so far we've generally been following the practice of putting no real implementation in the ``__init__.py`` file - e.g., you could rename the current ``__init__.py`` to ``unit.py`` or something, and then just put ``from .unit import *`` in the ``__init__.py``. | |
* As it is written right now, the ``index.rst`` file doesn't link to the units documentation - if you just add a ``units/intro_units`` entry to the toctree in ``index.rst``, the sphinx docs will link properly to the table of contents. | |
<author>eteq</author> | |
Also, just so I'm clear on what you meant by "demonstrate the design approach": Do you want this pull request to eventually be merged, or are you just seeking feedback for now? | |
<author>perrygreenfield</author> | |
Mainly feedback right now. | |
<author>perrygreenfield</author> | |
On Jan 23, 2012, at 3:55 AM, Erik Tollerud wrote: | |
> Two minor organizational comments: | |
> | |
> * In other places so far we've generally been following the practice | |
> of putting no real implementation in the ``__init__.py`` file - | |
> e.g., you could rename the current ``__init__.py`` to ``unit.py`` or | |
> something, and then just put ``from .unit import *`` in the | |
> ``__init__.py``. | |
Yes, I mentioned that I thought this was the suggested practice. | |
> * As it is written right now, the ``index.rst`` file doesn't link to | |
> the units documentation - if you just add a ``units/intro_units`` | |
> entry to the toctree in ``index.rst``, the sphinx docs will link | |
> properly to the table of contents. | |
> | |
Sure. | |
<author>phn</author> | |
There is a package at https://github.com/python-quantities/python-quantities that is designed for working with numbers and arrays carrying units. I have neither used it nor looked at the code. I thought I would point this out in case someone is interested.
<author>eteq</author> | |
I closed this because an updated version is in the works. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Issue when multiple conftest.py files are in the tree | |
<author>astrofrog</author> | |
I'm running into some issues with running tests in a specific environment (Jenkins + virtualenv) and I've finally figured out to how reproduce it: | |
git clone git://github.com/astropy/astropy.git | |
cd astropy/ | |
mkdir directory | |
cd directory/ | |
git clone git://github.com/astropy/astropy.git | |
cd astropy/ | |
python setup.py test | |
which fails with the following error: | |
Traceback (most recent call last): | |
File "<string>", line 1, in <module> | |
File "astropy/tests/helper.py", line 149, in run_tests | |
return pytest.main(args=all_args, plugins=plugins) | |
File "_pytest.core", line 467, in main | |
File "_pytest.core", line 460, in _prepareconfig | |
File "_pytest.core", line 419, in __call__ | |
File "_pytest.core", line 430, in _docall | |
File "_pytest.core", line 348, in execute | |
File "_pytest.helpconfig", line 25, in pytest_cmdline_parse | |
File "_pytest.core", line 348, in execute | |
File "_pytest.config", line 10, in pytest_cmdline_parse | |
File "_pytest.config", line 345, in parse | |
File "_pytest.config", line 75, in parse_setoption | |
File "_pytest.config", line 70, in parse | |
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/optparse.py", line 1039, in add_options | |
self.add_option(option) | |
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/optparse.py", line 1020, in add_option | |
self._check_conflict(option) | |
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/optparse.py", line 995, in _check_conflict | |
option) | |
optparse.OptionConflictError: option --remote-data: conflicting option string(s): --remote-data | |
This is because py.test imports the plugins twice: it finds them in ``astropy/conftest.py`` *and* ``astropy/directory/astropy/conftest.py``. This is precisely what happens in Jenkins - there is a checkout of astropy inside another checkout of astropy. So is there a way to make sure that py.test stops e.g. at the first ``conftest.py``? Or can we ensure that we catch the exception in ``pytest_addoption``? What if we start picking up plugins from other projects instead of astropy? Can we restrict how far py.test searches for conftest.py files?
@mdboom - this is the issue I emailed you about yesterday (I was still confused as to the cause at the time). I'm also cc-ing @jiffyclub, @eteq, and @iguananaut. | |
Any ideas? | |
<author>astrofrog</author> | |
I've implemented a fix in pull request #128 and it seems to work! | |
<author>astrofrog</author> | |
After thinking about this more, an obvious solution is just to run tests with: | |
python setup.py test -a "--confcutdir=../../" | |
But I wonder whether we might want that to be the default? | |
<author>astrofrog</author> | |
I'm going to close this ticket because either #129 will fix the issue, or I can use the ``--confcutdir`` as shown above. | |
<author>embray</author> | |
Well, maybe. But I am using the confcutdir and yet this is happening. Even when I cd into the Jenkins build directory and run the exact same command Jenkins is using it works correctly for me. It just doesn't work when Jenkins does it. So I'm at a loss.
<author>astrofrog</author> | |
@iguananaut - wrong issue :-) | |
<author>embray</author> | |
No coffee yet. Yerba mate not quite doing it for me as well. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Prevent py.test from looking for conftest.py files above source directory | |
<author>astrofrog</author> | |
This implements a fix for #127. Since tests are built in build/lib*/, py.test should not look at conftest.py files more than two directories away.
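In other words, the runner ends up passing something like this to py.test (a sketch of the approach, not the exact diff):
```
import os

# Tests run from build/lib*/, so cut the conftest.py search off two
# directories up (the source root) to keep py.test from walking into any
# enclosing checkout.
confcutdir = os.path.abspath(os.path.join(os.getcwd(), '..', '..'))
extra_args = ['--confcutdir', confcutdir]
```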
<author>jiffyclub</author> | |
There are situations where you want py.test to find multiple conftest.py files. Like if you've set up some custom py.test plugins in a conftest.py file in your home directory. Is there a way we could turn on the confcutdir only when Jenkins is running the tests? | |
<author>astrofrog</author> | |
Ok, I didn't realize it might be desirable to have that behavior. If we are happy for the current behavior to be the default, then I just realized that I could run tests with:
python setup.py test -a "--confcutdir=../../" | |
In the light of your comment, I agree maybe forcing the cutdir to be ../../ might be a bit restrictive, but maybe that should be a default that can be overridden? | |
<author>mdboom</author> | |
Maybe rearranging the hierarchy on Jenkins would also fix this automatically.
I wonder, however, if we shouldn't also move `conftest.py` into the installed source tree. py.test docs says this: | |
``` | |
If you have conftest.py files which do not reside in a python package directory | |
(i.e. one containing an __init__.py) then “import conftest” can be ambiguous | |
because there might be other conftest.py files as well on your PYTHONPATH | |
or sys.path. It is thus good practise for projects to either put conftest.py under | |
a package scope or to never import anything from a conftest.py file. | |
``` | |
Also, if it isn't installed, that means that when running `py.test` on an installed copy of astropy the plugins don't get loaded. Am I right? | |
<author>jiffyclub</author> | |
That's a good point. We probably want to move it one directory up into the astropy directory. | |
<author>eteq</author> | |
So does #129 fix this then? So can we close this and #127? | |
<author>jiffyclub</author> | |
Someone working with Jenkins should test this again but I don't think #129 will fix the problem @astrofrog was having. @mdboom suggested rearranging the Jenkins hierarchy to fix #127 and I think that'd be the way to go. I think you probably could close this specific pull request. | |
<author>eteq</author> | |
Ok, I'll leave this to @astrofrog to close, as he issued the pull request. | |
<author>astrofrog</author> | |
Ok, I think #129 may actually fix the issue I was having, and if it does not, then I agree there is either the option of re-arranging the hierarchy, or just specifying --confcutdir myself. | |
</issue> | |
<issue> | |
<author>jiffyclub</author> | |
Move the py.test conftest.py file | |
<author>jiffyclub</author> | |
Moved the conftest.py file one directory level up, into the astropy directory, so that it gets installed with the rest of astropy. Changed the import of plugins accordingly. See #128. | |
Also updated the .gitignore file so it ignores the Mac OS X .DS_Store file. | |
<author>mdboom</author> | |
+1 on this. I think it's a less controversial change than #128, and is in any case orthogonal to it, I believe. | |
<author>astrofrog</author> | |
I agree, this looks good! | |
<author>jiffyclub</author> | |
@astrofrog, you'll probably want to do the same thing to the package-template. | |
<author>astrofrog</author> | |
@jiffyclub - I've added those changes to the package template, on my minor-fixes branch (that will eventually be merged into the real package template) - https://github.com/astrofrog/package-template/tree/minor-fixes | |
</issue> | |
<issue> | |
<author>taldcroft</author> | |
Table - support for heterogeneous data tables. | |
<author>taldcroft</author> | |
Documentation is available. I also made a new branch table-rebase where I just rebased with astropy:master, if that would be better. The merge is clean either way. | |
<author>mdboom</author> | |
This is looking great. Can't wait to rewrite `io.vo` in terms of this class. | |
<author>eteq</author> | |
One general suggestion for the documentation: when you're talking about NumPy, you might want to instead use `` `numpy` `` - the intersphinx extension will cause this to automatically link to the numpy documentation. | |
<author>eteq</author> | |
Agree with @mdboom that the overall design is great. One thing I'm missing, though: are you intentionally not including a straightforward way of extracting a table into a numpy structured array? I know it's in principle easy by just copying `table._data`, but it might be worthwhile to add a "public" method or something to do that, as well as something in the narrative docs about that option. | |
And on the topic of output/converting: is the intent that this *not* include any means of outputting to ascii? That is, was your thought that this should all be in affiliated packages? I think we probably want to at least have some note in the docs about this, otherwise people might get irritated that there's no clear way to output tables... | |
<author>eteq</author> | |
One other thing: you may or may not have noticed that `nddata` was recently merged into the master branch. It's probably wise to add a sentence or two indicating the distinction between these two (and add something basically identical to the docstring for `nddata`, as well). | |
<author>taldcroft</author> | |
@eteq about extracting the table. My natural inclination would be to keep the structured array in `table.data` as a public attribute and expect people to respect it. But I see the point of view that this can cause problems (shooting in the foot). So what about a `data` property that returns a copy of `_data`? Unless you make a copy there is no point in hiding behind a read-only property since `_data` is mutable. | |
About output / converting, good point. I had not gotten there, but the natural thing would be to have a write or save method that connects to io.ascii. This will give the full flexibility of asciitable.write but do something reasonable by default. I *was* thinking about a nicer `__str__` method that prints a formatted version for inspection. | |
<author>astrofrog</author> | |
A quick note regarding converting - I think that ultimately, it would be nice to implement something like we have in ATpy where readers/writers can be registered with the Table class. Then other Astropy components (FITS, VO, asciitable) can register their own output/input functions that get called when Table.write or Table.read is called. I think this could be done once the current Table class is merged in. | |
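As a rough sketch of what I mean (hypothetical names - not a concrete API proposal):
```
_table_readers = {}
_table_writers = {}

def register_reader(format, function):
    _table_readers[format] = function

def register_writer(format, function):
    _table_writers[format] = function

# A component such as io.ascii would then call register_reader('ascii', read_ascii)
# and register_writer('ascii', write_ascii), and Table.read(..., format='ascii') /
# Table.write(..., format='ascii') would simply dispatch through these dicts.
```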
<author>astrofrog</author> | |
Just thought I'd raise a possibility - do we want: | |
t = Table(...) | |
a = np.array(t) | |
to work and return a structured array? This could be one possibility for getting a structured array. | |
<author>mdboom</author> | |
+1 on `a = np.array(t)` working. Having an `__array__` method would also make it easier to pass `Table` objects to C extensions. | |
<author>astrofrog</author> | |
Just thought I'd mention there are four PEP8 errors: | |
table.py:354:80: E501 line too long (81 characters) | |
tests/test_column.py:111:1: W391 blank line at end of file | |
tests/test_table.py:471:1: E302 expected 2 blank lines, found 1 | |
tests/test_table.py:480:1: W293 blank line contains whitespace | |
;-) | |
<author>eteq</author> | |
+1 also for ``a = np.array(t)`` suggested by @astrofrog. It might also make sense to have *both* that and the ``t.data`` option @taldcroft suggested... Although perhaps it's better to have only one way, in which case ``np.array(t)`` is probably the better choice for compatibility. | |
@taldcroft - I also like your idea of using `__str__` to give a string-formatted table (although it might be a good idea to have `__repr__` only output a shortened version, like numpy does for long arrays). | |
<author>taldcroft</author> | |
Implemented `__array__` instead of `data` property. I agree this is nicer. | |
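For anyone following along, the idea is roughly this (a simplified sketch, assuming the structured array lives in a private ``_data`` attribute as discussed above):
```
import numpy as np

class Table(object):
    def __init__(self, data):
        self._data = np.asarray(data)

    def __array__(self, dtype=None):
        # Called by np.array(table); return a copy so the internal data
        # can't be modified through the result.
        return np.array(self._data, dtype=dtype)

t = Table(np.array([(1, 2.5)], dtype=[('a', 'i4'), ('b', 'f8')]))
a = np.array(t)  # a plain structured ndarray, decoupled from the Table
```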
<author>taldcroft</author> | |
I think that the recent commits have addressed all of the comments that were raised so far except for big items like masking support, connecting to general I/O readers and writers, and a quick-start tutorial. | |
I was unable to build docs tonight to check because docs.scipy.org is not responding (for intersphinx). | |
<author>embray</author> | |
Looking pretty slick. At some point I will probably pull this out of Astropy and include it in PyFITS. I'm excited to start porting PyFITS over to use this new table interface, but I wouldn't want to unless I can use it both for PyFITS itself, and astropy.io.fits. Otherwise there will just be an unmanageable divergence between the two. | |
<author>astrofrog</author> | |
@taldcroft - since it would be pretty easy to do, and might come in handy, maybe you could also allow np.array(t['a']) and np.array(t[1]) to return numpy arrays? Although it seems to already work for columns, so maybe it only needs to be implemented for rows? | |
<author>taldcroft</author> | |
@iguananaut - I have basically the same plan for asciitable, so the intent was to continue developing the table package in a way that it is not really dependent on the astropy infrastructure. Right now the only code dependence is on astropy.utils.OrderedDict for python 2.6. Maybe table.py could try looking for the OrderedDict port first in the local package if it doesn't find astropy.utils. | |
This does bring to mind the fact that the code has never been tested on Python 2.4 or 2.5, which is supported for asciitable. I'm not sure what PyFITS supports, but I probably used >= 2.6 language constructs so I'll need to look into that. | |
The documentation does depend somewhat on the astropy build system. | |
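For the OrderedDict point, something along these lines would keep table.py usable standalone (a sketch; the local module name is just illustrative):
```
try:
    from collections import OrderedDict          # Python >= 2.7
except ImportError:
    try:
        from .compat import OrderedDict          # hypothetical local copy of the port
    except ImportError:
        from astropy.utils import OrderedDict    # fall back to astropy's copy
```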
<author>embray</author> | |
PyFITS currently supports >=2.5. Looking through the table code, it doesn't look like there are any obvious problems; if anything comes up it can be worked around I'm sure. Not concerned about the docs since I'd just be copying the modules over, and the PyFITS docs would talk about how to work with tables (granted, with fewer details). | |
<author>astrofrog</author> | |
I'm also planning on migrating ATpy to use this Table class, since it's so much better! I think we can definitely merge this before implementing masking and read/write methods (both of which I can help with). | |
@taldcroft - is there anything you want to implement before we do a final review and merge? | |
<author>taldcroft</author> | |
@astrofrog - I don't have any new features that are currently in work, so I'm go for final review / merge. | |
<author>astrofrog</author> | |
When I do tab-completion on Table, there is an ``mro`` attribute - any idea what that is? | |
<author>mdboom</author> | |
"Method resolution order". Built-in to all new-style classes. | |
http://docs.python.org/library/stdtypes.html?highlight=mro#class.mro | |
<author>astrofrog</author> | |
Ok, apart from the few in-line comments above, my only comment is the one I made before that maybe it would be nice to be able to do np.array(Row(...)) and np.array(Column(...)). Otherwise, it's ready to go as far as I'm concerned! | |
<author>taldcroft</author> | |
@astrofrog - I think I've addressed all your comments either in the last 3 commits or by comment. | |
<author>eteq</author> | |
I just made a few comments/suggestions that you may or may not want to make some small changes based on, but aside from those, this looks great to me! | |
Oh, and quick tip for @taldcroft - a couple days back you said "I was unable to build docs tonight to check because docs.scipy.org is not responding (for intersphinx)." I've run into that before, and there's an easy work-around: Add ``intersphinx_mapping = {}`` to the bottom of ``conf.py``. If you do that, it won't do any intersphinxing - useful when some of the other sites are down, or when you want to re-generate the docs a bunch of times and don't want to wait for intersphinx. Just be careful not to accidentally commit that change (I almost did that once...) | |
<author>eteq</author> | |
I'm fine as this stands now - I'll go ahead and merge, say, by the end of tomorrow if no one else brings up further concerns. | |
<author>astrofrog</author> | |
Looks good to me! | |
</issue> | |
<issue> | |
<author>eteq</author> | |
nddata masked array test triggers ValueError in py3.2 | |
<author>eteq</author> | |
See http://paste.pocoo.org/show/537165 - note that this is also causing ShiningPanda to fail. | |
I'm rather perplexed by this, as I don't see anything wrong in our code - rather I think it might be a bug in how `MaskedArray` objects work in py 3.x ... If it is indeed a numpy/py3.x bug, we can just mark it as a known fail, and leave this issue open, but before doing this I'm wondering if anyone else has any insight? | |
<author>mdboom</author> | |
This seems to resolve it. Can you confirm? | |
<author>eteq</author> | |
Yep, that seems to have done it - simple enough that I'll just go ahead and merge it - thanks! | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Make WCS objects picklable | |
<author>mdboom</author> | |
This is useful when using `multiprocessing`. It also makes the FITS WCS files round-trippable. | |
<author>eteq</author> | |
When I run the tests on 3.2.2 on OS X 10.6, all of the pickling tests fail - see http://paste.pocoo.org/show/539471 ... note that the first 3 failures are the errors I mentioned in #119, and the latter 6 are from test_pickle... they seem to report problems with byte and string conversions. | |
<author>mdboom</author> | |
Thanks for pointing this out -- I had not tested under Python 3. I think most of these are due to changes in pyfits behavior under 3.x. Looking into it. | |
<author>mdboom</author> | |
This should work on Python 3 now -- but it does require a change to pyfits as described in #119. | |
<author>eteq</author> | |
In this case, then, it probably makes sense to mark all those tests as xfail if py 3.x and pyfits is too low of a version (see http://pytest.org/2.2.0/skipping.html for the relevant semantics, if you don't know it already) | |
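For example, something roughly like this, following the string-condition style already used in these tests (the version check is just illustrative - it depends on how the pyfits version is exposed to the test module):
```
import sys
import pyfits
import pytest

@pytest.mark.xfail("sys.version_info[0] >= 3 and pyfits.__version__ < '3.1'")
def test_pickle_wcs():
    pass  # the actual pickle round-trip assertions would go here
```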
<author>eteq</author> | |
actually, I just ran this again after your updates (using pyfits 3.0.4, which presumably does *not* have the patch in question) and the test_pickle failures have disappeared. So I guess no need for xfail after all. In that case, as far as I'm concerned, this is set to merge. | |
</issue> | |
<issue> | |
<author>eteq</author> | |
fix for errors caused by passing lists to pyfits's _fix_dtype | |
<author>eteq</author> | |
This is a fix for an error that seems to result if you have pyfits 3.0.4 installed in py 3.2.2 (I was using Mac OS X 10.6, with the macports python, although I don't think that matters here). All the vo tests were failing due to the same error, the relevant parts of which I've included below. | |
The key point is that the `_fix_dtype` function in pyfits (which is monkeypatched into numpy's recarrays to prevent a segfault) fails if the dtype is a list of name/type pairs. @iguananaut, you may want to fix this in pyfits as well, but this patch just prevents the problem in the first place in `io.vo.tree`. | |
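One way to sidestep the problem (roughly the idea here - see the diff for the actual change) is to build a proper ``numpy.dtype`` object before handing it to ``np.recarray``, so the monkeypatched ``_fix_dtype`` never sees a bare list:
```
import numpy as np

dtype = [(('string test', 'string_test'), 'O'), ('unsignedByte', 'u1')]
array = np.recarray((0,), dtype=np.dtype(dtype))   # rather than dtype=dtype
```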
The final part of the traceback is shown below - this is common to all the vo errors that result *without* this pull request: | |
``` | |
self = <astropy.io.vo.tree.Table object at 0x1065d2910>, nrows = 0 | |
config = {'_current_table_number': 1, '_warning_counts': {<class 'astropy.io.vo.exceptions.W32'>: 1, <class 'astropy.io.vo.exce...py.io.vo.exceptions.W11'>: 1, <class 'astropy.io.vo.exceptions.W01'>: 5, ...}, 'chunk_size': 256, 'columns': None, ...} | |
def create_arrays(self, nrows=0, config={}): | |
""" | |
Create new arrays to hold the data based on the current set of | |
fields, and store them in the *array* and *mask* member | |
variables. Any data in existing arrays will be lost. | |
*nrows*, if provided, is the number of rows to allocate. | |
""" | |
if nrows is None: | |
nrows = 0 | |
fields = self.fields | |
if len(fields) == 0: | |
array = np.recarray((nrows,), dtype='O') | |
mask = np.zeros((nrows,), dtype='b') | |
else: | |
# for field in fields: field._setup(config) | |
Field.uniqify_names(fields) | |
dtype = [] | |
for x in fields: | |
if x._unique_name == x.ID: | |
id = x.ID | |
else: | |
id = (x._unique_name, x.ID) | |
dtype.append((id, x.converter.format)) | |
> array = np.recarray((nrows,), dtype=dtype) | |
astropy/io/vo/tree.py:1923: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
subtype = <class 'pyfits.py3compat.recarray'>, shape = (0,) | |
dtype = [(('string test', 'string_test'), 'O'), (('fixed string test', 'string_test_2'), 'S10'), ('unicode_test', 'O'), (('uni... test', 'fixed_unicode_test'), 'U10'), (('string array test', 'string_array_test'), 'S4'), ('unsignedByte', 'u1'), ...] | |
buf = None, offset = 0, strides = None, formats = None, names = None | |
titles = None, byteorder = None, aligned = False, order = 'C' | |
def __new__(subtype, shape, dtype=None, buf=None, offset=0, | |
strides=None, formats=None, names=None, titles=None, | |
byteorder=None, aligned=False, order='C'): | |
if dtype is not None: | |
> dtype = _fix_dtype(dtype) | |
/opt/local/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/site-packages/pyfits/py3compat.py:131: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
dtype = [(('string test', 'string_test'), 'O'), (('fixed string test', 'string_test_2'), 'S10'), ('unicode_test', 'O'), (('uni... test', 'fixed_unicode_test'), 'U10'), (('string array test', 'string_array_test'), 'S4'), ('unsignedByte', 'u1'), ...] | |
def _fix_dtype(dtype): | |
""" | |
Numpy has a bug (in Python3 only) that causes a segfault when | |
accessing the data of arrays containing nested arrays. Specifically, | |
this happens if the shape of the subarray is not given as a tuple. | |
See http://projects.scipy.org/numpy/ticket/1766. | |
""" | |
> if dtype.fields is None: | |
E AttributeError: 'list' object has no attribute 'fields' | |
/opt/local/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/site-packages/pyfits/py3compat.py:110: AttributeError | |
``` | |
<author>embray</author> | |
Though there's no harm in leaving this in, it should be pointed out that this is fixed in PyFITS in the next release. | |
<author>embray</author> | |
Also, this is fixed in my astropy fits branch. But at some point I need to move that stuff into some other module for compatibility patches, perhaps somewhere in the astropy.compat package. | |
<author>eteq</author> | |
That makes sense once the astropy fits stuff gets merged in - certainly the py 3.x monkeypatch sort of things make a lot of sense in `astropy.compat` | |
<author>mdboom</author> | |
I agree. Even though recent `pyfits` and the future `astropy.io.fits` fix this, there's no harm in this. | |
<author>eteq</author> | |
Alright, sounds good - merging! | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Use a temporary directory for $HOME/.astropy in config tests | |
<author>eteq</author> | |
As suggested by @mdboom and @iguananaut in issue #123, the tests for the config sub-package should not alter the user's actual $HOME/.astropy directory. So the tests should be updated to use a temporary directory (probably using py.test's `tmpdir` mechanism). | |
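Something along these lines, for instance (a sketch using the standard py.test fixtures; the real fix may well do this differently):
```
import os

def test_configuration_file(tmpdir, monkeypatch):
    # Point $HOME at a throw-away directory so the config code never
    # touches the user's real ~/.astropy
    monkeypatch.setenv('HOME', str(tmpdir))
    cfgdir = tmpdir.mkdir('.astropy')
    assert os.path.isdir(str(cfgdir))
    # ... exercise the config machinery here ...
```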
<author>embray</author> | |
Since I was the one pushing for this, I'll be happy to take a stab at it. | |
<author>eteq</author> | |
Fixed in #283 | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Allow passing a FITS filename to the WCS constructor | |
<author>mdboom</author> | |
The docs have said you could for a long time -- however, it turns out | |
you can't. | |
This is a backport of r2480 in pywcs. | |
<author>astrofrog</author> | |
Looks good to me, and all tests pass on MacOS X - feel free to merge! | |
<author>eteq</author> | |
Tests pass for me - feel free to merge. | |
</issue> | |
<issue> | |
<author>eteq</author> | |
added issue_to_pr function | |
<author>eteq</author> | |
This adds a function that makes it simple to attach code to a github issue, converting it to a pull request. Note that there are no tests for this right now, because of course there's no good way to interact with the github repo without messing with the issues list. | |
A couple things to consider before merging: | |
1. Should this be included in astropy? I could also just put this as a gist or something like that, and we can just direct people to it that want to use it there. If it should go in astropy, does the location I put it make sense (a new "utils/manage" module - in the future we would put things like release scripts there, as well)? | |
2. I can't fully test how this interacts with permissions because I own the repository I was testing this on... could someone go to https://github.com/eteq/apiexp/issues, add an issue, and see if they can attach code to an issue they created, even if they don't own the repository? | |
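For the curious, this is roughly the kind of API call involved (a sketch only, not the actual implementation; ``requests`` is used here just for brevity). The GitHub v3 pulls endpoint accepts an ``issue`` number in place of a title, which converts that issue into a pull request:
```
import requests

def issue_to_pr(user, repo, issue, head, base='master', auth=None):
    # POST an "issue" number (instead of a title/body) to the pulls endpoint.
    url = 'https://api.github.com/repos/{0}/{1}/pulls'.format(user, repo)
    r = requests.post(url, json={'issue': issue, 'head': head, 'base': base},
                      auth=auth)
    r.raise_for_status()
    return r.json()['html_url']
```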
<author>eteq</author> | |
@phn tested this at https://github.com/eteq/apiexp/pull/4, and he was able to attach a pull request to an issue he had created, so it looks like this works even for someone who doesn't own the repo. | |
<author>eteq</author> | |
@mdboom, @iguananaut, and @astrofrog - what do you think about whether this should actually be included in the code base vs. in a gist or similar? (I ask you three specifically because I think you've all done this on your own before?) | |
<author>embray</author> | |
I only just saw this--I guess since you mentioned me in the comments. | |
This is definitely useful...but I'm leaning against having it in the Astropy codebase. I might want to take this and make it into a script and add it as a git command. | |
<author>embray</author> | |
Though I'll also say--your addition of the 'manage' module reminds me that I need to poke at the release process sometime, and maybe write something up on that. I still mostly like `zest.releaser` though it still has a few annoyances that I'd like to fix, so I might make a fork of it myself sometime. | |
<author>eteq</author> | |
I agree having release tools would be useful - I'm not sure if we want to have a dependency on an external package to do this, though (e.g. `zest.releaser`), or did you mean putting it in extern? | |
Regardless, any other thoughts on whether to put it in astropy or not? I am also leaning in the "not" direction - in that case I'll post it as a gist and add a note about it to the Astropy documentation (and perhaps wiki). If there are no other opinions in the next couple of days, I'll do that. | |
<author>astrofrog</author> | |
This looks very useful, but I'm also leaning towards not including it in the core package. I like the idea of a gist. | |
<author>eteq</author> | |
I've converted this into a gist at https://gist.github.com/1750715 - I'm therefore closing this without merging. | |
@iguananaut, that version now also works as a command-line script | |
<author>mdboom</author> | |
BTW -- apparently the latest version of hub now supports this functionality. | |
http://defunkt.io/hub/ | |
<author>eteq</author> | |
Ooh, thanks for pointing that out - for any who look at this in the future, here's an excerpt from the `hub` man page: | |
git pull-request [-f] [TITLE|-i ISSUE|ISSUE-URL] [-b BASE] [-h HEAD] | |
... | |
If instead of normal TITLE an issue number is given with -i, the pull request will be attached to an existing GitHub issue. Alternatively, instead of title you can paste a full URL to an issue on GitHub. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Upgrade to wcslib-4.9 | |
<author>mdboom</author> | |
Some minor fixes to how errors and fixes are reported. | |
<author>mdboom</author> | |
FYI: This is the changelog: | |
``` | |
* C library | |
- Fixes to wcsfixi() for collecting the messages properly in the info | |
array (from Michael Droettboom). | |
- Handle certain malformed date strings more gracefully in datfix(). | |
- Make informative messages printed by wcserr_prt() a bit more | |
informative. | |
``` | |
<author>embray</author> | |
Out of curiosity, how are you managing porting changes from PyWCS over to Astropy? Do you just manually generate a patch and touch up the paths? Curious what your process is. | |
<author>mdboom</author> | |
Yes -- that's basically it. I do an "svn diff" and move things around by hand. Some patches no longer translate directly, so I have to deal with those manually. | |
<author>astrofrog</author> | |
All tests pass (except for those with PyFITS 2.4.0, but that's just because this branch doesn't include 64e7e86) - ready to merge! | |
<author>mdboom</author> | |
Just got word that Mark Calabretta will be including essentially the same as #138 in the next upstream wcslib release. I think it better to wait for that. Closing this pull request. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
wcs: fix some false positive reports about fixes made | |
<author>mdboom</author> | |
Fixes some things that should come back as "no change" that were reported as "success" before. Also provides more detailed feedback when things have been changed. | |
Requires the upgrade to wcslib 4.9 in #137 first. | |
<author>mdboom</author> | |
Just got word that Mark Calabretta will be including essentially the same as this in the next upstream wcslib release. I think it better to wait for that. Closing this pull request. | |
</issue> | |
<issue> | |
<author>embray</author> | |
Build broken on Windows using mingw32 | |
<author>embray</author> | |
I'm trying to set up a build bot of Astropy for Windows (specifically XP). I don't have any of the Windows compilers installed, and am instead using mingw32: | |
``` | |
./setup.py build_ext -c mingw32 | |
``` | |
This gives me an error about the `/MANIFEST` switch that gets added. Unfortunately, Astropy adds that switch if sys.platform is Windows, regardless of which compiler is being used. It should go by compiler instead of platform, e.g. by using the `get_distutils_option()` function. | |
I confirmed by manually removing the `/MANIFEST` part, and the build succeeded (a few of the vo tests are failing on Windows for me, but everything else seems good). | |
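Something like this is what I have in mind (a sketch only - the exact option/command names and default-compiler assumption would need checking against setup_helpers):
```
import sys
from astropy.setup_helpers import get_distutils_option

def needs_manifest():
    # Only add the MSVC-specific /MANIFEST switch when actually building
    # with MSVC, not merely because we happen to be on Windows.
    compiler = get_distutils_option('compiler', ['build', 'build_ext'])
    return sys.platform.startswith('win') and compiler in (None, 'msvc')
```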
<author>mdboom</author> | |
+1. Doesn't break what was already working for me (msvc 11), but I didn't test with mingw32 myself. | |
</issue> | |
<issue> | |
<author>embray</author> | |
Record compiler used in version.py | |
<author>embray</author> | |
This patch depends on the pull request in #139. | |
While working on #139 I thought it might be useful, for diagnostic purposes if nothing else, to record the compiler that was used to build extension modules. For example, if a user is having some odd problem with an extension module we could ask them to give us astropy.version.compiler. | |
There are a couple things lacking in this current implementation: | |
1) This is only based on the `--compiler` option to setup.py. This is not useless, but it may mean different actual compilers on different platforms. It would be more useful if we also included the compiler version. Unfortunately there's no one guaranteed way to get this. Some variant of `CC --version` should work most of the time, but can't be counted on. Perhaps we could *try* something obvious like that, and if it fails then just go with "Unknown version" or some such. | |
2) This will include a 'compiler' value in version.py even when making a source distribution--a context in which it's mostly meaningless. This is mostly harmless, since when the user builds from source the version.py will be rewritten with the correct compiler. But I might consider amending this to somehow leave compiler out of version.py when running the `sdist` command. | |
<author>embray</author> | |
Since this was branched off my branch for #139 it contains the changeset for that too--oops. Just ignore this until #139 is merged, then I can rebase this. | |
<author>mdboom</author> | |
A more robust approach might be to compile a small C extension that uses C preprocessor defines to determine the compiler. Something along the lines of BOOST_COMPILER from boost: http://svn.boost.org/svn/boost/trunk/libs/config/doc/html/boost_config/boost_macro_reference.html#boost_config.boost_macro_reference.boost_informational_macros | |
We can plunder their code that does this -- they support tons of compilers, but we would only have to include the ones we explicitly support. | |
<author>mdboom</author> | |
I think #141 closes this -- but want to confirm before I do. | |
<author>embray</author> | |
Sure. Fixed by #141. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Determine compiler version using predefined macros in a small C extension | |
<author>mdboom</author> | |
This is an alternative to the approach in #140. This should be pretty robust (at least among the compilers we want to support) and doesn't require futzing with distutils to robustly determine the compiler etc... | |
<author>embray</author> | |
Ooh, I like it. I think this is probably a better approach than mine to get the compiler. Though I'm not sure how I feel about the try/except in the version.py template. I think there I would still rather just have a hard-coded string for the compiler that's placed there when the version.py is generated. Part of the point is that if we do a binary distribution, for example, the compiler that was used is hard-coded. | |
<author>mdboom</author> | |
Hmmm... I put the `try...except` in there because the C extension may not yet exist. If there's some trick to 1) make the C extension, 2) call it in setup.py after build but before install, then it could probably be put in version.py directly. | |
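Roughly, the fallback under discussion looks like this (not a verbatim copy of the template):
```
try:
    from astropy._compiler import compiler
except ImportError:
    # The C extension hasn't been built yet (e.g. at sdist time)
    compiler = 'unknown'
```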
<author>embray</author> | |
One of the functions in version_helper.py would use the C extension to get the compiler, and failing that it could either use the compiler option from distutils like in my version, or just write "unknown". Perhaps if the compiler extension is rebuilt it could then regenerate the version.py with the correct value afterwards. Or even just update it--it wouldn't have to regenerate it entirely. | |
<author>eteq</author> | |
Would it break this to put it in, say, the ``astropy/utils`` package, instead of the root astropy package? It seems like it belongs better there, as it is a "developer utility." | |
Also, perhaps add a note or just a quick paragraph or ``.. note::`` in the development part of the documentation indicating that this is the canonical way to determine the compiler? | |
<author>eteq</author> | |
Oh, and if it wasn't clear from the context, I was also suggesting either making it a "public" string (e.g. drop the underscore), or add a ``get_compiler`` function that just returns the private string's value. | |
<author>mdboom</author> | |
Sure. This could be moved into `astropy/utils`. | |
The idea is that the canonical way to determine the compiler is `astropy.version.compiler` (which is already a public string, as it is in `astropy._compiler.compiler`). | |
<author>mdboom</author> | |
@iguananaut: I'm not following how you're suggesting the version.py would be regenerated. At the time it's currently generated, we don't have any C extensions built. It looks as if we'd have to inject some logic into `setup.py install` and do this there. Is that what you proposed? I don't understand the downside of the current implementation (which has the advantage of not monkeying around with distutils very much). | |
<author>embray</author> | |
@mdboom Last time I looked at this I think I hadn't had coffee yet or something, so it didn't click for me that the "_compiler" module just contained a hard-coded string determined at compile time. So that should work fine. | |
I guess my main problem was that you can't just tell the compiler by visually inspecting the version.py file, which I didn't like. My thinking was that maybe after the _compiler module is built the version.py could be updated to add the `compiler =` line. | |
But now that I look at this again I agree with you that it's an unnecessary complication, and that your approach seems fine. | |
<author>mdboom</author> | |
Yeah -- I agree that visual inspection would be nice, and it's probably doable -- just not sure how far down the distutils rabbit hole one would have to go to make it work ;) Maybe we should merge as-is (with the move to utils suggested by @eteq), and then tackle that later if it turns out to be really important. | |
<author>astrofrog</author> | |
Works for me, and all tests pass on MacOS 10.7 with all Python (64-bit)/Numpy combinations. | |
<author>embray</author> | |
It really wouldn't be so hard to do, though easier if we were using packaging/distutils2 (or d2to1) since they support pre/post-command hooks. This is nice because it means you don't have to outright replace commands to do most things (though replacing commands is supported too). So a post-build_ext hook could use the newly built _compiler module to update the version.py file. | |
That said, it's more trouble than it's worth for now. | |
<author>eteq</author> | |
@mdboom - I know you already merged this, but I wasn't paying close enough attention before... I see you moved the ``compiler.c`` file to ``astropy/utils/src/compiler.c``. What I meant (although admittedly, I wasn't being clear) was ``astropy/utils/compiler.c``. I'm not sure if we've discussed this/come to consensus, but in general I prefer the convention that the .c files that are interfacing with python *not* be in a separate ``src`` directory. Do you (or anyone else) have a strong preference that they be in ``src``? If not, I (or you) can make the change directly in master to move it out of the ``src`` directory and update ``astropy/utils/setup_package.py`` accordingly. | |
<author>mdboom</author> | |
I disagree. I think it's important to have a clear separation between what gets installed and what doesn't. Also, having .c files mixed in with .py files makes the .gitignore `*.c` policy a PITA. Also, that's what we've done up until now (weak argument, for sure). | |
<author>jiffyclub</author> | |
I won't call it a strong or well reasoned preference, but I also like keeping `*.c` files in the `src` directory. | |
<author>eteq</author> | |
@mdboom and @jiffyclub - I don't have a particularly strong opinion either way, actually... I just want to make sure we stick with whatever convention is decided (so I suppose we should put this in the coding guidelines). What do you think should be done with Cython .pyx files, then? Do they go in ``src``, or with the *.py files? I guess if the criterion is "gets installed or not" then they go in ``src``, but they definitely "look" a lot more like .py files. | |
</issue> | |
<issue> | |
<author>embray</author> | |
get_distutils_option() doesn't always work with custom command options | |
<author>embray</author> | |
Because `get_distutils_option()` uses the distutils machinery to parse the command line options, part of that machinery involves loading the classes for each distutils command being executed and inspecting those classes for the command line arguments they accept. | |
The problem is that we override some built-in commands and add additional options--the custom 'test' command in particular comes to mind. I was running into a problem where I was using the `-a` option with test, which we added. The problem is that if `get_distutils_option()` is called *before* the custom command has been registered with distutils, distutils throws up an exception upon seeing the unrecognized option for the `test` command. That exception is already caught, but it causes `get_distutils_option()` to return `None`. | |
Not sure yet what the best solution is. | |
<author>eteq</author> | |
@iguananaut - did this get fixed in one of the many changes we've since made to the setup system? | |
<author>embray</author> | |
Wow, I completely forgot about this. Unfortunately it still appears to be a problem. For example, if I do: | |
`python setup.py build --debug test -P io.fits` | |
the call to `astropy.setup_helpers.get_debug_option()` returns `False`, because `get_distutils_option()` hits a `DistutilsError` over the unrecognized `-P` argument to the test command. | |
This is still worth fixing, but a fix doesn't immediately spring to mind. In any case, I'll assign this to myself. | |
<author>embray</author> | |
This should work fine; it doesn't really fix `get_debug_option()`, but I'm fine with saying it won't work right if the command line options are invalid anyway. | |
<author>embray</author> | |
Wow, opened a year ago? Anyway, this issue is fixed by #668, which supersedes the original fix for this issue. | |
</issue> | |
<issue> | |
<author>embray</author> | |
3 io.vo tests failing on Windows | |
<author>embray</author> | |
Here's a snipped copy of the error output: | |
``` | |
=================================== FAILURES =================================== | |
_____________________________ test_local_data_obj ______________________________ | |
def test_local_data_obj(): | |
from ..data import get_data_fileobj | |
with get_data_fileobj('data/local.dat') as f: | |
f.readline() | |
> assert f.read()==b'CONTENT\n' | |
E assert 'CONTENT\r\n' == 'CONTENT\n' | |
E CONTENT | |
astropy\config\tests\test_data.py:46: AssertionError | |
_______________________________ test_regression ________________________________ | |
@pytest.mark.xfail('pyfits304x') | |
def test_regression(): | |
> _test_regression(False) | |
astropy\io\vo\tests\vo_test.py:190: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
_python_based = False | |
def _test_regression(_python_based=False): | |
# Read the VOTABLE | |
votable = parse( | |
get_data_filename('data/regression.xml'), | |
pedantic=False, | |
_debug_python_based_parser=_python_based) | |
table = votable.get_first_table() | |
assert table.array.dtype == [ | |
(('string test', 'string_test'), '|O8'), | |
(('fixed string test', 'string_test_2'), '|S10'), | |
('unicode_test', '|O8'), | |
(('unicode test', 'fixed_unicode_test'), '<U10'), | |
(('string array test', 'string_array_test'), '|S4'), | |
('unsignedByte', '|u1'), | |
('short', '<i2'), | |
('int', '<i4'), | |
('long', '<i8'), | |
('double', '<f8'), | |
('float', '<f4'), | |
('array', '|O8'), | |
('bit', '|b1'), | |
('bitarray', '|b1', (3, 2)), | |
('bitvararray', '|O8'), | |
('bitvararray2', '|O8'), | |
('floatComplex', '<c8'), | |
('doubleComplex', '<c16'), | |
('doubleComplexArray', '|O8'), | |
('doubleComplexArrayFixed', '<c16', (2,)), | |
('boolean', '|b1'), | |
('booleanArray', '|b1', (4,)), | |
('nulls', '<i4'), | |
('nulls_array', '<i4', (2, 2)), | |
('precision1', '<f8'), | |
('precision2', '<f8'), | |
('doublearray', '|O8'), | |
('bitarray2', '|b1', (16,)) | |
] | |
votable.to_xml(join(TMP_DIR, "regression.tabledata.xml"), | |
_debug_python_based_parser=_python_based) | |
assert_validate_schema(join(TMP_DIR, "regression.tabledata.xml")) | |
votable.get_first_table().format = 'binary' | |
votable.to_xml(join(TMP_DIR, "regression.binary.xml"), | |
_debug_python_based_parser=_python_based) | |
assert_validate_schema(join(TMP_DIR, "regression.binary.xml")) | |
votable2 = parse(join(TMP_DIR, "regression.binary.xml"), pedantic=False, | |
_debug_python_based_parser=_python_based) | |
votable2.get_first_table().format = 'tabledata' | |
votable2.to_xml(join(TMP_DIR, "regression.bin.tabledata.xml"), | |
_astropy_version="testing", | |
_debug_python_based_parser=_python_based) | |
assert_validate_schema(join(TMP_DIR, "regression.bin.tabledata.xml")) | |
with get_data_fileobj( | |
'data/regression.bin.tabledata.truth.xml') as fd: | |
truth = fd.readlines() | |
with open(join(TMP_DIR, "regression.bin.tabledata.xml"), 'rb') as fd: | |
output = fd.readlines() | |
# If the lines happen to be different, print a diff | |
# This is convenient for debugging | |
for line in difflib.unified_diff(truth, output): | |
if IS_PY3K: | |
sys.stdout.write( | |
line.decode('utf-8'). | |
encode('string_escape'). | |
replace('\\n', '\n')) | |
else: | |
sys.stdout.write( | |
line.encode('string_escape'). | |
replace('\\n', '\n')) | |
> assert truth == output | |
E assert ['<?xml versi...ata\r\n', ...] == ['<?xml versio... data\n', ...] | |
E At index 0 diff: '<?xml version="1.0" encoding="utf-8"?>\r\n' != '<?xml version="1.0" encoding="utf-8"?>\n' | |
astropy\io\vo\tests\vo_test.py:175: AssertionError | |
------------------------------- Captured stdout -------------------------------- | |
astropy\io\vo\exceptions.py:71: E02: c:\windows\temp\tmpvr5w9j\regression.binary.xml:16:1: E02: Incorrect number of elements in array. Expected multiple of 0, got 2 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W48: c:\windows\temp\tmpvr5w9j\regression.binary.xml:45:5: W48: Unknown attribute 'value' on OPTION | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: E02: c:\windows\temp\tmpvr5w9j\regression.binary.xml:167:7: E02: Incorrect number of elements in array. Expected multiple of 4, got 1 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W49: c:\windows\temp\tmpvr5w9j\regression.binary.xml:167:7: W49: Empty cell illegal for integer fields. | |
warn(warning) | |
--- | |
+++ | |
@@ -1,302 +1,302 @@ | |
-<?xml version="1.0" encoding="utf-8"?>\r | |
-<!-- Produced with astropy.io.vo version testing\r | |
- http://www.astropy.org/ -->\r | |
-<VOTABLE version="1.1" xmlns="http://www.ivoa.net/xml/VOTable/v1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.ivoa.net/xml/VOTable/v1.1">\r | |
- <DESCRIPTION>\r | |
- The VOTable format is an XML standard for the interchange of data\r | |
- represented as a set of tables. In this context, a table is an\r | |
- unordered set of rows, each of a uniform format, as specified in the\r | |
- table metadata. Each row in a table is a sequence of table cells,\r | |
- and each of these contains either a primitive data type, or an array\r | |
- of such primitives. VOTable is derived from the Astrores format [1],\r | |
- itself modeled on the FITS Table format [2]; VOTable was designed to\r | |
- be closer to the FITS Binary Table format.\r | |
- </DESCRIPTION>\r | |
- <COOSYS ID="J2000" equinox="J2000" system="eq_FK5"/>\r | |
- <PARAM ID="wrong_arraysize" arraysize="0" datatype="float" name="wrong_arraysize" value=" "/>\r | |
- <PARAM ID="INPUT" arraysize="*" datatype="float" name="INPUT" ucd="phys.size;instr.tel" unit="deg" value="0 0">\r | |
- <DESCRIPTION>\r | |
- This is the most interesting parameter in the world, and it drinks\r | |
- Dos Equis\r | |
- </DESCRIPTION>\r | |
- </PARAM>\r | |
...snip... | |
-</VOTABLE>\r | |
+<?xml version="1.0" encoding="utf-8"?> | |
+<!-- Produced with astropy.io.vo version testing | |
+ http://www.astropy.org/ --> | |
+<VOTABLE version="1.1" xmlns="http://www.ivoa.net/xml/VOTable/v1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.ivoa.net/xml/VOTable/v1.1"> | |
+ <DESCRIPTION> | |
+ The VOTable format is an XML standard for the interchange of data | |
+ represented as a set of tables. In this context, a table is an | |
+ unordered set of rows, each of a uniform format, as specified in the | |
+ table metadata. Each row in a table is a sequence of table cells, | |
+ and each of these contains either a primitive data type, or an array | |
+ of such primitives. VOTable is derived from the Astrores format [1], | |
+ itself modeled on the FITS Table format [2]; VOTable was designed to | |
+ be closer to the FITS Binary Table format. | |
+ </DESCRIPTION> | |
+ <COOSYS ID="J2000" equinox="J2000" system="eq_FK5"/> | |
+ <PARAM ID="wrong_arraysize" arraysize="0" datatype="float" name="wrong_arraysize" value=" "/> | |
+ <PARAM ID="INPUT" arraysize="*" datatype="float" name="INPUT" ucd="phys.size;instr.tel" unit="deg" value="0 0"> | |
+ <DESCRIPTION> | |
+ This is the most interesting parameter in the world, and it drinks | |
+ Dos Equis | |
+ </DESCRIPTION> | |
+ </PARAM> | |
...snip... | |
+</VOTABLE> | |
_____________________ test_regression_python_based_parser ______________________ | |
@pytest.mark.xfail('pyfits304x') | |
def test_regression_python_based_parser(): | |
> _test_regression(True) | |
astropy\io\vo\tests\vo_test.py:195: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
_python_based = True | |
def _test_regression(_python_based=False): | |
# Read the VOTABLE | |
votable = parse( | |
get_data_filename('data/regression.xml'), | |
pedantic=False, | |
_debug_python_based_parser=_python_based) | |
table = votable.get_first_table() | |
assert table.array.dtype == [ | |
(('string test', 'string_test'), '|O8'), | |
(('fixed string test', 'string_test_2'), '|S10'), | |
('unicode_test', '|O8'), | |
(('unicode test', 'fixed_unicode_test'), '<U10'), | |
(('string array test', 'string_array_test'), '|S4'), | |
('unsignedByte', '|u1'), | |
('short', '<i2'), | |
('int', '<i4'), | |
('long', '<i8'), | |
('double', '<f8'), | |
('float', '<f4'), | |
('array', '|O8'), | |
('bit', '|b1'), | |
('bitarray', '|b1', (3, 2)), | |
('bitvararray', '|O8'), | |
('bitvararray2', '|O8'), | |
('floatComplex', '<c8'), | |
('doubleComplex', '<c16'), | |
('doubleComplexArray', '|O8'), | |
('doubleComplexArrayFixed', '<c16', (2,)), | |
('boolean', '|b1'), | |
('booleanArray', '|b1', (4,)), | |
('nulls', '<i4'), | |
('nulls_array', '<i4', (2, 2)), | |
('precision1', '<f8'), | |
('precision2', '<f8'), | |
('doublearray', '|O8'), | |
('bitarray2', '|b1', (16,)) | |
] | |
votable.to_xml(join(TMP_DIR, "regression.tabledata.xml"), | |
_debug_python_based_parser=_python_based) | |
assert_validate_schema(join(TMP_DIR, "regression.tabledata.xml")) | |
votable.get_first_table().format = 'binary' | |
votable.to_xml(join(TMP_DIR, "regression.binary.xml"), | |
_debug_python_based_parser=_python_based) | |
assert_validate_schema(join(TMP_DIR, "regression.binary.xml")) | |
votable2 = parse(join(TMP_DIR, "regression.binary.xml"), pedantic=False, | |
_debug_python_based_parser=_python_based) | |
votable2.get_first_table().format = 'tabledata' | |
votable2.to_xml(join(TMP_DIR, "regression.bin.tabledata.xml"), | |
_astropy_version="testing", | |
_debug_python_based_parser=_python_based) | |
assert_validate_schema(join(TMP_DIR, "regression.bin.tabledata.xml")) | |
with get_data_fileobj( | |
'data/regression.bin.tabledata.truth.xml') as fd: | |
truth = fd.readlines() | |
with open(join(TMP_DIR, "regression.bin.tabledata.xml"), 'rb') as fd: | |
output = fd.readlines() | |
# If the lines happen to be different, print a diff | |
# This is convenient for debugging | |
for line in difflib.unified_diff(truth, output): | |
if IS_PY3K: | |
sys.stdout.write( | |
line.decode('utf-8'). | |
encode('string_escape'). | |
replace('\\n', '\n')) | |
else: | |
sys.stdout.write( | |
line.encode('string_escape'). | |
replace('\\n', '\n')) | |
> assert truth == output | |
E assert ['<?xml versi...ata\r\n', ...] == ['<?xml versio... data\n', ...] | |
E At index 0 diff: '<?xml version="1.0" encoding="utf-8"?>\r\n' != '<?xml version="1.0" encoding="utf-8"?>\n' | |
astropy\io\vo\tests\vo_test.py:175: AssertionError | |
------------------------------- Captured stdout -------------------------------- | |
astropy\io\vo\exceptions.py:71: W17: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:89:2: W17: GROUP element contains more than one DESCRIPTION element | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W46: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:103:28: W46: char value is too long for specified length of 10 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W46: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:105:28: W46: unicodeChar value is too long for specified length of 10 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W46: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:106:11: W46: char value is too long for specified length of 4 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: E02: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:125:7: E02: Incorrect number of elements in array. Expected multiple of 4, got 1 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W49: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:125:7: W49: Empty cell illegal for integer fields. | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W46: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:133:17: W46: char value is too long for specified length of 10 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W46: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:136:17: W46: char value is too long for specified length of 4 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W49: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:140:6: W49: Empty cell illegal for integer fields. | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W01: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:143:18: W01: Array uses commas rather than whitespace | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: E02: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:159:7: E02: Incorrect number of elements in array. Expected multiple of 16, got 0 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W49: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:159:7: W49: Empty cell illegal for integer fields. | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W49: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:159:7: W49: Empty cell illegal for integer fields. (suppressing further warnings of this type...) | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W46: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:165:17: W46: unicodeChar value is too long for specified length of 10 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: E02: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:189:7: E02: Incorrect number of elements in array. Expected multiple of 16, got 0 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W01: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:203:13: W01: Array uses commas rather than whitespace | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: E02: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:205:7: E02: Incorrect number of elements in array. Expected multiple of 6, got 0 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: E02: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:213:7: E02: Incorrect number of elements in array. Expected multiple of 4, got 1 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: E02: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:219:7: E02: Incorrect number of elements in array. Expected multiple of 16, got 0 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W01: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:233:12: W01: Array uses commas rather than whitespace | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: E02: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:235:7: E02: Incorrect number of elements in array. Expected multiple of 6, got 0 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: E02: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:243:7: E02: Incorrect number of elements in array. Expected multiple of 4, got 1 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: E02: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:245:7: E02: Incorrect number of elements in array. Expected multiple of 4, got 1 (suppressing further warnings of this type...) | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W46: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:263:28: W46: char value is too long for specified length of 10 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W46: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:265:28: W46: unicodeChar value is too long for specified length of 10 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W46: C:\Program Files\Jenkins\jobs\astropy-winxp-py2.7-np1.6\workspace\build\lib.win32-2.7\astropy\io\vo\tests\data/regression.xml:266:11: W46: char value is too long for specified length of 4 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W39: ?:?:?: W39: Bit values can not be masked (suppressing further warnings of this type...) | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: E02: c:\windows\temp\tmpvr5w9j\regression.binary.xml:167:12: E02: Incorrect number of elements in array. Expected multiple of 4, got 1 | |
warn(warning) | |
astropy\io\vo\exceptions.py:71: W49: c:\windows\temp\tmpvr5w9j\regression.binary.xml:167:12: W49: Empty cell illegal for integer fields. | |
warn(warning) | |
--- | |
+++ | |
@@ -1,302 +1,302 @@ | |
-<?xml version="1.0" encoding="utf-8"?>\r | |
-<!-- Produced with astropy.io.vo version testing\r | |
- http://www.astropy.org/ -->\r | |
-<VOTABLE version="1.1" xmlns="http://www.ivoa.net/xml/VOTable/v1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.ivoa.net/xml/VOTable/v1.1">\r | |
- <DESCRIPTION>\r | |
- The VOTable format is an XML standard for the interchange of data\r | |
- represented as a set of tables. In this context, a table is an\r | |
- unordered set of rows, each of a uniform format, as specified in the\r | |
- table metadata. Each row in a table is a sequence of table cells,\r | |
- and each of these contains either a primitive data type, or an array\r | |
- of such primitives. VOTable is derived from the Astrores format [1],\r | |
- itself modeled on the FITS Table format [2]; VOTable was designed to\r | |
- be closer to the FITS Binary Table format.\r | |
- </DESCRIPTION>\r | |
- <COOSYS ID="J2000" equinox="J2000" system="eq_FK5"/>\r | |
- <PARAM ID="wrong_arraysize" arraysize="0" datatype="float" name="wrong_arraysize" value=" "/>\r | |
- <PARAM ID="INPUT" arraysize="*" datatype="float" name="INPUT" ucd="phys.size;instr.tel" unit="deg" value="0 0">\r | |
- <DESCRIPTION>\r | |
- This is the most interesting parameter in the world, and it drinks\r | |
- Dos Equis\r | |
- </DESCRIPTION>\r | |
- </PARAM>\r | |
...snip... | |
-</VOTABLE>\r | |
+<?xml version="1.0" encoding="utf-8"?> | |
+<!-- Produced with astropy.io.vo version testing | |
+ http://www.astropy.org/ --> | |
+<VOTABLE version="1.1" xmlns="http://www.ivoa.net/xml/VOTable/v1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.ivoa.net/xml/VOTable/v1.1"> | |
+ <DESCRIPTION> | |
+ The VOTable format is an XML standard for the interchange of data | |
+ represented as a set of tables. In this context, a table is an | |
+ unordered set of rows, each of a uniform format, as specified in the | |
+ table metadata. Each row in a table is a sequence of table cells, | |
+ and each of these contains either a primitive data type, or an array | |
+ of such primitives. VOTable is derived from the Astrores format [1], | |
+ itself modeled on the FITS Table format [2]; VOTable was designed to | |
+ be closer to the FITS Binary Table format. | |
+ </DESCRIPTION> | |
+ <COOSYS ID="J2000" equinox="J2000" system="eq_FK5"/> | |
+ <PARAM ID="wrong_arraysize" arraysize="0" datatype="float" name="wrong_arraysize" value=" "/> | |
+ <PARAM ID="INPUT" arraysize="*" datatype="float" name="INPUT" ucd="phys.size;instr.tel" unit="deg" value="0 0"> | |
+ <DESCRIPTION> | |
+ This is the most interesting parameter in the world, and it drinks | |
+ Dos Equis | |
+ </DESCRIPTION> | |
+ </PARAM> | |
...snip... | |
+</VOTABLE> | |
------------------ generated xml file: ..\..\testresults.xml ------------------- | |
============== 3 failed, 2182 passed, 50 skipped in 10.84 seconds ============== | |
``` | |
The problem appears to just have something to do with newlines, but I haven't looked much at it beyond that. | |
<author>mdboom</author> | |
Can you check this with mingw? It was working for me on my Windows py-2.7 with the extensions compiled in msvc before, but this should have more robust handling of newlines. | |
<author>mdboom</author> | |
Actually, I think the reason this may have worked for me and not for you is the way it was checked out from git. I've been using cygwin git, which probably doesn't do newline translation on checkout. If yours does, the ground-truth data files will end up having Windows line endings, which don't compare correctly with what gets generated in the tests. In either case, being non-strict about line endings in the tests is probably a good idea. | |
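One simple way to make the comparison line-ending agnostic, for example (a sketch - the actual change may differ), using the ``truth`` and ``output`` line lists from the test above:
```
# Compare line by line, ignoring whether a line ends in '\n' or '\r\n'
truth = [line.rstrip(b'\r\n') for line in truth]
output = [line.rstrip(b'\r\n') for line in output]
assert truth == output
```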
<author>embray</author> | |
I am using cygwin git. I think I have it configured to translate to CRLF on checkout and back to LF on commit. | |
<author>embray</author> | |
In any case, I just tried your fix and it works for me now, I say go ahead and merge. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Upgrade to wcslib-4.10. | |
<author>mdboom</author> | |
This addresses some of the issues that previously had to be fixed locally in 4.9:
WCSLIB version 4.10 (2012/02/06) | |
-------------------------------- | |
* C library | |
- datfix() and spcfix() now return informative messages when dates and | |
AIPS-convention spectral axes are translated (changes contributed by | |
Michael Droettboom). spcaips() now returns an error status for | |
invalid values of VELREF. | |
- wcssub() has been augmented with the ability to add new axes onto a | |
wcsprm struct. | |
WCSLIB version 4.9 (2012/01/24) | |
------------------------------- | |
* C library | |
- Fixes to wcsfixi() for collecting the messages properly in the info | |
array (from Michael Droettboom). | |
- Handle certain malformed date strings more gracefully in datfix(). | |
- Make informative messages printed by wcserr_prt() a bit more | |
informative. | |
<author>astrofrog</author> | |
All tests pass for Python 2.6/2.7/3.1/3.2 on MacOS 10.7. Feel free to merge! | |
</issue> | |
<issue> | |
<author>embray</author> | |
Fixed 3 table tests that were failing on my Windows machine | |
<author>embray</author> | |
I think these tests were failing due to my Windows machine using 32-bit | |
builds of Python and Numpy, so that the default integer type was int32 | |
instead of int64. | |
This fixes the tests in a few places to not assume that ints will be | |
int64. Another approach would be to modify the table code so that | |
unspecified types just use int64 by default. I'm not sure which approach | |
is preferable. | |
<author>eteq</author> | |
On python 3.2.2 (I *think* it's 64-bit) on OS X 10.6, this causes a new failure: http://paste.pocoo.org/show/547502 . On python 2.7.2 (32-bit), however, all tests pass. Note that I was *also* seeing these 3 errors in OS X before this commit, so it's not windows-specific (rather, I guess, 32-bit-specific). | |
<author>eteq</author> | |
Reading it again, that last comment's sentence structure might be a bit confusing. To clarify: this PR *fixes* the same 3 test failures for me in 32-bit 2.7.2, but introduces a new one in 3.2.2. | |
<author>astrofrog</author> | |
For me, `test_partial_names_dtypes` is failing on all Python (64-bit)/Numpy versions on MacOS 10.7. | |
<author>taldcroft</author> | |
Sorry for this mess. Maybe the easiest path to multi-platform success is to explicitly specify dtype=np.int32 for the relevant columns and test for this. The point of these tests is just to make sure that the type didn't change during the column manipulations. | |
<author>embray</author> | |
That's strange--when creating a Numpy array from an array of ints, Numpy should use the largest native int type by default. Likewise, `numpy.dtype('int')` should return the largest native int type. So I'm not sure what's going on there. | |
Yeah, maybe best for now to just make the types explicit in the tests. | |
<author>eteq</author> | |
I think I understand why this is happening in 64-bit... If you look at line 60 of ``test_init_table.py``, the dtype is specified as "i4"... but then on line 80, where it's failing, it's comparing to ``dtype('int')``. So on 64-bit systems this fails. | |
So I think the solution is to just change where it's failing at ``numpy.dtype('int')`` to ``numpy.dtype('int32')``, as it's already explicitly int32. Although perhaps that's what you meant in your comment, @taldcroft.
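For illustration, the platform dependence boils down to something like this (a sketch; the real test code is different):
```python
import numpy as np

# np.dtype('int') follows the platform's default integer width, so this is
# True on 32-bit builds but False where the default int is int64:
np.dtype('i4') == np.dtype('int')

# Comparing against an explicit width is platform-independent:
np.dtype('i4') == np.dtype('int32')   # always True
```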
<author>taldcroft</author> | |
Yes @eteq that was my intent, just make sure all the setup data initialization is explicitly set with dtype as int32 (for the appropriate columns) and test for match with int32. I can get to this tomorrow if @iguananaut doesn't do it already. Fixing the code should be simple but testing on all the platforms takes a little time for me. I have a windows VM setup somewhere... | |
<author>embray</author> | |
@taldcroft Offered a better fix for this on #156. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
vo: Unicode refactoring | |
<author>mdboom</author> | |
This is a major refactoring of io.vo to make the Unicode handling more sane. The current situation came out of a need to support Python 2.5 that no longer exists. | |
The new version does all XML parsing and writing in Unicode, regardless of Python version. Only the file objects at either end do encoding/decoding to/from utf-8. Care is taken that non-Unicode aware file objects (such as the default ones in Python 2.x) still work transparently. | |
<author>embray</author> | |
That looks great--I will need to give io.fits the same treatment at some point. | |
<author>astrofrog</author> | |
All tests pass for Python 2.6/2.7/3.1/3.2 on MacOS 10.7. Feel free to merge! | |
</issue> | |
<issue> | |
<author>embray</author> | |
Crash running setup.py test on Python 3 in Jenkins | |
<author>embray</author> | |
I'm trying to add a build for Astropy on Windows with Python 3.2. The build stage goes fine, but when it gets to running the tests, Python crashes with: | |
``` | |
Traceback (most recent call last): | |
File "py._io.capture", line 54, in start | |
WindowsError: [Error 6] The handle is invalid | |
During handling of the above exception, another exception occurred: | |
Traceback (most recent call last): | |
File "<string>", line 1, in <module> | |
File "astropy\tests\helper.py", line 149, in run_tests | |
return pytest.main(args=all_args, plugins=plugins) | |
File "_pytest.core", line 467, in main | |
File "_pytest.core", line 460, in _prepareconfig | |
File "_pytest.core", line 419, in __call__ | |
File "_pytest.core", line 430, in _docall | |
File "_pytest.core", line 348, in execute | |
File "_pytest.helpconfig", line 25, in pytest_cmdline_parse | |
File "_pytest.core", line 348, in execute | |
File "_pytest.config", line 10, in pytest_cmdline_parse | |
File "_pytest.config", line 343, in parse | |
File "_pytest.config", line 321, in _preparse | |
File "_pytest.config", line 297, in _setinitialconftest | |
File "_pytest.capture", line 101, in resumecapture | |
File "py._io.capture", line 229, in startall | |
File "py._io.capture", line 56, in start | |
ValueError: saved filedescriptor not valid, did you call start() twice? | |
``` | |
This is related to py.test's stdio capture. It's trying to call `os.fstat()` on the stdin file descriptor (0) and failing. My guess is that it has something to do with Jenkins also messing with stdio capture. I don't have any problem if I run the exact same commands outside Jenkins.
What's odd is that this is only happening in Python 3--not Python 2. I don't see why that should matter. I will also see about raising this issue with py.test's devs. | |
<author>embray</author> | |
I think this Python bug may actually be the culprit: http://bugs.python.org/issue8780 | |
<author>embray</author> | |
Nope. That's not it--my copy of Python 3.2.2 has the fix for that issue. | |
<author>embray</author> | |
Any comment on this? I'm anxious to merge this as it should finally get all the build configurations working on Windows. | |
Look at all the pretty Skittles: https://jenkins.shiningpanda.com/astropy/job/astropy-winxp32-multiconfig/ Don't you want them all to be blue? :) | |
<author>mdboom</author> | |
It seems fine in this case, since we clearly end execution of the parent process right after the child process is done. I say let's push this through. If it breaks other things, we'll know pretty quickly. In that case, we can set the close_fds flag only on Windows.
<author>embray</author> | |
Yup, it really shouldn't cause any problems. | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Support both new and old-style string formatting in Table | |
<author>eteq</author> | |
See the discussion in #130 - it would be great if there was a way to intelligently determine what style of format string the user was using, and use the ``s % fmt`` or ``s.format(fmt)`` syntax as appropriate.
<author>embray</author> | |
Theoretically one can construct a string that's a valid format string for the given set of arguments in both formatting styles. That being the case, new-style formatting should be tried first. Only if that fails should it fall back on trying old-style formatting. (And failing that just treat the format string as a constant value? That could theoretically have a use...) | |
<author>taldcroft</author> | |
Annoyingly str.format doesn't actually fail if more args are provided than it converts. | |
```python
>>> "%f".format(2)
'%f'
```
EDIT:
```python
>>> '{}' % 2
TypeError: not all arguments converted during string formatting
```
So if we try old style first there is a chance. | |
Or if people prefer the new style (and think that most end-users will agree) we could just say that new style is the only accepted format. | |
<author>astrofrog</author> | |
Not so fast ;-) | |
```python
In [13]: '%4.1f{}'.format(1.)
Out[13]: '%4.1f1.0'

In [14]: '%4.1f{}' % 1.
Out[14]: ' 1.0{}'
```
Of course, I'm sure what you are suggesting would work in 90% of user cases, but I don't think we can rely on exceptions to distinguish the two. | |
<author>astrofrog</author> | |
It might be easiest to just switch to new-style format, as long as we explain it clearly? I personally prefer the old-style format, but I guess it's called 'old-style' for a reason... | |
<author>taldcroft</author> | |
Why does it have to be so hard? I thought I had something reasonable until discovering this bizarre behavior: | |
```python | |
In [27]: '{}' % 1 | |
--------------------------------------------------------------------------- | |
TypeError Traceback (most recent call last) | |
/data/baffin/tom/git/astropy/<ipython-input-27-d8ebe0b57d41> in <module>() | |
----> 1 '{}' % 1 | |
TypeError: not all arguments converted during string formatting | |
In [28]: '{}' % np.int64(1) | |
Out[28]: '{}' | |
``` | |
<author>embray</author> | |
@taldcroft That behavior was so bizarre to me that I couldn't help the urge to find out why that was going on. The reason is that `np.int64(1)` is technically a numpy array, albeit a scalar value. Numpy array types have a non-NULL `tp_as_mapping` member, which `PyString_Format` checks on the format args to see if it can be treated like a dict. When a dict is used as the format arg, the behavior has always been to allow more items in the dict than there are arguments in the format string. Hence the behavior you reported. In other words, `'{}' % np.int64(1)` is roughly equivalent, for this purpose, to `'{}' % {}`. | |
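For illustration, the mapping behaviour looks like this (a minimal sketch; the `np.int64` result is what we saw with the NumPy version discussed here):
```python
import numpy as np

# With a mapping on the right-hand side, old-style formatting skips the
# "not all arguments converted" check, so unused keys are silently ignored:
'%(x)s' % {'x': 1, 'unused': 2}   # -> '1'
'{}' % {'anything': 1}            # -> '{}' (no % specifiers, no error)

# np.int64 exposes the mapping protocol, so it gets the same treatment:
'{}' % np.int64(1)                # -> '{}'
```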
I say just require new-style format args, document the fact, and leave it at that :) | |
<author>eteq</author> | |
Isn't the only time this shows up in `table` in functions where you're giving the string for a single value, rather than a composite string? If that's the case, the following should work. It will mis-format the cases @astrofrog showed above, but I think someone will notice if their table has ``{}`` or something dangling on the end of every data entry... Or am I missing some subtlety? | |
``` | |
if fmtstr.lstrip().startswith('{'): | |
result = fmtstr.format(number) | |
elif fmtstr.lstrip().startswith('%'):
result = fmtstr % number | |
else: | |
raise ValueError('not any kind of format string you dolt!') | |
``` | |
I'm also perfectly fine with just requiring new-style only. | |
@iguananaut - that behavior is similar to the incredibly non-intuitive result here: | |
``` | |
>>> import numpy as np | |
>>> scalar = 3 | |
>>> ascalar = np.array(scalar) | |
>>> print scalar | |
3 | |
>>> np.isscalar(scalar) | |
True | |
>>> print ascalar | |
3 | |
>>> np.isscalar(ascalar) | |
False | |
``` | |
A source of endless annoyance to me. | |
<author>taldcroft</author> | |
See pull request #168. | |
I discovered another annoying feature of numpy: | |
```python | |
>>> '{:f}'.format(np.float(1)) | |
'1.000000' | |
>>> '{:f}'.format(np.float32(1)) | |
Traceback (most recent call last): | |
File "<stdin>", line 1, in <module> | |
ValueError: Unknown format code 'f' for object of type 'str' | |
>>> '{:f}'.format(np.float64(1)) | |
'1.000000' | |
>>> '{:d}'.format(np.int(1)) | |
'1' | |
>>> '{:d}'.format(np.int32(1)) | |
Traceback (most recent call last): | |
File "<stdin>", line 1, in <module> | |
ValueError: Unknown format code 'd' for object of type 'str' | |
>>> '{:d}'.format(np.int64(1)) | |
Traceback (most recent call last): | |
File "<stdin>", line 1, in <module> | |
ValueError: Unknown format code 'd' for object of type 'str' | |
>>> np.int.__format__ | |
<method '__format__' of 'int' objects> | |
>>> np.int32.__format__ | |
<method '__format__' of 'object' objects> | |
>>> | |
>>> dat = np.ones(2, dtype='f4') | |
>>> '{:f}'.format(dat[0]) | |
Traceback (most recent call last): | |
File "<stdin>", line 1, in <module> | |
ValueError: Unknown format code 'f' for object of type 'str' | |
``` | |
It looks like this is fixed but not in 1.6: | |
http://projects.scipy.org/numpy/ticket/1675 | |
<author>eteq</author> | |
@taldcroft - This is supposed to be closed now that #168 is merged, right? | |
Also, given that this is now implemented, does that mean that the `Column` class documentation for the `format` option should be updated to indicate either one is possible? Or is there somewhere else in addition to `Column` that this is used that I missed? (this kind of small fix can be done directly in master) | |
<author>taldcroft</author> | |
@eteq - this is closed. I've updated the Column docstring, I had missed this previously. Per your suggestion I edited the file directly in GitHub, hope this is not a problem with the overall astropy development process. | |
<author>eteq</author> | |
@taldcroft - nope, that's perfect! We agreed a while back that small changes are fine directly in master for those with the necessary privileges. | |
I'll close this issue, then - thanks! | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Add support for masking/null values to Table class | |
<author>eteq</author> | |
The table docs say: "At this time the ``astropy.table`` package has no support for masking and null values. This is to be done." This issue simply records that desire in the issue system. | |
<author>astrofrog</author> | |
For the record, when working on ATpy, I found a lot of bugs in the Numpy masked structure array support, some of which were only fixed by Numpy 1.5, so if we want to use the built-in Numpy masked arrays, then this is just an advance warning that we will see failures when using Numpy < 1.5. | |
<author>eteq</author> | |
Ok, good to know - we'd want to make judicious use of `pytest.mark.xfail` if that's the internal implementation, then. | |
<author>astrofrog</author> | |
@taldcroft - I've started to look at the code to see how we could do this, and I see four main courses of action if we want to implement masking: | |
1. Create ``Masked...`` versions of all the objects, e.g. ``MaskedRow``, ``MaskedColumn``, ``MaskedTable`` which contain the data as a masked Numpy array rather than a Numpy ndarray. | |
2. Add a ``mask`` attribute to the current table infrastructure and keep the mask and data separate internally (but make it so the user can extract MaskedArrays if needed, like in ``NDData``). | |
3. Allow the data to be stored either inside a Numpy ndarray or MaskedArray inside the current elements | |
4. Always use MaskedArray instances internally. | |
One extra complication as I mentioned in a previous comment is that Numpy 1.4 has some bugs in the Masked array implementation. | |
I started to implement (1), but the issue is that it creates a lot of duplicate code, even if we use inheritance. This would be a bit of a pain to maintain. (4) would be easier to maintain than (3), but given the points about the bugs in the Masked array implementation in 1.4, I would actually lean towards (2) for now (as in NDData). Maybe it would be best to go with that option and then upgrade to allowing (but not requiring) MaskedArrays internally? | |
Do you have any thoughts about this? | |
<author>taldcroft</author> | |
@astrofrog and @eteq - One question: what is the support policy for previous versions of numpy? Can we drop 1.4 once 1.7 is out, which maintains support for the current and 2 previous major versions? This might have an impact on the implementation of masking support. | |
<author>taldcroft</author> | |
I took a really quick hack at this and was pleasantly surprised that the following patch seemed to work for supporting a masked column: | |
``` | |
+import numpy.ma as ma | |
+ | |
+ | |
+class ColumnMasked(ma.MaskedArray): | |
"""Define a data column for use in a Table object. | |
Parameters | |
@@ -160,9 +163,9 @@ class Column(np.ndarray): | |
if data is None: | |
dtype = (np.dtype(dtype).str, shape) | |
- self_data = np.zeros(length, dtype=dtype) | |
+ self_data = ma.zeros(length, dtype=dtype) | |
elif isinstance(data, Column): | |
- self_data = np.asarray(data.data, dtype=dtype) | |
+ self_data = ma.asarray(data.data, dtype=dtype) | |
if description is None: | |
description = data.description | |
if units is None: | |
@@ -172,7 +175,7 @@ class Column(np.ndarray): | |
if meta is None: | |
meta = deepcopy(data.meta) | |
else: | |
- self_data = np.asarray(data, dtype=dtype) | |
+ self_data = ma.asarray(data, dtype=dtype) | |
self = self_data.view(cls) | |
self._name = name | |
@@ -187,6 +190,7 @@ class Column(np.ndarray): | |
return self | |
``` | |
I got stuck though trying to do multiple inheritance to put all the masking-independent bits into a ColumnBase class. | |
```python | |
--> 271 class Column(np.ndarray, ColumnBase): | |
272 """Define a data column for use in a Table object. | |
273 | |
TypeError: Error when calling the metaclass bases | |
metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases | |
``` | |
I'm a little wary of option (2) because it feels like we would be rewriting np.ma. You need to provide implementations for every numpy operator and ufunc that depends on masking (e.g. sum(), + - etc). On the other hand I think that pandas may have actually taken that strategy (not 100% sure on this) since the split internal representation (masked or normal array) has its own problems. | |
Anyway @astrofrog if you want to take a crack at this feel free. I would lean to doing (3) with a masked version of Column and Row (if needed), but it's kind of a hard problem and I can't really see the path to a complete solution (and all possible problems) for any of them. So just pushing ahead and trying something might be the only way. | |
<author>astrofrog</author> | |
Looks promising! One slight complication as @mdboom pointed out in #363 is that Numpy masked arrays can't mask individual elements inside a cell in a multi-dimensional column (as far as I understand). If we wanted this, we may need to implement our own approach... | |
<author>taldcroft</author> | |
I wonder if metaclasses can work here to dynamically set the inheritance of a Column object to either np.ndarray or np.ma.MaskedArray. That way there would not need to be a distinction. Any thoughts metaclass-master @iguananaut? | |
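Not metaclasses exactly, but one sketch of picking the base class dynamically (illustrative only, not working code for the real Column):
```python
import numpy as np
import numpy.ma as ma

def make_column_class(masked=False):
    # Choose the array base class up front instead of hard-coding it; the
    # class body here is an empty stand-in for the real Column implementation.
    base = ma.MaskedArray if masked else np.ndarray
    return type('Column', (base,), {})

MaskedColumn = make_column_class(masked=True)
PlainColumn = make_column_class(masked=False)
```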
Roger the comment from @mdboom on VO and MaskedArray. I would say that we should target missing value support within Table for 0.2 (by the easiest means possible) and leave integration with VO for later. We should definitely take a look at how Pandas tackled this issue since it has good missing value support. | |
<author>astrofrog</author> | |
@taldcroft - regarding your question about the Numpy version - I feel like we should try and support 1.4 as much as possible. If it is the case that masking would be very easy with 1.5+, then I say we just make an exception and allow masked tables to only be supported with Numpy 1.5+, but we should still strive to stay compatible with 1.4 for now. | |
<author>eteq</author> | |
I agree with @astrofrog here - we want astropy to build and work generally with 1.4, but if something in 1.5+ masking really makes it easier, just make sure you add version checks at the top of the relevant places, and throw an exception that specifically mentions "you need numpy 1.5 or greater" or similar. (Also, be sure to use xfail in any tests.)
<author>astrofrog</author> | |
This was done in #451, so closing. | |
<author>embray</author> | |
Since this was done in #451, it will be in the 0.2 release and does not belong in the 0.3 milestone. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
vo: gzip handling | |
<author>mdboom</author> | |
For writing, this adds a "compressed" kwarg, which when True, writes to a gzip-compressed xml file. The old behavior of going off of the file extension is still supported. | |
For reading, it reads the gzip magic number to decide if the file should be read as a gzip file. | |
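For reference, sniffing the gzip magic number can be as simple as the following sketch (illustrative; not necessarily the exact code in this PR):
```python
def _is_gzip(fileobj):
    # The first two bytes of any gzip stream are 0x1f 0x8b.
    pos = fileobj.tell()
    try:
        magic = fileobj.read(2)
    finally:
        fileobj.seek(pos)
    return magic == b'\x1f\x8b'
```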
<author>embray</author> | |
FWIW, io.fits.util has a utility function called `isfile()` that checks if an object is a file object. The py3compat module patches it to work reliably on Python 3. Might be one of many utility functions moving into astropy.utils somewhere. | |
Which brings up another question, unrelated to this issue: The approach I took in PyFITS for Python 3 compatibility is based around monkey-patching. Anywhere there's some operation that can't be done the same way in both Python versions (e.g. `isinstance(obj, file)`) I provide a function such as `isfile()`. The py3compat module then patches in a different implementation of that function for Python 3. Would this be a good approach to use for Astropy as a whole?
The main downside to the current approach, as I see it, is that currently you have to look in two different places to see each implementation. All the Python 3 implementations are in the py3compat module, which is looking a bit cluttered. Might be better, if two versions of a function have to exist, to define them both in the same module within an if statement. | |
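For concreteness, the "both versions in the same module" variant would look roughly like this (function name borrowed from above; otherwise just a sketch):
```python
import sys

if sys.version_info[0] >= 3:
    import io

    def isfile(obj):
        # Python 3 has no builtin ``file`` type, so test against io.IOBase.
        return isinstance(obj, io.IOBase)
else:
    def isfile(obj):
        return isinstance(obj, file)
```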
<author>astrofrog</author> | |
All tests pass on MacOS 10.7 with all Python (64-bit)/Numpy combinations. | |
<author>mdboom</author> | |
@iguananaut: I agree that we should try to move most of the py3 compatibility code into one place -- in io.vo and to a lesser extent wcs it's all basically inline. I think I prefer having the two versions of the function side-by-side -- but I don't feel incredibly strongly about it. The only kind of monkey-patching I would like to avoid whenever possible is monkey-patching other libraries (e.g. Numpy or the stdlib) as that can cause strange and hard-to-track problems with other people's code -- but I know that's not what you're suggesting.
<author>eteq</author> | |
+1 to *not* monkeypatching other code unless absolutely necessary, and to having py3 compatibility code all in ``astropy/utils/compat`` | |
</issue> | |
<issue> | |
<author>taldcroft</author> | |
Clarify information about legacy packages | |
<author>taldcroft</author> | |
Upon running python setup.py install I got the following messages about astropy's compatibility layer. Sorry I haven't been following the details, but I don't know quite what this implies. Certainly the average user will be confused. Is there a problem if I do nothing? If I uninstall my pyfits and re-install astropy, will "import pyfits" still work? | |
``` | |
(astropy27)ccosmos$ python setup.py install | |
------------------------------------------------------------ | |
The legacy package 'pyfits' was found. | |
To install astropy's compatibility layer instead, uninstall | |
'pyfits' and then reinstall astropy. | |
------------------------------------------------------------ | |
------------------------------------------------------------ | |
The legacy package 'pywcs' was found. | |
To install astropy's compatibility layer instead, uninstall | |
'pywcs' and then reinstall astropy. | |
------------------------------------------------------------ | |
``` | |
<author>mdboom</author> | |
I'll try to explain better what it does here, and hopefully from that we can come up with some better language for the message. | |
Astropy includes a "compatibility shim" called "pyfits" which (basically) forwards calls to "pyfits" to "astropy.io.fits", for users who want to run code written to use "pyfits", but only want to install "astropy". | |
When the real "pyfits" is already installed, we don't want to install the "compatibility shim" on top of it -- that would be overriding something that the user already explicitly installed.
So, if you do nothing in the situation above "import pyfits" will import the real "pyfits". If you uninstall "pyfits" and reinstall astropy, "import pyfits" will actually use "astropy.io.fits". | |
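In other words, the shim is conceptually just a forwarding module, roughly like this simplified sketch (not the actual code):
```python
# pyfits/__init__.py -- astropy's compatibility shim (sketch)
_is_astropy_legacy_alias = True    # marker distinguishing the shim from the real pyfits
from astropy.io.fits import *      # forward the public API to astropy.io.fits
```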
Does that clarify what's going on? Now how do we say this in a concise way? | |
<author>taldcroft</author> | |
That makes sense now. I would say to put what you wrote into a subsection of the astropy installation documentation ("Compatibility layers" or something). Then the setup.py message can say "For further explanation see http://...". It would be nice (but not necessary) if that documentation also had a few words about how to uninstall a package, which isn't entirely trivial. | |
<author>eteq</author> | |
+1 to @taldcroft's suggestion - the uninstall instructions can be as simple as: do ``import pyfits``, ``print pyfits``, and then delete the directory that is shown. Also look for pyfits in the easy-install.pth file and delete the line that references it if there is one there.
<author>eteq</author> | |
@taldcroft @mdboom - this was scheduled for 0.1 ... any updates on this? | |
<author>mdboom</author> | |
This just fell through the cracks. I'll add a few blurbs to the documentation... | |
<author>eteq</author> | |
@mdboom - did this get in some commit, or should I re-label it for 0.2? | |
<author>mdboom</author> | |
Just go ahead and re-label it... | |
<author>cdeil</author> | |
Would it also be possible to mention at https://trac.assembla.com/astrolib that pywcs, pyfits and other packages are now integrated in astropy? A big fat label at the top: "astrolib is dead, go to astropy". :-)
Also it might be useful to redirect users to astropy from the PyPI pages. E.g. http://pypi.python.org/pypi/pywcs doesn't mention astropy and lists http://projects.scipy.org/astropy/astrolib/wiki/WikiStart as its homepage, which is a dead link.
<author>eteq</author> | |
@cdeil - those are both excellent suggestions... presumably @mdboom has access to the pywcs pypi entry, but I'm not at all clear who's in charge of the astrolib page (perhaps @mdboom knows that as well?)
<author>mdboom</author> | |
@cdeil: I have added a note on the astrolib page for pywcs and vo.table that astropy is now the preferred way to get those packages. I'd love to make a more blanket statement about astrolib in general, but I don't think we're quite there yet. | |
I have also fixed the pywcs homepage on PyPI. | |
<author>astrofrog</author> | |
This can probably be closed once #353 is merged, as it adds information to the documentation. | |
<author>astrofrog</author> | |
#353 disables the compatibility packages by default and adds docs that users have to read to enable them, so I think this issue is also resolved.
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Issue with sub-package containing Cython code and data | |
<author>astrofrog</author> | |
It is impossible to develop a package with both cython code and data (which requires a setup_package.py file). Steps to reproduce: | |
git clone https://github.com/astropy/astropy.git | |
cd astropy/astropy | |
mkdir newpkg | |
echo "from .myfib import fib" > newpkg/__init__.py | |
echo "def fib(): pass" > newpkg/fib.pyx | |
echo "from .fib import fib" > newpkg/myfib.py | |
touch newpkg/setup_package.py | |
cd .. | |
python setup.py test | |
which gives: | |
$ python setup.py test | |
Freezing version number to astropy/version.py | |
------------------------------------------------------------ | |
The legacy package 'pyfits' was found. | |
To install astropy's compatibility layer instead, uninstall | |
'pyfits' and then reinstall astropy. | |
------------------------------------------------------------ | |
------------------------------------------------------------ | |
The legacy package 'vo' was found. | |
To install astropy's compatibility layer instead, uninstall | |
'vo' and then reinstall astropy. | |
------------------------------------------------------------ | |
Traceback (most recent call last): | |
File "setup.py", line 65, in <module> | |
packagenames, package_dirs) | |
File "/Volumes/Raptor/tmp/test/astropy/astropy/setup_helpers.py", line 189, in update_package_files | |
for pkgnm, setuppkg in iter_setup_packages(srcdir): | |
File "/Volumes/Raptor/tmp/test/astropy/astropy/setup_helpers.py", line 257, in iter_setup_packages | |
module = import_module(name) | |
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module | |
__import__(name) | |
File "/Volumes/Raptor/tmp/test/astropy/astropy/newpkg/__init__.py", line 1, in <module> | |
from .myfib import fib | |
File "/Volumes/Raptor/tmp/test/astropy/astropy/newpkg/myfib.py", line 1, in <module> | |
from .fib import fib | |
ImportError: No module named fib | |
Note that if I remove ``setup_package.py``, then it works fine. | |
<author>mdboom</author> | |
Astropy has the requirement that all code is importable before any C or Cython extensions have been built. The same problem is exhibited in `astropy/io/vo/__init__.py` and is solved by only importing extensions when `astropy.setup_helpers.is_in_build_mode()` returns `False`. These sorts of "import guards" are generally only necessary in `__init__.py`, since the `setup.py` script needs to import `__init__` in order to import `setup_helpers`. `setup.py` doesn't generally import any other modules within the packages unless `__init__` explicitly does that. | |
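For anyone hitting this, the guard looks roughly like the following (a sketch; `newpkg`/`myfib` are the hypothetical names from the reproduction steps above):
```python
# astropy/newpkg/__init__.py
from astropy.setup_helpers import is_in_build_mode

if not is_in_build_mode():
    # Only import the compiled pieces once the Cython extension actually exists.
    from .myfib import fib
```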
<author>eteq</author> | |
@astrofrog - I think @mdboom outlines the best solution here... does that seem like a good solution? | |
If so, however, this should be clearly documented somewhere - we may want to add a section along the lines of "creating a new subpackage"... or perhaps this can go in the ``setup_helpers.py`` description section.
Either way, let's leave this issue open until said documentation is in place.
<author>mdboom</author> | |
I think I've had a change of opinion about this -- there is a way around it. I've attached a pull request. | |
This lessens the need for `__init__.py` files to check `is_in_build_mode()` to optionally turn off importing other things in their package that may require 2to3 or C extensions to have been built. | |
The real cause of `__init__.py` files being imported at build time was due to a `setup_package.py` file within that package being imported. By importing the file using `imp.load_module()` rather than `importlib.import_module()`, we can import the file as a standalone module without causing an import of its parent package as a side effect. This allows (in many cases) for the `__init__.py` to not need to block the import of 2to3 or C extension code, because the `__init__.py` is no longer imported at build time.
However, there is still a special case in `astropy.wcs` where the package itself is needed at build time, so a call to `is_in_build_mode()` has been added there. However, that case is an exception, not the norm. | |
Also in this change: The setup script currently tries to import the legacy module (vo, pyfits, pywcs) to see if it's in fact `pyfits` or astropy's legacy shim that masquerades as `pyfits`. However, if it's the latter, and it's imported at build time, and `astropy` has been uninstalled but its `pyfits` legacy shim has not been (a corner case if there ever was one), importing the legacy shim will result in importing pyfits from the source tree, which doesn't work because it requires 2to3 and has C extensions. The change here is, rather than importing the legacy shim, to just open it as a file and look for the magic variable name `_is_astropy_legacy_alias`.
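The difference in import side effects can be illustrated with something like this (a sketch; the paths reuse the hypothetical `newpkg` example from above):
```python
import imp
import importlib

# importlib imports the parent packages too, so newpkg/__init__.py gets executed:
mod = importlib.import_module('astropy.newpkg.setup_package')

# imp loads the file as a standalone module, so no parent __init__.py runs:
fileobj, pathname, description = imp.find_module('setup_package', ['astropy/newpkg'])
try:
    mod = imp.load_module('newpkg_setup_package', fileobj, pathname, description)
finally:
    fileobj.close()
```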
<author>astrofrog</author> | |
@mdboom - I like this. I was worried that the current scheme is not ideal, and a lot of people would run into this issue when developing their own sub-packages. So +1 from me! | |
<author>eteq</author> | |
+1 to this approach from me, as well... certainly makes things easier! | |
It's still probably a good idea to mention that using `is_in_build_mode` is the best way to block any imports that might fail during build (e.g. if someone else does something like `astropy.wcs`) in the docs, though (probably as a note in ``docs/development/building_packaging.rst``). That could either be added here, or I can add it directly in master if you don't want to do it in this PR. | |
</issue> | |
<issue> | |
<author>embray</author> | |
Remove wcsconfig.h from version control. | |
<author>embray</author> | |
Because this file can be updated depending on the platform being built on, | |
it shouldn't be under version control (it gets annoying because git picks | |
up on any modifications made to it). Since it will be generated at build | |
time if it doesn't already exist, then even more reason for it to not be | |
there to begin with. | |
<author>mdboom</author> | |
Agreed. Just an oversight. | |
<author>embray</author> | |
Okay--just wanted to check with you first, mainly. | |
</issue> | |
<issue> | |
<author>taldcroft</author> | |
io.ascii | |
<author>taldcroft</author> | |
This is a preview pull-request for my rework of asciitable as astropy.io.ascii. The goal is to enable initial review and discussion. | |
All tests pass on linux x86_64 py3.2 and py2.7, but I've just gotten a slew of emails that OSX10.7 is failing miserably on TR's jenkins server. Will need to investigate. EDIT: this was resolved. | |
This code base is structured to allow rsync'ing of code and test files between standalone asciitable and astropy.io.ascii. Thus there are some things that might look awkward (and of course suggestions for improvement are welcome!). In addition asciitable supports 2.5 and doesn't require NumPy, so there are still more tests that look ugly. | |
On the plus side both asciitable (optionally) and astropy.io.ascii (by default) now take advantage of astropy.table.Table. | |
Docs are ported as well. | |
<author>eteq</author> | |
I definitely like it big-picture (I'm a fan of `asciitable`, for that matter), although I haven't read through this terribly closely yet. | |
A couple organizational question/comments, though: What is the ``astropy/io/ascii/tests/t`` directory for? Is that just for storing the data that the tests use? If so, this is exactly the sort of place where the `astropy.config.data` package might be useful. I understand that this is pre-existing code, and it might be a lot harder to keep it in sync with the standalone version, but it's something you at least want to consider. | |
Also, I would strongly encourage you to include the docs whenever you get a chance to do that (before merging) - one of the best things about `asciitable` is the excellent docs, IMHO!
<author>taldcroft</author> | |
@eteq : | |
I'm working on getting the docs in now. This requires a wee bit of work since some of the content needs to be different (e.g. no installation instructions etc) from standalone `asciitable` and as usual I'm aiming for (basically) one code base. | |
`astropy/io/ascii/tests/t` is the test data, and as you suspected the reason it is in `tests` is for compatibility with standalone `asciitable`. This also matches what `io.fits` does (which may also be motivated by the duality with `pyfits`).
<author>eteq</author> | |
Looks pretty good (and tests pass for me), aside from the documentation comment I made inline above. Other than that, are you done making changes? | |
<author>taldcroft</author> | |
Commit e1d54f0 updates the docs per the comments from @eteq. I don't have any other changes in the queue, so maybe we should merge this? | |
<author>taldcroft</author> | |
If anybody has additional comments or wants more time for review then speak by Friday. Otherwise I'll plan to merge sometime this weekend. | |
<author>astrofrog</author> | |
As you mentioned in #174, maybe you could add a legacy shim into this pull request? | |
<author>taldcroft</author> | |
@astrofrog - since there is pending pull request #174 that will change the behavior of the legacy shim, I've opened issue #175 to track adding this for io.ascii and do it after #174 is merged. | |
<author>astrofrog</author> | |
@taldcroft - sounds good. From my point of view, this looks ready to merge. | |
<author>eteq</author> | |
Looks fine to me too... But @taldcroft, I didn't fully understand your comment above: do you mean you want to implement #175 in this pull request, or should we merge this pull request and then you'll issue another one to fix #175? | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
NaN-friendly convolution | |
<author>astrofrog</author> | |
This pull request implements convolution functions for multi-dimensional arrays, with proper support for NaN values (which scipy's ``convolve`` does not have, but which is needed for Astronomical images). | |
The current code has the following limitations: | |
* Only 2D convolution is supported at this time | |
* No documentation has been added | |
* The tests are not yet exhaustive | |
I want to address all these points before we consider merging in the code. However, I wanted to open the pull request to get feedback on the existing code, before extending the code to 1D and 3D+, and writing more tests, since these will involve significantly more code. | |
The only function intended for users is: | |
from astropy.nddata.convolve import convolve | |
This calls the different Cython functions depending on dimensionality and boundary treatment. The reason for implementing the four different Cython functions is that it is significantly faster than constantly checking the boundary option inside a single Cython function. | |
The ``convolve`` function's docstring should explain all the options. | |
I decided to include this in ``astropy.nddata`` to mirror ``scipy.ndimage`` (rather than putting it in ``astropy.tools``). I think that is the right thing to do. I think that ``convolve`` should be callable directly as above, but of course we can also add a ``convolve`` method for ``NDData`` objects that then relies on ``nddata.convolve``.
The treatment of the edges and NaN values is inspired by IDL's CONVOL function (e.g. http://star.pst.qub.ac.uk/idl/CONVOL.html). | |
Performance-wise, the speed is very similar to scipy's ``convolve`` when ``boundary=None``, and slightly worse for other boundary options, but that is the price to pay for dealing with the NaN values correctly. | |
Let me know what you think! | |
<author>eteq</author> | |
I definitely like the overall scheme, but one organizational thought. I'm concerned about how ``astropy/nddata/convolve/__init__.py`` works. As it stands now, if I do ``from astropy.nddata.convolve import convolve``, I get a *function*, even though there's a module ``astropy/nddata/convolve/convolve.py``. Right now this isn't necessarily a big deal because there's only one function in this module, but it still strikes me as quite confusing, and might get much worse if more functions are added later. In addition, requiring two nested layers of imports starts getting confusing when the user-level only involves a single function (and in the future, probably only a few more).
Instead, how about leaving this ``__init__.py`` empty, and moving what's in it to ``astropy/nddata/__init__.py``, and changing the name of the package from "convolve" to "convolution" or something like that (e.g., in ``astropy/nddata/__init__.py``, the import would be ``from .convolution.convolve import convolve``). Then nothing is masked, and the convolution still lives in the nddata module.
<author>astrofrog</author> | |
So if I understand correctly what you are saying, the user would then import it with | |
from astropy.nddata import convolve | |
This does make sense, since scipy has the same: | |
from scipy.ndimage import convolve | |
On a side note, it would be possible to add functions such as gaussian_filter() that would again mirror the scipy capabilities, but with support for NaN values, and these could then be imported with:
from astropy.nddata import gaussian_filter | |
Anyway, I'll implement your suggestion, but I will wait for other comments before doing so. | |
<author>eteq</author> | |
@astrofrog - yep, that's what I had in mind, as long as the function `convolve` doesn't overshadow the package `convolve` (and that would be fixed by changing the package name to `convolution`). I think in general we want the most user-friendly stuff in the first-level subpackage, but always leave the deeper subpackages/modules accessible. | |
<author>astrofrog</author> | |
I've implemented all the comments! Is there anything else you can think of before I extend this to more dimensions? How many dimensions should this be extended to? I'm not sure if many people would need 4D convolution and above, but 1D, 2D, and 3D certainly seem useful. | |
<author>astrofrog</author> | |
I'm having a little issue with optimization. On line 71 of ``boundary_none.pyx``, I want `val * ker` to be equal to zero if `val` is NaN and `ker` is zero (instead of NaN, which is the normal result). However, I can't figure out a way to do this without doubling the runtime. For example, doing:
```python
if ker != 0.:
    top += val * ker
    bot += ker
```
literally doubles the runtime. Does anyone have any idea how to do this differently? Is there an efficient way to override the multiplication (or use a custom mult function) that will have the behavior NaN * 0 = 0?
<author>astrofrog</author> | |
If I try a different if statement such as: | |
```python
if not isnan(val):
    top += val * ker
    bot += ker
```
then I don't get much of a slowdown. isnan was defined as a macro with: | |
```python
cdef extern from "math.h":
    bint isnan(double x)
```
The example I'm using has no zeros nor NaNs, so in both cases the condition in the if statement is always true. But ``ker != 0.`` seems to take much longer than ``isnan(val)``. Any ideas on how to speed up ``ker != 0.``?
<author>astrofrog</author> | |
I've found a way around my previous question about optimization which seems to work fine. I've now added some basic documentation. The next step will be to add more tests, extend to 1 and 3-dimensions, and add examples to the documentation. | |
<author>eteq</author> | |
I agree that 1D convolution should be the next priority, followed by 3D. I can't think of any particular reason why you'd want to implement 4D (for that matter, offhand I can't think of any common cases for 3D, but I imagine those do exist). | |
I'm also getting 3 test failures for your current version in OS X 10.6 (py2.7.2 32-bit): http://paste.pocoo.org/show/551273 http://paste.pocoo.org/show/551274 http://paste.pocoo.org/show/551275 | |
<author>eteq</author> | |
@astrofrog - you may know this already, but one very useful trick for optimizing Cython code: do ``cython -a foo.pyx`` - that will generate a file ``foo.html`` that has the .pyx file, but where if you click on a line, it will show what C code that expands into. More importantly, it shows lines in yellow that are expanding into *slow* C code... so it shows where you might have forgotten to declare some cdef or something... Or it will reveal places where you can give compiler directives to speed things up (that's what made me realize the comments I made above). | |
<author>astrofrog</author> | |
@eteq - I resolved the issue with the failed tests, which was that for boundary_none, the arrays should be initialized with np.zeros, not np.empty (because not all the pixels get reset). | |
<author>eteq</author> | |
Two other oddities I noticed while doing some tests along the following lines:
``` | |
a = arange(16).reshape(4,4) | |
k = [[0,1,0],[1,1,1],[0,1,0]] | |
convolve(a,k,'fill') | |
``` | |
(which is the following array and kernel): | |
``` | |
array([[  0.,   1.,   2.,   3.],      array([[ 0.,  1.,  0.],
       [  4.,   5.,   6.,   7.],             [ 1.,  1.,  1.],
       [  8.,   9.,  10.,  11.],             [ 0.,  1.,  0.]])
       [ 12.,  13.,  14.,  15.]])
``` | |
First, this fails as written because the function wants `a` to have float32 or float64, and wants `k` to be an array. Can you add checks that convert it to the relevant format if it's not? e.g. If a non-array is passed in, do ``array(input,dtype=float)``; if an array is passed in that's not a float type, convert it with ``.astype(float)``; but if it's already a float, leave it alone?
Second, and more confusingly (to me) it appears that you are normalizing the kernel - if I do the above as ``scipy.ndimage.convolve(a,k,mode='constant')`` I get out the following array: | |
``` | |
array([[ 5., 8., 12., 12.], | |
[ 17., 25., 30., 27.], | |
[ 33., 45., 50., 43.], | |
[ 33., 48., 52., 40.]]) | |
``` | |
But if I use your implementation, as I described above (fixed so that `a` and `k` are arrays), `result` is | |
``` | |
array([[ 1. , 1.6, 2.4, 2.4], | |
[ 3.4, 5. , 6. , 5.4], | |
[ 6.6, 9. , 10. , 8.6], | |
[ 6.6, 9.6, 10.4, 8. ]]) | |
``` | |
Which is the same thing divided by 5 (the sum of the kernel). Is there a particular reason you chose to have the kernels be normalized? I would think it would be better for them not to be forced to be normalized, as that's more consistent with scipy.ndimage that people might be used to. | |
<author>eteq</author> | |
Something just struck me about this: in #141 there was a discussion about including C source code in an ``src`` directory... do we want to do the same with ``.pyx`` files, given that they compile to C files? @mdboom, do you have any thoughts here?
<author>astrofrog</author> | |
@eteq - good point about the kernel normalization, I will fix this. I do have to use re-normalization in the Cython code (to avoid dips towards regions with NaNs, where fewer pixels can be used), but that doesn't mean I have to re-normalize the whole kernel.
@eteq and @mdboom, let me know if I should move the *.pyx files | |
<author>mdboom</author> | |
I don't know if I have a strong feeling either way about the location of the .pyx files. As they currently compile and install as .so files in the same place as where the .pyx files are, I think it might be most straightforward to leave them where they are. | |
<author>eteq</author> | |
@mdboom - that sounds reasonable to me - so Cython files live with python files, and non-Cython files live in ``src/...`` inside the relevant sub-package? I can update the coding guidelines to reflect this if we think that's a standard that works for us. | |
<author>eteq</author> | |
@astrofrog - as far as you're concerned, is this ready to go? | |
<author>astrofrog</author> | |
No, this is not ready yet - I need to fix the normalization as you suggested, and implement 1-D and 3-D convolution. I was away for the last week, but should be able to make progress on this soon. | |
<author>astrofrog</author> | |
@eteq - you can now pass nested lists as input, integer values will work, and you can normalize the kernel on the fly (though default is not to). | |
I've implemented 1-d and 3-d convolution, and I'm now going to improve the docs a little before a final review. | |
<author>astrofrog</author> | |
This pull request is now ready for final review! | |
<author>eteq</author> | |
I think it might be good to organize the documentation you added slightly differently (to match other packages), but we can address that later after the content itself is merged in. (And overall, I like the doc content a lot!) | |
All tests pass for me, so as far as I'm concerned, you can merge when ready. | |
<author>astrofrog</author> | |
Agree about the documentation structure - I just wanted to have something there, but it needs to be integrated better into the rest of the docs. | |
@taldcroft, @iguananaut, @mdboom - I plan to merge this at the start of the week, in case you want to review this too. | |
<author>keflavich</author> | |
I'm a latecomer to this thread, but I have an alternative solution to the convolution-ignoring-nans problem implemented here: http://code.google.com/p/agpy/source/browse/trunk/AG_fft_tools/convolve_nd.py | |
It uses FFTs (either numpy's or FFTW3's, if FFTW3 is installed) and gets around NaNs by setting them to zero, then creating a "weight" array that is 1 where the image is not nan and 0 where it is nan. The weight array is smoothed with the same kernel as the image, then divided out. In any location where the kernel does not encounter NaNs, the weight stays 1, but if there is a nearby NaN, the weight will be decreased, so the average will be over fewer pixels. | |
The implementation posted above works in N dimensions. It could use more & better unit tests, and it would be especially good to compare directly to the "convol" approach implemented here. I think it would be good to include both implementations, but the pull request is obviously more mature in terms of astropy compliance. | |
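In outline, the weighting trick reads like this (a sketch using scipy's direct-space convolution for brevity rather than FFTs; this is not the agpy code itself):
```python
import numpy as np
from scipy.ndimage import convolve as nd_convolve

def convolve_ignoring_nans(image, kernel):
    valid = np.isfinite(image)                       # 1 where data is good, 0 at NaNs
    filled = np.where(valid, image, 0.0)             # NaNs replaced with zero
    num = nd_convolve(filled, kernel, mode='constant', cval=0.0)
    den = nd_convolve(valid.astype(float), kernel, mode='constant', cval=0.0)
    return num / den                                 # near NaNs the average uses fewer pixels
```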
<author>eteq</author> | |
@keflavich Have you done any performance testing? I suspect this PR is faster because it's all Cython-optimized... and I'm also concerned about the memory cost of an FFT-based approach. | |
Is there some other advantage to the FFT approach aside from allowing arbitrary dimensionality? | |
I'm inclined to suggest that we merge this pull request now, and then @keflavich, if you want to re-write your code into an astropy-compliant form, you can issue a new PR later that either adds an n-dimensional option or replaces this one once it's ready (if there's a compelling reason to replace this one). Does that sound good? | |
<author>keflavich</author> | |
I was under the impression that FFTs are generally the fastest way to perform convolutions. They increase execution time as n log n, while a straight convolution goes as n^2. So, the answer will depend on image size - for small images, I would expect this pull request to be faster, but for large images ffts should win.
The memory cost is certainly an issue. Again, I like having both options. My version of the FFT convolution can be extra memory intensive in favor of speed (it allows padding to the nearest 2^n+3^n+5^n dimensional shape).
While cython optimization is probably pretty fast, numpy's ffts and fftws are really intended to be fast. My comparison of the different ffts is here:
http://code.google.com/p/agpy/source/browse/trunk/tests/test_ffts.py
I haven't written tests for convolution yet... I'd like to see that comparison though.
<author>eteq</author> | |
@keflavich I see your point here on the O(n log n) scaling, so I can definitely see the merits of having both available. I'm mainly concerned about the potential for confusion in having a variety of convolution options. But I think this could be easily remedied by an explicit enough naming scheme - e.g., where this pull request centers around the driver function just named `convolve`, there could be a second one that has a driver function `convolvefft`, and make sure to include a note in the docstrings indicating basically "use convolve when you are running into memory issues, and convolvefft to better scale with size."
Would you be fine with us merging this one as is, and have you submit a pull request later for the fft-based version (after you've got the docstrings and code to astropy standards)?
<author>keflavich</author> | |
Absolutely, that's essentially what I intended, I just wanted to bring this to everyone's attention. I agree that "convolvefft" is a good naming scheme, and that's a good way to distinguish the two.
<author>astrofrog</author> | |
I also agree that we could always add a ``convolvefft`` (or rather ``convolve_fft``!) function later that would live side by side with the current ``convolve``. Note also that you could try re-using some of the tests that exist for ``convolve``, because at the end of the day, one would hope that both techniques give the same result.
I'm going to merge this PR now since we agree that is the way to proceed.
<author>embray</author> | |
On the convolvefft issue, I'm kind of in favor of just having a single n-dimensional `convolve` function in the API. There's no reason, however, that there can't be a choice of algorithms implementing it, and selectable via a simple keyword argument. Then we can have a convolution shootout over a variety of images to determine which one would make for the best default :) | |
But the way I see it, they're just different strategies for doing the same thing. The advantages and disadvantages of each strategy can be explained in the docstring (just as @astrofrog's implementation is already doing for the different boundary strategies). | |
The downside I see to this approach is that each implementation has some optional arguments that are in no way compatible with the other, which could lead to confusion. Though I tend to think that with a bit of cleaning up, and well organized documentation, this can be mitigated. | |
<author>eteq</author> | |
@iguananaut - perhaps you should copy this comment into #182 now that it exists as a separate PR? | |
<author>keflavich</author> | |
Noticed while writing convolve_fft (#182) tests - convolve may change the kernel array (modify in-place). This happens if kernel.dtype.kind == 'f'. I think this can be solved by replacing the initial checking routines with "kernel = np.asarray(kernel,dtype='float')" as @mdboom suggested on #182. If you want to keep that behavior, it should be documented. | |
<author>astrofrog</author> | |
@keflavich - good catch! I'll fix this. | |
</issue> | |
<issue> | |
<author>taldcroft</author> | |
Fix table tests to pass on 32-bit machines | |
<author>taldcroft</author> | |
This passes tests on linux x86. | |
<author>embray</author> | |
Works for me on my Linux machine (64-bit) and on Windows. I say go ahead and merge and I'll cancel #145. | |
</issue> | |
<issue> | |
<author>embray</author> | |
Fixes path arguments to the test command on Windows. When pytest.main()... | |
<author>embray</author> | |
... is passed the arguments as a string, it runs that string through shlex.split(). However, it does not use the 'posix' argument to shlex -- or more specifically it doesn't set it to false for Windows, as I did here. That causes Windows paths to be mangled. So we can fix that on our end.
Sorry, I meant to push this branch to my fork, but I think I typed 'upstream' by accident. We can delete it once it's merged. | |
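For reference, the workaround amounts to splitting the argument string ourselves with the right `posix` flag, roughly (a sketch, not the exact patch):
```python
import shlex
import sys

def split_test_args(arg_string):
    # posix=False keeps backslashes intact, so Windows paths survive the split.
    return shlex.split(arg_string, posix=(sys.platform != 'win32'))
```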
<author>mdboom</author> | |
I can confirm this doesn't break things in my environments, so I think this is probably fine to merge. | |
<author>embray</author> | |
Sounds good to me then. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Fix setup.py sdist | |
<author>mdboom</author> | |
Fixes the sdist command so it produces a reasonable tarball, which includes generated Cython *.c files if any. | |
- The sdist command in setuptools/distribute is annoying because it doesn't respect package_data -- that information has to be duplicated in MANIFEST.in (see the sketch after this list). It appears to work well enough to just use the distutils one instead, which on Python 2.7 and later respects package_data.
- sdist requires that all paths are relative to the source root -- some of the code that was generating C extensions was using absolute paths. | |
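A minimal sketch of the duplication being described -- the package and file names here are made up, and this is not the actual astropy setup.py:

```python
# Hypothetical setup.py fragment: with the plain distutils sdist on
# Python 2.7+, files listed in package_data go into both the install
# and the source tarball.
from distutils.core import setup

setup(
    name='examplepkg',
    version='0.0',
    packages=['examplepkg'],
    package_data={'examplepkg': ['data/*.hdr']},
)

# With the setuptools/distribute sdist, the same files would also have to be
# repeated in MANIFEST.in, e.g.:
#
#     recursive-include examplepkg/data *.hdr
```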
<author>embray</author> | |
I'm not sure I understand this. The setuptools sdist will automatically include *anything* that's under version control in the package. I don't know why we even have so much crud in MANIFEST.in--the only things it needs to explicitly mention are any generated files we want to include in the distribution. | |
<author>mdboom</author> | |
That feature apparently only works with SVN and CVS (according to the docs and my own experimentation). Git support requires a plug-in. I personally prefer the explicit approach that uses package_data -- but an alternative to this may be to help the user install the setuptools-git plugin somehow. | |
<author>eteq</author> | |
Yeah, that's why I ended up putting all that stuff in ``MANIFEST.in`` in the first place - it doesn't respect the version-control stuff. Also, affiliated packages are free to use whatever version control they want, so I think we can't count on any particular VCS detail. | |
The only thing that worries me about this: is there anything else that makes use of ``MANIFEST.in``? Perhaps ``bdist``? I don't think we want a situation where *some* commands use ``MANIFEST.in`` and others do not... ideally we would remove ``MANIFEST.in`` completely. | |
<author>eteq</author> | |
I looked more closely and realized what you're doing here doesn't actually remove the need for ``MANIFEST.in``, just the need to duplicate the `package_data` entries there... So as it stands now, the package_data and source files are automatically included, but the rest is not (particularly docs), correct?
So that makes my earlier question even more pertinent - if the only place ``MANIFEST.in`` matters is for ``sdist`` and ``bdist``, might it be better to override these to not use ``MANIFEST.in`` at all? That would make it easier to figure out what actually makes it into the tarballs (something of constant irritation/confusion for me in other contexts...). | |
<author>mdboom</author> | |
AFAICT, `MANIFEST.in` is only used for `sdist`. `bdist` simply packages everything that would be installed -- which doesn't require `MANIFEST.in`. | |
I think this solution is a reasonable middle ground -- `MANIFEST.in` only needs to include everything that goes into a source distribution that isn't already going in an install (or binary distribution). Without this change, some files need to go *both* in package_data and MANIFEST.in. This at least means each file only needs to be mentioned in one place, albeit in different places. | |
If we can get the version control stuff working with git though, I can see how that would be even better -- I haven't had any luck, though. | |
<author>embray</author> | |
The setuptools-git package works for me--I guess that's why I forgot setuptools didn't have git support built in (I could have sworn they did add that in recent versions of distribute, but I guess I'm wrong). | |
There's no reason I can think of that a *user* would need it. They're not likely to be creating any source releases. Or if they do have some crazy reason to do so, it should be documented that they should have setuptools-git to make sure everything is included properly. MANIFEST.in would only be needed to explicitly include some generated files (version.py most importantly, and any generated c files). | |
That said, I don't really have a problem with applying this patch either. It's an annoying oversight on the part of distribute that it ignores package_data in its sdist. I might actually submit an upstream bug about that, because looking at the code for its sdist, there's a place where it probably *should* add any package_data files, but just forgets to.
<author>mdboom</author> | |
It's not just that a user doesn't need this functionality, it's that it creates an additional dependency that has to be applied in places where it's built and tested etc. All this came about because I wanted to make a Jenkins test case that built a source tarball and then used that to build in another environment. (To ensure that the Cython files were getting correctly built and included). So there's still some benefit (admittedly small) to having something that works out-of-the-box to make things more readily testable. | |
At a minimum, I think if we choose to rely on setuptools-git, then we should add a hook to sdist that raises an exception if setuptools-git is not installed. | |
<author>embray</author> | |
That's a good point. In that case I wouldn't bother messing with sdist at all and just do as this patch suggests. I'm not thrilled about having to maintain the MANIFEST.in. But at least as long as package_data is handled properly it won't be too frequently necessary to add things to it manually. | |
And I like the idea of having an sdist build test. | |
<author>embray</author> | |
FWIW I submitted an upstream pull request with a fix for this issue for distribute: https://bitbucket.org/tarek/distribute/pull-request/4/make-sdist-work-more-like-distutils-wrt | |
This fix makes distribute's builtin sdist work more like distutils. Though even if the fix gets accepted we can't rely on it for now. (I think I actually have commit rights on distribute, but I'm not going to push a behavior-modifying change without some review :) | |
</issue> | |
<issue> | |
<author>embray</author> | |
Link _iterparse.so with -Bsymbolic | |
<author>embray</author> | |
This had me beating my head against the wall for a while. | |
For some reason one of the vo tests has always been failing for me, due to what appeared to be some problem deep within the XML parsing. | |
It turns out the problem is that something in my Python is loading my system's libexpat.so well before _iterparse gets imported. So it ends up using libexpat functions from my system's expat instead of the one it was compiled with. My system's libexpat is old and has some incompatibilities with the one we're compiling with, the details of which are unimportant. | |
Admittedly, this patch in its current form may not work on all platforms, but I thought I'd put it out there for comment. | |
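For readers following along, a rough sketch of what the flag looks like in a ``setup.py`` -- the extension name, source path, and flag placement are illustrative, and as noted the option is GNU-ld specific:

```python
from distutils.core import setup
from distutils.extension import Extension

iterparser = Extension(
    'astropy.utils.xml._iterparser',
    sources=['astropy/utils/xml/src/iterparse.c'],
    # -Bsymbolic asks the linker to resolve the extension's own global symbols
    # (here, the statically linked expat) within the .so itself, so an
    # already-loaded system libexpat cannot shadow them.
    extra_link_args=['-Wl,-Bsymbolic'],
)

setup(name='example', version='0.0', ext_modules=[iterparser])
```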
<author>mdboom</author> | |
Is this on OS-X? | |
<author>embray</author> | |
Looks like it's not. I'll update accordingly. | |
<author>mdboom</author> | |
I asked because OS-X is usually where these flat namespace issues come up. What platform are you seeing it on? | |
<author>embray</author> | |
I don't know if this is generally an issue on OSX. It doesn't seem to be on any of our OSX build machines, though I'm using the python in irafdev on them. It turns out on our Linux machines libexpat gets used by libfontconfig, which is in turn used by libXft.
<author>mdboom</author> | |
Still seems odd to me. The expat in astropy is statically linked -- I don't quite understand how anything dynamically linked could clobber its namespace. What's the actual symptom? | |
<author>mdboom</author> | |
Hmm... even with `libexpat` loaded in the python process (`import Tkinter` is one way to force this), I can't reproduce this. I don't doubt this patch works, but I'd like to understand better. If you do an "nm" on "_iterparser.so" are there any expat undefined symbols? The only ones I get are from Python. | |
<author>embray</author> | |
I get the same thing from "nm". This surprised me too, which is why it led to so much head-bashing. But all the XML_* symbols are globally bound, which means the loader searches for them in order starting from the executable itself, and then through each shared library in the order they were loaded:
``` | |
2936: symbol=XML_ParserCreate_MM; lookup in file=python [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/lib64/libpthread.so.0 [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/lib64/libdl.so.2 [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/lib64/libutil.so.1 [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/bray/sc1/root/lib/libtk8.5.so [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/bray/sc1/root/lib/libtcl8.5.so [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/usr/lib64/libX11.so.6 [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/lib64/libm.so.6 [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/lib64/libc.so.6 [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/lib64/ld-linux-x86-64.so.2 [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/usr/lib64/libXft.so.2 [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/usr/lib64/libXrender.so.1 [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/usr/lib64/libfontconfig.so.1 [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/usr/lib64/libfreetype.so.6 [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/usr/lib64/libXau.so.6 [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/usr/lib64/libXdmcp.so.6 [0] | |
2936: symbol=XML_ParserCreate_MM; lookup in file=/lib64/libexpat.so.0 [0] | |
2936: binding file astropy/utils/xml/_iterparser.so [0] to /lib64/libexpat.so.0 [0]: normal symbol `XML_ParserCreate_MM' | |
``` | |
Whereas if I use `-Bsymbolic`, this tells the loader to first look for symbols locally within _iterparser.so itself, ignoring the GLOBAL/LOCAL binding. So I don't see anything like the above output from the loader.
I wonder if it has to do with differences in the loader. | |
<author>embray</author> | |
I'll add that the fact that my python is linked to my own tcl/tk libs shouldn't have anything to do with it. That's pretty much the only difference between my python and the one on irafdev. | |
<author>mdboom</author> | |
Closed by #160. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Only export the Python entry points from iterparse.so.
<author>mdboom</author> | |
A less brutal solution to #159. | |
Since I still can't reproduce the root problem, @iguananaut: can you let me know if this resolves it? If so, we should probably do the same for astropy.wcs.
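A rough sketch of the version-script approach, with the script contents shown in a comment -- the file names and the exact export list are illustrative, not the actual patch:

```python
# Hypothetical build fragment: hide every symbol except the module init
# function with a GNU ld version script.  The script file would contain
# something like:
#
#     {
#         global: init_iterparser;   /* PyInit__iterparser on Python 3 */
#         local: *;
#     };
#
from distutils.core import setup
from distutils.extension import Extension

iterparser = Extension(
    'astropy.utils.xml._iterparser',
    sources=['astropy/utils/xml/src/iterparse.c'],
    extra_link_args=['-Wl,--version-script=astropy/utils/xml/src/iterparse.map'],
)

setup(name='example', version='0.0', ext_modules=[iterparser])
```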
<author>embray</author> | |
This works for me too. I'm not sure what makes this *less* brutal. If anything it's more brutal by forcing all non-Py symbols to be local, so help it. On the other hand, the intent here is much clearer, in the infinitely remote chance that someone were to examine the binary. So I'm happy with it. | |
</issue> | |
<issue> | |
<author>nhmc</author> | |
adding info function to print available constants, plus extra constant values | |
<author>nhmc</author> | |
Here are a couple of minor changes to the constants sub-package. I've added a few more useful values (Rydberg, eV, Jansky and Mpc) along with a function info() that returns a string with a quick summary of all the available constants in the si and cgs modules. I've also added these summaries to the si and cgs docstrings, so users can easily do, say: | |
```python | |
from astropy.constants import cgs
cgs? | |
``` | |
from ipython to see what values are available. There may be better ways to generate such a summary than with the info() function, any suggestions are welcome! | |
Cheers, | |
Neil | |
<author>nhmc</author> | |
Aha, I just discovered the constant docstring is dynamically updated with the available constants by __init__.py. So I'll have to remove the info functions and docstring changes. I'll update the request when I get the chance. | |
<author>embray</author> | |
Right, I don't think an `info()` function is really necessary. But I do like how the docstring is formatted--it could be useful if the si and cgs modules had an auto-generated docstring in that format. The constants package `__init__` just lists what constants are available, whereas the docstring in the si and cgs modules should list each constant with its value and units in the respective system. | |
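As a rough illustration of the kind of auto-generated module docstring being discussed (everything below is a simplified stand-in, not the actual astropy.constants code):

```python
import types

class Constant(object):
    # Simplified stand-in for a constant with a value and units.
    def __init__(self, name, value, units):
        self.name, self.value, self.units = name, value, units

constants = [Constant('G', 6.673e-11, 'm3 / (kg s2)'),
             Constant('c', 2.99792458e8, 'm / s')]

# Stand-in for the real `si` module; in the package this would be the imported
# sub-module whose docstring __init__.py rewrites at import time.
si = types.ModuleType('si', 'Physical constants in SI units.\n')

rows = ['%-4s %-15g %s' % (c.name, c.value, c.units) for c in constants]
si.__doc__ += '\nAvailable constants:\n\n' + '\n'.join(rows)
print(si.__doc__)
```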
<author>nhmc</author> | |
With the latest changes, __init__.py updates the docstrings of cgs and si, listing values and units of the constants, and the info functions are removed. Let me know if this looks ok or if it needs any more work.
<author>eteq</author> | |
I really like the auto-filled table in the docstring! We might want to consider slightly altering the format later to look nicer in sphinx, but for now that part is perfect. | |
Regarding the added constants however... The Rydberg constant is definitely a good one to include. I think `eV` should *not* be here, though, because the exact same number is available as `e` (after all, that's how an eV is defined...). | |
And as for `Mpc` and `Jy`, I think these are slated for inclusion in the forthcoming `units` subpackage that is currently being worked on by @perrygreenfield (#126 and #164), as they are more unit systems than they are fundamental constants (and actually, the same is true for `eV`). Given that, it would be confusing to have them appear in both `units` and `constants` (and hence I would advocate for either including them now but removing them as soon as `units` is working, or just not including them at all until `units`). @perrygreenfield and @astrofrog, do you have any thoughts on this?
<author>astrofrog</author> | |
I agree that e, Mpc, and Jy don't need to be included, but the auto-filled table is nice! | |
<author>nhmc</author> | |
The constants tables should also be displayed as tables in Sphinx, so hopefully they already look nice :) | |
Regarding the eV being the same value as e: that's true for SI, but not for cgs units (which everyone should be using anyway ;) So I think it needs to be included. | |
If Mpc and Jy are going to appear in a units package, that's great, I'll remove them here. Note kpc is already present, should it be removed too? | |
As an aside, I think the line between 'fundamental constant' and units is kind of fuzzy -- is the earth mass a fundamental constant or a unit? I haven't thought carefully about this, but my intuition says it's a bad idea to try and separate the two into different packages. | |
(edited because replying via email makes ugly-looking comments!) | |
<author>embray</author> | |
FWIW I think eV should be in the units package. I agree with @nhmc that sometimes the distinction between units and constants can be fuzzy (after all, most of the units are defined in relation to some constant quantity). But I think in this case that `e` is the more fundamental constant quantity here, whereas `eV` is a sort of compound derived unit from it. Though it's still based on a constant so I agree the distinction here is fuzzy. | |
<author>nhmc</author> | |
Ok, I've removed Jy, Mpc and eV. | |
<author>eteq</author> | |
Ok, looks great! Merging. | |
And to clarify, it may be that we will want to add some things like these later, depending on exactly how the `units` package ends up getting organized... But I'm thinking it's better to wait until then to add them, rather than add them now and then remove them later (we want to avoid breaking peoples' code).
(Also, I hadn't noticed that `kpc` and `pc` were both in there... on reflection it probably would have made more sense to either leave those out or include `Mpc` as you suggest, but as I was saying above, I think we want to leave things mostly steady on that front until we have decided exactly how `units` and `constants` are subdivided).
<author>eteq</author> | |
Thanks for the contribution, @nhmc ! | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Documentation re-organization/homogenization | |
<author>astrofrog</author> | |
The current Sphinx documentation needs to be improved to make it more homogeneous (since a lot of the current docs were pulled in from existing projects). In a lot of cases, the import statements could be improved, e.g. | |
>>> import astropy.io.fits | |
>>> hdulist = astropy.io.fits.open('input.fits') | |
could be better written as: | |
>>> from astropy.io import fits | |
>>> hdulist = fits.open('input.fits') | |
<author>astrofrog</author> | |
With #115 merged and #187 about to be (thanks @eteq!), do we want to now discuss how to lay out the documentation consistently across sub-packages? This is probably one of the more major things to be dealt with before 0.1.0. I think we just need to lay out some guidelines for documenting sub-packages, and have each sub-package maintainer update their docs to bring them in line with the guidelines. | |
<author>eteq</author> | |
Definitely a good idea - I have some specific ideas that also tie into package organization that I was planning to post shortly to astropy-dev. | |
There's also a need to reformat some of the guidelines and re-organize the developer section - this shouldn't take too long, but I'm waiting for #187 to merge because it's kind of painful to edit the docs until that's in.
Another item that might take a bit more time (and which I'd be happy to farm out to anyone interested...) would be editing the github instructions to better reflect what our in-practice workflow has become - I've seen a few people express confusion that I think is mostly because those pages are not entirely applicable to the way we've actually ended up doing things. | |
<author>astrofrog</author> | |
Here are some ideas for how we can improve what's already there: | |
* Limit the table of contents on the front page to the main sections, not sub-sections. It's too long at the moment, and makes it hard to see what's what. Also, we need more of a narrative on the front page, and maybe we can have three separate tables of contents - one for installation etc., one (the main one) for subpackages like WCS, FITS, table, etc. and one for more technical things and for developers.
* Make the section titles more descriptive - 'astropy.io.wcs Documentation' is not a useful title. We should remove 'Documentation' from all the headings, since it's obvious it's documentation, and put things like 'Reading and Writing FITS files (astropy.io.fits)'. The subject should be the main title, not the location. This is exactly what Scipy does: http://docs.scipy.org/doc/scipy/reference/ - and I think it's a much better way to go. | |
* The main pages for the sub-packages could be improved a lot. For example, in http://astropy.readthedocs.org/en/latest/wcs/index.html, the information about the licenses etc. should be at the bottom of the page, not the top. There should be a clearer narrative introduction to the package. I *don't* like how scipy handles this though, as they completely separate tutorial from reference: http://docs.scipy.org/doc/scipy/reference/ - this leads to confusion because there are two scipy.signal (for example) sections in the docs. I agree there should be a separation between the two, but within the sub-packages, not at the top level. I think we should lay down a common structure for the main page of all sub-packages, e.g:
* Introduction | |
* Quick tutorial | |
* Reference and more specific tutorials (i.e. the main table of contents of the docs) | |
* Auto-generated API? | |
* Acknowledgments and Licenses (optional) | |
I think we should try and do as much of this as possible before 0.1, because at the moment, I feel that if I told someone to install astropy, they wouldn't actually know what to do with it (apart from use the 'legacy' packages). | |
<author>astrofrog</author> | |
Just cc-ing @taldcroft, @iguananaut, and @mdboom to bring this discussion to their attention. | |
@eteq - I know you want to have a discussion on the mailing list, but I figured that to get things moving, we could at least have a discussion with the current sub-package maintainers so we can converge on this somewhat before going to the list. | |
<author>embray</author> | |
Thanks @astrofrog for summarizing some of the specifics of what needs to be done--much of this is exactly what I've been thinking and I couldn't have put it better. | |
You already suggested this in your rough layout, but let me emphasize that all the generated API docs would be better off moved to the end. The "manual" section of the docs should follow more of a continuous narrative. Sure, there isn't necessarily overlap in what all the subpackages do. But it will be better to describe, in some sequence, what you can do with Astropy and how to do it. That goes to your example of having a chapter on "Reading and Writing FITS Files" rather than just "astropy.io.fits Documentation", and assuming the user knows to look there to find out how to do whatever it is they're trying to do. | |
I've been revising some of the PyFITS documentation as it is, and some of those revisions will probably help improve the narrative in that section of the Astropy docs as well. | |
And eventually we'll want to add some examples that show multiple parts of Astropy working together. But that's a bit more work. | |
<author>eteq</author> | |
@iguananaut - I'm not sure I fully understand what you're saying here... you think *all* of the API reference should be at the end, or each sub-package should have an API reference at the end of it? (see my comments below) | |
@astrofrog - I agree with all your points - in particular I agree with your point that the API/reference should not be a separate independent section. In a project like astropy where the subpackages are fairly separate in what they do, it's a lot more confusing to have the reference separate, and it clearly leads to confusion in the scipy docs where sometimes things that should be in the docstrings end up in the docs, and vice versa. | |
I also think we need to plan around the API docs always being the most up-to-date, and in some cases the only complete documentation. We should definitely *aim* for the goal of having good narrative docs, but I think, practically speaking, it will be a long time (if ever) before that goal is achieved, given the constantly-evolving nature of astronomer-produced software. By including the API as part of the documentation, it places a higher premium on ensuring the docstrings are enough to use anything new that gets merged in, and we should always reject anything for which that isn't true. | |
<author>eteq</author> | |
@astrofrog - agreed we should get things moving on this if there are the resources to do so. There are a lot of things I have been "meaning to do," and of course we all know how those schedules slip :) Perhaps the thing to do is to create a sub-package template like the one @astrofrog has outlined here and see what people think about that - with that, it will be easier to sub-divide the work among various people for different packages. If this sounds good to you, I can go ahead and make a PR with just such a template to send out to the mailing list/discuss here.
A few bullet points of my own for "goals for 0.1": | |
* A clean-up of the developer documentation. This includes streamlining the git/github docs now that we have a pretty good flow going for how we manage things. It could also include re-organizing/clarifying the developer guidelines, although I'm not sure exactly how much can be done there. I had planned on doing this, but I clearly haven't yet, so if someone else wants to start on this, I certainly won't stop them :) | |
* An introduction/overview that defines some terms better and explains the distinction between the Astropy project and the package, as well as the role of affiliated packages. This is sort of in the vision, but should be a lot more apparent (this might also end up on the web site). I will definitely do this part. | |
<author>embray</author> | |
@eteq For now I can agree that the different subpackages of astropy are segregated enough in their functionality that it may be fine to leave the docs somewhat segregated as they currently are. But my hope (and maybe this is overly optimistic) is that we (and hopefully other developers from the community) will start looking at these subpackages and finding ways that they can better be integrated with each other and work together (where it makes sense to). At that point it might not make as much sense to treat them as segregated sub-products in the docs or otherwise. | |
(I don't know if that made much sense--I'm barely on my first cup of coffee.) | |
<author>eteq</author> | |
@iguananaut I see what you're saying, and I agree that there are major advantages to be gained by cross-connecting the subpackages. However, I still think at the *API* level, it makes sense to subdivide by package - sphinx makes it easy to jump between the subpackages as long as you reference everything correctly, so the location doesn't matter that much from a reference point of view, and I think it does improve readability if you just want to use stuff from the one package. | |
I definitely agree that once we have some good cases, we want to provide some useful examples of how connecting different subpackages is useful (already we have `astropy.io.vo` and `astropy.io.table`, I suppose). Perhaps we should add a root-TOC section called something like "Examples of subpackage interactions" or something, to encourage people to populate it ASAP? | |
<author>kbarbary</author> | |
I agree with @iguananaut that once I know what something does, I'm mainly interested in getting to the API in the docs. But, I also see why you might want to keep the reference/API sections inline. A possible compromise: On the main index page for the docs, the list of subpackages can look like | |
* Configuration system (astropy.config) [Reference/API] | |
* Reading and writing FITS files (astropy.io.fits) [Reference/API] | |
* ... | |
where Reference/API is a link to the Reference/API section of the subpackage docs. This would leave everything organized by subpackage, but make it easy for people who know what they want to get there fast. | |
Everything else that @astrofrog and @eteq suggest sounds good to me. | |
<author>taldcroft</author> | |
I agree with the overall view of having API docs available within subpackages. I think this matches the most common user workflow, which is getting to the topic of concern (I need to read a FITS file), scanning narrative docs, then if necessary looking at API docs to get the exact details right. Also, I haven't thought through the full details but I suspect this will also make it easier to maintain standalone packages with less fiddling (but this should not be a big driver for astropy design decisions in any case). | |
The compromise from @kbarbary may work, but I'm a little worried about how it will look when there are 10 or 15 main topics with all these [Reference] links hanging off the end. In any case I would propose generally calling these docs the "Reference" docs and not using the word "API", which is not informative to many astronomers. | |
Would it be possible to please the power users by generating a page with a Reference TOC that just links into all the subpackage Reference sections? You wouldn't be able to navigate between subpackage reference sections (with next/prev) but this would probably be OK. | |
<author>astrofrog</author> | |
@taldcroft - the idea of a reference TOC sounds reasonable, and should be easy enough to do. | |
@eteq - a template for docs layout sounds good! | |
<author>eteq</author> | |
I also really like the idea of a separate reference TOC - I'm a little unsure how to do that with sphinx, but I'm sure we can figure it out (although it may require another extension).
<author>eteq</author> | |
Much of the stuff likely to get done for v0.1 is in #277, aside from the idea of a reference TOC. I'll make a separate issue for that and close this. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Implement affiliated package registry | |
<author>astrofrog</author> | |
The Astropy vision states that *Affiliated packages will be listed in a central location (in addition to PyPI) that will allow an easy installation of all the affiliated packages, for example with a script that will seamlessly download and install all the affiliated packages. The core package will also include mechanisms to facilitate this installation process.* | |
An easy way to do this would be to simply have a file in the astropy.github.com repository containing a list of affiliated packages with names, URLs, etc., as a JSON file. This file can then be accessed by an affiliated package installer. This also means that affiliated packages are added to the JSON file through proper commits to the astropy.github.com repository, which prevents just anyone from registering a package. I can take a stab at this, unless anyone has a better idea for how to implement this?
<author>astrofrog</author> | |
Here's an example of what the JSON file could look like: | |
{ | |
"packages": [ | |
{ | |
"name": "specutils", | |
"maintainer": "", | |
"installable": false, | |
"home_url": "https://github.com/astropy/specutils", | |
"repo_url": "http://github.com/astropy/specutils.git", | |
"tarfile_url": null, | |
"pypi_name": null | |
}, | |
{ | |
"name": "pyidlastro", | |
"maintainer": "Tom Aldcroft", | |
"installable": false, | |
"home_url": "https://github.com/astropy/pyidlastro", | |
"repo_url": "http://github.com/astropy/pyidlastro.git", | |
"tarfile_url": null, | |
"pypi_name": null | |
}, | |
{ | |
"name": "photutils", | |
"maintainer": "Rene Breton", | |
"installable": false, | |
"home_url": "https://github.com/astropy/photutils", | |
"repo_url": "http://github.com/astropy/photutils.git", | |
"tarfile_url": null, | |
"pypi_name": null | |
}, | |
{ | |
"name": "astropysics", | |
"maintainer": "Erik Tollerud", | |
"installable": true, | |
"home_url": "http://packages.python.org/Astropysics/", | |
"repo_url": "http://github.com/eteq/astropysics.git", | |
"tarfile_url": "http://pypi.python.org/packages/source/A/Astropysics/Astropysics-0.1.dev-r1142.tar.gz", | |
"pypi_name": "astropysics" | |
} | |
] | |
} | |
we could host it at http://www.astropy.org/affiliated/registry.json. Then for people to submit their packages, we can have a form somewhere on www.astropy.org that they fill in, and - after review - we can add the package to registry.json. I don't think we should have a fully automated system, because we should review each submission carefully. | |
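To illustrate how an installer might consume such a file (the URL and field names are just the proposal above -- nothing here exists yet):

```python
import json
import urllib2  # Python 2 style, matching the package at the time

url = 'http://www.astropy.org/affiliated/registry.json'  # hypothetical location

registry = json.load(urllib2.urlopen(url))
for package in registry['packages']:
    if package.get('installable'):
        print('%s (%s): %s' % (package['name'],
                               package.get('maintainer') or 'unknown',
                               package['repo_url']))
```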
<author>eteq</author> | |
I didn't realize you had made a suggestion here, otherwise I would have answered sooner... This approach sounds good to me and was the sort of thing I had in mind... but a few detail-level concerns: | |
* If we put it in the astropy.github.com repo, should we be worried that we might accidentally over-write it when we alter the web page? I don't think we ever decided on a definite workflow for the web site, so it *might* involve force-pushing and overwriting past versions of the page... | |
* It may be that we want to add further information later (things like C-library requirements, or whatever)... it's probably fine to just have the JSON files leave that out, but then for the entries above that have "null", it probably makes sense to just leave those fields off completely.
* I'm not sure what the point of "installable" is? Shouldn't the presence of, say, a pypi name or a tarfile_url be enough to indicate it can be installed?
<author>astrofrog</author> | |
@eteq - sorry for the delay, here are my replies to your points: | |
* since the main website is now separate from the documentation, and given that we aren't going to be updating the repository *that* often initially, how about putting it in an ``affiliated`` directory in the astropy-website repo? Then we could have affiliated.astropy.org point to astropy.org/affiliated (in case we move it some day) and have the json file at affiliated.astropy.org/registry.json (which is a pretty generic URL that we can point elsewhere in future) | |
* agree about just leaving off things that are set as null | |
* installable was meant to mean whether the package is ready to be used - but what I think I really should have put is ``stable : true/false``. Then maybe the installer would have the option to install unstable packages (off by default). This would then be useful for affiliated packages not yet on pypi. But I don't have strong feelings about this, so I can remove this. The idea was just that initially, probably none of the affiliated packages are going to be stable and on pypi, so the registry is likely to be empty. | |
<author>astrofrog</author> | |
@eteq - should I go ahead and also start working on an installer? I'm happy to do it, but didn't want to duplicate efforts in case you already have something. | |
<author>eteq</author> | |
@astrofrog - just saw this now: | |
**re:web site** Your plan sounds good (``affiliated.astropy.org`` -> ``astropy.org/affiliated``). | |
**re:installable** Ah that makes sense - yeah, I like ``stable`` better. | |
**re: installer** I've already done a lot of work on similar stuff in other places that I can port code from, so I think it makes sense for me to do it. I've been waiting for this format to get implemented and a web site set before starting work on this, because obviously that has to come first. | |
<author>astrofrog</author> | |
Ok, I opened a pull request on the website to place an initial version of the registry - I added it to the source, and it gets included in the sphinx build, which avoids any risk of it getting overwritten. | |
See https://github.com/astropy/astropy-website/pull/6 | |
<author>eteq</author> | |
astropy/astropy-website#6 was accepted, so I will close this (and open a new one for the installer-related tools) | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Implement astropy.units sub-package | |
<author>astrofrog</author> | |
This issue is to indicate that a units sub-package should be implemented before the 0.1.0 release if possible. | |
@perrygreenfield has already provided an example implementation in #126. | |
<author>astrofrog</author> | |
Done in #370! | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Implement astropy.coords sub-package | |
<author>astrofrog</author> | |
This issue is to indicate that a coords sub-package should be implemented before the 0.1.0 release if possible | |
<author>embray</author> | |
Now that #471 is closed and the `astropy.coordinates` package has been merged I think we can close this placeholder issue. Any future changes to the coordinates package can go through new issues. | |
</issue> | |
<issue> | |
<author>taldcroft</author> | |
Parse non-compliant VOTable-1.1 files from Chandra footprint server? | |
<author>taldcroft</author> | |
Someone on the CfA pythonusers list posted a question about parsing a VOTable-1.1 file that is delivered by the Chandra Data Archive footprint server. He used this script: | |
```python | |
import urllib2 | |
from coatpy import Siap, Sesame | |
import vo | |
from vo import table as votable | |
url='http://cxc.harvard.edu/cgi-gen/cda/footprint/get_vo_table.pl?strict=1' | |
foot=Siap(url) | |
params={} | |
params['RA']=40.669 | |
params['DEC']=-0.013 | |
params['SIZE']=0.0 | |
params['FORMAT']='image/fits' | |
hla = Siap(url) | |
f=open('testfoot.xml','wb') | |
f.write(hla.getRaw(**params)) | |
f.close() | |
``` | |
The file `testfoot.xml` is available at http://hea-www.harvard.edu/~aldcroft/tmp/testfoot.xml | |
This file does not comply with VOTable-1.1 and astropy.io.vo.table('testfoot.xml', pedantic=False) complains that a `PARAM` does not have the required `datatype` attribute. The same file is successfully parsed by topcat however. | |
The question here is whether the `pedantic=False` mode can be opened up to allow reading of this brand of non-compliant file. | |
<author>mdboom</author> | |
It can be -- we'll have to assume the intended datatype is "char". Have the Chandra authors been informed? It doesn't even validate against the schema... | |
<author>taldcroft</author> | |
Regarding the Chandra authors being informed, we've sent email to the group we think is responsible. On Monday we'll find out if it's the right people. In the meantime I'll inform the original poster that he can try your vo/missing-datatype branch. Thanks for the quick turnaround. | |
<author>taldcroft</author> | |
BTW, I tried reading the problematic file with this branch and it appears to have succeeded. But I don't know enough to confirm if all the expected data were fully parsed. | |
<author>astrofrog</author> | |
By the way, on my machine, ``volint`` crashes (which I presume, however invalid the file, should not happen?) | |
Validation report for testfoot.xml | |
1: W03: Implictly generating an ID from a name 'INPUT:POS' -> | |
'INPUT_POS' | |
Traceback (most recent call last): | |
File "/Users/tom/Library/Python/2.7/bin/volint", line 5, in <module> | |
pkg_resources.run_script('astropy==0.0.dev843', 'volint') | |
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources.py", line 499, in run_script | |
self.require(requires)[0].run_script(script_name, ns) | |
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources.py", line 1235, in run_script | |
execfile(script_filename, namespace, namespace) | |
File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev843-py2.7-macosx-10.6-x86_64.egg/EGG-INFO/scripts/volint", line 5, in <module> | |
astropy.io.vo.volint.main() | |
File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev843-py2.7-macosx-10.6-x86_64.egg/astropy/io/vo/volint.py", line 19, in main | |
table.validate(args.filename[0]) | |
File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev843-py2.7-macosx-10.6-x86_64.egg/astropy/io/vo/table.py", line 213, in validate | |
print_code_line(line, w['nchar'], file=output) | |
File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev843-py2.7-macosx-10.6-x86_64.egg/astropy/utils/console.py", line 588, in print_code_line | |
line = line[offset + 1: ] | |
TypeError: slice indices must be integers or None or have an __index__ method | |
<author>astrofrog</author> | |
Regarding my previous comment, should line 587 be: | |
new_col = min(width // 2, len(line) - col) | |
instead of | |
new_col = min(width / 2, len(line) - col) | |
since we are using Python 3 division? | |
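A tiny reproduction of why the floor division matters, assuming the module really does import Python-3-style division as suggested (the values below are made up):

```python
from __future__ import division

line = 'x' * 100
width, col = 79, 0

new_col = min(width / 2, len(line) - col)    # 39.5 -- a float under true division
try:
    line[new_col + 1:]
except TypeError as exc:
    print(exc)   # slice indices must be integers or None or have an __index__ method

new_col = min(width // 2, len(line) - col)   # 39 -- floor division keeps an int
print(len(line[new_col + 1:]))               # slicing works again
```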
<author>astrofrog</author> | |
(sorry, I should have opened a separate issue, but I originally thought the error was linked specifically to this file) | |
<author>mdboom</author> | |
A fix for this is now included in this pull request. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Implement astropy.time sub-package | |
<author>astrofrog</author> | |
This issue is to indicate that a time sub-package should be implemented before the 0.1.0 release if possible | |
<author>adrn</author> | |
What are the details behind this? I wrote a simple wrapper to Python's datetime() object that understands sidereal time and MJD/JD, but other people may have more sophisticated code that they'd like to implement? | |
<author>astrofrog</author> | |
See http://groups.google.com/group/astropy-dev/browse_thread/thread/f04dab0e74f319b3/8631c9de85bedcfa?lnk=gst&q=time#8631c9de85bedcfa | |
<author>astrofrog</author> | |
And also: https://github.com/astropy/astropy/wiki/astropy.time | |
<author>astrofrog</author> | |
This has been done by @taldcroft in #332 | |
</issue> | |
<issue> | |
<author>taldcroft</author> | |
Add support for both new- and old- style formatting | |
<author>taldcroft</author> | |
This should close issue #148. | |
Note the use of `.tolist()` to address the problem from http://projects.scipy.org/numpy/ticket/1675. I think it's OK but I wouldn't normally want to do this. | |
<author>embray</author> | |
Aside from the above comment, this is pretty clever. +1 | |
</issue> | |
<issue> | |
<author>taldcroft</author> | |
Table pretty printer | |
<author>taldcroft</author> | |
Add functionality to output a table nicely using the defined `format` attributes, probably using the io.ascii fixed width table functionality. | |
Should this be `Table.__str__` or something else like `Table.print`? | |
<author>mdboom</author> | |
It's nice how the Numpy ndarray __str__ and __repr__ return abridged versions of the array if it's large. This makes it useful in ipython where you don't accidentally print out some enormous thing. I would (if possible) reserve `__str__` for this, and use something else for printing out a complete table. | |
<author>taldcroft</author> | |
I agree @mdboom that `__str__` and `__repr__` should definitely abridge the data as needed. The question is whether `__str__` should attempt to make the table "pretty" and apply any user-supplied formatting. The default numpy output is functional but not especially human-readable for wide tables. | |
For instance we could keep the numpy `__repr__` behavior but have `__str__` map to a `print` method that pretty-prints the table (fixed-width formatted output) and limits the number of rows. The `print` method would have some kwargs to control output, e.g. an optional `row_limit`, maybe a way to get the output to a file, etc. `print` would probably be a thin wrapper around `write_ascii`, which gives access to the full flexibility of `io.ascii`. | |
For outputting the entire table we were talking about being able to register hooks to `io` packages so you might do something like `table.write_ascii(outfile, *args, **kwargs)` or `table.write_vo(outfile, ...)`. I would probably make `outfile` optional with a default of sys.stdout. | |
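A toy sketch of the behaviour being proposed -- the class and method names are invented for illustration and are not the eventual ``astropy.table`` API:

```python
class Table(object):
    # Toy stand-in for the discussion above; a real table would wrap a numpy
    # structured array and the io.ascii fixed-width writer.
    def __init__(self, names, rows):
        self.names, self.rows = names, rows

    def pformat(self, row_limit=10):
        # Build an abridged, fixed-width rendering of the table.
        lines = [' '.join('%10s' % n for n in self.names)]
        for row in self.rows[:row_limit]:
            lines.append(' '.join('%10s' % v for v in row))
        if len(self.rows) > row_limit:
            lines.append('... (%d rows total)' % len(self.rows))
        return '\n'.join(lines)

    def __str__(self):
        # __str__ gives the abridged view so huge tables stay safe to print.
        return self.pformat()

t = Table(['ra', 'dec'], [(i * 1.5, -i * 0.5) for i in range(100)])
print(t)   # shows the first 10 rows plus a "... (100 rows total)" marker
```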
<author>mdboom</author> | |
@taldcroft: Agreed on all points. | |
<author>eteq</author> | |
@taldcroft - +1 from me, as well, for the plan here. | |
<author>astrofrog</author> | |
@taldcroft - can this be closed since it was implemented in #234? | |
<author>taldcroft</author> | |
Yes. | |
</issue> | |
<issue> | |
<author>wkerzendorf</author> | |
problem with affiliate package setup.py | |
<author>wkerzendorf</author> | |
Hi guys, | |
I tried the following today with https://github.com/wkerzendorf/specutils/blob/master/setup.py and this happened. You guys probably know what's going on. If not I can debug this further. | |
Wolfgangs-MacBook-Pro:specutils wkerzend$ python setup.py develop | |
Freezing version number to specutils/version.py | |
Traceback (most recent call last): | |
File "setup.py", line 41, in <module> | |
setup_helpers.get_debug_option()) | |
File "/Users/wkerzend/scripts/python/astropy/astropy/version_helper.py", line 204, in generate_version_py | |
f.write(_get_version_py_str(packagename, version, release, debug)) | |
File "/Users/wkerzend/scripts/python/astropy/astropy/version_helper.py", line 158, in _get_version_py_str | |
major, minor, bugfix = _version_split(version) | |
File "/Users/wkerzend/scripts/python/astropy/astropy/version_helper.py", line 37, in _version_split | |
bugfix = 0 if len(versplit) < 3 else int(versplit[2]) | |
ValueError: invalid literal for int() with base 10: '' | |
<author>astrofrog</author> | |
This works for me. It may be that you are using an old version of Astropy. Could you try installing the latest Astropy package to see if that helps? | |
<author>embray</author> | |
This does look familiar to me--I seem to recall this bug from several weeks ago. Though maybe it cropped up again? | |
This brings up a good point though--we should have a test build of the sample affiliated package. | |
<author>wkerzendorf</author> | |
if you get the master branch of my tree you can experience the error yourself (I hope). In addition, this only happened once, now I seem to get a different error. | |
<author>astrofrog</author> | |
@wkerzendorf - I am not getting the error with your latest version. What I meant before is that you need to update to the latest master version of the astropy repository (not affiliated packages) because ``version_helper.py``, which is used by the affiliated packages, was updated. The line numbers in your traceback don't match the current version. Let us know if you still get the error once you update to the latest upstream master of astropy. | |
To get the original error message, try deleting ``specutils/version.py``. | |
<author>eteq</author> | |
@wkerzendorf - I also have no problem building with the latest version of astropy installed... You might try doing | |
``` | |
import astropy | |
print astropy.__version__ | |
``` | |
And then we can tell for sure what version you have. | |
<author>wkerzendorf</author> | |
so I did: | |
git fetch upstream | |
git merge upstream/master (while being on the master branch of my own repo) | |
and it still doesn't work | |
>>> import astropy | |
>>> astropy.__version__ | |
'0.0dev-r526' | |
>>> | |
I'm probably doing something wrong | |
<author>astrofrog</author> | |
What is: | |
git remote -v | |
and | |
git log | head -1 | |
? | |
<author>astrofrog</author> | |
The latest version is '0.0.dev835' (or maybe even more recent now) so you're definitely using an old version. | |
<author>wkerzendorf</author> | |
upstream git://github.com/astropy/astropy.git (fetch) | |
upstream git://github.com/astropy/astropy.git (push) | |
commit 390d26d95cc210c4b55a6013dd33ea5800c5952f | |
<author>astrofrog</author> | |
390d26d95cc210c4b55a6013dd33ea5800c5952f is from Jan 10th, so definitely outdated. What is the full output of: | |
git fetch upstream | |
git merge upstream/master | |
? | |
<author>wkerzendorf</author> | |
so the problem was that I accidentally created a branch called upstream/master. sorry for the confusion.
It all works now! | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Make build_sphinx use _build directory | |
<author>eteq</author> | |
This has been discussed in #102, #115, and #117. Basically, the ``python setup.py build_sphinx`` command should use the version of astropy that is in the ``_build`` directory, and ``make html`` in the docs directory should use whatever version is actually installed. | |
Note that this should be done *after* #115, because there will probably be quite a bit of incompatible changes. | |
<author>astrofrog</author> | |
@eteq - I think you fixed this in #187, so can this issue be closed? | |
<author>eteq</author> | |
Yep, this was what motivated #187 in fact. Closing! | |
</issue> | |
<issue> | |
<author>eteq</author> | |
1 test failure in vo_test | |
<author>eteq</author> | |
http://paste.pocoo.org/show/551253/ | |
This is on OS X 10.6, with 32-bit python 2.7.2 (EPD 7.1-2, although I test it on the macport python 2.7.2 and saw the same error). The error does *not* appear in python 3.2.2 on the same computer. | |
<author>astrofrog</author> | |
Interesting, I'm not seeing this issue on any 64-bit Python installation on MacOS X. So many configurations to test! I should really be testing 32-bit installations too (on 10.7). I guess that since you saw the same error with MacPorts, this is a 10.6 vs 10.7 issue rather than 32-bit vs 64-bit.
Does ``git bisect`` give any clues as to when this first occurred? | |
<author>eteq</author> | |
``git bisect`` gives "3c05807c2969a011cdd04b134ba6e761022728d6 is the first bad commit" - this came from PR #150 by @mdboom | |
<author>mdboom</author> | |
Very puzzling. I'm not sure if the failure is in the reading or the writing of the XML file. Is there any way you can send me the "regression.binary.xml" file in the TMP_DIR?
<author>eteq</author> | |
@mdboom - That was kind of fun actually... to get the temporary directory to last long enough I had to inject a `time.sleep` call inside the test and then quickly go looking for the "regression.binary.xml" before the sleep expired...
At any rate, I don't know if there's any convenient way to attach files to github issues, so I just e-mailed it to you as an attachment.
<author>mdboom</author> | |
I'm kind of stumped. I'm curious what data is being fed to expat. Can you put a gdb breakpoint and/or printf at line 644 in iterparse.c and print the contents of "buf"? Does it start with "<?xml" or is there something else there?
<author>mdboom</author> | |
You can also just remove the line that removes the temporary directory in "teardown_module" -- it doesn't use the py.test tmpdir stuff because the TMP_DIR needs to persist between tests.
<author>mdboom</author> | |
I wonder if you're running into the problem Erik ran into here #159, and the solution in #159 or #160 works for you. You'll have to remove the `if sys.platform.startswith('linux')` statement to get it to have effect on OS-X, of course. | |
<author>eteq</author> | |
I tried the solution of #159 (using "-symbolic" instead of "-Bsymbolic", which the OS X ld man page suggests is the same thing?), with no change, and getting the trick of #160 to run on OS X fails because the "--version-script" option is not available in the OS X linker (although is there something equivalent?).
I do have a version of expat loaded from macports, but uninstalling it doesn't change anything. There's also a ``/usr/lib/libexpat.dylib`` (which I probably shouldn't remove given that I think it's built into the system), but I'm not sure how to check which one is being used... any hints?
<author>astrofrog</author> | |
@eteq - normally on Mac, if you find the .so files in the built package, you should be able to do: | |
otool -L thefile.so | |
to see the list of dynamic libraries it is linked against. In my case, I can't find any .so files that are linked against expat. | |
<author>mdboom</author> | |
otool will only display the dynamic modules specified at compile time. In the case of what was happening in #159, the expat was being picked up at runtime by virtue of Tkinter being imported into the Python interpreter (which loads expat) and then iterparse.so having global symbols referring to expat. otool doesn't know anything about the runtime environment the dynamic library will eventually be loaded into, so it can't figure that out. | |
@eteq: were you able to print out the buffer that's being passed to expat? That may give some clues (though I'm still rather stumped). | |
<author>eteq</author> | |
@mdboom: I just e-mailed you a dump of the buffer - it does indeed start with ``<?xml`` and seems to be at least well-formed XML (although when I run it through xmllint and try to validate, it gives some weird error I don't understand...) | |
And indeed ``otool`` didn't reveal anything odd... is there anything like the ``LD_DEBUG`` environment variable that I can use to see what library is actually getting loaded at runtime? I've come across the ``dyldinfo`` tool before, but it doesn't seem to actually do the loading to tell me if it actually is finding another file (or I may be mis-using it)... | |
<author>mdboom</author> | |
I don't use Mac OS-X much, so I don't know if this exists there, but on Linux I use `lsof` to display the files open by a particular process. This will show what dynamically loaded libraries the Python process has opened. | |
But that all may be a wild goose chase. What this test does is it reads in a file from the source tree, writes it out in "binary" mode (which just means it includes base64-encoded blobs) and reads that in. It's the second reading in that fails, so we know that the XML parser works at least once -- which leads me to believe that maybe after all it isn't just a matter of linking to the wrong version of the library. (I originally suspected it might be linking to a differently-configured version of the library). | |
So you can probably see that this doesn't make a whole lot of sense. It could be some sort of interaction with global state -- iterparse.c has no global variables and a new parser object is created for each file that is parsed. I don't believe expat has any global variables that would matter either. As a sanity check, if you parse the regression.binary.xml file directly, as such, does it work? | |
``` | |
from astropy.io.vo import parse | |
with open("regression.binary.xml", "rb") as fd: | |
votable2 = parse(fd, pedantic=False) | |
``` | |
<author>eteq</author> | |
If I do the parsing directly in an ipython session, I get the same error as the test produces: ``ValueError: 1:4: not well-formed (invalid token)``.
I also checked `lsof`... and the only thing that had "expat" in it was ``/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/lib-dynload/pyexpat.so``. I'm a little confused by the whole situation here, though... the comments in #159 imply to me that expat is dynamically bound... but when I look at the code it seems like the cextern version of expat is statically bound... so what should I be looking for in ``lsof``? | |
As another possibly-useful tidbit, if I put the line ``_python_based=True`` at the start of _test_regression, the tests all pass... so I guess that means it has something to do with the C iterparser somehow...?
<author>eteq</author> | |
One additional interesting discovery: If I comment-out lines 93-97 in ``astropy/utils/xml/iterparser.py``, the test in question passes (the one testing gzip stuff fails, but that's to be expected). | |
Perhaps this means that the ``fd.seek`` required to rewind after looking for the magic number is somehow doing something that internally screws up the C-level parser? | |
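For reference, the pattern under suspicion looks roughly like this (a simplified sketch, not the actual ``iterparser.py`` code); the concern is that the Python-level ``seek`` after peeking may leave the underlying C-level file position somewhere the C parser doesn't expect:

```python
import gzip

def open_xml_or_gzip(filename):
    # Simplified "sniff the magic number, then rewind" pattern.
    fd = open(filename, 'rb')
    magic = fd.read(2)
    fd.seek(0)                  # rewind after peeking at the first two bytes
    if magic == b'\x1f\x8b':    # gzip magic number
        return gzip.GzipFile(fileobj=fd)
    return fd
```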
<author>mdboom</author> | |
To explain the dynamic linking -- expat is included in iterparse.so, but the expat symbols are globally exposed, so the linker dynamically resolves those symbols to the first expat library that was loaded in the process (even though they could just as easily have been resolved within the same .so), which could be some other expat -- such as that pyexpat.so that has apparently been imported at some point (presumably by some other code that is being run at Python startup). This is a behavior of the dynamic linker that was new to me, but after @iguananaut discovered it and I read more on it, it seems that that is indeed how it works. The solution was to hide all symbols except the single entry point needed to import the module from Python. An alternative solution was to mark the module as symbolic, which tells the linker not to rebind global symbols.
But I think that's all barking up the wrong tree because the parser does work once. If that was the problem, I would expect it to fail completely from the start.
Yes -- the failure is in the C-based iterparser -- or an interaction between it and the file handle that is being passed to it. | |
That commenting out those lines resolves it is useful information, but puzzling. When you printed out the buffer that is passed to expat it included the first two bytes of the file ("<?"). All it does is call fread on the file handle and pass the buffer to expat -- expat doesn't actually read the file itself, so there's nothing complicated going on in there. And the whole file was read in a single read call.
Does changing line 113 of iterparser.py to `yield fd.read` resolve the issue? (Revert the gzip change you made of course). It's not an ideal solution because it forces the reading to happen through a Python function call, but it might provide more clues. | |
<author>eteq</author> | |
@mdboom - your explanation of the linking made perfect sense - thanks! | |
And changing 113 from ``yield fd`` to ``yield fd.read`` does indeed seem to do the trick... | |
<author>mdboom</author> | |
Ok... we're getting close. Certainly using fd.read is a solution, but I'd like to do that only when needed, and I don't think we're yet at the bottom of what creates the situation in which it fails.
<author>eteq</author> | |
It turns out that when I did the test before, I was overwriting the file I used to check ``buf``, so I was seeing the successful buffer from a *later* test. When I append to the file instead of overwriting, and look at all the runs of ``iterparse.c``, some of them are indeed incomplete... but if I remove the gzip/`seek` code block, the ``buf`` is always a complete-looking XML file.
In the cases where it fails, though, it's more than just missing the first two characters or something like that - it seems to be chopping a large chunk at the start of the file (~80 lines)... | |
<author>mdboom</author> | |
Perhaps (and I haven't looked at the Python code to verify this) `seek` is only shifting a pointer in a secondary buffer over the real file object -- and then when we get the real one from C it's in the wrong place. I've attached a pull request to do the seek from C. Does that resolve the issue? (I can't reproduce here, so it's just a wild guess). | |
<author>embray</author> | |
I haven't followed this issue very closely, but is it possible that this is due to the use of an `io.BufferedReader` object to read the file? I was bitten by this recently. `io.open` with 'rb' mode returns a `BufferedReader` by default, which actually isn't supposed to support seeking. The fact that it allows you to seek at all is a bug (this is fixed in the cpython tip--it throws an exception if you try to seek a `BufferedReader`).
This could explain your suggestion that seek is updating a secondary pointer, but not updating the real file handle, or something of the like. To get a `BufferedRandom` you need to open the file in binary read/write mode: | |
`f = io.open(filename, 'rb+')` | |
Of course, you don't want that either because you want the file to be read-only. I think this is a real annoyance of the io module design. It's meant to work equally well for non-file I/O streams (like a socket) as it does for a file, which is why objects like `BufferedReader` exist. Except it's not smart enough to know that this *is* a file on disk and should allow random access. | |
The only way I've found to create a read-only `BufferedRandom` is like this: | |
`f = io.BufferedRandom(io.open(filename, 'rb', buffering=0))` | |
which looks suspiciously Java-ish. Though you can also just open with buffering=0 to get the raw `FileIO` object and be done with it. I don't know that there's anything to gain from Python-level buffering in cases like this. | |
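(The `buffering=0` route mentioned above, for reference -- the filename is just an example:)
```python
import io

raw = io.open('cube.xml', 'rb', buffering=0)
print(type(raw))   # <class '_io.FileIO'> -- no Python-level buffering at all
```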
<author>embray</author> | |
Turns out what I just wrote is actually misleading--`BufferedReader` should support seek if the underlying raw object can support it (such as a file). That said, I've still run into problems with this before, though I can't remember the full context now. | |
<author>mdboom</author> | |
@iguananaut: All that makes sense, but in this case it's not a `BufferedReader` -- it's an old-style Python file: `open('foo.xml', 'rb')`
<author>embray</author> | |
I see. Well in that case I have no idea. | |
<author>eteq</author> | |
This PR as it stands doesn't fix the failure, but I noticed the following when it compiled that led me to a fix: | |
``astropy/utils/xml/src/iterparse.c:908: warning: passing argument 1 of ‘lseek’ makes integer from pointer without a cast`` | |
Looking more closely at the code, it appears that ``self->fd`` is actually a `PyObject*`, rather than being an integer file descriptor - ``self->file`` is actually the file descriptor. So if I change the line added by this pull request to ``lseek(self->file, 0, SEEK_SET);`` instead of ``lseek(self->fd, 0, SEEK_SET);``, the failure disappears!
I went in and looked at the C-level file object code in python, and the seek code seems to use fseek, but with a bunch of compiler directives for workarounds that may or may not be in operation here... is there any easy way to check all of the compiler flags that were true for the version of python I'm running (it's a binary...)? | |
(Also, as an aside: shouldn't the seek call, either in C or python, be ``(l)seek(<fd>,-2, SEEK_CUR)`` instead of ``(0,SEEK_SET)``? It's always possible someone will want to start the parsing part-way through a file or something...) | |
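(To make the aside concrete, here are the two rewind strategies expressed with the Python os-level wrappers around lseek; the filename is hypothetical:)
```python
import os

fd = os.open('cube.xml', os.O_RDONLY)
os.read(fd, 2)                   # consume the two magic-number bytes
os.lseek(fd, -2, os.SEEK_CUR)    # step back relative to the current position...
os.lseek(fd, 0, os.SEEK_SET)     # ...versus unconditionally rewinding to the start
os.close(fd)
```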
<author>embray</author> | |
@eteq Your python should include a python-config executable (or python2.7-config, etc.). Use `python-config --cflags` to see the compiler flags it was built with. | |
<author>mdboom</author> | |
Oops -- the `self->fd` vs. `self->file` was just a mistake. You are correct that's how it should be. | |
As for parsing part way through the file -- I suppose it's possible someone may want to have the XML embedded within a larger file. But changing to `SEEK_CUR` won't make it work. You said the symptom was that the file pointer was at some arbitrary point deep within the file. If we don't know what that offset is, we can't seek back to the beginning of the XML. Safer to just require that the file only contains the XML content and that the beginning really is the beginning. If it's a multi-part file, creating some sort of non-real-file Python file-like object to wrap it will work. | |
<author>eteq</author> | |
@iguananaut - whoops, sorry, I meant the compiler *defines*, not the flags. That is, the stuff in the ``pyconfig.h`` file that apparently gets generated when you want to compile Python from source. In particular, I'm wondering about `HAVE_LARGEFILE_SUPPORT` because it seems to impact how `file.seek` operates. I could try compiling and installing a new python from source and hope that it reproduces the error, but I'd like to avoid that... | |
@mdboom - I see your point here... in principle, ``SEEK_CUR`` *should* work, but it clearly isn't.
I've looked a bit into the `file.seek` function, and *if* the binary I'm using (from the EPD) is configured the same as the default configuration for OS X 10.6, it appears that `file.seek` ends up doing the equivalent of `fseeko`, which is identical to the stdio `fseek` except that it takes an `off_t` as the seek position instead of a `long`. Can you think of any reason why this would give a different result than the `fseek` call we're doing here? (In contrast, the `io` module, which is what Py 3.x is using, seems to use `lseek` without ever touching any `fseek`, so that might be why the problem isn't present there?)
<author>mdboom</author> | |
Note the solution here uses an `lseek` and thus a file descriptor not a `FILE *` pointer. | |
Python 3 changed to exclusively use file descriptors, which if I understand correctly, don't have the multi-threading problems that passing `FILE *` pointers around does. I don't know if the problem we're seeing is related to multi-threading. | |
<author>eteq</author> | |
Well, perhaps we're seeing *why* Python 3 decided to use all fds - clearly something unintended is happening here... | |
So is the pull request here intended as a permanent fix? That is, should this pull request be merged in its current form, or are you still considering a different way to resolve this? | |
<author>mdboom</author> | |
I think this seems like a reasonable fix -- barring the "typo" of `self->fd` vs. `self->file` you pointed out -- there's no real performance penalty (unlike passing a `read` function would have). Only downside is the assumption that the XML contents start at the beginning of the (real) file, but I think that's a reasonable assumption to make. We can document that if we think it's necessary. | |
<author>eteq</author> | |
It's probably worth a quick note somewhere in the docstrings, because it would be difficult to diagnose if someone actually were to run afoul of it... but just a brief mention should do the trick. If a use case appears where it really matters, we can re-address the matter then.
<author>mdboom</author> | |
I've updated the pull request. Feel free to merge if it works for you. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
wcs: Windows C linking fixes | |
<author>mdboom</author> | |
Make astropy.wcs linkable from C on Windows. | |
This is a backport of r2510 in pywcs. | |
On Windows, we need to define some macros to prevent importing wcsset() from the "standard" library, which conflicts with the wcsset() in wcslib. These macros need to be defined both for building astropy.wcs itself and for any external C libraries that link to it. | |
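(A sketch of what that looks like on the build side -- the macro name here is a placeholder, not the actual define used by pywcs/astropy.wcs; the point is just that the same `define_macros` entry has to be used both when building `astropy.wcs` and when building external C code that links to it:)
```python
from distutils.core import Extension

wcs_ext = Extension(
    'astropy.wcs._wcs',                      # name for illustration only
    sources=['astropy/wcs/src/astropy_wcs.c'],
    # Hypothetical macro name -- prevents the "standard" library wcsset()
    # declaration from clashing with the wcsset() provided by wcslib:
    define_macros=[('ASTROPY_WCS_NO_SYSTEM_WCSSET', '1')],
)
```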
<author>embray</author> | |
I see no reason to delay this--it comes straight from pywcs. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
setup/version-legacy-packages | |
<author>mdboom</author> | |
This adds an equivalent version to the legacy package so version checks against the legacy packages work. | |
I understand this requires manual updating as legacy packages are merged into astropy -- if anyone can think of a better way I'm open to it, but this solves my immediate need for now. | |
<author>embray</author> | |
+1 that's a good idea. | |
I have to say, this whole legacy shim apparatus is pretty awesome. I can't wait for the first Astropy release so we can start trying to use it. | |
One thing I might add to this: in addition to `__version__`, it might be nice if each package could pass `add_legacy_alias` a whole dict of arbitrary variables to include in the generated `__init__.py`.
For example, PyFITS has an `__svn_version__` variable. I don't know that there's any *code* that relies on it being there, but some users might check it. I could use this as a place to record the svn revision each time I do a merge to astropy from pyfits.
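(A hypothetical sketch of that suggestion -- the `extras` keyword doesn't exist yet, and the import path, version, and revision values are made up; this is just the shape of the idea:)
```python
from astropy import setup_helpers   # import path illustrative

def get_legacy_alias():
    return setup_helpers.add_legacy_alias(
        'pyfits', 'astropy.io.fits', '0.1',
        extras={'__svn_version__': 'r1234'})   # hypothetical extra variables
```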
<author>taldcroft</author> | |
I'm guessing I'll want to add asciitable as a legacy package. | |
<author>mdboom</author> | |
This now includes @iguananaut's extension to include arbitrary values in the legacy shim's `__init__.py`. | |
<author>eteq</author> | |
Perhaps a short section should be added in the developer documentation to mention how this system works? I can actually imagine this mechanism might be useful to people who might not be following the development too closely (including e.g. adding legacy shims for old packages to *affiliated* packages rather than the astropy core).
<author>eteq</author> | |
@mdboom - are you thinking you will add the aforementioned documentation section in this pull request, or should I merge this and just add an issue about it to be dealt with later?
<author>mdboom</author> | |
Sure, let's go ahead and merge. I had meant to add developer docs, but it's a little tricky -- the usage of this is already documented, so that's the basics covered, at least. | |
<author>eteq</author> | |
Ok, I'll try to look for a space to put this in the work I'm doing for #162, and afterwards you can fill it in whenever you find the time. Thanks! | |
</issue> | |
<issue> | |
<author>taldcroft</author> | |
Add legacy package for io.ascii to asciitable | |
<author>taldcroft</author> | |
Add legacy package to setup_package.py for io.ascii to map to asciitable. (Pending pull request #174.)
<author>astrofrog</author> | |
Now that #174 is merged, can this be done? | |
<author>taldcroft</author> | |
Yup, will do. If I understand correctly, all I need to do is edit `setup_package.py` in `astropy/io/ascii/` and add something like:
```python | |
def get_legacy_alias(): | |
return setup_helpers.add_legacy_alias('asciitable', 'astropy.io.ascii', '0.8.0') | |
``` | |
Is that correct? | |
<author>astrofrog</author> | |
That *should* be it, but we're discovering some issues with the legacy layers (see #193), so maybe you could hold off until we've resolved that? | |
<author>taldcroft</author> | |
I'll wait until further notice. | |
<author>astrofrog</author> | |
Now that #174 and #194 are merged, could you address this issue ahead of 0.1? | |
<author>taldcroft</author> | |
@astrofrog - will do over the weekend. | |
<author>taldcroft</author> | |
I've decided not to have asciitable as a legacy package of io.ascii. After cutting the cord I was immediately able to clean out a whole bunch of crufty code and start planning for real integration with the Table class. This issue is therefore closed with no action required. | |
My current branch for this development (io-ascii) passes tests but I need to scrub docs a bit more to make sure everything is consistent. | |
cc @eteq @iguananaut @mdboom
</issue> | |
<issue> | |
<author>mdboom</author> | |
Test legacy shims | |
<author>mdboom</author> | |
This is basically a placeholder. | |
It would be nice to run the tests from "legacy" packages against the legacy shims in astropy. To do this properly will require some serious package namespace voodoo magic, but it would be nice to have a way to determine how backward compatible these legacy shims really are. (While care has been taken to maintain a consistent interface, some divergence has already happened in the name of better integration with other parts of astropy etc.) | |
<author>embray</author> | |
I think we can consider this overcome by circumstances, since the importance of the legacy shims has been somewhat downgraded to "use with caution". | |
<author>mdboom</author> | |
I agree with @iguananaut. Closing. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Bug with Header.fromTxtFile | |
<author>astrofrog</author> | |
As of PyFITS 3.0.5 and astropy.fits, the following header cannot be read in correctly: | |
```
SIMPLE = T /
BITPIX = -32 /
NAXIS = 3 /
NAXIS1 = 11 /
NAXIS2 = 12 /
NAXIS3 = 32 /
EXTEND = T /
CRVAL1 = 57.6599999999 /
CRPIX1 = -799.000000000 /
CDELT1 = -0.00638888900000 /
CTYPE1 = 'RA---SFL' /
CRVAL2 = 0.00000000000 /
CRPIX2 = -4741.91300000 /
CDELT2 = 0.00638888900000 /
CTYPE2 = 'DEC--SFL' /
CRVAL3 = -9959.44378305 /
CRPIX3 = 1.00000 /
CDELT3 = 66.4236100000 /
CTYPE3 = 'VELO-LSR' /
```
For example, using: | |
```python
from astropy.io import fits
h = fits.Header()
h.fromTxtFile('cube.hdr')
```
produces the following Exception: | |
```
Traceback (most recent call last):
  File "test.py", line 8, in <module>
    h.fromTxtFile('cube.hdr')
  File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev843-py2.7-macosx-10.6-x86_64.egg/astropy/io/fits/util.py", line 212, in deprecated_func
    return func(*args, **kwargs)
  File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev843-py2.7-macosx-10.6-x86_64.egg/astropy/io/fits/header.py", line 1770, in fromTxtFile
    padding=False)
  File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev843-py2.7-macosx-10.6-x86_64.egg/astropy/io/fits/header.py", line 395, in fromfile
    return cls.fromstring(blocks, sep=sep)
  File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev843-py2.7-macosx-10.6-x86_64.egg/astropy/io/fits/header.py", line 312, in fromstring
    return cls(cards)
  File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev843-py2.7-macosx-10.6-x86_64.egg/astropy/io/fits/header.py", line 92, in __init__
    self.append(card, end=True)
  File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev843-py2.7-macosx-10.6-x86_64.egg/astropy/io/fits/header.py", line 1036, in append
    if str(card) == blank:
  File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev843-py2.7-macosx-10.6-x86_64.egg/astropy/io/fits/card.py", line 435, in __str__
    return self.image
  File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev843-py2.7-macosx-10.6-x86_64.egg/astropy/io/fits/card.py", line 613, in image
    self.verify('silentfix')
  File "/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev843-py2.7-macosx-10.6-x86_64.egg/astropy/io/fits/verify.py", line 65, in verify
    raise VerifyError('\n' + x)
astropy.io.fits.verify.VerifyError:
Card image is not FITS standard (unparsable value string: T /
BITPIX = -32 /
NAXIS =). Fixed card to meet the FITS standard: SIMPLE
Unfixable error: Unprintable string '\nBITPIX = -32 /\nNAXIS ='
```
<author>embray</author> | |
Confirmed this on PyFITS trunk. This actually shouldn't be a problem with PyFITS 3.0.5 since it's using completely different code for this (just confirmed, this actually works in 3.0.5). But this is a problem on trunk, and in Astropy.
By the way, don't use Header.fromTxtFile--it's deprecated. Use `python -Wd` to enable deprecation warnings (since they're not enabled by default on Python 2.7 and up) to see if your code is using any other deprecated functions. Instead use `Header.fromtextfile`. It's the exact same thing--fromTxtFile is now just an alias to the new function. | |
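(For anyone reading this later, the non-deprecated spelling of the reproduction above would be:)
```python
from astropy.io import fits

h = fits.Header.fromtextfile('cube.hdr')
```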
Anyways, the problem here is that it's still expecting each line to be padded out to 80 characters before it reaches the newline. Obviously this is wrong and should be fixed. | |
<author>embray</author> | |
Okay, I've got this fixed in pyfits' trunk. I need to do a merge at some point, as there are a few other bug fixes. | |
<author>astrofrog</author> | |
Thanks! | |
<author>astrofrog</author> | |
@iguananaut - I think this is fixed with #251 (works for me now). Can this be closed? | |
<author>embray</author> | |
Yes. This should be fixed. | |
</issue> | |
<issue> | |
<author>embray</author> | |
Merge latest fixes from pyfits before 0.1 release | |
<author>embray</author> | |
This issue serves as a general reminder for me to merge/test any unmerged bug fixes from PyFITS into astropy.io.fits prior to the 0.1 release. | |
<author>astrofrog</author> | |
Closing this since the latest changes have been merged in in #251 | |
<author>astrofrog</author> | |
(feel free to reopen if there were other fixes you wanted to commit before 0.1) | |
<author>embray</author> | |
Yeah, this was meant to stay open--there will still be additional merges. | |
<author>astrofrog</author> | |
Oops, sorry :-) | |
<author>astrofrog</author> | |
@iguananaut - can this be closed now? | |
<author>embray</author> | |
Uhhhh hold on. I think I have one more merge to do. | |
<author>eteq</author> | |
@iguananaut - this can be closed now, right? | |
<author>embray</author> | |
Yup. | |
</issue> | |
<issue> | |
<author>embray</author> | |
wcs extension rebuild every time I run `setup.py test` | |
<author>embray</author> | |
This seems to be due to astropy/wcs/include/docstrings.h being updated every time, even if there's no reason to. I think the rebuild is triggered merely because its modtime changes, even though the content doesn't.
This is a minor annoyance, but one that could probably be fixed somehow at some point. | |
<author>mdboom</author> | |
FWIW: I can't reproduce. It shouldn't be updating docstrings.h every time -- only if the generated content has changed. Is its content changing on your machine for some unrelated reason?
<author>embray</author> | |
None of the content is changing. It appears that just the modtime is changing, so distutils still detects it as "modified" and recopies it to build/, which somehow or other triggers a rebuild of all wcslib and all related source. | |
<author>embray</author> | |
I considered that it might be a newline issue, but that's not it. This is on Linux, and the file on disk has UNIX newlines. | |
<author>mdboom</author> | |
I'm a bit stumped. All I can think to suggest is to step through `setup_helpers.py:write_if_different` and see whether the comparison is working correctly. | |
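(For context, roughly what such a helper does -- a sketch, not the actual astropy implementation: only rewrite the file when the generated content actually differs, so the modtime stays put and distutils has nothing to recopy or rebuild.)
```python
import os

def write_if_different(filename, data):
    # Read the existing file, if any, and compare against the new content.
    if os.path.exists(filename):
        with open(filename, 'rb') as fd:
            original_data = fd.read()
    else:
        original_data = None
    # Only touch the file when something actually changed.
    if data != original_data:
        with open(filename, 'wb') as fd:
            fd.write(data)
```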
<author>embray</author> | |
I put a breakpoint in write_if_different, and when it gets to docstrings.h I get this: | |
``` | |
for d in difflib.ndiff(original_data.splitlines(), data.splitlines()): print d | |
/* | |
DO NOT EDIT! | |
This file is autogenerated by astropy/wcs/setup_package.py. To edit | |
its contents, edit astropy/wcs/docstrings.py | |
*/ | |
#ifndef __DOCSTRINGS_H__ | |
#define __DOCSTRINGS_H__ | |
#if defined(_MSC_VER) | |
void fill_docstrings(void); | |
#endif | |
- extern char doc_DistortionLookupTable[375]; | |
? ^ | |
+ extern char doc_DistortionLookupTable[374]; | |
? ^ | |
- extern char doc_K[154]; | |
? ^ | |
+ extern char doc_K[153]; | |
... | |
``` | |
and so on... | |
<author>embray</author> | |
Stepping further through the setup.py run, it seems that docstrings.h is written to twice: The first time decrements each char array size by one. The next write increments them again, so there's no change to the contents, though the file gets rewritten twice. Odd... | |
<author>mdboom</author> | |
There's something fishy that's causing `generate_c_docstrings()` to run twice. We should probably get to the bottom of that, as it might indicate more work is being done than necessary. | |
In the meantime, I think we can get around the docstring length changing problem by not modifying the docstrings in-place. (Again, I'm not able to reproduce the problem, so this might not be quite right). | |
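(A hypothetical illustration of the "don't modify in-place" fix -- the docstring content and helper below are made up; the idea is that computing the C array lengths on a local copy keeps the generated header identical no matter how many times the generator runs.)
```python
docstrings = {'DistortionLookupTable': 'Lookup table docstring...\n'}

def generate_c_docstrings(strings):
    lines = []
    for name, doc in sorted(strings.items()):
        doc = doc.strip() + '\0'   # local copy; strings[name] is left untouched
        lines.append('extern char doc_%s[%d];' % (name, len(doc)))
    return '\n'.join(lines)

# Running it twice now yields byte-identical output:
assert generate_c_docstrings(docstrings) == generate_c_docstrings(docstrings)
```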
<author>embray</author> | |
That seems to have done the trick! Nice job fixing something you couldn't even reproduce. | |
The fact that it's only happening for me is disconcerting though... | |
<author>eteq</author> | |
This seems to work just fine for me without any problems (although I never saw this bug in the first place). | |
I'll go ahead and merge it, but perhaps one of you should add an issue indicating that ``generate_c_docstrings()`` is acting up? | |
</issue> | |
<issue> | |
<author>wkerzendorf</author> | |
BoolMask | |
<author>wkerzendorf</author> | |
While developing the Spectrum1D class, we realized that there's a need for masks and errors that can deal with arithmetic. | |
The BoolMask is a first attempt to write a simple ndarray subclass that will create a new mask (in the maskedarray sense, masked=True) by applying a logical or to the mask of both operands. | |
```python
mask1 = BoolMask(ones((100,100)))
mask2 = BoolMask(ones((100,100)))
mask1 + 5      # -> ValueError: unsupported operand types
mask1 + mask2  # -> logical_or of the whole bool array
```
There's obviously documentation missing, but I wanted to get some initial feedback before proceeding further. | |
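(For reviewers who want to see the shape of the idea, a minimal sketch -- not the code in this pull request -- of an ndarray subclass whose `+` does the logical_or described above:)
```python
import numpy as np

class BoolMask(np.ndarray):
    def __new__(cls, data):
        return np.asarray(data, dtype=bool).view(cls)

    def __add__(self, other):
        if not isinstance(other, BoolMask):
            raise ValueError("unsupported operand types")
        # "adding" two masks means ORing the bad-pixel flags
        return BoolMask(np.logical_or(self, other))

mask1 = BoolMask(np.zeros((100, 100)))
mask2 = BoolMask(np.ones((100, 100)))
combined = mask1 + mask2   # logical_or of the two boolean arrays
```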
<author>eteq</author> | |
I don't understand the purpose of this object - it appears to just be a ``numpy.ndarray`` that has a boolean dtype. Wouldn't it be simpler to just use ``numpy.ndarray``s and implement the mask ORing and error propagation in the Spectrum1D classes? That seems less confusing/magical.
Or am I misunderstanding the point of this object? | |
<author>embray</author> | |
Regardless of whether or not this object is necessary (I have no idea?) it might as well also support `__or__`, `__and__`, and `__xor__`. | |
Also `__mul__` should be an and, not an or. Aaaaand if I put on my "mathematician hat" I would probably also complain about `__div__` even being defined :) | |
<author>wkerzendorf</author> | |
So the main difference between this object and a normal bool numpy.ndarray is that it handles arithmetic differently. The logical_or is the right way to handle operations for a bool mask: Imagine two images (2d nddata objects in our case). They have some bad pixels masked. When you add (multiply, divide, ...) them, you want the new nddata object to be masked where one or the other (or both) images had bad pixels. So this is the reason for the logical_or. @iguananaut you're right that mathematically this does not make sense for just individual objects, but these objects are only ever supposed to be used in conjunction with an nddata object.
I'm planning on having similar VarianceError arrays - where an addition or subtraction results in `sqrt(a**2 + b**2)` for the error arrays. | |
I'm doing this because when implementing arithmetic in the Spectrum1D object I need to do something with the error arrays and mask arrays. I believe the easiest thing is to just do self.mask + operand.mask and the mask array object itself will know what that means. For example, it will throw an exception when the masks are not of the same type.
I hope that makes it clearer. What do you guys think? | |
<author>wkerzendorf</author> | |
I should add that the arithmetic operation of masks and errors is not specific to spectrum1d. A simple boolean mask is the same for an image, a spectrum, a cube, ..... . That's why I didn't implement it in Spectrum1D (well I did for showing what I'm doing, but it should live here at the core of NDData). | |
<author>eteq</author> | |
@wkerzendorf - thanks for the clarification. | |
I guess my thought is that this is over-engineering the problem. If I were to look in the spectrum and see | |
``` | |
def __sub__(self,other): | |
newflux = self.flux - other.flux | |
newmask = self.mask - other.mask | |
return Spectrum(self.dispersion,newflux,newmask) | |
``` | |
I say to myself: "What does it mean to subtract two masks? That must be array subtraction." Now you've reimplemented it here to make ``-`` into ``logical_or``, but I don't know that unless I go and look at the BoolMask object... and if all I want to do is understand how spectra are added in the code, I don't even know that `self.mask` is a special BoolMask object, because everything else is a numpy array. In contrast, the following: | |
``` | |
def __sub__(self,other): | |
newflux = self.flux - other.flux | |
newmask = self.mask | other.mask | |
return Spectrum(self.dispersion,newflux,newmask) | |
``` | |
Now I know exactly what's going on, because it's a standard numpy array... and it doesn't take up any more space in the code than the other implementation. | |
More generally, there *are* going to be lots of ways to implement errors in the various other kinds of NDData subclasses... and it makes more logical sense to me that those implementations live in the particular kind of NDData rather than in an external class that has to be overridden to do anything different. | |
<author>wkerzendorf</author> | |
overengineering vs underengineering ;-). What you have implemented in your second code is how a bool mask is handled. People might not necessarily have a bool mask for their spectra, but something else, where I don't know the arithmetic. | |
In addition, a bool mask behaves the same whether it's an image, a spectral cube, etc. This means that a boolmask is the same for all children of nddata, so why not make one here? I do understand where you're coming from, but I think well-thought-out documentation would solve that.
Maybe we can do something like this: instead of just using the + operator, we use a method, so newmask = self.mask.add(other.mask); that might make it clearer.
I think this problem becomes more apparent with the errors. The community might want to have different errors to standard deviation and I think again this will be the same for all subclasses of nddata. | |
Maybe it is more clear with the method in the mask and error classes. | |
<author>eteq</author> | |
I don't see how self.mask.add(other.mask) helps the problem - it's not the operator usage itself that's confusing, but rather the whole idea that we want to "redefine" what addition means for something where it already has meaning. That's guaranteed to cause a lot of confusion, especially given that, practically speaking, a bool/"bad-pixel" mask is probably the most common kind of mask we'll encounter. | |
I see your point moreso in the case of errors, though. You're suggesting we have something like an "SDError" class, a "VarError" class, and perhaps a "LogNormalError" class and so on, is that correct? | |
If that's what you mean, there's a separate problem there: arithmetic operations on errors aren't always meaningful *unless you know the value*. Take the trivial case of ``y = x**2`` - if `x` has gaussian error of SD=`dx`, basic error analysis says ``dy = abs(2*x)*dx`` ... so you need to know *both* `x` and `dx`. Therefore a class with just the error is useless - it *has* to also know the value. | |
So the method I described above (dealing with it at the level of the NDData subclass method, rather than representing errors as objects) would solve both of these problems by having users subclass e.g., `Spectrum1D` into `Spectrum1DLogNormalError` or something like that, and they have to work out the error propagation in that subclass (but then they *don't* have to reimplement anything else, because it's still a Spectrum1D subclass). On the other hand, that might end up requiring a lot more work in those subclasses in some cases...
I'm still not entirely sure which is better, though - I see some of your points here... but I think re-defining addition for something that looks like an array is something that will lead to massive confusion. You *CANNOT* count on people to read the documentation in cases where they're likely to assume they understand what's going on. They won't. | |
<author>eteq</author> | |
Also, @wkerzendorf - can you post on astropy-dev that this discussion is happening? There are plenty of people there who don't monitor pull requests that probably have some thoughts on this issue. | |
<author>eteq</author> | |
(And encourage interested parties to look at the pull request and comment here, rather than in astropy-dev itself, so then we have a single discussion instead of two separate ones.) | |
<author>astrofrog</author> | |
While I see the use of re-defining arithmetic for error propagation, I find the above mask arithmetic very confusing. As @eteq said, when reading the code, ``mask_a - mask_b`` makes no sense to me, and I would have to go read up the BoolMask code to understand what it's doing. It also adds a layer of complication for the user, who has to do ``mask=Boolmask(mask)``, and I know how much users dislike instantiating classes inside of class instantiations... I would vote for just using boolean arrays for masks and explicitly doing the correct logical and/or operation on them.
If you did want to do something like this, it would be more correct to define an array class that contains errors and masks, and which would allow you to do e.g. ``flux_1 - flux_2`` and have masks and errors be automatically propagated (using the correct notation behind the scenes). In fact, we have such an array class, which is ``NDData``! So I would be in favor of implementing seamless addition/subtraction/etc in NDData (with error/mask/unit propagation), and overload these methods in e.g. Spectrum to add a check that the dispersions are the same. Inside the ``__add__``, etc. methods in ``NDData``, the mask arithmetic would be written properly with ``|`` and ``&``. Since all the arithmetic would be done in one place (``NDData``) there would be no need for a BoolMask abstraction because the arithmetic would be written out only once for all ``NDData`` sub-classes. Anyway, just a thought! | |
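(To make that suggestion concrete, a minimal sketch -- purely illustrative, not the actual NDData API -- of arithmetic implemented once in the container, with the boolean mask combined via `|` and standard-deviation errors added in quadrature:)
```python
import numpy as np

class NDDataSketch(object):
    def __init__(self, data, error=None, mask=None):
        self.data = np.asarray(data)
        self.error = None if error is None else np.asarray(error)
        self.mask = None if mask is None else np.asarray(mask, dtype=bool)

    def __add__(self, other):
        data = self.data + other.data
        mask = None
        if self.mask is not None and other.mask is not None:
            mask = self.mask | other.mask          # explicit logical OR
        error = None
        if self.error is not None and other.error is not None:
            error = np.sqrt(self.error ** 2 + other.error ** 2)  # quadrature
        return NDDataSketch(data, error=error, mask=mask)
```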
<author>wkerzendorf</author> | |
So I have an updated version of this idea of classed masks and errors on the spectrum1d pull request. NDData can only do so much, as you somehow need to tell nddata what kind of error or mask it is.
Masks are not necessarily bool masks. They could also work like flags (bitmasks) and then would have a different arithmetic operator (or they could be any other sort of mask that we haven't even thought of yet).
Errors have the same problem: SDErrors are added in quadrature (for example), but that does not go for variance or logarithmic errors. Somehow nddata needs to know what to do with these.
I take @eteq and @astrofrog's point that overloading the operator is confusing. So in my new implementation I have made them explicit functions .mask_add .mask_sub (for errors its .error_add .error_sub). | |
So from where I see it, the main problem is that nddata needs to have some way of knowing how to work on masks and errors (which will not always be the same operation), with operations like convolution, arithmetic and interpolation (I guess these are the three basic types). I implemented this using classes. I can see this being implemented using other methods like function pointers in the meta data, which I regard as less clean. | |
For now, I'm closing this pull request and refer for future discussion to https://github.com/astropy/specutils/pull/6 . | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
io.ascii tests broken with numpy-dev | |
<author>astrofrog</author> | |
It looks like some io.ascii tests are broken when using the developer version of Numpy with Python 3.2: https://jenkins.shiningpanda.com/astropy/job/astropy-py3.2-numpy-dev/lastCompletedBuild/testReport/ | |
@mdboom - are the tests using the latest revision of Numpy? | |
<author>taldcroft</author> | |
OK, will set up a dev numpy on my machine and try to reproduce. It looks like these are different instances of one error. | |
<author>mdboom</author> | |
Yes, the numpy-dev builds track numpy master. | |
<author>taldcroft</author> | |
Maybe this is a bug in numpy-dev? Or I don't understand strings and unicode in numpy... When I use automatic data conversion with `np.asarray` then I get unexpected and inconsistent results for strings. The specific issue is that the `table.Column` class uses `np.asarray()` on numpy `np.str_` data types in the `io.ascii` test that fails. Below is a comparison of python 3.2 with numpy 1.6 and numpy-dev | |
*numpy-1.6.0* | |
``` | |
>>> np.asarray([np.str_('abcdefghij')]) | |
array(['abcdefghij'], | |
dtype='<U10') | |
>>> np.asarray([np.str('abcdefghij')]) | |
array(['abcdefghij'], | |
dtype='<U10') | |
>>> np.asarray([np.unicode('abcdefghij')]) | |
array(['abcdefghij'], | |
dtype='<U10') | |
>>> np.asarray([np.unicode_('abcdefghij')]) | |
array(['abcdefghij'], | |
dtype='<U10') | |
``` | |
*numpy-dev* | |
``` | |
>>> np.asarray([np.str('abcdefghij')]) | |
array(['abcdefghij'], | |
dtype='<U10') | |
>>> np.asarray([np.str_('abcdefghij')]) | |
array(['abcde'], | |
dtype='<U5') | |
>>> np.asarray([np.unicode('abcdefghij')]) | |
array(['abcde'], | |
dtype='<U5') | |
>>> np.asarray([np.unicode_('abcdefghij')]) | |
array(['abcde'], | |
dtype='<U5') | |
``` | |
<author>taldcroft</author> | |
I have a simpler example now: | |
*numpy-dev* | |
```python | |
>>> d = np.array(['a']) | |
>>> np.asarray([d[0]]) | |
array([''], | |
dtype='<U0') | |
``` | |
*numpy-1.6.0* | |
```python | |
>>> d = np.array(['a']) | |
>>> np.asarray([d[0]]) | |
array(['a'], | |
dtype='<U1') | |
``` | |
<author>astrofrog</author> | |
Looks like a bug to me... Maybe worth pinging the numpy-dev list? | |
<author>astrofrog</author> | |
Maybe the bug was introduced here? https://github.com/numpy/numpy/commit/91f87e1f613630ff0ad9864017f059afcd6e57f1 | |
<author>taldcroft</author> | |
I've submitted a ticket: | |
http://projects.scipy.org/numpy/ticket/2081 | |
I'm not sure what to do in the meantime if we want to get clean test runs for the rest of astropy. | |
<author>mdboom</author> | |
I think it's fine to consider astropy "ok" as long as tests are passing for released versions of numpy. The numpy-dev tests are there for exactly this reason -- to discover upcoming bugs or breaks in compatibility in numpy nice and early. | |
<author>astrofrog</author> | |
I agree with @mdboom, though if they don't fix the bug by 1.7.0, or if this is a 'feature' and not a bug, then we'll have to include a workaround. I labeled this ticket 'Upstream fix required' (is that correct?), and I guess we could leave it open. | |
<author>taldcroft</author> | |
The numpy devel issue is fixed in 32a4a7d. See also http://projects.scipy.org/numpy/ticket/2081. In theory the numpy-dev tests will pass now. | |
<author>astrofrog</author> | |
The tests are passing now: https://jenkins.shiningpanda.com/astropy/job/astropy-py3.2-numpy-dev/ - closing the issue. | |
</issue> | |
<issue> | |
<author>keflavich</author> | |
convolve_fft - counterpart to convolve. NaN-friendly and mask-friendly, N-dimensional convolution | |
<author>keflavich</author> | |
As stated in pull request [155](https://github.com/astropy/astropy/pull/155), this is an N-dimensional FFT-based convolution code. I've done my best to ensure astropy compliance, but I probably missed some key components. Getting the unit tests in place and checking for agreement with [convolve.py](https://github.com/astropy/astropy/blob/master/astropy/nddata/convolution/convolve.py) is probably essential before this request can be accepted. | |
However, it has some nice features: | |
- N-dimensional | |
- works with masked arrays (hack-ey approach, though; is there a better one?) | |
- can be multithreaded with FFTW (is this sort of "optional requirement" allowed in astropy?) | |
<author>mdboom</author> | |
One general comment at first glance: run this through the pep8 tool for whitespace compliance, if you wouldn't mind. | |
<author>eteq</author> | |
I haven't read this in much detail yet, but two quick notes: | |
* I think we agreed that it's ok to relax the 80-char limit when it's really awkward to shorten the line - it shouldn't be common, but there are some places where going a few characters over ends up being much more readable than strict compliance. | |
* I'll take a look at your documentation and see if I can figure out the build problem soon. | |
<author>keflavich</author> | |
I'm not sure there will be any build problem on astropy, but the formatting looks all wrong when I try it in my local repository.
Which brings me to a question I should perhaps post somewhere else, but what is the workflow for adding a file to the API documentation? [the docguide](http://astropy.org/development/docguide.html) gives some description, but stops short of a complete description... I assume sphinx-apidoc + sphinx-build both need to be run?
Also, I started running test_convolve's tests with convolve_fft, and it fails pretty miserably for the 1d tests. I've clearly made some indexing mistakes that I'm going to clean up before this request should be accepted.
<author>eteq</author> | |
I'm in the middle of cleaning up the documentation and implementing a standard (mostly automatic) way of doing the API pages. I just pushed a change to master that now makes the documentation for nddata autogenerate appropriately, so if you add ``from .convolution.convolve_fft import convolve_fft`` to the ``astropy/nddata/__init__.py`` file, you should get the api documentation to appear automatically on the nddata doc page.
<author>eteq</author> | |
I just tried building the docs after adding ``from .convolution.convolve_fft import convolve_fft`` at line 17 of ``astropy/nddata/__init__.py``, and they build fine after the fixes I've described above. (Although note that that's after merging with master, because I just made a change to master that adds the API documentation for `nddata`). | |
<author>astrofrog</author> | |
Preliminary tests indicate that this is indeed going to be faster than my cython implementation for large kernels (for the moment, I'm seeing a turnover around ~32 pix - not *that* large), so this is definitely going to be useful. I do see some differences in results for some random data, so we may both have work to do to get these to agree. | |
One suggestion I would have is that - given that there are a lot of parameters - you could separate the parameters into 'Parameters' and 'Advanced Parameters' - this may be an idea for other Astropy routines too, so that starting users know which parameters to focus on. In 'Advanced Parameters' I would put parameters which don't exist in ``convolve``. If there are options you think are important in there, we could port them to ``convolve``. Anyway, just a thought! | |
<author>astrofrog</author> | |
This is a minor comment regarding pull requests - it's often a good idea to do the work and pull request on a branch in your fork that is not master (but don't change it at this stage). Though this is more of an issue if you want to work on separate pull requests at the same time.
<author>keflavich</author> | |
Thanks for the git tips. Is that because you don't want to see all of my debug-level commits in the pull request history? If I eventually pulled from my work-in-progress branch, wouldn't that bring over all the commits anyway? Or are you suggesting only bringing finished products into 'master'? | |
Re: speed - I'll work on speed tests today. | |
Re: parameters - the parameters in convolve and convolve_fft disagree pretty badly in terms of edge behaviors. I've been working to get them to behave consistently, and it will just be an issue of documenting carefully. I like the suggestion about 'advanced parameters', so I'll add that. | |
One other tidbit - convolve_fft allows even or odd dimensioned kernels, though for even-dimensioned kernels one has to be careful about placing the center. I'm going to add a helper routine 'make_kernel' to take care of that (i.e., center gaussians / boxcars / tophats correctly). | |
<author>astrofrog</author> | |
@keflavich - regarding git and branches, what I mean is the following. Say you want to implement feature A, e.g. convolve_fft. Let's say you make the changes to master as you have done, then open a pull request. Now if you want to implement feature B (some other unrelated routine, say to do with spectra) independently from A, you'll have to create a branch, but if you create a branch from A you'll include feature A in your branch (unless you create an empty branch and pull from the upstream master). Alternatively, say we decide (hypothetically) that we don't need convolve_fft, then if you want to implement any other features in your branch in future, you'll have to rewrite the history on master to get rid of those commits. This is a bit of a mess, so the simpler approach is to always have your master branch reflect upstream/master and then create branches for each different feature/pull request. If you decide to not pursue a feature, you just remove that branch. Does this make sense? | |
The issue of debug commits is unrelated - if we decide there are too many commits doing and undoing things, creating large diffs, then we or you can always squash the commits into one or a few using rebase before we merge. But that can be done regardless of whether you are using branches or working on master. | |
<author>eteq</author> | |
I'll echo some of @astrofrog's git-related comments: It's definitely better practice to merge in "feature" branches rather than master, so then you can have as many pull requests as you want going at the same time. Also, while it's not so big a deal, it's a minor pet peeve of mine to see all these references to "master" in the commit logs that are referring to the master branch belonging to someone other than @astropy.
One other git-related comment: I see some of your commits are by "Adam <keflavich@gmail.com>" and others are "Adam Ginsburg <adam.g.ginsburg@gmail.com>". Depending on how paranoid you are about e-mail accounts in the open, you might change these to whichever you want them all to be. See http://stackoverflow.com/questions/750172/how-do-i-change-the-author-of-a-commit-in-git for tips on how to rewrite history with the new names. | |
<author>eteq</author> | |
Also, perhaps you didn't notice my comment about this before, but the docstrings as you have them do not build correctly in Sphinx. You can't simply choose any name for the sections, it has to match the numpy docstring standard.
So a section like @astrofrog suggested called "Advanced parameters" will *not* work, nor will "options" or "Advanced options". Instead, you have to use one of the ones described at https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt#sections (or the equivalent section in the astropy docs). Specifically, it calls for a section called "Other parameters" to do what you want. To indicate that they are optional, you add "optional" after the type (see my inline comments above), and similarly, you have to move stuff from "options" to be under "parameters". | |
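(A minimal docstring skeleton following those sections -- the function signature and parameter descriptions here are illustrative only, not the actual convolve_fft API:)
```python
def convolve_fft(array, kernel, boundary='fill', fftn=None):
    """
    Convolve an array with a kernel using FFTs.

    Parameters
    ----------
    array : `numpy.ndarray`
        The array to be convolved.
    kernel : `numpy.ndarray`
        The convolution kernel.

    Other Parameters
    ----------------
    boundary : str, optional
        How the array edges are treated.
    fftn : callable, optional
        The FFT implementation to use.

    Returns
    -------
    result : `numpy.ndarray`
        The convolved array.
    """
```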
<author>keflavich</author> | |
@eteq - Tom said I should not change branch names at this point. To change branch names, I think I'd have to create a new pull request, no? I can do that if needed. | |
I made the documentation changes you recommended, excepting the comment about returns... I'm not really sure how to change it at the moment. | |
<author>eteq</author> | |
@keflavich - sorry, yeah, I didn't mean you should change the branches now, I was just suggesting it to you to make it easier for you in the future. | |
Also, are you able to build the docs? I have some more inline comments below that I noticed when I built the docs (so if you can build them yourself, it might be easier for you to diagnose these things yourself). | |
<author>eteq</author> | |
One final general comment: The docstring standards say that, in general, docstrings should be all in complete sentences - you're using shorthand a fair amount in some of the parameter strings. This is not a major problem (e.g., you don't have to go back and fix it unless you want to), but it's a thing to keep in mind, because writing in shorthand in docstrings often morphs into something like pseudo-code if you're not careful.
<author>keflavich</author> | |
I was finally able to build the docs. I had to `python setup.py build_ext --inplace` to get convolve to work, which I hadn't realized was necessary. I think it's compliant now, excepting a problem with `array` pointing to python's array instead of the input parameter when using backticks. I removed fftshift, since it was causing confusion and didn't do anything obviously necessary. | |
<author>eteq</author> | |
Oh, the bit about `array` hadn't occurred to me... mildly annoying, but I guess there's really no obvious solution. | |
And if you're wondering, the need to use ``--inplace`` in the build is not a feature, it's a bug :) I'm working on fixing it now along with generally cleaning up the docs build process to hopefully be more reliable. | |
<author>eteq</author> | |
@keflavich - #187 should fix the build issues you were having... you might try merging that with your branch here (locally on your own computer, I mean, in another branch that you make only for doing that, not one that you have on github) just to test that this fixes your issues. | |
One final thing to consider about this code that I realized while doing #188 : would you be averse to moving the `convolve_fft` and `make_kernel` functions into ``convolve.py`` instead of their own files as they are here? While it's a good idea to spread out files somewhat, I think it makes sense to include closely related functionality like this all in one module. | |
<author>eteq</author> | |
@keflavich - you're still working on this, right? | |
<author>keflavich</author> | |
Yes, I lost track of it. I also didn't realize that convolve.py (#155) was accepted; I'll need to revise this pull request to incorporate those changes. I'll try later this week.
<author>eteq</author> | |
I just noticed now that some commits appeared here a few days after your comment, @keflavich - is this ready for final review, or are you still working on it? (it looks like it needs to be rebased against master, though, regardless). | |
<author>keflavich</author> | |
I'd recommend checking that I did the rebase (merge) correctly... I'm still a git novice and apparently not very good at googling git commands. However, otherwise, I think it is ready for final review. | |
<author>eteq</author> | |
@keflavich - it looks like you did ``git merge`` here. FYI, ``git merge`` is *not* the same as rebase. To clarify the difference, some diagrams (I took these from the git man pages). You started with | |
``` | |
A---B---C convolve_fft | |
/ | |
D---E---F---G master | |
``` | |
And it looks like you merged it, which means: | |
``` | |
A---B---C convolve_fft | |
/ \ | |
D---E---F---G---H master | |
``` | |
Whereas a rebase involves changing it to: | |
``` | |
A'--B'--C' fft_convolve | |
/ | |
D---E---F---G master | |
``` | |
so in the latter case, after merging into master, it's like there was never any branch. This is of course very bad to do in the astropy master branch, because it's changing history, but if you have a feature branch that no one else has been using, it's safe to rebase.
After experimenting a little, though, I see that your branch would be very difficult to rebase anyway, because you merged a bunch of times with master while working on this. Was that intentional? It's better to avoid merging with anything on a "feature" branch like this (except when needed to get specific functionality from master), as that makes it much more confusing in the event that you have merge conflicts.
I also see you have a ``convolve_fft`` branch on your github fork... this one (your master) is the right branch to be merging, right, and not that one?
<author>eteq</author> | |
Sorry, after that last comment I saw two other little things here - once you've addressed these, I'm happy to merge (or, if you want, I can just merge as is and make these changes myself in master) | |
<author>eteq</author> | |
Hmm, actually, I'm also getting a bunch of test failures in the convolve tests if I merge with master and run this - see http://pastebin.com/mVSAcRgv for my test results. Any idea what's going on there? (This is under Ubuntu 12.04 w/ a 64-bit python) | |
<author>astrofrog</author> | |
@eteq - do the tests pass if you add a third argument of ``10`` to the calls to ``assert_array_almost_equal_nulp`` in the tests, e.g. | |
assert_array_almost_equal_nulp(z, x, 10) | |
Not specifying a third argument defaults it to 1, which might be slightly too restrictive for cross-platform compatibility. | |
<author>astrofrog</author> | |
I will try and review this later today. Note that before merging, we should push this to a staging branch. I can push it to mine once the tests pass for @eteq. | |
<author>eteq</author> | |
Changing `assert_array_almost_equal_nulp` to 10 makes it so there are 88 failures instead of 120... At least some of the others seem to be due to rounding errors, though (e.g., ``assert np.array([1.,...])[0] == z[0]`` causes an error because ``z[0]`` is 1e-16 off from 1). I haven't had a chance to look through all the failures, though.
(Also, the same error is happening on 64-bit Mac OS X 10.7) | |
<author>keflavich</author> | |
A few categories of response... | |
1. git - This is my first attempt to work on a large collaboration on git (or any DVCS); my approach is therefore rife with errors, which I think are mostly errors of convention. I originally created the pull request from my 'master' branch because I had no idea there was any reason not to; the "convolve_fft" branch was created later and, as far as I can tell, cannot replace master in this pull request (though I could close and create a new pull request if needed). | |
I also didn't understand the difference between rebase and merge in the above context. Rebase is clearly what I wanted, but I never got one to proceed successfully - pull was just easier. Based on @eteq's comment above, I don't think I want to try to rebase now. | |
As it stands, will accepting this pull request screw up astropy's history? Or will it just include annoying components? | |
2. Testing - There were some errors when I ran the tests, consistent with those you observed. I did not have those same errors when I last ran the tests. I think that indicates some machine / install dependent behavior. I've added more "nulp" tests and removed the "==" tests to try to get around this problem. | |
I also found out that numpy's fft (which I never tested in the previous version, but is now included in the test suite) does not operate at float64 precision, at least as far as I can discern. I've therefore changed some of the tests involving np.fft to check for agreement at the float16 level instead. Numpy's fft behavior strikes me as strange, any idea what might cause the disagreements? | |
3. Copyrights - I've copied them from some other files in astropy, but neither convolve nor make_kernels had them before. | |
4. ConfigurationItems - Added. Better make sure I've got them right; the documentation you pointed me to looked like a work in progress? | |
<author>eteq</author> | |
@keflavich - sorry it's taken so long - this one dropped off my radar... | |
1. Let me try to put it in a one-sentence form: Rebase moves all your work to come after the current master, and merge leaves it where it is, but melds the two branches together. So with rebase, when you look at the history, it's just a linear development history, while merge involves a branch (and is thus a bit harder to read). More importantly here, if your changes are all additions (rather than concurrent changes with something that also changes in master), rebase will ensure that your changes appear as one chunk instead of weaving into and out of the changes in master. The latter reason is really why it would have been nice here. | |
The general lesson is that once you've made a branch, don't pull from master unless you have to to get your code to work, because it makes for a much more confusing revision history. But that's a lesson-learned - it's fine as it is now, given that it would be hard to fix it at this point. | |
2. I have no idea what's going on with the numpy fft... maybe drop a question about it in their mailing list? As for the tests, now I'm only getting 12 failures instead of 100 or whatever before, and they all seem to be "not quite close enough" sort of failures - see http://bpaste.net/show/30290/ | |
3. They are missing in a few random places where people forgot to include them... I'll have to do a final look-through before 0.1... but they should be there in any file that has any real content (so easiest is to just do it for all non-empty files). But as you have it now seems fine. | |
4. See @astrofrog's comments and mine inline. I'll put out a PR shortly to somewhat clarify those docs - once I do that, let me know if there are other clarifications that might be helpful. | |
<author>keflavich</author> | |
On ConfigurationItems: In order to evaluate the ConfigurationItems at runtime AND allow them to be overridden by keyword arguments passed to the function directly, I'd need to do something like | |
```python
def f(blah=None):
    if blah is None:
        blah = BLAH_CONFIGITEM()
```
right? I think it makes sense to have these parameters available to the users as normal python keyword arguments. So, should I not use ConfigurationItems at all? Or use them as I've described in this post? I was under the impression that ConfigurationItems would be used to modify the defaults, and that runtime changes would be left to keywords passed to functions, but as you've described it, I am incorrect. | |
<author>keflavich</author> | |
@eteq On the NULP issues, you have lines like this: | |
E AssertionError: X and Y are not equal to 10 ULP (max is 4.3835e+18) | |
That's not a small disagreement, but I think it results from treatments of 0 in the assert_almost_equal routines: | |
```python
>>> numpy.spacing(numpy.float16(0))
5.9605e-08
>>> numpy.spacing(numpy.float32(0))
1.4012985e-45
>>> numpy.spacing(numpy.float64(0))
4.9406564584124654e-324
>>> numpy.spacing(numpy.float64(1e-300))
1.657809211691619e-316
```
So if we're comparing 0.0 to something very small, the disagreement can still be 18 orders of magnitude. Unfortunately, I cannot reproduce this error, so it's not particularly easy to fix. My workaround has been to force both arrays being compared to be type float16, which sort of forces agreement, but I had only implemented that for the numpy FFTs because on my machine, FFTW's FFTs agree.
I can change all the tests to compare at float16 precision, or mark the errors you're receiving as "expected", or we can try to further diagnose what system configuration is required to produce the errors. My guess is that you're on a 32 bit system, or at least your python and/or numpy and/or FFTW are 32 bit, while my system is uniformly 64 bit, masking the errors. | |
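(A quick demonstration of the point about zero -- a tiny absolute difference near 0.0 is still an enormous number of ULPs, so the nulp-based assert fails:)
```python
import numpy as np
from numpy.testing import assert_array_almost_equal_nulp

try:
    assert_array_almost_equal_nulp(np.float64(0.0), np.float64(1e-300), nulp=10)
except AssertionError as exc:
    # Fails: the spacing near 0.0 is ~5e-324, so 1e-300 is a huge number of ULPs away.
    print(exc)
```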
<author>eteq</author> | |
@keflavich re: `ConfigurationItem` | |
However, do you have a strong reason for wanting these to be available as function parameters? The reason I suggested these make good configuration items is that they are the sort of settings that seem to be more system-based than runtime-relevant. Furthermore, I'd say functions with a bunch of options that are repeated across a module are to be avoided. So based on "There should be one-- and preferably only one --obvious way to do it.", I'm thinking they shouldn't be function options. | |
But having said that, if you have a good case for when someone would want to set these options in the function call but *usually* want a system-configured default, then the example in the post you made here is a good way to do it. | |
FYI, I just merged #247 which clarifies the configuration system quite a bit - you may want to glance over those changes - they should appear shortly in the docs at http://docs.astropy.org/en/latest/configs.html | |
<author>keflavich</author> | |
@eteq: Ah, I understand. The main reason I think users should be able to choose at runtime which FFT they'll be using (or how many cores) is a memory issue: The parallel FFTW will make copies of each array you FFT, so if you're performing FFTs on large arrays, you may run out of RAM and start swapping, thereby losing any advantage from the parallel implementation. I'm not sure there's any good reason you'd want to use numpy's fft if you have FFTW installed, but I believe there could be reasons I'm not thinking of. So, I'll implement the system-configured default with override. | |
<author>eteq</author> | |
@keflavich re: NULP mine should be all 64-bit too, but I don't have FFTW installed at all... | |
@astrofrog - what do you think about this? Should we just do all tests at the 16-bit level? Another option would be to have these be "marked" tests that are only run if you give a special argument to py.test (like we do now with remote_data) | |
<author>eteq</author> | |
Oh, and how do I check whether my numpy is 64- or 32-bit? I assume it's 64-bit because my python is, but I guess I never looked to see if you could compile it in some kind of 32-bit mode on a 64-bit arch.
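(For the record, one quick way to check is to look at the pointer size the interpreter and numpy were built with - a minimal sketch:)
```
import struct
import numpy as np

print(struct.calcsize("P") * 8)        # 32 or 64: the Python interpreter itself
print(np.dtype(np.intp).itemsize * 8)  # 32 or 64: numpy's native pointer-sized integer
```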
<author>keflavich</author> | |
@eteq Since you don't have FFTW installed, all the errors must be happening for numpy FFTs, which in turn means that I messed up the tests. I think your test failures probably resulted from my misuse of the ConfigurationItems. In the latest version I just pushed, I added an extra requirement in the test_convolve_fft so that the FFTW tests will only be run if FFTW is installed (before, the FFTW tests were being run with numpy's FFT, which resulted in the errors you encountered) | |
<author>astrofrog</author> | |
Quick note: I'm a bit unsure on how this is going to merge. It doesn't merge cleanly on my computer, so not sure how GitHub will deal with this even though it claims to be able to merge it. @eteq - do you think we should merge this in manually via cherry-picks, which would allow us to rebase properly? | |
<author>astrofrog</author> | |
See my comment above about Numpy's FFT - since it appears to be causing more harm than good, how about using only scipy's FFT? We need to understand better why Numpy's FFT doesn't give the same results as Scipy's - I don't see why the results of the scipy and numpy FFTs shouldn't be equal at the bit level.
<author>astrofrog</author> | |
I just realized I missed a few comments on exactly this issue earlier. Would the issues all go away if we just rely on scipy's FFT? I don't like the idea of just recasting to float16 (and this doesn't work on most Numpy versions anyway). 99% of Astropy users will have Scipy installed anyway, so I don't think it's a big issue to do this. | |
<author>astrofrog</author> | |
Note that if we do this, we just need to mark the tests as xfail for now and upgrade to a system like that suggested in #250 where tests that test functions with optional dependencies are marked as ``@optional_deps``. | |
<author>astrofrog</author> | |
(though we should only mark the tests as xfail at the last minute, once we agree that they pass for all of us with scipy installed)
<author>astrofrog</author> | |
For info, I pushed this (merged with the latest master) to my staging branch, so this will give you a good overview of failed tests https://jenkins.shiningpanda.com/astropy/job/astropy-staging-astrofrog-osx-10.7-multiconfig/ | |
<author>keflavich</author> | |
I've removed the np.float16 references and instead hard-coded an accuracy check of abs(0-0) < 1e-14. I think this is potentially dangerous, but I don't have a better solution right now. I've added scipy's fft as an option, but numpy's fft is still the fallback (it works to pretty darn high precision, it just doesn't test well). I admit, the configuration options are confusing now, but I prefer more options to being forced into using scipy. | |
<author>astrofrog</author> | |
The use of ``assert_allclose`` in ``test_convolve_kernels.py`` is also not compatible with Numpy 1.4.1 - could you replace it with something else?
I guess what I'm saying about the Numpy FFT is that it appears to behave differently on different platforms, and until we can get the tests to pass without hacks, and since it seems the Scipy FFT works fine, maybe you should strip out the Numpy FFT stuff from this PR, then we merge this in, then you open a new PR adding the Numpy stuff, and we can iterate on it until we find something that works well? (that way we can merge this in today without worrying about the Numpy FFT stuff). @eteq, what do you think? | |
If you both think that we should just merge it as-is, and then just work on improving the tests, we can do that too. But to me this doesn't seem to be purely a testing issue: the scipy and numpy FFTs *have* to be returning different results - otherwise, why would the value being compared to 0 be different?
<author>astrofrog</author> | |
Here are the latest test results: | |
https://jenkins.shiningpanda.com/astropy/job/astropy-staging-astrofrog-debian-multiconfig/10/ | |
https://jenkins.shiningpanda.com/astropy/job/astropy-staging-astrofrog-osx-10.7-multiconfig/21/ | |
https://jenkins.shiningpanda.com/astropy/job/astropy-staging-astrofrog-winxp32-multiconfig/14/ | |
On Windows, for some reason, the tests from the unmerged cosmology branch are being included and that is what is making things fail, but otherwise convolve_fft has the same issue with Numpy 1.4.1 as Mac and Linux.
So precision-wise, it seems all the tests now pass. | |
<author>astrofrog</author> | |
I just realized that ``fftw3`` is yet another package, distinct from numpy and scipy (this hadn't hit me before). Given that there are now three different options for the FFT, it might be better to have an option ``fft_package=`` (or something better) with possible values ``fftw3``, ``scipy``, and ``numpy``? (not sure which should be the default). Same goes for the configuration item? Then it would be easier to cycle through each of the three options for the tests, and it would be less confusing than having the possibility of setting ``use_numpy_fft=True, use_scipy_fft=True``.
If you just want to merge this in as-is, I'm happy to do some more work on this routine to try and implement my own suggestions. | |
<author>eteq</author> | |
FWIW, all tests are now passing for me | |
@astrofrog - I'm not having any problem merging this into master... if I just reset my staging branch to master and merge this one, it does so without complaint... | |
@keflavich - I agree with @astrofrog that the choice of which FFT to use should be a single option/configuration item that can either be 'scipy','numpy', or 'fftw'. It's very confusing to have three conflicting boolean options... Note that you *can* specify enum-like configuration options by giving a list of strings as the 'default' value: ``ConfigurationItem('ffttype', ['fftw','scipy','numpy'],'description...')`` - whichever one is first in the list ends up as the default. Or alternatively, you could have it be a comma-separated list of strings giving the order in which to try the three options. Either way is fine with me. | |
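For concreteness, here is a minimal sketch of that enum-style item combined with the keyword-override pattern from earlier in the thread (the item name, description, and function signature are placeholders rather than the actual PR code, and it assumes ``ConfigurationItem`` is importable from ``astropy.config`` as in the configuration docs):
```
from astropy.config import ConfigurationItem

# Enum-like item: the list gives the allowed values; the first entry ('fftw')
# ends up as the default.
FFT_TYPE = ConfigurationItem('fft_type', ['fftw', 'scipy', 'numpy'],
                             'Which FFT implementation to use.')

def convolve_fft(array, kernel, fft_type=None):
    # Fall back on the system-configured default unless the caller overrides it.
    if fft_type is None:
        fft_type = FFT_TYPE()
    # ... pick the FFT functions based on fft_type and do the convolution ...
```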
<author>eteq</author> | |
Oh, and as for what to do about numpy vs. scipy, I'm fine with either way (merging only the scipy stuff now and working later on numpy, or merging this now and cleaning it up later). Is there any significant performance difference between the numpy and scipy implementations? | |
<author>keflavich</author> | |
@astrofrog I don't think numpy's fft is machine-dependent after all; it was just poorly implemented tests misleading me. So there's no reason not to include it. | |
@astrofrog, @eteq: I agree completely, ffttype should be an option as you've described. I'll work that in shortly. | |
I don't know if there's a performance difference between numpy and scipy... I'll change the fft speed tests to find out. | |
<author>astrofrog</author> | |
@keflavich - when you implement ``fft_type``, make sure that all three get tested for each test, but also ensure that the tests are skipped if the optional dependencies are not present. See | |
http://astropy.readthedocs.org/en/latest/development/testguide.html#tests-requiring-optional-dependencies | |
for information about how to do this. If you have ``HAS_SCIPY`` and ``HAS_FFTW3`` then in the tests you could do: | |
if fft_type == 'fftw3' and not HAS_FFTW3:
    pytest.skip('fftw3 is not installed')
elif fft_type == 'scipy' and not HAS_SCIPY:
    pytest.skip('scipy is not installed')
at the top of each test (I think you have to call pytest.skip rather than using the decorator since you only want to skip some of the parametrized tests). | |
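Putting it together, a sketch of what one of the parametrized tests could look like (the test name, the ``HAS_*`` setup, and the final assertion are illustrative, not the actual astropy test code):
```
import pytest

try:
    import scipy  # noqa
    HAS_SCIPY = True
except ImportError:
    HAS_SCIPY = False

try:
    import fftw3  # noqa
    HAS_FFTW3 = True
except ImportError:
    HAS_FFTW3 = False

@pytest.mark.parametrize('fft_type', ['numpy', 'scipy', 'fftw3'])
def test_convolve_fft_backends(fft_type):
    if fft_type == 'fftw3' and not HAS_FFTW3:
        pytest.skip('fftw3 is not installed')
    elif fft_type == 'scipy' and not HAS_SCIPY:
        pytest.skip('scipy is not installed')
    # ... run the convolution with the selected backend and check the result ...
```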
<author>keflavich</author> | |
@astrofrog: Apparently numpy's FFT has some very small performance benefits in some cases. But I'd say my speed tests are rather inconclusive. | |
<author>astrofrog</author> | |
I'm going to be picky, but can you use skip rather than not including the options? (which was my first instinct too). With the automated testing systems, we need to know when tests are being skipped due to a missing dependency.
<author>eteq</author> | |
And I agree with @astrofrog that it would be better to skip the tests that aren't relevant because of missing dependencies. But if this is going to be very difficult to do, it's also not absolutely necessary. | |
Also, @astrofrog - I think maybe you're right that one of us should try to manually merge this after a rebase instead of using the merge button, given the various merge/pull oddities. It may be more trouble than it's worth, but whoever does the merging should at least try that first.
<author>keflavich</author> | |
@eteq Would it hurt if I tried to rebase from astropy:master at this point? Not sure that it will work, but I may as well try and learn something, if the exercise is harmless.
<author>astrofrog</author> | |
@keflavich - well, I would definitely make a copy of the repository and try on that if you want to experiment. I wouldn't call it harmless ;-) | |
@eteq - I'm happy to see to the merging, and I can try a rebase. I can also try and see if I can sort out the skips.
<author>astrofrog</author> | |
It took a little while, but I managed to rebase. Just one question - I'm getting a one line difference with your code (because I had to resolve merges), and I can't remember if the right version is to have: | |
if not _ASTROPY_SETUP_: | |
or: | |
if not setup_helpers.is_in_build_mode(): | |
which one is the correct one to use? (@eteq?) | |
Shall I merge? (I'll open a pull request so I can check and merge it via GitHub) | |
<author>astrofrog</author> | |
Ok, I opened a pull request (#253) with the rebased convolve_fft code. I've implemented the test skipping and have also made sure that if ``fft_type`` is not specified and FFTW is not installed, it will fall back on scipy, and then on numpy if scipy is not present (I've discussed this off-github with @keflavich).
<author>eteq</author> | |
So shall we close this in favor of #253, then? | |
<author>astrofrog</author> | |
@keflavich - #253 (the rebased version of this PR) has been merged, so I'm closing this one. Thanks for your contribution! | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Implemented messaging system | |
<author>astrofrog</author> | |
This is a prototype implementation of a messaging system that improves significantly on #93 and addresses comments on the astropy-dev mailing list. There are no tests or documentation yet (I will add these if we agree on the implementation).
The current implementation does not depend on the logging module, but instead uses a custom Message class to store information about message levels, content, and origin. The use of the name `messaging` is to avoid confusion (`logging`/`logger` made people think of log files). | |
The main improvement in practice compared to #93 is that messages are stored as classes, and can contain more information than a simple string. At the moment, there is no ``Messenger`` class, in the sense that there is a single messenger, the module itself (unlike the logging module where one can instantiate different loggers for different parts of the code using getLogger). If we feel that there is a need for multiple messengers, we can implement getMessenger which would return a Messenger class, but we should think about whether this is really needed. | |
One nifty feature is that it is possible to catch messages with the Python ``with`` syntax: | |
with messaging.catch_messages() as messages:
    messaging.debug("Nothing much here")
    messaging.info("For information")
`messages` is then a list of `Message` instances that can be examined. I've also implemented the ability to catch only specific messages by level or by origin: | |
with messaging.catch_messages(filter_level=messaging.INFO) as messages:
    messaging.debug("Nothing much here")
    messaging.info("For information")
The above will catch only `INFO` messages. There is also a ``filter_origin`` argument, but I don't think it will work yet, because ``find_current_module`` returns a module object and I still need to figure out how to compare it to module names (@eteq - any ideas?).
Anyway, I've posted an example script you can use here to test things out: https://gist.github.com/2184446 | |
As I said, this is just a prototype at this stage, so extensive comments/thoughts are welcome! Once we converge on this, I'll post to the mailing list to ask for further comments. | |
<author>eteq</author> | |
Generally I like the overall interface. | |
A few slightly more general items than my inline comments above: | |
* Instead of the `*_COLOR` constants with explicit escape characters, I would suggest using the `astropy.utils.console.color_print` function to allow use of "green" and "red" and so on. | |
* It would be nice if the default string representation included the origin somehow - That's one of the things that makes sphinx errors so much easier to work with... | |
* What exactly is the point of the "error" category? That is, how is it different from throwing an exception? (Although see my comment/question below) | |
And a few items that probably fall more in the category of "todo assuming this general interface satisfies everyone": | |
* Set it up so that python warnings can be captured by this system (there's some function that allows redirection of warning messages, although it may have been you that pointed that out to me, @astrofrog) | |
* Figure out if there's some way to make sure Exceptions pass through this system as well? (I'm less sure how to do this or if it's possible, but it would be valuable to have exceptions appear in log files...)
* Add an option of sending the messages to a log file, the console, or both simultaneously (and along with this, perhaps more fine-grained control of which categories get sent, so that e.g. errors go to the file but do not appear as extra console print statements alongside the exceptions). Although presumably that's what you had in mind with the Messenger class?
<author>embray</author> | |
I hate to say this after you've already done this work, but I don't see the point? What does this do that the built-in logging module doesn't already do? | |
You wrote "but instead uses a custom Message class to store information about message levels, content, and origin". But the logging module already does this with `LogRecord` objects. In fact, any arbitrary context information can be attached to LogRecords through the various .log, .debug, .info, etc. methods using keyword arguments, or with LoggerAdapters. | |
The color stuff is very nice, but can be implemented as a custom handler for a logger. | |
When you start trying to take into account the issues that @eteq raised you'll find yourself reimplementing large swaths of the logging module, just under a different name. | |
If people were "confused" by thinking that logging refers *specifically* to "log files" that would be a misconception on their part that should be fixed--we shouldn't call it something different from what any other software system calls it just because someone doesn't understand that collecting informational messages about running software into a single interface and directing it to various output streams (be they files, sockets, or something more exotic) is "logging". When I first saw "messaging" in the title of this, my initial thought was some sort of asynchronous message broker for IPC (and why would we need that?) So calling it "messaging" is just as confusing. | |
Now, if people have objections to the built in logging module itself, I probably wouldn't disagree. It's a bit crufty and difficult to understand. That said, | |
* The builtin logging module is very powerful, and battle-hardened. Most anything we need we can do with it. | |
* If we really hate it, there is still prior art in alternative logging frameworks, and I don't see any point in reinventing the wheel with something that won't be compatible with other software that does use a standard logging framework. | |
Do we have an actual list of needs/requirements for logging somewhere? I know there have been discussions, but is there anything documented from that? I would start there instead, and see what we can do to implement it within existing frameworks. | |
<author>embray</author> | |
@eteq `log.error()` and the like are mostly used in the context of conveying error info to non-technical users. A common use-case is to catch an exception and, rather than print a traceback and die, log an error (in GUI-based applications this may cause an error dialog to be raised or somesuch).
In particular, it's not uncommon to use this in the context of a custom `sys.excepthook` to log all uncaught exceptions. It's precisely for this usecase that the log methods in the stdlib logging module accept an `exc_info` keyword argument. I can post examples of this if you're interested. | |
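A minimal sketch of that ``sys.excepthook`` pattern (the logger name here is just a placeholder):
```
import logging
import sys

log = logging.getLogger('astropy')

def log_uncaught_exception(exc_type, exc_value, exc_traceback):
    # Route uncaught exceptions through the logger instead of the default
    # traceback printer, using the exc_info keyword argument.
    log.error('Uncaught exception', exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = log_uncaught_exception
```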
<author>astrofrog</author> | |
@iguananaut - this is why I did not spend any time writing docs or tests ;-) I agree with many of your points - in particular I completely agree that we should use ``logging`` *if* it satisfies our requirements (which I agree we need to make clearer - I was just trying to address various points that came up on the mailing list). One thing that did not seem immediately obvious is how to catch the output from logging, as I've done here with catch_messages (and how warnings has catch_warnings). | |
Maybe we can see how #93 could be modified to behave the same as this PR. | |
<author>eteq</author> | |
@iguananaut - I see your point here about not re-inventing the wheel... But I think there's a possible advantage here in that it allows us to use the internals of `logging` while letting us capture the events, allowing a straightforward way of combining warnings and messages. I also think the `find_current_module` functionality is crucial here as well. So basically, this interface lets us use this messaging module as a "gatekeeper" of sorts.
Having said that, I suspect you're right that everything here can be done by appropriately configuring a `logging.Logger` instance as in #93... I think the reason there aren't clear requirements is that the on-list discussion didn't really conclude (this was the one about using `warnings` vs. `logging` or both). | |
So I guess the point is that this is an approach that shows how we might adopt a "gatekeeper" approach for messages and warnings, and tell everyone who works with astropy to use that instead of `warnings.warn` and `print` or `Logger.whatever` calls. Although if we take this approach just going through a `Logger` instance, that works as far as I'm concerned as well. But somehow we need to decide which... | |
<author>astrofrog</author> | |
Maybe we should draw up an API for how we want this to *behave*, which will be our requirements, and *then* we can see if we can implement it purely with logging, and if not, we can develop a wrapper around logging (and if that doesn't work either, we have a custom implementation like this PR?). We could even write the tests before the code ;-) | |
For example, I think that the following requirements are probably fairly uncontroversial: | |
* We should be able to do: | |
x.debug('a debugging message') | |
x.info('an information message') | |
x.warn('a warning') | |
where ``x`` is either an object or a module (tbd) | |
* There should be a way to filter messages/logs based on importance, and debug shouldn't be shown by default. | |
* There should be a way to capture and inspect the content and origin of messages/logs | |
Other things that are less clear to me: | |
* Do we want to be able to filter/control messages/logs on a sub-package basis? For example, do we want to show debug messages just for ``astropy.io.fits``?
* Do we want to be able to capture messages/logs based on their origin? | |
* Do we want to be able to capture messages/logs only with a specific importance level? (e.g. just warnings) | |
* Do we want ``warn`` to raise a true warning? | |
* Do we want to be able to capture/display exceptions? | |
Anyway, these are just thoughts, and there are probably other issues I've not included here. | |
<author>astrofrog</author> | |
I've read up a bit more about the logging module, and I agree with @iguananaut that we should be making use of it. I have overlooked some of the functionality, but I now think it should be possible to use it, and even implement some kind of ``catch_log`` context manager (by writing our own handler). | |
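For what it's worth, a ``catch_log`` context manager built on a custom handler could be as simple as the sketch below (the names are illustrative, not the eventual astropy API):
```
import logging
from contextlib import contextmanager

class ListHandler(logging.Handler):
    """Handler that simply stores every LogRecord it receives."""
    def __init__(self):
        logging.Handler.__init__(self)
        self.records = []

    def emit(self, record):
        self.records.append(record)

@contextmanager
def catch_log(logger_name='astropy'):
    logger = logging.getLogger(logger_name)
    handler = ListHandler()
    logger.addHandler(handler)
    try:
        yield handler.records
    finally:
        logger.removeHandler(handler)
```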
But before I go ahead and code some more, I've created a wiki page at https://github.com/astropy/astropy/wiki/Logging so that we can list our requirements. @eteq and @iguananaut (and others reading this) - could you go and edit this page to add what you think are important requirements? | |
<author>eteq</author> | |
@astrofrog - I added some to the wiki page, but I think there is an important decision to be made here that determines one of the requirements: Should this *replace* the warnings module (catching these messages in the way you describe), or should warnings instead be handled as in the stdlib and simply be noted in the log when they are emitted? I don't have a strong opinion on which, other than definitely *not* both.
<author>astrofrog</author> | |
@eteq - just to make sure I understand, do you mean in the first case that this system would transform a stdlib warning into a logger warning, whereas in the second case, it would keep the distinction?
<author>astrofrog</author> | |
There is an issue with having this system automatically catch warnings and exceptions, which is that it might be hard to implement in practice. The only way I see this working is if we add a decorator to all functions/methods in astropy (and all future contributions) which doesn't seem worth the effort. Or can you think of another way to do this? | |
If we decide this is impractical, then we could just require people use the logging/messaging warn *instead* of ``warnings.warn``. After all, as discussed in the mailing list, if we allow both, then how do you determine in which cases each one should be used? I think it's always going to be a blurry line... (dependencies such as numpy may however emit warnings, so that's something else to think about). What we *could* do is have ``logger.warn`` emit warnings using ``warnings.warn`` instead of ``print``. | |
<author>eteq</author> | |
@astrofrog - You actually got more at what I meant in the second comment. The two options I see are: | |
1. Astropy packages should *only* use `Logger.warn` or `Message.warn` or whatever, and we provide the tools necessary to make those catchable as in the stdlib `warnings` - e.g. catch them using something like what you have in this PR
2. Astropy packages should use `warnings.warn` for warnings. These still need to appear in the log, but we tell people to never use `Logger.warn`/`Message.warn` directly - only through `warnings.warn`. Then there's probably no need for the log-catching tools like you have here, because `warnings` already provides that. | |
In either case you make a good point that we could either have the `Logger.warn` log go through the `warnings.warn` function, *or* vice-versa... so we can directly enforce the "only one" rule by simply pushing one into the other.
As I said, I don't actually have a strong opinion between the two - 2 is probably easier because it uses existing tools, but 1 is more coherent/consistent. | |
<author>eteq</author> | |
Oh and I don't follow why you're saying a decorator is needed. For `warnings`, we can overwrite `warnings.showwarning` to go into the log instead of going to stderr. What I *don't* know for sure is how these methods interact with techniques for responding to warnings programmatically... @iguananaut, do you have any insights here?
Alternatively, we could use `logging.captureWarnings` to force all warnings to pass through the logger, essentially enforcing option 1 above.
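For reference, the `logging.captureWarnings` hook just mentioned is essentially a one-liner (Python 2.7+); roughly:
```
import logging
import warnings

logging.basicConfig()
logging.captureWarnings(True)   # warnings.warn() output is now routed to the
                                # 'py.warnings' logger as WARNING records
warnings.warn("this ends up in the log rather than on stderr")
```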
And for exceptions, I meant that the log *only* logs uncaught exceptions. As @iguananaut pointed out, this is trivially achieved by replacing `sys.excepthook` with a function that sends the exception to the log. | |
<author>astrofrog</author> | |
@eteq - ah, I understand what you mean now. I think that option 1 would be cleaner. I think it makes more sense to ask people to do: | |
Logger.debug(...) | |
Logger.info(...) | |
Logger.warn(...) | |
than | |
Logger.debug(...) | |
Logger.info(...) | |
warnings.warn(...) | |
We could even have: | |
Logger.error(...) | |
which would raise an exception when we want to raise an exception explicitly, and we could still also catch other exceptions with the mechanism you suggest. I didn't read properly about ``sys.excepthook`` before, but I understand how it would be used now. | |
<author>astrofrog</author> | |
Note that what I meant before is that if people do want warnings to be 'real' ``warnings.warn`` warnings, then one could consider having a logging handler for ``Logger.warn`` that would use ``warnings.warn`` to display the warning (instead of a normal ``print``), so that warnings could be caught *both* by ``catch_warnings`` and by our custom ``catch_*`` method. So then we tell people to use ``Logger.warn``, but this actually would return a real warning (which would nevertheless be formatted like the info and debug messages if that makes any sense). | |
<author>embray</author> | |
You don't want Logger.error(...) to actually raise an exception. Rather, the opposite: error(...) can be used to write information about an exception to the log. | |
For warnings, log.warning and warnings.warn are accomplishing two (very slightly) different things, and there's lots of bikeshedding to be done over that. Though regardless of how the distinction is made between them, it's not a bad idea to log `warnings.warn()` calls through `log.warning()`. In fact, this is so common that it's in the stdlib :) | |
As for capturing log messages in testing, there are a number of ways this can be accomplished. testfixtures has a nice one that can be used as either a decorator or a context manager: https://github.com/Simplistix/testfixtures/blob/master/testfixtures/logcapture.py I wonder if its license is compatible with ours. If not, I can write something similar to this. | |
It might also be worth looking at some of the log capture stuff I've been working on off and on for astrodrizzle: https://svn.stsci.edu/trac/ssb/stsci_python/browser/stsci_python/trunk/tools/lib/stsci/tools/logutil.py#342 | |
Don't look too closely at it though :) One of the major features of logutil.py, which does something I've yet to see anywhere else, is that it actually tees stdout and stderr through a logger. So that, for example, `print()` calls actually get sent through the log as an `[INFO]` message (and are still sent to the console stdout as well). I don't think that that's functionality we need or want to include in Astropy. But I point it out because logutil.py also includes support for logging warnings and exceptions. | |
Right now you can only start and stop it with two functions `setup_global_logging()` and `teardown_global_logging()`. Astrodrizzle itself contains a decorator function that wraps a function with `setup/teardown_global_logging()` in a reentrant manner. I need to include a version of that in logutil.py itself though, as well as a context manager version. Also, this isn't thread-safe at the moment either. | |
<author>eteq</author> | |
I agree with @iguananaut that `Logger.error` should not be used to *raise* exceptions, because that introduces a second way to do something already built into python. Instead, we just say that if there's a recoverable error (e.g. an exception is caught and handled somehow), we recommend that a message indicating this be written to `Logger.error`. `Logger.critical` can be reserved for uncaught exceptions only (`sys.excepthook`).
And to clarify what I meant by "catching" warnings: this goes back to something Vicki Laidler mentioned in the on-list discussion: this is not about catching them for testing (although that is a useful thing to be able to do) as much as it is about being able to programmatically respond to warnings if some action is called for. E.g., if a warning along the lines of "SciPy not found, some stuff might not work" is issued, a GUI might use this information to gray out an option that's normally available.
The problem is, I just did some testing with `logging.captureWarnings`, and it turns out that if you convert a warning into an error and then catch it in the usual way, it does *not* appear in the `logging` log. So with that in mind I think I agree with @astrofrog that option 1 is better, assuming we can massage `Logger` into providing a way to "capture" log entries. If we do this, we should also do `logging.captureWarnings`, but we should try to convince people to avoid `warnings` when possible because of the potential confusion. It might be nice to keep the `Warning` object hierarchy, though, and include it in the log via the `exc_info` kwarg that the `Logger.log` family of functions accepts.
I know both Vicki and @iguananaut noted that `warnings.warn` and `Logger.warn` have slightly different uses... But in this project we (hopefully) will have a lot of different devs that aren't necessarily in close contact and even users who will probably be using this facility. So any subtle distinction will be lost, and just add to confusion instead of clearing it up. @iguananaut, what do you think with that in mind? | |
<author>embray</author> | |
For added clarity, I will use Warning with a capital W to refer to warnings module Warnings. | |
I wouldn't want to use Warnings within our own code to signal a condition to other parts of the code. Normal exceptions are already fine for that, and don't require mucking about with catching warnings. That's not to say that Warnings aren't useful to third-party applications using the code that might want to respond to some non-fatal condition in the library. But that doesn't mean we should be catching warnings all over the place in our own code. | |
To give a simple example from Numpy: | |
``` | |
>>> import numpy | |
>>> numpy.sqrt(-1) | |
__main__:1: RuntimeWarning: invalid value encountered in sqrt | |
nan | |
``` | |
Numpy just issues the warning, but it doesn't actually *do* anything with the warning. From what I can tell, Numpy doesn't catch or filter any warnings outside the context of testing. Though in writing an application (or library) that uses Numpy, I'm free to add a filter on that warning if I want to react in some particular way when a bad value goes to my sqrt() call. | |
<author>embray</author> | |
I should add: Maybe a good distinction between `warnings.warn()` and `logging.warn()` is that the latter should be used primarily for conditions where there's nothing the user can or should do to resolve the issue. It's something I mostly only use in command-line scripts or GUIs. | |
<author>astrofrog</author> | |
I agree that we shouldn't catch our own warnings/logs apart from in tests. But it would be nice to provide helper functions to do this in case users want to. | |
I understand the distinction between ``warnings.warn``, and ``logger.warn`` in theory, but I still find the line very blurry in practice, because sometimes it's not obvious if user action is required. Wouldn't it just make more sense to always use logger.warn for example, and just expect that the user will read all warnings and decide whether action is required? | |
<author>embray</author> | |
Oh, no doubt that it's a subjective distinction. My feeling about the warnings module is that it's a bit of a distraction from this whole discussion. I would tend not to use it, and only reach for it when I know I need it (or when a user requests it in a place where it makes sense).
There are a few places in PyFITS where it gets used I think sensibly: The first case is just for deprecation warnings, which is one of the main things it was meant for in the first place. The other is for some non-fatal FITS-specific issues. For example there's the "card comment truncated" warning--usually it can be safely ignored, but conceivably one might want to prevent updates to a value if it means losing part of the comment. | |
<author>eteq</author> | |
Ok, I agree also that it's better to use exceptions to carry the sort of information that may need to be responded to - I was mainly mentioning it because a few people on the list had emphasized that they thought it would be useful to have this option. | |
So how about this for a way forward: @astrofrog, you could update #93 (or make a new PR if that's easier) to implement the requirements in https://github.com/astropy/astropy/wiki/Logging using `logging`. State in the documentation basically that we always use `Logger.warn` unless there is some very specific reason to use `warnings.warn`. Then add a `ConfigurationItem` along the lines of `capture_warning`, and if it's True (which should be the default), redirect all warnings that come from an astropy context into the astropy `Logger`.
How does that sound? | |
<author>astrofrog</author> | |
@eteq - I'll give it a shot, and will update #93 to reflect this | |
<author>astrofrog</author> | |
Just to give an update, I've been working on this, and should be able to open a new pull request tomorrow. | |
<author>astrofrog</author> | |
A more advanced implementation is in #192 | |
<author>astrofrog</author> | |
Closing this in favor of #192 which has now been merged. | |
</issue> | |
<issue> | |
<author>taldcroft</author> | |
Table reverse | |
<author>taldcroft</author> | |
Add a Table method to reverse the order of rows in the table. | |
<author>eteq</author> | |
Looks fine to me other than the tiny thing I mentioned. As far as I'm concerned, it's mergeable.
</issue> | |
<issue> | |
<author>taldcroft</author> | |
Add Table fancy index slicing using list | |
<author>taldcroft</author> | |
Previously fancy row slicing was only possible using an `np.array` as the item, e.g. `table[np.array([1, 3, 4])]`. This commit adds slicing using a Python list like `table[[1, 3, 4]]`. | |
There is an open question about whether this is actually a good thing or if it would be confusing for users. It was suggested by a user as a convenience but I'm interested in opinions from the group. It seems OK to me and is consistent with structured arrays and recarray. | |
<author>eteq</author> | |
This idea seems sound to me - as you say, it's more consistent with numpy arrays, so at least for me this is *less* confusing. | |
<author>taldcroft</author> | |
Only slightly confusing because if you put in a tuple instead of a list it does something entirely different, namely selecting columns by name. But if nobody objects to this change this week I'll go ahead and merge it. | |
<author>eteq</author> | |
Oh, I didn't realize that... I guess I am a bit concerned about using list and tuple arguments for two different behaviors. | |
Is it possible to change it so that you can pass in either a tuple or list of integers and get the indexed rows out, but pass in lists or tuples of strings and get columns out? And if you want a column by column number, you should have to do ``table.get_column([0,1,2,5])`` or ``table.column[0,1,2,5]`` or something like that.
<author>taldcroft</author> | |
This is basically the reason a Python list was not allowed originally. One complication with your idea is that `Table.__getitem__` then needs to check that every item in the list or tuple is of the same type (int or str), which will get slow for large lists. Alternately one could just look at the first list/tuple item and make a decision there and let things fail later, or ... well it starts to get at least slightly messy. | |
But maybe the tuple / list confusion won't be so bad in practice. The API is pretty intuitive for users, something like `table['a', 'd']` will give the expected result and `indexes = range(10, 20, 2); table[indexes]` will give the expected result. | |
<author>eteq</author> | |
Ah, I see... I had forgotten that ``table['a','d']`` translates to a tuple - and I now see that it will fail if you do ``table[0,1]`` or something like that - so using a tuple of ints won't work anyway. So with that in mind, there's a lot less potential for confusion. I'd suggest you add a note/warning about this in the docs though (if it isn't already there)... it's a good idea to point out possible gotchas of this sort as clearly as possible. | |
I'm not sure if this is related, but I did notice one behavior that I did not expect: | |
``` | |
>>> t=table.Table({'a':[1,2,3,4],'d':randn(4)}) | |
>>> t | |
<Table rows=4 names=('a','d')> | |
array([(1, -0.11932575156412743), (2, 1.885969875650091), | |
(3, 0.8059702494058479), (4, 0.2947832971262504)], | |
dtype=[('a', '<i8'), ('d', '<f8')]) | |
>>> t['d','a'] | |
<Table rows=4 names=('d','a')> | |
array([(-0.11932575156412743, 1), (1.885969875650091, 2), | |
(0.8059702494058479, 3), (0.2947832971262504, 4)], | |
dtype=[('d', '<f8'), ('a', '<i8')]) | |
>>> t[['d','a']] | |
<Table rows=4 names=('a','d')> | |
array([(1, -0.11932575156412743), (2, 1.885969875650091), | |
(3, 0.8059702494058479), (4, 0.2947832971262504)], | |
dtype=[('a', '<i8'), ('d', '<f8')]) | |
``` | |
The first result makes sense, but why does the second happen? Why does indexing on a list of column names give the table right back out again (in its *original* order, not the flipped version I expected)?
<author>astrofrog</author> | |
I'm fine with this change, I think it makes sense to have list-based indexing as an option. | |
<author>eteq</author> | |
Actually, I should clarify, given that it wasn't obvious from my previous comment: I am happy with this now that I understand the intended use cases... I was only suggesting that this behavior be as clear as reasonably possible in the docs. | |
<author>taldcroft</author> | |
@eteq - I rebased this PR and will merge shortly. I'm planning a rework of the Table docs before 0.1 to improve overall clarity. Your comment from a month ago about making sure the docs are clear will be taken care of then. | |
<author>astrofrog</author> | |
@taldcroft - great! Note that there is a PR open (#218) that affects the layout of the front page of the docs. The PR isn't definite yet, but the layout is likely to be very similar, so when you work on the docs, could you make them conform to that template? | |
</issue> | |
<issue> | |
<author>embray</author> | |
building convolve breaks on Windows with MSVC | |
<author>embray</author> | |
This is the output I get when I try to run setup.py build on Windows using MSVC as the compiler: | |
``` | |
cythoning astropy\nddata\convolution\boundary_extend.pyx to astropy\nddata\convo | |
lution\boundary_extend.c | |
building 'astropy.nddata.convolution.boundary_extend' extension | |
creating build\temp.win32-2.7\Release\astropy\nddata | |
creating build\temp.win32-2.7\Release\astropy\nddata\convolution | |
C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\Python27\lib\site-packages\numpy-1.6.1-py2.7-win32.egg\numpy\core\include -IC:\Python27\include -IC:\Python27\PC /Tcastropy\nddata\convolution\boundary_extend.c /Fobuild\temp.win32-2.7\Release\astropy\nddata\convolution\boundary_extend.obj | |
boundary_extend.c | |
astropy\nddata\convolution\boundary_extend.c(1368) : warning C4018: '<' : signed/unsigned mismatch | |
astropy\nddata\convolution\boundary_extend.c(1379) : warning C4013: 'isnan' undefined; assuming extern returning int | |
astropy\nddata\convolution\boundary_extend.c(1555) : warning C4018: '<' : signed/unsigned mismatch | |
astropy\nddata\convolution\boundary_extend.c(2193) : warning C4018: '<' : signed/unsigned mismatch | |
astropy\nddata\convolution\boundary_extend.c(2204) : warning C4018: '<' : signed/unsigned mismatch | |
astropy\nddata\convolution\boundary_extend.c(2439) : warning C4018: '<' : signed/unsigned mismatch | |
astropy\nddata\convolution\boundary_extend.c(2450) : warning C4018: '<' : signed/unsigned mismatch | |
astropy\nddata\convolution\boundary_extend.c(3217) : warning C4018: '<' : signed/unsigned mismatch | |
astropy\nddata\convolution\boundary_extend.c(3228) : warning C4018: '<' : signed/unsigned mismatch | |
astropy\nddata\convolution\boundary_extend.c(3239) : warning C4018: '<' : signed/unsigned mismatch | |
astropy\nddata\convolution\boundary_extend.c(3522) : warning C4018: '<' : signed/unsigned mismatch | |
astropy\nddata\convolution\boundary_extend.c(3533) : warning C4018: '<' : signed/unsigned mismatch | |
astropy\nddata\convolution\boundary_extend.c(3544) : warning C4018: '<' : signed/unsigned mismatch | |
C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\Python27\libs /LIBPATH:C:\Python27\PCbuild /EXPORT:initboundary_extend build\temp.win32-2.7\Release\astropy\nddata\convolution\boundary_extend.obj /OUT:build\lib.win32-2.7\astropy\nddata\convolution\boundary_extend.pyd /IMPLIB:build\temp.win32-2.7\Release\astropy\nddata\convolution\boundary_extend.lib /MANIFESTFILE:build\temp.win32-2.7\Release\astropy\nddata\convolution\boundary_extend.pyd.manifest /MANIFEST | |
Creating library build\temp.win32-2.7\Release\astropy\nddata\convolution\boundary_extend.lib and object build\temp.win32-2.7\Release\astropy\nddata\convolution\boundary_extend.exp | |
boundary_extend.obj : error LNK2019: unresolved external symbol _isnan referenced in function ___pyx_pf_7astropy_6nddata_11convolution_15boundary_extend_convolve1d_boundary_extend | |
build\lib.win32-2.7\astropy\nddata\convolution\boundary_extend.pyd : fatal error | |
LNK1120: 1 unresolved externals | |
error: command '"C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\link.exe"' | |
failed with exit status 1120 | |
``` | |
I'm not sure what's going on here. Could this be a Cython bug? For what it's worth, the Cython on this Python installation was built with MSVC, and the build succeeds no problem when I compile in mingw.
<author>mdboom</author> | |
A similar bug in Numpy may be helpful: | |
http://projects.scipy.org/numpy/ticket/1502 | |
I don't think this is a Cython bug, since isnan is being declared right in our own code: | |
``` | |
cdef extern from "math.h": | |
bint isnan(double x) | |
``` | |
apparently on MSVC it's spelled "_isnan" and is in "float.h": | |
http://msdn.microsoft.com/en-us/library/tzthab44.aspx | |
But maybe we could just use numpy's isnan, since they've already done the cross-platform work? | |
``` | |
cdef extern from "npy_math.h": | |
bint npy_isnan(double x) | |
``` | |
<author>embray</author> | |
Aha! I didn't even look at the code for this yet. I'll give that a try. | |
<author>embray</author> | |
The attached PR should fix the issue. This is working for me now with MSVC, and doesn't seem to cause any problems on other platforms. | |
<author>mdboom</author> | |
Just confirming it doesn't seem to break anything for me, either. | |
<author>eteq</author> | |
Same here - @astrofrog, this looks fine to you, right (as the `convolve` author)? If so, I think it's good to merge. | |
<author>astrofrog</author> | |
This seems fine, so I'm merging (I haven't had time to do performance tests with ``npy_isnan`` vs ``isnan``, but the top priority is that it compiles on all platforms, and this PR fixes that).
<author>embray</author> | |
@astrofrog This should answer any performance questions you have about `npy_isnan` vs `isnan` ;) | |
``` | |
/* | |
* IEEE 754 fpu handling. Those are guaranteed to be macros | |
*/ | |
#ifndef NPY_HAVE_DECL_ISNAN | |
#define npy_isnan(x) ((x) != (x)) | |
#else | |
#ifdef _MSC_VER | |
#define npy_isnan(x) _isnan((x)) | |
#else | |
#define npy_isnan(x) isnan((x)) | |
#endif | |
#endif | |
``` | |
<author>astrofrog</author> | |
@iguananaut - ok, thanks! That makes sense :-) | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Fix doc building to use astropy source | |
<author>eteq</author> | |
Fixing this was more subtle than I realized, but this PR now finally makes the sphinx build operate consistently. Now, ``python setup.py build_sphinx`` always uses the version of astropy in the build directory, while ``make html`` in the ``docs`` directory will use the installed version. | |
As part of this, a critical bug is fixed that in the past meant you *had* to have the C extensions installed via ``python setup.py develop`` or ``python setup.py build_ext --inplace`` to get the docs to build. That is no longer the case. | |
This closes #134 . | |
<author>mdboom</author> | |
Very welcome change. For me, `python setup.py build_sphinx` seems to dump the built docs into `docs/docs/_build/html` rather than `docs/_build/html`. Any idea why? | |
<author>embray</author> | |
Awesome, thanks for fixing this. I'm seeing the same docs/docs behavior, but otherwise seems good. | |
We've put a lot of work, I think, into developing workarounds to a lot of the annoyances with Python software builds--it seems almost worth pulling out a lot of Astropy's package framework as a drop-in template to use for arbitrary Python-based projects (we've already done that to an extent with the affiliated package template, though that's still tied to Astropy). But that's another project for another time :)
<author>mdboom</author> | |
I agree. The infrastructure for this project is really good! | |
<author>eteq</author> | |
The errant 'docs' directory was because I was changing into the docs directory in the subprocess to get it to put the generated docs in the right place... but that meant the directory names from the ``build_sphinx`` command were wrong. Fixed by just using all absolute paths computed in the correct places. @iguananaut and @mdboom, can you double-check this fixes it for you? | |
@iguananaut - Yeah, maybe we should make our own packaging/build tools from scratch and call it ``distutils3`` or ``packaging2``? :) Seriously, though, I agree it could be worth factoring the non-astro stuff out if someone has the time/inclination... | |
<author>astrofrog</author> | |
This works for me! The docs appear in the right place. | |
<author>eteq</author> | |
Alright, looks like its all building as expected. Merging! | |
</issue> | |
<issue> | |
<author>eteq</author> | |
moved imports for convolve.py inside function | |
<author>eteq</author> | |
I noticed while working on building the documentation that the documentation build would fail with import errors due to ``convolve.py`` having a bunch of imports at the top-level that depend on the Cython files being already compiled. This PR moves those imports inside the convolve function so that it is now possible to import convolve.py even if the C extensions aren't yet/are improperly built. | |
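The pattern is just deferring the extension imports to call time - roughly along these lines (the specific helper name is taken from the boundary_extend module mentioned in another issue and is only illustrative):
```
def convolve(array, kernel, boundary=None):
    # Import the compiled Cython helpers only when convolve() is actually
    # called, so importing convolve.py never requires the extensions to be
    # built in advance.
    from .boundary_extend import convolve1d_boundary_extend
    # ... dispatch to the appropriate boundary implementation ...
```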
@astrofrog, given that you wrote `convolve`, is this alright with you? | |
<author>astrofrog</author> | |
Yes, this seems fine to me! | |
<author>mdboom</author> | |
Fine change -- but we still have other parts of the documentation (wcs for example) that can't be built without compiling first. I'll open a new issue for that. | |
<author>eteq</author> | |
@mdboom - yeah, it would be good if that wasn't the case, so an issue sounds good, but I see how it might be quite a bit of work to change that. I'm just thinking we should try to keep *new* modules import-safe when possible. | |
<author>mdboom</author> | |
I'm not suggesting fixing it so the docs build without having built extensions -- only that the documentation build should raise an error if the built modules aren't present. (Right now, it silently fails). But #187 does the right thing and makes things so much easier it's much less of an issue now anyway. | |
<author>eteq</author> | |
@mdboom - Ah, I see what you mean now. | |
At any rate, I'll go ahead and merge this, then. | |
</issue> | |
<issue> | |
<author>eteq</author> | |
added -d/--pdb option to test command | |
<author>eteq</author> | |
This is a straightforward pull request that just adds another option to the test command that is the same as the ``--pdb`` option to py.test ... this is basically like the ``--pep8`` option in that it's just for convenience, so I don't have to type ``-a "--pdb"`` all the time.
(Primarily this is asking for review from @jiffyclub, as he wrote most of the testing code.) | |
<author>embray</author> | |
+1; I find myself using `-A --pdb` fairly often.
<author>jiffyclub</author> | |
Looks good to me. | |
<author>astrofrog</author> | |
+1 | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Test imports | |
<author>eteq</author> | |
This adds an additional test that walks through the whole astropy package and checks to make sure the modules can all be imported without failing - this is particularly useful with shiningpanda (or other "clean" environments) in that it will make sure there are no top-level imports accidentally sneaking in that will break imports. | |
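A minimal sketch of what such an import-walking test can look like (the test name and skip list are illustrative; the PR's actual implementation may differ):
```
import importlib
import pkgutil

import astropy

# Modules that are allowed to fail to import (e.g. optional dependencies).
known_import_fails = []

def test_all_modules_importable():
    prefix = astropy.__name__ + '.'
    for _, modname, _ in pkgutil.walk_packages(astropy.__path__, prefix):
        if modname in known_import_fails:
            continue
        importlib.import_module(modname)
```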
This can be used also to check if all imports work *before* compiling by just running ``py.test -m importtests`` from the source root (assuming there isn't an inplace build). | |
@mdboom - this also includes two small fixes in `vo` and `wcs` that were revealed by running the test. The `vo` one is clearly just a typo, but I thought I should point out the `wcs` fix - apparently 2to3 just removes any ``from __future__ import division`` statements, so the ``del division`` was giving an error in py 3.x - this seems to work in both cases now, but you might want to glance at it to make sure there aren't unintended consequences... | |
<author>mdboom</author> | |
Does this mean if I wanted to add optional plotting functionality to `io.vo` by adding a module `astropy/io/vo/plotting.py` and my first line was `import matplotlib` that this would fail? I think we should be free to write optional modules that way, as long as they aren't imported at the package level. Maybe this should be modified to simply import package namespaces, i.e. `__init__.py` files, and not everything. | |
Also, confused about the example of running this test before compiling. That gives me: | |
``` | |
>py.test -m importtests | |
========================================== test session starts =========================================== | |
platform linux2 -- Python 2.7.2 -- pytest-2.2.4.dev2 | |
collected 0 items / 1 errors | |
================================================= ERRORS ================================================= | |
___________________________________________ ERROR collecting . ___________________________________________ | |
../../../python/lib/python2.7/site-packages/py-1.4.7-py2.7.egg/py/_path/common.py:312: in visit | |
> for x in Visitor(fil, rec, ignore, bf, sort).gen(self): | |
../../../python/lib/python2.7/site-packages/py-1.4.7-py2.7.egg/py/_path/common.py:358: in gen | |
> for p in self.gen(subdir): | |
../../../python/lib/python2.7/site-packages/py-1.4.7-py2.7.egg/py/_path/common.py:358: in gen | |
> for p in self.gen(subdir): | |
../../../python/lib/python2.7/site-packages/py-1.4.7-py2.7.egg/py/_path/common.py:348: in gen | |
> if p.check(dir=1) and (rec is None or rec(p))]) | |
../../../python/lib/python2.7/site-packages/pytest-2.2.4.dev2-py2.7.egg/_pytest/main.py:479: in _recurse | |
> ihook.pytest_collect_directory(path=path, parent=self) | |
../../../python/lib/python2.7/site-packages/pytest-2.2.4.dev2-py2.7.egg/_pytest/main.py:135: in call_matching_hooks | |
> plugins = self.config._getmatchingplugins(self.fspath) | |
../../../python/lib/python2.7/site-packages/pytest-2.2.4.dev2-py2.7.egg/_pytest/config.py:289: in _getmatchingplugins | |
> plugins += self._conftest.getconftestmodules(fspath) | |
../../../python/lib/python2.7/site-packages/pytest-2.2.4.dev2-py2.7.egg/_pytest/config.py:188: in getconftestmodules | |
> clist.append(self.importconftest(conftestpath)) | |
../../../python/lib/python2.7/site-packages/pytest-2.2.4.dev2-py2.7.egg/_pytest/config.py:217: in importconftest | |
> self._conftestpath2mod[conftestpath] = mod = conftestpath.pyimport() | |
../../../python/lib/python2.7/site-packages/py-1.4.7-py2.7.egg/py/_path/local.py:549: in pyimport | |
> raise self.ImportMismatchError(modname, modfile, self) | |
E ImportMismatchError: ('astropy.conftest', '/home/mdboom/Work/builds/astropy/astropy/conftest.py', local('/home/mdboom/Work/builds/astropy/dist/astropy-0.0.dev839/astropy/conftest.py')) | |
======================================== 1 error in 2.82 seconds ========================================= | |
``` | |
Also, shouldn't doing that with Python 3 (against non-built code) produce an inordinate number of SyntaxErrors anyway?
<author>eteq</author> | |
@mdboom - My understanding of the general plan was that even in modules, ``import matplotlib`` is to be discouraged in favor of putting this in the functions. Having said that, I see your point here, particularly because some modules (like the sphinx-based modules) *require* that there be an import to be able to make subclasses. My thought for a solution to this is to use something like what was discussed on the astropy-dev list for functions. We could allow for a special marker function that can indicate that a module has optional dependencies, allowing the import to succeed but indicating that nothing in the module can be used without triggering an ImportError. | |
In the interim, though, you can see there's a `known_import_fails` list that can be used to flag a module as "ok to fail." The main idea of the test is to catch *unexpected* import failures to guard against regression. | |
And I'm confused by your traceback - I don't see that on Ubuntu 11.10 or OS X 10.7... do you have astropy installed or running in ``develop`` mode? That might cause some oddity about two imports being present. If you just do ``py.test``, does it run like it's supposed to? Although it's not necessarily worth worrying too much about this, because I think this may end up being more useful once there's a way to select which C extensions should be installed or not - then it can still be run on the as-built version without worrying at all about working directly in-source.
<author>eteq</author> | |
@astrofrog , do you recall exactly what we decided (if anything definite) about imports - that is, did we say every module should be importable, every package, or what? | |
<author>astrofrog</author> | |
@eteq - I'm not sure if we decided anything at this level. What I do know is that we decided for sure that `import astropy` should not fail if dependencies are not installed, but if @mdboom has a `plotting.py` file in the vo module, does this mean that this module would get imported anytime ``astropy.io.vo`` gets imported? If so, that is not desirable because I might want to use ``vo`` without having matplotlib installed. Otherwise, then does this mean that ``astropy.io.vo.plotting`` can only be imported directly, and is only imported from ``astropy.io.vo`` inside functions? | |
<author>mdboom</author> | |
@astrofrog: The latter: such a plotting module would not be imported from `io/vo/__init__.py`, and would only get imported explicitly or from other functions that needed to do plotting. | |
This case is more than theoretical: to implement a new matplotlib projection, for example, code needs to inherit from matplotlib classes. Doing that away from the top level of a module is possible, but it's cumbersome. Easier to just have a module to localize the matplotlib usage and only import that module when needed. I think this pull request is based on the false assumption that any module should be importable without dependencies. I think it's safe to say that about any top-level package, but not about any module.
<author>eteq</author> | |
Ok, I definitely see your point @mdboom and am broadly fine with it... but I think it might make sense to have a policy along the lines of either "always put optional dependencies in functions unless you can't" or "optional dependencies at the module level should have nothing else at the module level."
Given that you're right that there are some cases where this test should be allowed to fail, what's the best fix? Ideally, it would catch something like the situation before #188 - noticing that this test fails might prompt someone to say "oh, I can put this in a function," when they might otherwise have left it at module level.
Basically, I see three approaches, and once we decide on which, I can update this test appropriately: | |
1. Leave this test as-is. Then any time you have a module that would fail at import, you should update the `known_import_fails` variable in this test to exclude your module, and I should add a note to this test indicating that you should only add something to it if it can't be moved to function level.
2. Change this to only fail when *packages* fail to import, or maybe only "public" packages (mostly top-level, but also e.g. `io.vo` and `io.fits`).
3. Implement some magic that lets you explicitly state you're using an optional dependency. I'm working on something like this now based on an astropy-dev discussion, so I'll refer back to this when I have it working. | |
<author>astrofrog</author> | |
@eteq how about testing only imports of sub-packages that contain an ``__init__.py`` file? This would automatically include things like ``io.vo`` and ``io.fits`` but would exclude ``plotting.py`` which is never imported into a sub-package. | |
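Something like this, maybe (a rough sketch using `pkgutil`; the helper names are made up):
```
import importlib
import pkgutil

import astropy


def iter_subpackages(package):
    """Yield the names of all sub-packages (directories with an
    __init__.py), skipping plain modules like a hypothetical plotting.py."""
    for importer, name, ispkg in pkgutil.walk_packages(
            path=package.__path__, prefix=package.__name__ + '.'):
        if ispkg:
            yield name


def test_subpackage_imports():
    for name in iter_subpackages(astropy):
        importlib.import_module(name)
```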
<author>eteq</author> | |
Closing without merging, as it is supplanted by #458 | |
<author>embray</author> | |
Removed this from the milestone since it won't be merged. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Bug with char fields in io.vo | |
<author>astrofrog</author> | |
If I run the script to create a VO table from http://astropy.readthedocs.org/en/latest/vo/intro_table.html#building-a-new-table-from-scratch, then the char field only has a single character in each cell: | |
```
<!-- Produced with astropy.io.vo version 0.0.dev617
     http://www.astropy.org/ -->
<VOTABLE version="1.2" xmlns="http://www.ivoa.net/xml/VOTable/v1.2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.ivoa.net/xml/VOTable/v1.2">
 <RESOURCE type="results">
  <TABLE>
   <FIELD ID="filename" arraysize="1" datatype="char" name="filename"/>
   <FIELD ID="matrix" arraysize="2x2" datatype="double" name="matrix"/>
   <DATA>
    <TABLEDATA>
     <TR>
      <TD>t</TD>
      <TD>1 0 0 1</TD>
     </TR>
     <TR>
      <TD>t</TD>
      <TD>0.5 0.3 0.2 0.1</TD>
     </TR>
    </TABLEDATA>
   </DATA>
  </TABLE>
 </RESOURCE>
</VOTABLE>
```
I also get the following two warnings: | |
/Volumes/Raptor/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev984-py2.7-macosx-10.7-x86_64.egg/astropy/io/vo/exceptions.py:71: W15: ?:?:?: W15: FIELD element missing required 'name' attribute | |
warn(warning) | |
/Volumes/Raptor/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev984-py2.7-macosx-10.7-x86_64.egg/astropy/io/vo/exceptions.py:71: W47: ?:?:?: W47: Missing arraysize indicates length 1 | |
warn(warning) | |
I also see the issue in the standalone vo module. | |
<author>astrofrog</author> | |
I will need to update ATpy - is the main required change for this to work to add arraysize='*' when defining character fields? | |
<author>mdboom</author> | |
Yes. It turns out the spec treats a missing arraysize as being equal to "1", so a char field without an arraysize is equivalent to the C declaration: | |
``` | |
char foo[1] | |
``` | |
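So on the writing side, the fix is basically to pass an explicit ``arraysize`` when defining char fields - a minimal sketch following the table-building example from the docs linked above (the exact import path and constructor signature may differ in this version):
```
from astropy.io.vo.tree import VOTableFile, Resource, Table, Field

votable = VOTableFile()
resource = Resource()
votable.resources.append(resource)
table = Table(votable)
resource.tables.append(table)

table.fields.extend([
    # arraysize="*" means variable-length; without it the spec implies
    # arraysize="1" and the strings get truncated to a single character.
    Field(votable, name="filename", datatype="char", arraysize="*"),
    Field(votable, name="matrix", datatype="double", arraysize="2x2"),
])
```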
<author>astrofrog</author> | |
Ok, thanks! This looks good to merge. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Advanced logger implementation based on logging module | |
<author>astrofrog</author> | |
This PR replaces #93 and #183, and implements a logging system based on the requirements in https://github.com/astropy/astropy/wiki/Logging, discussions on the astropy-dev mailing list, and discussions in #93 and #183. I think all concerns have been addressed (though there are a few open questions below). | |
The current implementation relies on the logging module, and extends the Logger class to provide additional functionality, including: | |
* optional coloring of messages | |
* keeping track of the module that called the log (this adds an `origin` keyword to `LogRecord` entries) | |
* a context manager to capture logging entries to a list | |
* a context manager to write logging entries to a file | |
* the ability to enable/disable catching of ``warnings.warn`` entries | |
* the ability to insert entries for uncaught exceptions in the log | |
* the ability to control various aspects of the logging via the Astropy configuration file(s) | |
The following examples demonstrate the functionality: https://gist.github.com/2322952 (too many examples to show here). | |
If this implementation looks better than the previous PRs, then I will close #93 and #183. | |
Remaining open questions: | |
* I need to figure out how to set the origin correctly for the uncaught exceptions when catching these - any ideas? | |
* When we use a context manager, do we want log messages *not* to appear in stdout? | |
* Do we want each sub-package in Astropy to use a different instance of the logging class, or is the current implementation fine? If the former, rather than import ``logger``, modules could import ``get_logger`` which would return a logger with the right hierarchy (using ``find_current_module``). However, this could get a bit confusing, and I'm not sure what the advantages are compared to the current implementation. Any thoughts? | |
Once we've converged on the API, I'll write up docs and tests (but as before, there is no point in writing these if the code and API are going to change significantly) | |
Let me know if you have any feedback/suggestions! | |
<author>astrofrog</author> | |
@mdboom, @eteq: I'm getting a failure with Python 3 that seems related to console.py being imported before it has been run through 2to3: astropy.config gets imported before the build, which requires the logger to be imported, which in turn requires the console utils. Is there a way for the test helpers to avoid reading in the whole configuration package? Or do I have to move the color_print import into the function where it's used? I feel the latter isn't ideal - in my opinion, the main code shouldn't have to test whether we're in build mode, etc., or worry about whether things will be importable without being 2to3'd. It should be up to the testing framework to have hacks if necessary. Any thoughts?
<author>astrofrog</author> | |
One possibility is to make console.py importable without 2to3, which we might be able to achieve using:
```
from __future__ import unicode_literals
```
and removing the ``u''`` prefixes from all strings. Are there any potential issues with this?
<author>eteq</author> | |
Actually, it's probably bad to be importing `astropy.config` at all just by invoking setup.py, as that causes it to create the ``.astropy`` directory. I've traced it to ``astropy/tests/helper.py`` using a `ConfigurationItem` to decide whether the system py.test should be used instead of the as-packaged version - without that import, config isn't used until 2to3 is run. I'm working on fixing this now.
<author>eteq</author> | |
See #197 - it should fix this. @astrofrog, you might try merging that into this on your staging branch and see if it solves the problem? | |
<author>eteq</author> | |
As for the code itself, I like it a lot! I'll answer the open questions as I see it - I have a bunch of more minor comments, but I'll address those once we've settled these. | |
1. You should be able to use the third argument of excepthook, which is a `traceback` object - that should be able to tell you the original source of the exception (there's a rough sketch of this at the end of this comment).
2. I would say no - everything should appear in the log unless someone has a reason to not put it there. Or maybe have it be an option to the capturing function that defaults to no? | |
3. I think the current system is more straightforward. Having zillions of separate loggers would quickly get very confusing, particularly for connecting to `warnings` and `excepthook` | |
One other largish-scale thing that should probably be included: a `ConfigurationItem` that can be given a file name (or perhaps a directory name, in which case dated log names can be used), which causes the log to be written to said file. I'd say it should default to `~/.astropy/log` or something like that.
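For point 1, I'm picturing something like this (just a sketch; how the logger actually consumes the ``origin`` is up to you):
```
def _exception_origin(exc_type, exc_value, tb):
    # Walk to the innermost frame (where the exception was raised) and use
    # its module name as the 'origin' for the log entry.
    frame_tb = tb
    while frame_tb.tb_next is not None:
        frame_tb = frame_tb.tb_next
    origin = frame_tb.tb_frame.f_globals.get('__name__', 'unknown')
    # Hypothetical: log.error(str(exc_value), extra={'origin': origin})
    return origin
```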
<author>astrofrog</author> | |
Thanks for reviewing this! Should the configuration for the log file be disabled by default? I don't think we really need to log by default to a file, as this obviously has a performance hit. But I could be convinced either way. We could always make use of the RotatingFileHandler for this as it will make it behave more like a system log, and have this separate from the ``capture_to_file`` which is intended for a limited context. | |
<author>astrofrog</author> | |
@eteq - #197 fixes the issue for me once I merge it into my logging-new branch | |
<author>astrofrog</author> | |
@eteq - I've figured out how to add the origin to exceptions. In the process, I discovered that if I run my test script in ipython, exceptions are *not* caught, and this is because ipython does not allow sys.excepthook to be overridden. I guess we'll have to find a solution for that in future, though it's not really top priority. | |
<author>eteq</author> | |
I was thinking a log file should be enabled by default, because we presumably aren't sending log messages *that* often, so I wouldn't have thought there'd be a noticeable performance hit. But it's probably not a big deal. It would be good to have a "standard" location like ``~/.astropy/log`` that is used if e.g. the logging option is set to True, though... and `RotatingFileHandler` is a neat scheme - these are good reasons to use `logging`, I guess :)
One other thing that occurs to me: if the log is going to a file, the entries should also have timestamps. | |
And I just sent a message to the ipython-dev mailing list about whether or not there's a workaround for IPython's `sys.excepthook` trick. | |
<author>eteq</author> | |
Oh, and did you mean to include #197 in this branch? I was saying you could just try merging them in your staging branch instead of actually in this PR. It's probably not good to both merge this as it is now and #197, in case #197 gets modified... Maybe you should remove astrofrog/astropy@7a6be5bd370e8c7d03ee16fa6ccc252e54958985 and they will just be merged when both PRs hit master? | |
<author>astrofrog</author> | |
@eteq - oops, I didn't mean to include that commit in the PR, I've removed it now. I'll implement the log file changes later today. | |
<author>astrofrog</author> | |
@eteq - I added ConfigurationItems for logging to a file (which is independent from the ``catch_to_file`` context manager). I think we settled all the main points for now (with the exception of how to deal with ipython, but I guess we might have to wait to find that out). Please feel free to make more detailed comments on the code now, and I'll do my best to address them promptly. Once you're happy with the code, I'll write the docs and tests, and we can send it out to the mailing list for a final review! | |
<author>mdboom</author> | |
This looks great. Nice and straightforward use of `stdlib.logging`. | |
As for `sys.excepthook`, I wonder if what one needs to do is delegate to the existing `sys.excepthook`. That is, store it in a local variable on setup, and replace it with a function that does our thing and then delegates to that existing `excepthook`. (Just off the top of my head -- I haven't tested the theory.)
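Roughly (untested, and the names are made up):
```
import sys

# Remember whatever hook was installed before us...
_previous_excepthook = sys.excepthook


def _astropy_excepthook(exc_type, exc_value, tb):
    # ...do our thing first (hypothetically, record the exception in the
    # Astropy log, e.g. log.error(str(exc_value)))...
    # ...then hand control back to the previously-installed hook.
    _previous_excepthook(exc_type, exc_value, tb)


sys.excepthook = _astropy_excepthook
```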
<author>astrofrog</author> | |
@mdboom - thanks for the review! I will work on addressing these this evening. Regarding ``sys.excepthook``, that is what I am already doing, unless I am misunderstanding what you are saying? It's just that for IPython, sys.excepthook doesn't even seem to be called at all. | |
<author>mdboom</author> | |
@astrofrog: Ah, I missed the `_excepthook = sys.excepthook` at the top of the file. Not quite sure why it's not working, then, unless ipython is subsequently overriding it somehow. | |
In any case, the current implementation will only override the excepthook that was around at the time of import. If it gets subsequently changed and then the user installs our excepthook later, it doesn't behave as expected. It might be simpler to install our excepthook in any case, and only have it do anything if the configuration option is set. | |
<author>astrofrog</author> | |
When I looked into it, I realized it was a known issue: http://stackoverflow.com/questions/1261668/cannot-override-sys-excepthook - apparently ipython does not use excepthook according to the SO accepted answer. Anyway, it seems to work fine in normal Python, so I think we may just need a special clause if code is being run from IPython. @eteq's message to the ipython-dev list is here: | |
http://mail.scipy.org/pipermail/ipython-dev/2012-April/008944.html | |
and it has one answer that I will try out. | |
<author>astrofrog</author> | |
I think I've addressed all suggestions, with the exception of: | |
* A fix for IPython - I tried the suggestion on the ipython-dev mailing list, but couldn't quite figure it out, so maybe we can just open an issue for this, and once one of us figures it out, we can open a new pull request? | |
* Overriding the excepthook from the start - I actually wonder whether this is what we really want. If the user subsequently changes sys.excepthook and overrides ours, ours will no longer work. Maybe what we should do is keep it the way it is, but move ``_excepthook = sys.excepthook`` to just before we override it? (@mdboom?) | |
Apart from these issues, once #179 is merged in, I think this is ready to go. | |
<author>astrofrog</author> | |
(and of course I should work on the docs and tests before this can be merged!) | |
<author>eteq</author> | |
@astrofrog - I'll try to take a look at the IPython business - if I don't get to it before this is otherwise ready, feel free to assign the issue to me. | |
I also agree that a better choice for `sys.excepthook` is to assign `_excepthook` right before you assign the new `sys.excepthook` in `set_catch_exceptions` rather than in global scope. Better yet, you could use an *instance* variable on `AstropyLogger`. In that case, it might be clearer to have two functions along the lines of `catch_exceptions` and `restore_exceptions` that take no arguments (in place of the boolean `catch` option you have now). That makes it slightly clearer that that's the moment in which you're capturing the current `sys.excepthook` and replacing it. | |
Otherwise, this is looking pretty good to me... although I admit I'm getting a bit lost in the iterations, so I'll look forward to your docs with some examples to clarify exactly what this settles on :) | |
<author>astrofrog</author> | |
I agree that it makes more sense to have two methods each to enable/disable logging of warnings and exceptions - though as @mdboom pointed out, ``catch`` is not necessarily a good name, so how about ``enable_warnings_logging``, ``disable_warnings_logging``, ``enable_exception_logging``, and ``disable_exception_logging``? Do you have other suggestions?
<author>eteq</author> | |
I'd say the enable/disable names you've given here make sense. Just make sure to note somewhere in the docstring that when enable is called, it stores the excepthook at that time, and that disable replaces it with the one that was in place *before*. You probably want the `disable_*`s to throw an exception if someone calls them and sys.excepthook is not the one we put there, though - we don't want to end up overwriting an excepthook that someone put in *after* ours. (Similarly, the `enable_*`s should raise an exception if you call them when our excepthook is already in place.)
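In other words, something along these lines (just a sketch; the method names match your suggestion but everything else is illustrative):
```
import sys


class AstropyLoggerSketch(object):
    """Illustrative only -- not the real AstropyLogger."""

    def _excepthook(self, exc_type, exc_value, tb):
        pass  # hypothetically, record the exception in the log here

    def enable_exception_logging(self):
        if sys.excepthook == self._excepthook:
            raise RuntimeError('exception logging is already enabled')
        self._stored_excepthook = sys.excepthook
        sys.excepthook = self._excepthook

    def disable_exception_logging(self):
        if sys.excepthook != self._excepthook:
            raise RuntimeError('sys.excepthook was replaced by someone '
                               'else; refusing to overwrite it')
        sys.excepthook = self._stored_excepthook
```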
I've done a little digging on the IPython mechanism, and I'm pretty sure we can make it work, but I should wait until you've made these enable/disable changes before fiddling with it, because I'll have to make changes inside those functions. | |
<author>astrofrog</author> | |
@eteq, @mdboom - I've implemented the changes to store the ``excepthook`` and ``showwarning`` functions only when we enable the logging. I also started implementing tests, and I realized that since we are dealing with a single instance of a class, I needed to be able to reset it to defaults, so I've moved all the initial configuration into a method ``set_default``, which is called before every test. In the tests themselves, I also have to make sure that if a test fails, ``excepthook`` and ``showwarning`` are reset to their defaults before the next test. Anyway, this was a lot of pain, but I think it's working now. I also added some preliminary (user) documentation. Remaining issues (for which you might have some insight?):
* ``test_exception_logging`` can't work in its present form, because pytest catches the exception, so the exception is not logged because only uncaught exceptions are logged. Maybe we'll just have to give up trying to test it? | |
* I need to figure out how to get the correct origin for ``warnings.warn`` (which is why ``test_warnings_logging`` is failing) | |
* I need to find a better way to determine the calling module, because sometimes it's one level up, sometimes two, and it doesn't work in interactive mode. So I was thinking of just having a function that calls ``find_module_name`` with increasing depth until the module is not the current one (``logging_helper``). Not ideal, but it should work, right? It would allow me to just override makeRecord, rather than all the individual methods.
* As before, we need to fix ipython, but it sounds like @eteq can do this after we've merged this PR. | |
Could you take a minute to look over the current version and let me know if you see anything major before I continue implementing tests and docs, and trying to fix the above issues? | |
<author>astrofrog</author> | |
I figured out how to get the origin for ``warnings.warn`` calls (which also involves using find_module_name). At the moment I haven't overridden ``makeRecord``, but instead each individual method, because the depth to pass to ``find_module_name`` depends on how the logger is being called - and this also changes for ``warnings.warn`` calls... Anyway, code-wise it would be *much* simpler to have:
```
def makeRecord(self, name, level, pathname, lineno, msg, args, exc_info,
               func=None, extra=None):
    if extra is None:
        extra = {}
    if 'origin' not in extra:
        origin = 'unknown'
        for i in [2, 3, 4, 5]:
            module = find_current_module(i)
            if module.__name__ not in [CURRENT_MODULE, 'logging']:
                origin = module.__name__
                break
        extra['origin'] = origin
    return Logger.makeRecord(self, name, level, pathname, lineno, msg, args,
                             exc_info, func, extra)
```
Note that this involves a small loop for every call to the logger. Is this acceptable? Note, however, that ``makeRecord`` only gets called when a message satisfies the threshold, so by default this is just for warnings and exceptions. So it's actually better than the current code, which figures out the origin for all messages, even debug. Can I go ahead and put this in, even though it's not the most elegant code?
<author>astrofrog</author> | |
Oh, and in the above code: | |
```
CURRENT_MODULE = find_current_module(1).__name__
```
<author>eteq</author> | |
FYI, ffcc912ace61879ba135d68ab76bcfff077245f5 fixes a problem that was preventing me from building the docs on this branch.
<author>eteq</author> | |
You can do what you had in mind in the third bullet point by using the ``finddiff`` argument of `find_current_module` - if you do ``find_current_module(1, True)``, it will return the first module in the call stack that is not the same as the one it is called from.
I'm not sure I fully understand your solution for `warnings.warn`... but can that also make use of ``finddiff``? I could also modify `find_current_module` to accept a list of module names that it shouldn't stop on (as an alternative ``finddiff`` syntax) - would that solve the problem here? | |
I'm also not sure why this has to be called every time - isn't it only for warnings? In that case, can't you restrict it to when the level is `WARNING`? | |
<author>astrofrog</author> | |
``find_current_module`` gets called for every log call, e.g. ``log.info``, ``log.warn``, ``log.debug`` (but only for ones that are equal to or above the logging level), and also ``warnings.warn`` and exceptions if requested. | |
It would indeed be useful to be able to provide a list of modules not to stop on. When one calls ``log.warn``, ``warn`` is inside the ``logging`` module, which is why I'm excluding that in addition to the current module. | |
<author>eteq</author> | |
One possible solution to the excepthook testing dilemma: you could install your own excepthook in the test that just stores the exception in a list or something (instead of using pytest.raises or letting it kill the interpreter as is the default), then raise an exception, check that the logger did the right thing, and then re-apply the original excepthook (in the finally clause of a try/finally block).
<author>astrofrog</author> | |
@eteq - I tried exactly that, but it looks like py.test does something that means that doesn't work. Try this out with py.test and you'll see what I mean: | |
```
import sys

def test():
    sys.excepthook = lambda x, y, z: None
    raise Exception("Yo")
```
I guess py.test is being a pain like ipython and doing something weird that I don't quite understand. | |
<author>astrofrog</author> | |
Actually, they aren't really doing anything weird, but they probably both wrap code in ``try...except`` and report the error with their own machinery, so ``excepthook`` never gets called because it's only invoked for uncaught exceptions. I'm not sure if there is a way around this.
<author>astrofrog</author> | |
I think I figured it out. I just need to use a ``try...except`` myself inside the test. | |
<author>astrofrog</author> | |
Scrap that, if I use ``try...except`` then excepthook never gets called! I was trying: | |
```
raised = False
try:
    logger.enable_exception_logging()
    with logger.log_to_list() as log_list:
        raise Exception("This is an Exception")
except Exception as e:
    raised = True
```
but of course that doesn't work because the error never makes it to the log... | |
<author>eteq</author> | |
Hmm... well, this isn't an ideal solution because it doesn't directly test `sys.excepthook`, but you could do something like | |
``` | |
try: | |
raise Exception('info') | |
except: | |
log._excepthook(*sys.exc_info()) | |
``` | |
Because `sys.exc_info()` returns the same thing that gets passed into `excepthook`. Then at least you know the `_excepthook` method itself is working. | |
<author>eteq</author> | |
And for `find_current_module`, I'll make this modification (hopefully within a few hours), on https://github.com/eteq/astropy/tree/find_current_module-list and you can either just merge it into this branch, or I can issue a separate PR. | |
As for whether or not we should be concerned about this... I think `find_current_module` runs pretty fast as long as you only need to go a few steps back in the call stack, so it's probably fine... and the log itself is probably about as slow anyway. I would say we should implement it now, and if it turns out it's performance-limiting down the road, we can optimize then. | |
<author>eteq</author> | |
https://github.com/eteq/astropy/tree/find_current_module-list now has the updated `find_current_module` that will do what you want - feel free to pull those two commits in here, or I can do a separate PR. | |
<author>astrofrog</author> | |
@eteq - you were very close with the ``try...except`` suggestion - even better is to do: | |
```
except:
    sys.excepthook(*sys.exc_info())
```
because then it actually tests whether ``sys.excepthook`` was correctly overridden! | |
<author>astrofrog</author> | |
@eteq and @mdboom - I believe I've addressed all your comments, but let me know if I forgot anything? | |
@eteq - this includes your commits to fix ``find_current_module``. | |
I believe this is now ready for a final review! | |
(I think the docs could probably be arranged a bit better, but we can just figure out the best way to do this as part of #162.) | |
<author>astrofrog</author> | |
@iguananaut - since you reviewed the previous logging pull requests, do you have any comments on this one? | |
<author>embray</author> | |
Whoa. Somehow I never even saw this until now. | |
<author>mdboom</author> | |
Looks good. I still see "catch_warnings" and "catch_exceptions" as config keys, though. You had written: | |
"I've changed catch_warnings and catch_exceptions to log_warnings and log_exceptions" | |
in a comment. The default log_file_format value is also something that may be difficult to automatically parse (since the fields aren't quoted). Maybe just the wrong git commits in here? | |
<author>astrofrog</author> | |
@mdboom - good catch, I had changed the variable names, but not the configuration item names (fixed now) | |
<author>astrofrog</author> | |
@mdboom - I'm open as to what log file format to use, do you have any suggestions? | |
<author>mdboom</author> | |
Maybe we should also change | |
``` | |
LOG_FILE_FORMAT = ConfigurationItem('log_file_format', "%(asctime)s, " | |
"%(origin)s, %(levelname)s, %(message)s", | |
"Format for log file entries") | |
``` | |
to | |
``` | |
LOG_FILE_FORMAT = ConfigurationItem('log_file_format', "%(asctime)s, " | |
"%(origin)s, %(levelname)r, %(message)r", | |
"Format for log file entries") | |
``` | |
so that the log files will be valid CSV (i.e. embedded commas in the message won't screw things up). | |
<author>mdboom</author> | |
@astrofrog: I don't have strong feelings about the default log format to use -- it is overridable after all -- but the default should be easy-to-use for simple situations. I actually want to amend my above comment to this: | |
``` | |
LOG_FILE_FORMAT = ConfigurationItem('log_file_format', "%(asctime)r, " | |
"%(origin)r, %(levelname)r, %(message)r", | |
"Format for log file entries") | |
``` | |
This makes the logs both valid CSV and also makes something simple like: | |
``` | |
for line in fd.readlines(): | |
    parts = eval(line)
``` | |
work. | |
<author>embray</author> | |
Looks great! Thanks for all the fine work you did on this. | |
I'm not sure either about the ipython issue, but I haven't looked into it much yet. There's probably a way around it. | |
<author>eteq</author> | |
Is it intentional that most of the public methods on `AstropyLogger` don't have docstrings? | |
<author>eteq</author> | |
And aside from the docstrings and my comment about the exception/warning defaults, looks good to me! | |
@iguananaut - I think I figured out how to make this work with IPython, but I'm waiting for this to finalize before issuing a new PR. | |
<author>astrofrog</author> | |
@eteq @iguananaut - I added docstrings - let me know if you have any comments! | |
I've addressed all the recent points, except turning on the logging of warnings and exceptions by default (as discussed above). Of course, it shouldn't really matter which default users choose, but the problem is that the testing relies on messing with the hooks and resetting them to the system values, and there is no way to get hold of the system value when the configuration items are set to ``True`` by default...
<author>astrofrog</author> | |
Thinking about this more, given that the hook overriding is generally fiddly and prone to issues with testing, I'm actually going to suggest we leave it off by default for now (since it doesn't even work properly with ipython, etc.). Even if it did work, I don't really like the idea that merely importing astropy causes ``sys.excepthook`` and ``warnings.showwarning`` to be overridden (we have no idea whether that would conflict with other packages), in the same way that you were not comfortable with the AstropyLogger being the default logging class. Any thoughts?
<author>eteq</author> | |
Aside from the one docstring and deciding on the hook overriding, I'm happy with this. Great work! | |
Now, about defaults for the hook overriding: I still think (if we can solve the problem you're having with it overriding the system defaults) they should be on by default. If we were doing anything strange or arcane in these hooks, I would agree they shouldn't be on by default, but the only thing we actually do is pass things into the logging system, so that should always be safe, at least in theory, right? This is the sort of thing I doubt many (if any) users will change, and turning it off defeats the point of having a unified log that everyone can count on to show all their warnings and errors in order (which, at least to me, was one of the major advantages).
We could just leave them off by default and have a separate pull request that just makes them default to on, and ask the list what they think? | |
<author>astrofrog</author> | |
@eteq - just to see if there is an obvious solution I'm missing, could you try adding: | |
```
# Reset hooks
log.disable_warnings_logging()
log.disable_exception_logging()
```
after | |
```
# Set up the logger
log._set_defaults()
```
in setup_function, and see if you can get the single failing test to pass? You'll see that the issue is that even though the logging is supposed to be disabled, it calls our excepthook, not the original one. | |
<author>astrofrog</author> | |
Ah, never mind, I think I have a solution! (it looks like ``sys.__excepthook__`` is always the *original* one, so I can use that at the top of the tests) | |
<author>astrofrog</author> | |
All right, I added the docstring, and turned on the warnings and exception logging by default, and all tests pass. I check if the logging is enabled before disabling it, so the tests pass regardless of whether the logging is enabled or disabled. | |
I'm testing on Jenkins on my staging branch - otherwise, is this ready to merge? | |
<author>astrofrog</author> | |
For safety, I also figured out how to get the 'true' showwarning function and use that in the tests, so I think this is all pretty robust now. | |
<author>astrofrog</author> | |
I'm getting failures with other Python versions, so I'm looking into it. | |
<author>astrofrog</author> | |
The tests are now passing on all Python/Numpy versions on Mac and Linux (ignore the failures on the Astropy Jenkins for MacOS X, the build publishing is having issues). | |
<author>astrofrog</author> | |
@taldcroft - I just realized you may have missed this PR, so just thought I'd ping you in case you wanted to comment on it before we merge? | |
<author>taldcroft</author> | |
I spent a little time looking through this module and seeing how it would look for developers and users. Overall it looks good and I don't have any problems. | |
The only small question I have is whether there should be some configurability of the StreamHandler output. Basically it seems to be fixed to `levelname: message`. If there is some module or tool that needs to provide a lot of outputs then all the `INFO: ` prefixes could get tiresome. There was a comment about using `print` for functions that are explicitly printing something for the user, so that might be an option but then you lose the file logging. I would not say this is a blocker for merging, but maybe something to think about. | |
<author>astrofrog</author> | |
@taldcroft - thanks for your comments! I agree it would be nice to be able to customize the standard output in future (I will do this in a future PR). Note that INFO messages will not be shown by default - the default will be to show only warnings and errors. | |
<author>astrofrog</author> | |
@eteq - is this ready to merge? | |
<author>eteq</author> | |
Your comment to @taldcroft prompted me to look at this - just so I understand, there's a separate level for the file and the console, right? (`LOG_LEVEL` and `LOG_FILE_LEVEL`?) I think `LOG_FILE_LEVEL` should default to at least 'INFO' (maybe even 'DEBUG'), because it's not much of a problem to have the log file hold extra information, and that way it's at least recorded somewhere.
It may even be wise to default the `LOG_LEVEL` to 'INFO'... I guess that actually makes sense to me, because I would have thought the 'INFO' category was for informational messages that the user would want to see... my general philosophy would be to default to more messages and let the user silence them when desired (although others may disagree). | |
Aside from straightening this out, I'm completely happy with this and I'll let you have the merging honor given how much work you've put into it! | |
<author>astrofrog</author> | |
@eteq - I agree that it makes sense to have INFO messages by default. I guess one thing I am worried about is the potential size of the log file over time (especially with INFO), so I think we are going to have to use a rotating file handler instead of a plain file handler (see http://docs.python.org/library/logging.handlers.html#rotatingfilehandler). That probably makes sense anyway, as after a year of usage even just warnings could add up to a lot. If I do that, any suggestions for what the parameters should be (max size and max number of backups)?
<author>astrofrog</author> | |
Regarding my previous comment, how about a maximum size of 1Mb, and 5 backups? | |
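i.e., something along these lines (the file path here is just an example, and ``%(origin)s`` is the extra field this PR adds to each record):
```
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler('/home/user/.astropy/log',
                              maxBytes=1024 * 1024,  # 1 Mb per file
                              backupCount=5)         # keep 5 old files
handler.setFormatter(logging.Formatter(
    '%(asctime)s, %(origin)s, %(levelname)s, %(message)s'))
```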
<author>taldcroft</author> | |
Do you know how the rotating handler behaves for multiple instances potentially accessing the same rotating file set? One can easily imagine having many astropy sessions or applications going at once. | |
Somehow these days 1 Mb seems very small. I wouldn't care about any logging output that was less than 100 Mb, though I guess some users on managed systems might find this making a dent in their quota (by default, which could of course be changed).
<author>astrofrog</author> | |
@taldcroft - good point, I'm not sure how it works when multiple instances are running. This looks interesting: | |
http://pypi.python.org/pypi/ConcurrentLogHandler/ | |
(and suggests the default handler does not work well with multiple processes). Maybe I should leave it the way it is for now, and we can look into this in more detail after this PR is merged. | |
<author>astrofrog</author> | |
I changed the default levels to INFO. I'll merge this tomorrow morning, in case there are any last-minute thoughts about this. I think I will leave the following to a future PR: | |
* Figure out a clean way to have multiple processes write to a rotating log file | |
* Allow the standard output format to be customized | |
* Fix excepthook overriding to work in IPython | |
<author>eteq</author> | |
This looks good to me - once you merge I'll start work on item 3 - hopefully it should take very little time. | |
<author>astrofrog</author> | |
I've opened new issues in #213, #214, and #215 to keep track of the three to-dos discussed above | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Issues with vo legacy shim | |
<author>astrofrog</author> | |
There are issues when using the vo legacy shim: | |
```
In [7]: from vo.table import parse
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/Users/tom/<ipython-input-7-a619543623bc> in <module>()
----> 1 from vo.table import parse

/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev993-py2.7-macosx-10.6-x86_64.egg/astropy/io/vo/table.py in <module>()
     13 # LOCAL
     14 from . import exceptions
---> 15 from . import tree
     16 from . import util
     17 from ...utils.xml import iterparser

/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev993-py2.7-macosx-10.6-x86_64.egg/astropy/io/vo/tree.py in <module>()
     18
     19 # LOCAL
---> 20 from .. import fits
     21 from ... import __version__ as astropy_version
     22 from ...utils.collections import HomogeneousList

ValueError: Attempted relative import beyond toplevel package
```
In addition, it looks like the vo shim gets installed/added three times: | |
------------------------------------------------------------ | |
The legacy package 'vo' was found. | |
To install astropy's compatibility layer instead, uninstall | |
'vo' and then reinstall astropy. | |
------------------------------------------------------------ | |
------------------------------------------------------------ | |
The legacy package 'vo' was found. | |
To install astropy's compatibility layer instead, uninstall | |
'vo' and then reinstall astropy. | |
------------------------------------------------------------ | |
------------------------------------------------------------ | |
The legacy package 'vo' was found. | |
To install astropy's compatibility layer instead, uninstall | |
'vo' and then reinstall astropy. | |
------------------------------------------------------------ | |
<author>mdboom</author> | |
The second part of the issue is resolved by the attached pull request. It seems that if you import different modules using `imp.load_module` with the same name, they end up sharing the same namespace across calls.
The first part is trickier. I was compelled to change the relative imports that go outside of the `io.vo` tree to absolute imports, i.e.: | |
``` | |
from .. import fits | |
``` | |
to | |
``` | |
from astropy.io import fits | |
``` | |
but that seems to break `python setup.py test` in the common case. Will spend some more time looking into this.
<author>astrofrog</author> | |
These fixes work for me for the multiple install and the logic issues. | |
<author>embray</author> | |
I haven't tested this yet, but is the issue with relative imports in compatibility packages still unresolved? | |
It kind of seems strange to me, like it shouldn't do that. But I'll have to look into it more. But if there's no other way, maybe this is a good excuse to call for reneging on the relative imports requirement? :) | |
<author>mdboom</author> | |
Here's the issue:
It's easy enough to get at things that are imported by `__init__.py` -- the shim just needs to do: | |
``` | |
from astropy.io.vo import * | |
``` | |
However, in order to import modules of a package from outside that package, eg. | |
``` | |
from vo.table import parse | |
``` | |
the shims do | |
``` | |
import pkgutil | |
__path__ = pkgutil.extend_path(__path__, "astropy.io.vo") | |
``` | |
This sets up the shim "as if" it were at `astropy.io.vo`, but in that situation it doesn't allow relative imports that go above that level (in this case `astropy.io.vo`).
I tried changing all of these "peer" imports to absolute ones, but that breaks `python setup.py test` (I think just because it gets confused and tries to load `astropy` from the source tree rather than the built tree).
There's probably a way out of this mess, just haven't found it yet (and also haven't devoted much time yet). | |
<author>astrofrog</author> | |
As I was fiddling around to try and understand this better, I noticed that the following works: | |
```
import sys
from astropy.io.vo import table, tree
from astropy.io import vo

sys.modules['vo.tree'] = tree
sys.modules['vo.table'] = table
sys.modules['vo'] = vo
del table, tree, vo

from vo.table import parse
from vo.tree import Table, Field
import vo.table as t
import vo

parse('aj285677t3_votable.xml', pedantic=False)
```
I checked, and ``vo.__path__`` points to ``astropy-0.0.dev991-py2.7-macosx-10.6-x86_64.egg/astropy/io/vo`` as expected... Am I missing something? | |
<author>mdboom</author> | |
@astrofrog: The problem with such an approach is that it requires every module that *could* be imported to be imported in advance. This will not be ideal for subsidiary functionality (e.g. plotting) with extra requirements etc. Whatever approach we take needs to be "lazy". | |
I believe I have a solution now based on PEP 302 finders and loaders that does the mapping from `vo.table` to `astropy.io.vo.table` on the fly. It seems to work well in the cases I've tested (`from vo.table import parse`, `from vo import table`, `import vo`, `from vo import *`). Can you confirm it works for you as well? | |
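The core of the idea looks roughly like this (a stripped-down sketch, not the actual implementation):
```
import importlib
import sys


class ShimImporter(object):
    """PEP 302 finder/loader that maps 'vo.*' onto 'astropy.io.vo.*'."""

    def __init__(self, old_name, new_name):
        self._old = old_name
        self._new = new_name

    def find_module(self, fullname, path=None):
        if fullname == self._old or fullname.startswith(self._old + '.'):
            return self
        return None

    def load_module(self, fullname):
        if fullname in sys.modules:
            return sys.modules[fullname]
        real_name = self._new + fullname[len(self._old):]
        module = importlib.import_module(real_name)
        # Register under the legacy name so that e.g.
        # 'from vo.table import parse' resolves lazily.
        sys.modules[fullname] = module
        return module


sys.meta_path.append(ShimImporter('vo', 'astropy.io.vo'))
```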
There is a theoretical downside in that: | |
``` | |
>>> from vo import table | |
>>> table.__package__ | |
'astropy.io.vo' | |
``` | |
which is surprising, but I don't know of any code that relies on `__package__`. | |
<author>embray</author> | |
Finally getting back to this. For some reason GitHub hasn't been working for me on my work computer for several days. IT seems to have fixed it, though I still need to find out what the problem was... | |
Anyways, this solution looks great. As for the `__package__` issue, my understanding is that it has to be what it is in order for relative imports to work correctly... So yes, surprising. But you're right that it's unlikely for there to be any client code that uses it. | |
<author>eteq</author> | |
Just so we're clear, @mdboom, is this ready to merge once @astrofrog confirms that it fixes the original issue? It appears to do the trick for me, but it would be good to confirm with @astrofrog, as well. | |
Also, I just noticed that if I use ``python setup.py develop`` instead of ``python setup.py install``, the legacy shims don't get installed (or at least ``from vo.table import parse`` does not work). Is that expected behavior? | |
<author>mdboom</author> | |
Yes, I think this is ready to merge. | |
I'm not sure how to support `python setup.py develop`. `develop` seems to be based on the assumption that the source hierarchy mirrors the installed hierarchy, which makes doing this sort of thing difficult. We could detect `develop` and dump the files at the root of the source tree, but I'm really not a fan of packages that do that -- source and build products should be kept separate as much as possible.
<author>astrofrog</author> | |
It seems to work for me! | |
<author>embray</author> | |
Yeah, I really don't think they need to be supported for develop. | |
<author>eteq</author> | |
Yeah, I'm fine with not supporting `develop` mode in this sense - ``develop`` is a convenience for certain things, but it shouldn't be the canonical case. Just wanted to make sure.
</issue> | |
<issue> | |
<author>astrofrog</author> | |
When to install compatibility packages | |
<author>astrofrog</author> | |
I'm confused by the Astropy behavior with compatibility packages. At the moment, if I have the genuine pyfits installed and no astropy installation, astropy installs the legacy shim. If the legacy shim is already installed, it doesn't install it. This seems to contradict the warning message:
------------------------------------------------------------ | |
The legacy package 'vo' was found. | |
To install astropy's compatibility layer instead, uninstall | |
'vo' and then reinstall astropy. | |
------------------------------------------------------------ | |
which suggests to me that the correct behavior is that if the *real* PyFITS is installed ('legacy package'), then the shim should *not* be installed, and that if people want the shim, they should uninstall the real package. However, if the installed `pyfits` is the legacy shim, then it should be re-installed in case it has been updated. Is that not the intended behavior? If so, the following code has the wrong logic: | |
```
found_legacy_module = False
try:
    location = imp.find_module(old_package)
except ImportError:
    pass
else:
    # We want ImportError to raise here, because that means it was
    # found, but something else went wrong.
    # We could import the module here to determine if its "real"
    # or just a legacy alias.  However, importing the legacy alias
    # may cause importing of code within the astropy source tree,
    # which may require 2to3 to have been run.  It's safer to just
    # open the file and search for a string.
    filename = os.path.join(location[1], '__init__.py')
    if os.path.exists(filename):
        with open(filename, 'U') as fd:
            if '_is_astropy_legacy_alias' in fd.read():
                found_legacy_module = True

shim_dir = os.path.join(get_legacy_alias_dir(), old_package)

if found_legacy_module:
    warn('-' * 60)
    warn("The legacy package '{0}' was found.".format(old_package))
    warn("To install astropy's compatibility layer instead, uninstall")
    warn("'{0}' and then reinstall astropy.".format(old_package))
    warn('-' * 60)

    if os.path.isdir(shim_dir):
        shutil.rmtree(shim_dir)

    return (None, None)
```
I think it should be: | |
```
if '_is_astropy_legacy_alias' not in fd.read():
```
<author>astrofrog</author> | |
Changing this line does indeed make the installation behavior work as I would expect - I have PyFITS, vo, and pywcs already installed, so it does not install the legacy shims. | |
<author>mdboom</author> | |
This logic got screwed up in some recent changes. See #193 for a fix. | |
<author>astrofrog</author> | |
This is fixed now that #193 is merged - thanks! | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Interoperability between astropy.wcs and legacy pyfits | |
<author>astrofrog</author> | |
The following example demonstrates something that does not work: | |
```
In [1]: import pyfits

In [2]: h = pyfits.getheader('.../2MASS_k.fits.gz')

In [3]: from astropy import wcs

In [4]: w = wcs.WCS(h)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/Users/tom/<ipython-input-4-dcd57af92df8> in <module>()
----> 1 w = wcs.WCS(h)

/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev989-py2.7-macosx-10.6-x86_64.egg/astropy/wcs/wcs.pyc in __init__(self, header, fobj, key, minerr, relax, naxis, keysel, colsel, fix)
    250         else:
    251             raise TypeError(
--> 252                 "header must be a string or an astropy.io.fits.Header "
    253                 "object")
    254         try:

TypeError: header must be a string or an astropy.io.fits.Header object
```
but it might be nice if it could work, as some people may have the real PyFITS package installed but be using the PyWCS legacy shim. This is not a major issue, but it may confuse users.
<author>mdboom</author> | |
I agree, it would be nice to make this work, but I think #193 should be resolved before tackling this. | |
<author>astrofrog</author> | |
Duplicate of #345. | |
</issue> | |
<issue> | |
<author>eteq</author> | |
Tests create .astropy directory | |
<author>eteq</author> | |
Because #187 makes it straightforward to run tests without any install, it's problematic that the tests create the astropy configuration directory (``~/.astropy`` by default). This is probably fixable with some clever py.test tricks to fool the tests into using a temporary directory as the configuration directory. | |
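For instance, something along these lines in a conftest.py might do it (the environment-variable names are an assumption about how the directory lookup works, not something I've checked):
```
import os
import tempfile


def pytest_configure(config):
    # Point the config/cache directory lookup at a throwaway location for
    # the duration of the test run, instead of ~/.astropy.
    tmpdir = tempfile.mkdtemp()
    os.environ['XDG_CONFIG_HOME'] = tmpdir
    os.environ['XDG_CACHE_HOME'] = tmpdir
```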
<author>embray</author> | |
I'm probably mistaken, but I thought this issue came up a little while back, and that you had implemented exactly this--using a temp dir for the tests instead of mucking with ~/.astropy. | |
<author>eteq</author> | |
I can see why you're thinking that - I had to fix some tests that actually save configuration *files* - the problem here is that the code actually creates the config/cache *directories* (but not the files in them) - it's actually in the code that goes and looks for the directories. | |
<author>embray</author> | |
I see what you're saying now. | |
<author>astrofrog</author> | |
Since #283 has been merged, can this be closed? | |
<author>eteq</author> | |
Yep, forgot to use the magic "closes" phrase in one of the commits... | |
</issue> | |
<issue> | |
<author>eteq</author> | |
fixes test helper to not import astropy.config if invoked from setup | |
<author>eteq</author> | |
Currently, running setup.py causes `astropy.config` to be imported, because it imports ``astropy/tests/helpers.py``, which uses `ConfigurationItem`. This is problematic for a few reasons - it means that anything in astropy.config has to be 2to3-independent, and it causes the ``~/.astropy`` directory to be created just by running ``setup.py``, which is neither intended nor expected behavior. This PR fixes these by simply not using the `ConfigurationItem` trick if we are running inside ``setup.py``.
The configuration item can still be updated using the environment variable ASTROPY_USE_SYSTEM_PYTEST - once #48 is fixed, this should be updated to reflect whatever envar naming scheme is used there. | |
@mdboom, the solution here for knowing we are in ``setup.py`` is inspired by the way you dealt with ``is_in_build_mode``. I realize this seems redundant, given that `set_build_mode` is called in ``setup.py``, but the problem here is that ``setup_helpers.py`` *itself* imports the ``tests/helpers.py``. What do you think of this? | |
<author>mdboom</author> | |
This seems fine. We could reduce some of the redundancy by using the same key for this as we currently are... | |
i.e. replace `_in_setup` with `_build_mode` and remove the call to `set_build_mode` in `setup.py` (and probably remove the `set_build_mode` function from `setup_helpers.py`. | |
But that's all just finesse to remove duplication -- I think in other ways this patch is fine as-is. | |
<author>mdboom</author> | |
@iguananaut: Those seem like good improvements over what's there now. | |
<author>eteq</author> | |
@iguananaut and @mdboom - I also like @iguananaut's idea, and have implemented it (except I used single-underscores instead of double-underscores... I have a vague memory of a Guido post about how double-underscore surrounded variables should be avoided for variables that aren't actually part of python itself... although it really isn't that terribly important, I suppose). | |
Two things about this, though: | |
* @iguananaut, I tried your suggestion for ``astropy/__init__.py`` and found I had to use builtins as a dictionary instead of attribute access inside the except clause... this is very magical/mysterious behavior to me - do you understand why that is? (Although it does work as you suggested in this form, I think.)
* As it stands now, when an affiliated package runs its setup.py, it will still set the `_ASTROPY_SETUP_` variable - is this what we want, or do we want `_ASTROPY_SETUP_` to be different from the "build mode" that might be applicable to each affiliated package? My feeling is that this is just fine the way it is, as we can always change it later if some affiliated package really cares about this distinction... but I thought I'd see if anyone else cares.
<author>embray</author> | |
Ah, right, I didn't think of that. From the Python docs (`http://docs.python.org/library/__builtin__.html` <-- bad GitHub parsing the underscores in that URL!): | |
> CPython implementation detail: Most modules have the name __builtins__ (note the 's') made available as part of their globals. The value of __builtins__ is normally either this module or the value of this modules’s __dict__ attribute. Since this is an implementation detail, it may not be used by alternate implementations of Python. | |
It's an implementation detail, for example, that at the command-line the `__builtins__` in the `__main__` module directly references the `__builtin__` module (or `builtins` in Python 3). | |
I think a more correct approach would be to `import __builtin__` (or for Python 3 `import builtins as __builtin__`) and use that, rather than rely on the `__builtins__` global. Also, in the places where you have something like `__builtins__.get('_ASTROPY_SETUP_')` you can just use the `_ASTROPY_SETUP_` directly. That's the advantage of putting it in the builtins--it makes it available in the global namespace of every module. | |
Looks good otherwise :) | |
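In other words, roughly (a sketch of the pattern, not the exact code in this PR):
```
try:
    import __builtin__ as builtins   # Python 2
except ImportError:
    import builtins                  # Python 3

# In setup.py, before anything imports the astropy package itself:
builtins._ASTROPY_SETUP_ = True

# Any module can then check the flag directly, without importing anything,
# because it lives in the builtin namespace:
#     if not _ASTROPY_SETUP_:
#         ...
```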
<author>eteq</author> | |
Thanks for the tips @iguananaut, this should be safer against future changes (and more py3-happy). | |
Unless there are objections I'll merge this shortly. If we run into a bizarre use case where someone really needs their package's "build mode" to be separate from the astropy core build mode, we can revisit then.
<author>embray</author> | |
Works for me. | |
<author>astrofrog</author> | |
Looks good to me! | |
</issue> | |
<issue> | |
<author>taldcroft</author> | |
io.ascii outputs recarray | |
<author>taldcroft</author> | |
@eteq wrote "I was trying out astropy.io.ascii.read today, | |
and I noticed that it's returning a `numpy.core.records.recarray` | |
rather than an `astropy.table.Table`. Is that intended behavior? It | |
seems like after all the trouble to implement the table, it makes | |
sense to have the reader functions use that class... Or is it | |
backwards-compatibility with asciitable thing?" | |
This documents an issue with a known fix. Will be fixed in the next set of io.ascii commits. | |
<author>astrofrog</author> | |
@taldcroft - do you plan to merge in the latest asciitable for the 0.1 release? If so, could you update the docs to the new format at the same time? (The new docs for astropy.table are great.)
<author>taldcroft</author> | |
@astrofrog - yes, I have been planning to address this issue and update the docs to the new format. I guess today is the nominal code freeze but I need a few more days. Things got very busy at work in the last couple weeks... | |
<author>astrofrog</author> | |
@taldcroft - no worries, early next week would be fine. Thanks! | |
<author>taldcroft</author> | |
Closed by Pull Request #260. | |
</issue> | |
<issue> | |
<author>embray</author> | |
io.fits should return table data as Astropy Tables | |
<author>embray</author> | |
This issue here as a reminder that this work needs to be done at some point (there's also a ticket for this in PyFITS' tracker). Currently PyFITS/astropy.io.fits has its own custom array class for representing FITS table data. Since FITS tables do have a few "special" features it will probably still be necessary to have a special class specifically for FITS tables, though for the sake of consistency it should at least be based on the Astropy Table class (which is a lot cleaner and more usable than what PyFITS currently has anyways). | |
This won't be doable for the first release--it's a big change and will be very difficult, if not impossible, to implement in a backwards compatible manner. For the first release it's probably better that astropy.io.fits (not to mention the pyfits legacy shim) work as a drop-in replacement for PyFITS. | |
This work will also have to be backported to PyFITS, so I'll want to tie it to an eventual PyFITS release. | |
<author>embray</author> | |
I really need to start working on this soon. I am sick to death of people having problems with manipulating tables in pyfits. It's not their fault--given a programmable interface to a table, they expect it to just work when, for example, they try to add or delete a column, or modify its format. But pyfits just never did this quite right. Or at least there aren't enough tests to show that the functionality has been well maintained, if it ever worked at all. Using a version of the Astropy tables will finally make table manipulation just work (albeit with some inefficiencies for certain operations, but better than nothing).
<author>astrofrog</author> | |
I don't think this will be done by 0.3.0 | |
<author>astrofrog</author> | |
This won't happen for 1.0 (and is not critical) so removing milestone. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Issues with format strings in ConfigurationItem | |
<author>astrofrog</author> | |
While implementing @mdboom's suggestion to put the log file format in a ConfigurationItem object in #192, I ran into issues, because it seems that configobj tries to interpret the format statements. If I do:
```
from astropy.config import ConfigurationItem

LOG_FILE_FORMAT = ConfigurationItem('log_file_format', "%(asctime)s, "
                                    "%(origin)s, %(levelname)s, %(message)s",
                                    "Format for log file entries")
```
I get the following exception: | |
```
Traceback (most recent call last):
  File "test_config.py", line 1, in <module>
    from astropy.config import ConfigurationItem
  File "/Users/tom/dropbox/Code/development/Astropy/astropy_logging/astropy/__init__.py", line 22, in <module>
    from .tests.helper import TestRunner
  File "/Users/tom/dropbox/Code/development/Astropy/astropy_logging/astropy/tests/__init__.py", line 8, in <module>
    from . import helper
  File "/Users/tom/dropbox/Code/development/Astropy/astropy_logging/astropy/tests/helper.py", line 17, in <module>
    from ..config import ConfigurationItem
  File "/Users/tom/dropbox/Code/development/Astropy/astropy_logging/astropy/config/__init__.py", line 13, in <module>
    from .logging_helper import logger
  File "/Users/tom/dropbox/Code/development/Astropy/astropy_logging/astropy/config/logging_helper.py", line 51, in <module>
    "Format for log file entries")
  File "/Users/tom/dropbox/Code/development/Astropy/astropy_logging/astropy/config/configs.py", line 145, in __init__
    self()
  File "/Users/tom/dropbox/Code/development/Astropy/astropy_logging/astropy/config/configs.py", line 276, in __call__
    val = sec[self.name]
  File "/Users/tom/dropbox/Code/development/Astropy/astropy_logging/astropy/extern/configobj_py2/configobj.py", line 570, in __getitem__
    return self._interpolate(key, val)
  File "/Users/tom/dropbox/Code/development/Astropy/astropy_logging/astropy/extern/configobj_py2/configobj.py", line 562, in _interpolate
    return engine.interpolate(key, value)
  File "/Users/tom/dropbox/Code/development/Astropy/astropy_logging/astropy/extern/configobj_py2/configobj.py", line 365, in interpolate
    value = recursive_interpolate(key, value, self.section, {})
  File "/Users/tom/dropbox/Code/development/Astropy/astropy_logging/astropy/extern/configobj_py2/configobj.py", line 343, in recursive_interpolate
    k, v, s = self._parse_match(match)
  File "/Users/tom/dropbox/Code/development/Astropy/astropy_logging/astropy/extern/configobj_py2/configobj.py", line 430, in _parse_match
    value, section = self._fetch(key)
  File "/Users/tom/dropbox/Code/development/Astropy/astropy_logging/astropy/extern/configobj_py2/configobj.py", line 399, in _fetch
    raise MissingInterpolationOption(key)
astropy.extern.configobj_py2.configobj.MissingInterpolationOption: missing option "asctime" in interpolation.
```
Do you have any ideas for a workaround? I tried using ``\`` to escape the ``%`` and/or the ``(``, but it doesn't seem to help. | |
<author>astrofrog</author> | |
By the way, I also tried to use ``%%``, which doesn't work either | |
<author>eteq</author> | |
I *think* the best solution to this is to just turn off string interpolation in configobj (I don't think we'll need it anyway, and certainly don't right now...). Can you try https://github.com/eteq/astropy/tree/config-deactivate-interpolation and see if that fixes it for you? | |
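For reference, what I mean is roughly passing ``interpolation=False`` wherever we construct the ``ConfigObj`` - a minimal sketch (the file name and section/key names here are made up for illustration):
```
from astropy.extern.configobj_py2 import configobj

# with interpolation disabled, a value like "%(asctime)s, %(origin)s, ..."
# is returned verbatim instead of triggering MissingInterpolationOption
cfg = configobj.ConfigObj('astropy.cfg', interpolation=False)
log_format = cfg['logging_helper']['log_file_format']
```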
<author>astrofrog</author> | |
Works for me! I agree that I don't see the need for interpolation. Once you've merged this into master, I'll update my PR with the latest changes.
<author>astrofrog</author> | |
Maybe we can just merge this, and then if we decide in future that we want interpolation, we can reconsider? | |
<author>eteq</author> | |
Done. As you say, anyone who actually needs the string interpolation in the future can fix this then :) | |
</issue> | |
<issue> | |
<author>nhmc</author> | |
Cosmology package for possible inclusion in astropy | |
<author>nhmc</author> | |
Here is the first iteration of a cosmology package for astropy, as discussed on the astropy-dev list: | |
https://groups.google.com/forum/?fromgroups#!topic/astropy-dev/lyfxR9gJVe0 | |
There is a mechanism for setting and getting a default cosmology, and printing a warning if this default is requested without having been explicitly set. The convenience functions in cosmology.py show how other functions in astropy could both accept a Cosmology instance as a keyword argument, and access the default cosmology if that keyword is not set. | |
Some things it would be good to have feedback on: | |
- fast spline interpolations for the integration methods of the pre-defined cosmologies aren't implemented yet. I'm not sure of the best way to add these. | |
- the methods involving integrations only work on a scalar redshift value at the moment. They could be changed to also accept a list or array of redshifts. This is a trade-off between making the code more complicated and making the methods more convenient to use. I think I'm in favour of accepting redshift arrays (a rough sketch of what I mean is below, after this list).
- There are a lot of other variables (e.g. sigma8, spectral index) associated with the cosmologies in parameters.py that are not saved when a cosmology instance is created, since they are not currently used. However, it might be nice to have a standard way to link them to a Cosmology instance. One possibility is to simply make the parameter dictionary an attribute of the instance. Maybe @roban could comment on the best way to do this to make integration with CosmoloPy easy? | |
- I haven't yet included code to read a default cosmology from a configuration file, as I wasn't sure of the right way to do this. | |
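A rough sketch of the array handling I have in mind (the ``_integral_one`` helper is made up, just standing in for the per-redshift quadrature call):
```
import numpy as np

def comoving_distance(self, z):
    # accept either a scalar redshift or a sequence of redshifts
    scalar_input = np.isscalar(z)
    z = np.atleast_1d(np.asarray(z, dtype=float))
    out = np.array([self._integral_one(zi) for zi in z])
    return out[0] if scalar_input else out
```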
Thanks! | |
<author>astrofrog</author> | |
To read a default cosmology from the configuration file, use something like: | |
```
from ..config import ConfigurationItem

DEFAULT_COSMOLOGY = ConfigurationItem('default_cosmology', 'WMAP7',
                                      'The default cosmology to use')
```
Then use the value as: | |
DEFAULT_COSMOLOGY() | |
The second value in ``ConfigurationItem`` is the default cosmology. Once we've made some changes in ``setup.py``, the configuration file(s) will be automatically written out with that default. | |
<author>nhmc</author> | |
I've added the configuration file and made Om, Ol and H0 non-keyword arguments. I haven't written a decorator for wrapping functions to enable selecting a default cosmology; it's not clear to me how this would be done.
<author>eteq</author> | |
I'm a bit confused about your implementation of the "default" cosmology here. How do I use the `ConfigurationItem` to set the default cosmology? It looks to me that if I set it as anything other than 'no_default', it sets `_default` to whatever string I give it, and then `_default` is a string, not a `Cosmology`. | |
Also, while this may not be immediately apparent, a `ConfigurationItem` can change at runtime - that's why you call it instead of it just being a value. So the fact that you do ``_default = DEFAULT_COSMOLOGY()`` at the module level is a bit confusing. Or are you doing this on purpose - that is, the configuration item is just for the one at startup, and any time it changes later is to be ignored? If that's the case, I recommend changing the function names to `get_current_cosmology` and `set_current_cosmology` or similar, because then there's a distinction between "default" (the one you get on startup) and "current" (the one that's either the default or the one you set programmatically later).
<author>nhmc</author> | |
@eteq Thanks for the comments. I'll fix up the docstrings as you suggest, rename the module, tweak the imports and move the scipy imports. | |
> I'm a bit confused about your implementation of the "default" cosmology here. How do I use the ConfigurationItem to set the default cosmology? It looks to me that if I set it as anything other than | |
'no_default', it sets _default to whatever string I give it, and then _default is a string, not a Cosmology. | |
This is a bug - thanks for finding it. It should convert the string in the configuration item to a cosmology in the same way as is used by set_default(). | |
> Also, while this may not be immediately apparent, a ConfigurationItem can change at runtime - that's why you call it instead of it just being a value. So the fact that you do _default = DEFAULT_COSMOLOGY() at the module level is a bit confusing. Or are you doing this on purpose - that is, the configuration item is just for the one at startup, and any time it changes later is to be ignored? If that's the case, I recommend changing the function names to get_current_cosmology and set_current_cosmology or similar, because then there's a distinction between "default" (the one you get on startup) and "current" (the one that's either the default or the one you set programmatically later).
I am more comfortable with only reading from the configuration file when cosmology is first imported, but I don't understand the use cases for changing the configuration values at run time, so maybe this isn't the best way to do it. Could you give an example situation when you expect to change the configuration values at run time? | |
<author>eteq</author> | |
Regarding the runtime changing... Here's a concrete scenario: suppose I want to know how some result I've just attained depends on using either WMAP5 or WMAP7 parameters, but it involves using some functions that use the `get_default` function. At runtime, then, I want to be able to do something like this: | |
``` | |
set_default('WMAP7') | |
result_wmap7 = my_science_code() | |
set_default('WMAP5') | |
result_wmap5 = my_science_code() | |
print result_wmap7 - result_wmap5 | |
``` | |
Where `my_science_code` has, say, a call to ``distance_modulus(z)`` embedded in it somewhere. In that case, I'm changing the cosmology that `distance_modulus` uses, which is the "current" cosmology, but not the "default" one (at least for WMAP5). | |
So there are two ways of doing this: | |
1. Have `DEFAULT_COSMOLOGY` actually mean default *at import* - in this case, changing the value at runtime has no effect unless you save it to the configuration file (and even then you'll only see the change if you reload this package, or re-start the python interpreter). In this case, the only way to change the "current" cosmology is via `get/set_default` (which I would advocate should be changed to `get/set_current`, or at least something other than `default`.) | |
2. Have `DEFAULT_COSMOLOGY` be queried at *every* call to `get_default`, and if it changed since the last time, change the default to reflect the new cosmology. This renders the `set_default` function unnecessary, but then requires some way of adding custom user Cosmology sub-classes to the list that `get_default` accesses. | |
My impression from the on-list discussion is that option 1 was what most people favored. | |
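To make option 1 concrete, here's the kind of thing I'm picturing (a sketch only - the helper `_cosmology_from_name` and the module-level variable are placeholders, not a proposal for the actual API):
```
_current = None  # the module-level "current" cosmology

def _cosmology_from_name(name):
    # placeholder lookup of a predefined cosmology instance by name
    return {'WMAP5': WMAP5, 'WMAP7': WMAP7}[name]

def get_current():
    global _current
    if _current is None:
        # read the configured default the first time it's needed;
        # changing the config item afterwards has no further effect
        _current = _cosmology_from_name(DEFAULT_COSMOLOGY())
    return _current

def set_current(cosmo):
    global _current
    _current = cosmo
```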
<author>nhmc</author> | |
Thanks for the examples - I think I also prefer your option 1. The latest commit makes the following changes: | |
* Rename `cosmology.py` to `core.py`. | |
* Makes the docstrings conform to the astropy standard. | |
* Fix the bug pointed out by @eteq in setting a cosmology from a configuration value, and renames the `get_` and `set_default` functions to `get_` and `set_current`. A default cosmology is read from the astropy configuration option `cosmology`, but only when astropy.cosmology.core has been imported - this corresponds to the implementation in @eteq's point 1. There is a warning in the `set_current` docstring that changing the configuration values at runtime won't change the current cosmology, this should only be done using `set_current`. | |
* All methods and functions now accept arrays of redshifts. | |
* The scipy.integrate import has been moved from the top of the module into each function that requires it. | |
There are also a few more tests along with some minor changes. Assuming this all looks ok, then documentation will eventually have to be added for this module under astropy/docs. Some optional things that could be done at a later time: | |
* Add fast spline interpolation for functions requiring integration for pre-defined cosmologies | |
* Refactor `Cosmology` to inherit from a more general cosmology class that could be used to represent cosmologies that aren't isotropic and homogeneous. | |
<author>nhmc</author> | |
And just to be clear - I think this is ready for a final code review. It would be great to get this included in the 0.1 release if possible. | |
<author>astrofrog</author> | |
Apart from the two minor comments, this looks good to me. I'm fine with you putting the docs in a separate pull request as long as you do it before June 1st. But it would be even better if you could include at least some basic documentation already in this pull request. Note that there is an open pull request regarding the layout of the front page of the docs (#218), so you should probably try and respect that layout (even though that pull request hasn't been merged in yet). | |
By the way, I got a few PEP8 errors you might want to fix: | |
core.py:19:10: E222 multiple spaces after operator | |
core.py:26:6: E221 multiple spaces before operator | |
core.py:43:1: E302 expected 2 blank lines, found 1 | |
core.py:104:43: E225 missing whitespace around operator | |
core.py:665:1: W293 blank line contains whitespace | |
core.py:690:13: E231 missing whitespace after ',' | |
I can't vouch for the correctness of the algorithms in this pull request, since I don't really do cosmology, so hopefully @eteq will be able to confirm that everything is fine. | |
<author>nhmc</author> | |
@astrofrog: Thanks for the comments. I'll add some documentation to this request once @eteq has had a chance to look at the latest version. | |
I fixed most of the pep8 issues, but would prefer to leave things like `dm2*np.sqrt(1. + Ok*dm1**2 / dh_2)` because I think it's easier to read than with spaces around every operator. If anyone feels strongly about this I'm happy to make it pep8 compliant though. | |
<author>eteq</author> | |
I promise to look at this within a day or two - it's been near the top of my todo list for the last few days, but I keep having to put out fires in other places... | |
<author>eteq</author> | |
At a high level I like this a lot, and will be happy to see it merged once my comments here are addressed. There are obviously a bunch of inline comments I left above, but I had two other comments that are slightly broader in scope:
* Right now, it looks to me like you're using 'no_default' to signal WMAP7, but with a warning when `get_cosmology` is first called. I think the approach most people on the list favored was that the warning should be printed *even if* a cosmology is given as a default in the configuration file. I realize this might be a bit annoying to see the warning constantly printed, though, so perhaps the configuration option should have a piece where if it just says 'WMAP7', the warning gets shown, but 'WMAP7_nowarning' uses WMAP7, but silences the warning. | |
* It would be useful to have inverse versions of some/all of the methods and convenience functions. E.g. `lookback_time_to_z`, `distmod_to_z` and similar. This could of course be implemented in a later PR, but I thought I'd put it out there now as something we'll eventually want...
<author>nhmc</author> | |
Thanks for all the comments, I've made the changes @eteq suggests except for those mentioned below. | |
> Right now, it looks to me like you're using 'no_default' to signal WMAP7, but with a warning when get_cosmology is first called. I think the approach most people on the list favored was that the warning should be printed even if a cosmology is given as a default in the configuration file. | |
I don't think this is necessary. The reason for a warning is a gentle reminder that if someone is doing an operation that depends on the cosmology, they should think about what cosmology they're using. | |
If the user has specified the cosmology in the configuration file, then they've made that decision and don't need to be reminded of it every time they import the package. Only if the package has to pick its own default value (`set_current()` hasn't been called and the `default_cosmology` option in the configuration file is set to 'no_default') should the user be warned. I think this is also the only situation in which most people on the list thought a warning is needed.
The warning should also be made only once, not every time get_current() is called. Imagine someone writes a script that calls cosmology functions 20 times, and sends it to someone who hasn't set a cosmology in their configuration file. You don't want it printing 20 messages all saying it's using WMAP7 when they run it.
<author>nhmc</author> | |
The last commit addresses @eteq's comments (apart from the issue with printing the warning described immediately above). I've taken out the documentation in `__init__.py` as suggested, I'll include it in the documentation for this package in another pull request. | |
<author>eteq</author> | |
This looks stupendous to me - I'll want to take one final look at it when I'm a bit more awake, but overall looks great to me. | |
Regarding the warning/default business: After testing I realized something that will let us have our cake and eat it too: if I just make the change I suggested above (not set ``_current = WMAP7``), the warning still only appears once, because the `warnings` module knows to silence repeated identical warnings. But that's behavior that can be easily switched off with a `warnings` filter. So then the folks that want to be sure they know can fiddle with the warning filter to be warned however they want, but we still get the default behavior of only one warning. Sound good? | |
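Here's the behaviour I mean in plain `warnings` terms (the function here is just for illustration, not the actual cosmology code):
```
import warnings

def get_current():
    # the warning is issued from the same line with the same message each
    # time, so the default filter only shows it once per session
    warnings.warn("No default cosmology set; using WMAP7 parameters")

get_current()  # warning is shown
get_current()  # silent - duplicate warnings are suppressed by default

warnings.simplefilter('always')
get_current()  # shown again; users who want every warning can opt in like this
```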
One other item: it looks like overall you've got great test coverage, but with one thing missing: there seems to be no `angular_diameter_distance_z1z2` test. It's fine if you don't want to worry about getting that for now, but if you have a test case or two, you might want to fill it in to get pretty much full coverage. | |
<author>astrofrog</author> | |
I pushed this to my staging branch (after rebasing on master) and noticed that some of the tests fail due to the absence of scipy. I've opened an issue relating to testing routines with optional dependencies (#250) | |
Since it'd be good to get this merged in before June 1st, maybe we can mark these as xfail for now and then update things according to what we decide in #250? | |
<author>nhmc</author> | |
I'll add some tests for `angular_diameter_distance_z1z2` and make the `get_current` change @eteq suggested. I don't have ssh access today, but I'll try to submit a final version later tonight. | |
I think it's ok to mark the tests that need scipy as a known fail for now. To clarify why we need scipy for the `quad` numerical integration: in principle we could include a numpy-based integration function to use if scipy isn't available, but it's tricky to write one that handles integration to infinity correctly (this is required by the `age` method).
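For example, with scipy the infinite limit is handled directly (toy integrand, just to illustrate the point):
```
import numpy as np
from scipy import integrate

# quad copes with an infinite upper limit out of the box; a pure-numpy
# replacement would need a change of variables or an arbitrary cutoff
value, abserr = integrate.quad(lambda z: np.exp(-z), 0., np.inf)
# value is ~1.0
```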
Should I rebase this branch against master for this pull request, or does that require opening a new PR? | |
<author>astrofrog</author> | |
If you want to rebase against master, you can do it then force push to this branch, and the PR will update. Make sure you make a backup of the repo first before rebasing, after all this work ;-) | |
<author>astrofrog</author> | |
And regarding the tests, let's mark them as ``xfail`` for now, and I'll update them in #250. | |
<author>eteq</author> | |
I agree we should just skip these and get the optdeps stuff worked out later, but I recommend against using `xfail` for this - it's better to use `skipif`, because then they will succeed if scipy is actually present rather than "xpassing" - xfail is for tests that generally won't pass, not those that *conditionally* succeed. You might do it like so (in the tests):
``` | |
import pytest

try:
    import scipy
    HAS_SCIPY = True
except ImportError:
    HAS_SCIPY = False


@pytest.mark.skipif('not HAS_SCIPY')
def test_that_uses_scipy():
    ...
``` | |
See http://pytest.org/latest/skipping.html#marking-a-test-function-to-be-skipped if you want more info on this. | |
Edit: fixed typo in code - I left out the type of exception to catch. | |
<author>astrofrog</author> | |
+1 to @eteq's suggestion - once you've added this (and optionally addressed @eteq's point about #240), let's merge! | |
<author>eteq</author> | |
+1 on ready to merge once the scipy test business is fixed. Great work! | |
<author>nhmc</author> | |
Ok, I think this is ready to be merged. I rebased the branch and changed the code to use `isiterable` from `utils.misc` and to skip most of the tests if scipy isn't present. `get_current` also no longer sets the `_current` global variable to WMAP7 if it has not already been set. As @eteq says, this still results in only a single warning being printed per session since that's the default behaviour of the warnings module. Annoyingly, a warning is still printed after every call to `get_current` in IPython, but this seems to be an IPython-specific feature (or bug?), since the normal Python interpreter only prints the warning a single time per session.
I will open another pull request with Sphinx documentation for the package later this week. | |
<author>astrofrog</author> | |
Thanks! I've just pushed this to my staging branch to ensure that all tests pass. I'll report back! | |
<author>astrofrog</author> | |
@nhmc - I think one of the tests is missing the skip decorator, as one of the tests is still failing on Jenkins: | |
https://jenkins.shiningpanda.com/astropy/job/astropy-staging-astrofrog-debian-multiconfig/NUMPY_VER=1.4.1,PLATFORM=debian6,PYTHON_VER=2.7/lastCompletedBuild/testReport/astropy.cosmology.tests/test_cosmology/test_distmod/ | |
<author>nhmc</author> | |
There were two missing decorators, with the latest change it should be ok (fingers crossed!). | |
<author>astrofrog</author> | |
@eteq - once I've confirmed that all the tests pass or skip, shall I go ahead and merge? There's a couple of commits that include then remove some unresolved conflicts (37274a74f9d25771e98a9d97fb3680bca3f4f422), but not really enough to warrant rebasing to squash those commits, right? | |
<author>astrofrog</author> | |
All tests now pass or skip on MacOS X and Linux, so we are good to merge! | |
@eteq - feel free to merge (and decide whether any commits need to be squashed first) | |
<author>astrofrog</author> | |
I'm going to go ahead and merge this, and squash a couple of commits together for a cleaner history. | |
<author>astrofrog</author> | |
I've merged this PR, and set the merge commit message to what it would have been if the PR had been merged through GitHub, so for all purposes, the history will look like it was a normal PR. I squashed three commits together to get rid of the unresolved merging. | |
<author>astrofrog</author> | |
@nhmc - thanks for this important contribution! :-) | |
<author>nhmc</author> | |
No worries, thanks for all your and @eteq's help! | |
<author>eteq</author> | |
I second that - thanks, @nhmc! | |
</issue> | |
<issue> | |
<author>wkerzendorf</author> | |
Implementation of errors and masks | |
<author>wkerzendorf</author> | |
To implement arithmetic operations, convolution and interpolation for masks and errors, @andycasey and I implemented ndarray subclasses for them. If nothing is specified when instantiating an nddata object, it will automatically convert ndarrays to the appropriate mask and error types.
<author>wkerzendorf</author> | |
@eteq I think errors and masks (and I would focus on the errors first, as it is more obvious there) are not specific to spectrum1d. They are a base-level NDData problem which is the same for 1d spectra, images, cubes, and n-dimensional simulations. There are a couple of ways to incorporate them in nddata (that I thought about) and this one seems to be the most straightforward. I think implementing them in Spectrum1d would inadvertently lead to a lot of code duplication.
<author>astrofrog</author> | |
We should try and make progress with this. First, I think we need to separate the issue of masks and errors, as they are quite different, so I propose splitting the pull request.
In the case of errors, I have several issues: | |
* I have major reservations about the current implementation. In particular, the division and multiplication are only approximations, which break down once errors are significant (e.g. 50%), and I think the results should on the contrary be as accurate as possible. | |
* I think we should have already several different classes in the initial pull request, to see how things are going to work. For instance, we could have SDError, VarianceError, PDFError, and some kind of log-normal error. | |
* Related to the previous point, what happens when you add two datasets that have errors with different classes? | |
* Do we really want to inherit from ndarray? What are the benefits, as opposed to storing the array in a hidden attribute? For example, do all the methods on arrays, e.g. ``sum``, ``mean``, etc. even make sense here. | |
I guess what I'm trying to say is that we're going to have to put a lot of thought into how to deal with the errors properly. I suggest that we create a wiki page with a list of requirements, code examples, how we want things to behave, then work on the implementation. | |
For masks, I think that we also want to think about this more, in particular, I kind of agree with @eteq that the arithmetic cannot be done at the level of the mask objects, because they don't know what the user wants (for errors, things are well formulated mathematically, but not for masks). | |
<author>wkerzendorf</author> | |
@astrofrog I agree that this is one of the most important things and we should put some effort into this. | |
I do not believe masks and errors are such different issues, as both of these need to handle arithmetic operations in conjunction with nddata or nddata-based (spectrum1d, ccdimage, ...) objects. The idea is that the arithmetic is then passed through to functions of error and mask that know what they are doing.
Anyways I'll address the points in order of appearance. | |
1. do you mean numerically unstable? How would you make them stable? | |
2. More is always better ;-) But I think the current SDError pretty well shows what happens. In case of addition, error_add is called and does whatever you want. | |
3. That should not work. Same with mismatching units and mask types. We came to that conclusion in Spectrum1D for units and I think that should hold here as well. I think it is prudent to have some convenience functions that transform errors from one to the other. So if you have datasets with mismatching errors, for example:
```
nddata1; nddata2  # our two nddata sets, one with VarianceError and one with SDError
nddata1 + nddata2  # raises an exception
nddata1.error = VarianceError2SDError(nddata1.error)  # could also be a method; I don't care
nddata1 + nddata2  # all is peachy
```
4. I have a slight preference for ndarray, as many functions that we use (like broadcast, and methods like shape and ndim) automatically work without us having to implement them. But if people have good arguments for not doing that, then we can certainly inherit from object. I think it is important that we have a common superclass so that we can do isinstance checking.
I agree we need to put some thought into this, but I think one good thing is to have a pull request that shows one implementation (here's one I prepared earlier ;-) ). I have thought about this problem in a bit of detail as I was trying to implement it for Spectrum1D. I thought of another approach using function pointers hidden in the meta data, which seemed inferior.
Last but not least: masks. Of course they know what the user wants: If the user wants a BoolMask then it will behave like masked arrays in numpy. Ignoring values in sums and so on. If the user specifies other masks then it will do what is written in these masks. Masks are the same for Spectrum1D, Images, IFUs, simulation data ,.... the same as errors. | |
<author>astrofrog</author> | |
A couple of quick notes (need to think about the rest more) | |
1. Ignore me - I was thinking of error propagation in general, but after double checking, it does seem that even the equation for division is exact. It's when you get to things like power, etc. that things are no longer exact. | |
2. I think we should have at least two to show how interactions happen (or don't). I think it's also important to show two different types to figure out what is going to be common and can go in a base ``Error`` class. | |
<author>eteq</author> | |
First, regarding splitting: I agree with @astrofrog - they aren't the same thing, if nothing else to be evidenced by the fact that very different topics are coming up here. However, I also agree with @wkerzendorf that we should treat them the same in the sense of how they interact with `NDData` (e.g. if we say the error objects always get added if the image objects are being added, then we should treat the mask objects the same). But we can split the pull request while still agreeing these should be consistent (so that the specific details of each don't drag down the other). | |
1. (no longer relevant, but github "cleverly" makes a list starting with 2 start at 1...) | |
2. This is tied into 3 - just the SD one to start is fine if they don't interact (as @wkerzendorf is suggesting), but if they do interact then we should have at least one other as an example of how they should do it. | |
3. Sorry to keep causing trouble :) But I think this is *not* the same as the situation in `Spectrum1D` - that was about what to do if the wavelength values don't match, because there's lots of ways to combine/interpolate mis-matched spectra. In this case, for errors (but not for masks, as I am pointing out above), there are completely deterministic ways of combining them. E.g. if you add two values where one has S.D. errors and the other has variance errors, it's pretty obvious how they are supposed to be combined. I agree that in many more complicated cases we want to throw an exception (if you try to combine a Laplace-distributed error with a lognormal error, let's say), but when the way to combine is unambiguous, we want it to just work.
4. I agree we do *not* want to use `ndarray`, because it implies the errors are always arrays, which may not be true. What if I want an `NDData` object where the error is just the Poisson distribution centered around the mean value (the value in `data`)? It should be possible to do this without having to keep a second array in sync with `data`, and you can't do that if you subclass from `ndarray` (at least not at all easily). I agree with @wkerzendorf that a base class with a few basic functions/properties is a very good idea, though (e.g., `var`, which gives the second moment of the distribution).
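Something like this toy class is what I have in mind (not a real proposal - the class and attribute names are invented):
```
import numpy as np

class PoissonError(object):
    """Toy uncertainty that derives everything from the parent data."""
    def __init__(self, nddata):
        self._nddata = nddata

    @property
    def var(self):
        # for Poisson statistics the variance equals the mean, i.e. the
        # data values themselves, so no second array has to stay in sync
        return np.asarray(self._nddata.data)
```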
(see my example above for a case where we don't know how the user wants to combine masks without making them decide. People use mask flags for all kinds of wacky things!) | |
<author>wkerzendorf</author> | |
@eteq regarding the arithmetic of nddata objects with different errors and masks: I believe for now it is easiest to throw an exception if they do not match. It is very similar to `Spectrum1D`'s behaviour: e.g. Spectrum1D throws a tantrum if the units are not the same. Both units are frequency or wavelength units, so one can convert them into each other (like sd and var). The question remains: what units should the end product have, the ones from the first or the second operand?
I'm not saying that's impossible to do, but it is ambiguous. | |
Anyways, the whole point is that this implementation allows for such magic to happen in the background. Var-errors can be converted to sd errors on the fly if that is what we want to do. For now, I would throw an exception and worry about making it automagical later. | |
(4) ...screw you github, I can make my own numbering scheme and there's nothing you can do about it ;-). That is fine. However, this reconfirms my assumption that error_add and mask_add should have access to the data, masks and errors of both the operand and the self nddata.
I think in general for the first release we want bool masks and sd errors to lay a foundation that others can build upon. | |
<author>astrofrog</author> | |
Just a couple of comments: | |
- I do think that we should *not* sub-class from ndarray, because we actually don't expect the dimensions of the errors to match the dimensions of the data. For example, as @eteq pointed out, errors could be stored as scalar values (constant value over the image), we might want to be able to have them be stored as functions, or we might want to define a full PDF for each pixel, or a single PDF over the image, etc. You mentioned that we would have to re-implement some array methods, but most methods would *need* to be reimplemented to make sense. For example, what does ``sum`` mean on an SDError? Is it the sum of the values? Or should they be added in quadrature like standard deviations? Therefore I believe that at least errors should not inherit from ndarray. Note that masks are different, as the point of a mask is to have a 1-to-1 correspondence with the data, so in that case one could argue that subclassing from ndarray makes sense.
- I do think we want to show at least two types of errors, maybe just SDError and VarianceError, to show the mechanism that would check for compatibility with arithmetic operations. I agree we can implement most of the magic later, but we need to ensure that the API is magic-friendly to start with, and the best way is to show a single example (and SDError and VarianceError should be particularly simple to implement). | |
<author>eteq</author> | |
@wkerzendorf - as I said above, I've come around to your point regarding masks, but I'm not quite following what you are saying about errors... I agree that the point is to have the magic happen in the background, *when it's unambiguous* - | |
(4) You raise a good point that there is a need for the masks to have access to the data, though... this is why I was saying before that it might be better to implement this at the e.g. `Spectrum1D` or `Image` level, as then there's no need to cross-connect the data and the errors/mask. But I can also see the virtue of this design... maybe the thing to do is use `weakref`s to access the data so they don't prevent the data from being garbage-collected?
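Roughly something like this (class and attribute names invented just for the example):
```
import weakref

class BoolMask(object):
    def __init__(self, array, parent=None):
        self.array = array
        # keep only a weak reference to the parent NDData, so the mask
        # never prevents the data from being garbage-collected
        self._parent_ref = weakref.ref(parent) if parent is not None else None

    @property
    def parent(self):
        return None if self._parent_ref is None else self._parent_ref()
```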
But I still agree with @astrofrog that *if* we are going to allow error-conversion, there should be at least two to make it clear how the conversion is supposed to work.
I also still think it would be better if you split this into two pull requests, especially if you are going to split it into a base class for each (mask and errors), which I think you should. Then it will be much easier to review the PRs in detail without having tons of confusing threads in the diffs. | |
<author>wkerzendorf</author> | |
@eteq's comments: | |
First, it's never unambiguous. Imagine a + b = c (where a, b are nddata objects). a has error type x, b has error type y. x can be converted to y and vice versa. The ambiguity (which will never go away unless x=y) is: what error type does c have, x or y?
Second: sorry, you misunderstood me. I meant the error_add (and friends....) function needs to have access to all the data and not the error object. This way we get around the problematic garbage collection (but good point, which we should think about in this interconnected hell). I believe that the masks and errors do exactly the same thing no matter what kind of object it is (a spectrum, an ifu, a bird, a plane, ...) so we need to solve this here at ground zero.
Third: Agreed, we need to check if in this framework we can make the conversion-magic happen. This is the last thing on my list (next ones are two pull requests and subclass from object (probably)). | |
Regardless of the pull requests, when thinking about masks and errors there are many similarities that we should keep in mind (they are both tag-alongs of nddata, they need to be taken care of in the right way when doing arithmetic, ...).
Finally, I think that when we talk about masks, they often seem to be more like flags (@eteq mentioned saturated, defect, ...), so maybe we should call them flags. What do people think? In the end we'll probably inherit the new NA value from numpy 2.0 anyways (which might make boolmasks obsolete - but only boolmasks).
<author>eteq</author> | |
@wkerzendorf | |
First: If you're doing ``c = a + b``, it's the `__add__` method of `a` that gets called, so the error object type to take is the one from `a` *if it's compatible with b*. The point I'm making is that it's safe to automatically convert as long as no information is lost. If e.g. `a` has `SDError` and `b` has `VarError`, `c.error.var` should still give the same number whether or not `c.error` is `SDError` or `VarError`. If the mapping ever requires a user decision, then it should give up and throw an exception. | |
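To spell out what I mean by "no information lost", here's a sketch (these class details are illustrative, not the proposed API):
```
import numpy as np

class SDError(object):
    def __init__(self, array):
        self.array = np.asarray(array)

    @property
    def var(self):
        return self.array ** 2

    def __add__(self, other):
        # independent errors add in quadrature; only `other.var` is needed,
        # so a variance-style error exposing the same `.var` property can be
        # combined without any user decision or loss of information
        return SDError(np.sqrt(self.var + other.var))
```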
second: I don't understand what you're saying here, still. `Error.__add__` always has the error information, because it's a method of the error object. Isn't that your whole reason for implementing this at the error/mask level instead of in `nddata` subclass operators? So that e.g. `NDData.__add__` can just do `newdata = self.data + other.data`, `newerror = self.error + other.error` and so on? | |
And you are absolutely right that there are similarities we should keep in mind - I promise we will be looking at both PRs if we split them :) But I think when we have the API ironed out and are getting into the detailed code-review stage, it will be far less confusing if there are separate, smaller PRs. | |
As for masks/flags, this is a hopeless ambiguity - I've heard both terms used to mean the other one many times. I think mask makes slightly more sense because "mask" is generally more commonly used for pixel-like data (usually there's some agreed definition of a "bad" pixel), and "flags" is more common in catalog settings. But it's really just semantics at this point. But you *can't* count on using the numpy 2.0 NA value for a long time yet - as you say, we may want to internally use that eventually *if* numpy 2.x is installed, but we have to continue supporting older versions of numpy (and probably will for many years, based on how slowly astronomers can be to upgrade). | |
<author>wkerzendorf</author> | |
It seems we agree at least on the basics. Which is a huge leap forward in astropy progress (doesn't happen too often ;-) ).
@eteq | |
1. What error type does c have, SDError or VarError? You say the one from a, but why not the one from b? It's ambiguous and just a choice that we make at implementation time. Anyways, we'll worry about that later.
2. I'll implement this and you'll see what I mean. | |
3. I'm looking forward to your critical eyes perusing both my pull requests (that sounds wrong ;-) ). | |
4. Well, you're thinking too much along the lines of images and spectra, but it can be any array-based stuff. Anyways, there's a s**t storm coming our way when talking about arithmetic with boolmasks (what to do: ignore the masked values, add them up but flag them as bad in the final product, etc.). But for now I'm just pretending that everything is fine and will try to live in blissful ignorance.
<author>eteq</author> | |
Good to see progress is being made! | |
Should we close this PR in favor of #223 and #224 ? | |
<author>wkerzendorf</author> | |
I have left this open to see if there were a few things missing; I guess no one complained that there are fundamental differences. So I'll close this one.
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Search page shows 'Searching' multiple times | |
<author>astrofrog</author> | |
The search results page in the Astropy documentation shows the word 'Searching' three times while searching (twice once the search is done). Here is an example: | |
http://astropy.readthedocs.org/en/latest/search.html?q=FITS&check_keywords=yes&area=default | |
<author>astrofrog</author> | |
It looks like RTD have their own (better) search, so I'm going to switch to using that. | |
</issue> | |
<issue> | |
<author>yannick1974</author> | |
Should fitscheck be renamed | |
<author>yannick1974</author> | |
Dear developers, | |
Both astropy and python-pyfits provide a 'fitscheck' program, which could lead to a conflict when both are installed. If the two programs do the same job, we can keep the same name and it's up to the distribution packagers to decide which one to use.
But if astropy's fitscheck is significantly different, maybe it's a good idea to use a different name so that both versions can co-exist (and because packagers could choose different name to avoid the conflict in their distributions). | |
Yannick | |
<author>eteq</author> | |
@iguananaut can confirm, but the idea is that they are the same functionality. The eventual plan is to phase out the standalone `pyfits` and replace it with `astropy.io.fits` (if all goes well), so I presume that would mean only enhancements would go into the astropy version in the future. (right, @iguananaut?) | |
<author>embray</author> | |
Indeed, they're both the same thing functionality-wise. Actually, it didn't occur to me that pyfits was even being packaged for any OS distros, but it appears that it is for Debian, Archlinux, Gentoo, and probably others... | |
So this is a potential problem for packagers. Unfortunately, giving the two scripts different names would be a mistake: They're both the same thing. Anyone who already has pyfits would expect "fitscheck" to be available. But as Astropy is meant to be a drop-in replacement for pyfits, it should also provide "fitscheck". | |
Debian packages have a mechanism to "replace" other packages. When Astropy gets released, anyone who makes a package of it should probably mark it as Replaces: pyfits. Though I don't know other packaging systems well enough to comment on whether that's supported. | |
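For example, a hypothetical ``debian/control`` stanza might include something like:
```
Package: python-astropy
Provides: python-pyfits
Replaces: python-pyfits
Conflicts: python-pyfits
```
(Just a sketch - the actual package names and fields would be up to the Debian maintainers.)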
The Pyfits version also matters a great deal here. At some point I'll have to try to track down whoever's maintaining these packages and work with them on this issue, but for now I don't see it as an immediate concern. But I'm glad you brought it up. | |
<author>eteq</author> | |
(I'll leave this open to serve as a reminder to iron things out with packagers when there's an actual Astropy release to be packaged) | |
<author>yannick1974</author> | |
Thanks for the answers. Erik, you're right. Debian's dpkg has a mechanism to indicate that a package 'provides' and 'replaces' another package. It also has an 'alternatives' mechanism that could be used for fitscheck if there were significant differences between the two programmes.
<author>astrofrog</author> | |
I'm running into this issue with MacPorts - I'm not sure how to deal with it: should I just force all macports users to upgrade to astropy and ditch pyfits (via a replaced_by keyword, which does exist in macports), despite astropy only being alpha for now? MacPorts actually renames gcc to e.g. gcc-mp-4.5 to avoid conflicts with system-installed compilers, so for now I could rename fitscheck to fitscheck-astropy, until we are confident we can switch over to astropy completely... | |
<author>astrofrog</author> | |
After some experimentation, I think I'm going to rename the astropy scripts to have an ``-ap`` prefix for now, and later on we can always decide whether astropy should replace pyfits in package managers. | |
<author>embray</author> | |
That's fine by me. | |
Are you saying you will do this just for MacPorts (and presumably other packagers would have to do something similar?), or are you going to change this in Astropy itself?
<author>astrofrog</author> | |
I'll just do this in the MacPorts portfile, so this requires no change in astropy itself. | |
<author>embray</author> | |
Okay--that's probably better for now. | |
<author>astrofrog</author> | |
I am closing this, since we agree that this is mostly a packaging issue, not something that needs to be changed in the core package. | |
</issue> | |
<issue> | |
<author>eteq</author> | |
rescales logo in docs | |
<author>eteq</author> | |
This is a very simple change, but I wanted to bring it to the attention of @astrofrog, @kbarbary, and @taldcroft, because you all mentioned how big the index page logo renders (astropy/astropy-website#2). This fix causes the header logo to rescale to match the width of the column (which is a good idea anyway). | |
I can't do this in exactly the same way with the sidebar logo, but I can override the sphinx html templates to do the same thing in HTML. Do you think I should do this, or leave it alone? | |
<author>eteq</author> | |
Oh, and you can see it in action at http://eteq.github.com/astropy/ | |
<author>astrofrog</author> | |
I also find the logo too large. In fact, do we need the full logo at all there? How about just having the logo on the left the way it is now? | |
@taldcroft - readthedocs takes care of versioning, i.e. you can see the docs for different versions (based on tags), so it's a really nice solution for both stable and dev docs, but I understand your concern. So here's my idea - how about hiding the URL behind docs.astropy.org? Then there would be no difference to hosting it ourselves! (we do have subdomains available, so we can do that). | |
<author>astrofrog</author> | |
By the way, the readthedocs source is available, so if we decided we didn't want to rely on them one day, we could set up our own docs server seamlessly. | |
<author>taldcroft</author> | |
@astrofrog @eteq @kbarbary | |
What about having readthedocs as a repository of versioned docs but having the current release available on the AstroPy site. The AstroPy site is really beautiful and distinctive (with the very nice touch of bootstrap), so why would we want to immediately send people to RTD, which is fine but basically just vanilla sphinx? When I read the AstroPy docs (which is what people will mostly do on the web site) I would like that nice trademark AstroPy experience! Otherwise the AstroPy site is just a portal to another more useful site and people won't even bother going there. | |
<author>astrofrog</author> | |
@taldcroft - I *do* agree that it would be nice to have the latest stable docs on the main website. However, I had issues with using the bootstrap theme with the current content, so it will require work to make it look good (the CSS needs to be significantly tweaked, and the layout has to also be changed a bit). I do plan to do this at some point, but it's quite low in my priority list. Once everything else is ready for 0.1, I'll give it a try :-) I guess what I meant before is that what we have at the moment is 'good enough' for now, even though it's not 'best'. | |
<author>eteq</author> | |
@taldcroft - in addition to @astrofrog's points, RTD's biggest advantage is that it builds a new version of the development documentation every time a new commit is sent to master (and can do maintenance branches and the like if we want it to) so we don't have to manually rebuild when we remember to. Also, their theme is very nice. And I like @astrofrog's idea of pointing docs.astropy.org to the RTD page - http://read-the-docs.readthedocs.org/en/latest/alternate_domains.html indicates this just means setting the CNAME record appropriately. | |
As for the banner logo, I was thinking the fact that we don't know the size might be a good thing - e.g. a browser with some weird font or rendering or something might screw up the look of the page, anyway. But I'm also fine with just setting the width to 489. | |
@astrofrog - I think it's nice to have the full banner in the doc index page because at least in some cases people will look at the docs without looking at the home page. I honestly don't understand why it's a problem to see the logo twice on just that one page - it's a very nice logo, after all :) But I'm not strongly attached and could just remove it if the rest of you don't like it. | |
@astrofrog and @taldcroft - What do you both think about the size of the sidebar logo? That can be resized straightforwardly by just resizing the file. | |
<author>astrofrog</author> | |
@eteq - 100% is definitely not a good idea. I have wide screens, and have my browser window wide by default, and the logo is, well, HUGE! Maybe just set it to the same as the Astropy home page? | |
I've asked James to try pointing docs.astropy.org to RTD for now. | |
I opened a ticket (#206) to remind us of the idea to *try* having the latest stable docs in the website. We should try it out at some point and see if it works better than having to redirect people to RTD by default. But as I said, I think this is low priority for now. Another option if people prefer that is to have the bootstrap theme in RTD. | |
For the sidebar logo, how about 150x150? | |
<author>taldcroft</author> | |
@eteq - the sidebar logo could be reduced by 50% in my opinion, but probably opinions will vary. | |
@astrofrog - I see you made a new issue, that makes me happy. :-) | |
BTW, going offline for the day now (climbing). | |
<author>astrofrog</author> | |
@eteq - for the banner logo, can't you use a resized image instead of setting the width? If we are going to make it like the astropy.org site, then we could just use a file with the right size to start with. | |
<author>eteq</author> | |
Alright, it's been updated to use the same image as in the web site - see http://eteq.github.com/astropy/page1/ for that version. Also see http://eteq.github.com/astropy/page2/ for the version that has also had the sidebar logo resized to 150 x 150 | |
<author>astrofrog</author> | |
The banner logo looks good! For the side logo, I kinda like the size on the current RTD site, which I think is just the same as what you have, but is shrunk a bit by the RTD theme? | |
<author>astrofrog</author> | |
By the way, it looks like the color of the logo changed when you reduced it - is that intended? One option is to use the 150x150 one but with a 25 pixel margin to make it 200x200 - this will ensure it's centered in the nav bar in RTD. | |
<author>mdboom</author> | |
The modified image has an ICC profile applied. Removing it doesn't correct the image though -- the processing must have done something upon saving.
<author>kbarbary</author> | |
+1 to @taldcroft's suggestion of a more seamless experience between the homepage and docs. But also agree with @astrofrog that it is not a priority right now and what's there now is good enough to go live. | |
Ideally, I feel like going to the docs should not make you feel like you're being taken away from the site you're on (regardless of where it is hosted) - as a user, I would rather have the documentation look like a subdirectory of the main site. I tried to see how this could be done for multiple doc builds (multiple versions and/or projects), but found it difficult to achieve in an easily maintainable way, since sphinx is geared towards generating stand-alone sites.
By the way, I would also be interested in working on integrating the docs with the bootstrap theme. | |
@eteq - I prefer the look of http://eteq.github.com/astropy/page2/ (but with the logo horizontally centered in the sidebar).
<author>astrofrog</author> | |
@kbarbary - please feel free to give the bootstrap theme a shot if you want. The main issue that you'll run into is that, especially in the API sections, the section titles are too long, so the navigation in the menubar doesn't look good. The solution, in my eyes, is to remove the previous/next topics from the menubar, and put them above the page content in the main page (on the left and right of the page respectively). Also, the current CSS doesn't look that great for the main page content by default, especially the paragraph markers. | |
However, I don't think most people will care too much about having a different site for the docs though - docs.python.org does it, docs.scipy.org does it, etc. I personally prefer to keep a separation between the two.
<author>astrofrog</author> | |
Looking at the previews again, I think the page2 version is fine - just add a margin to center it. That version actually has colors more consistent with the astropy.org front page. | |
<author>eteq</author> | |
Alright, have a look at http://eteq.github.com/astropy/page3/ - that has the added margin of 25pix on each side. | |
Also take a look at http://eteq.github.com/astropy/page4/ and http://eteq.github.com/astropy/page5/ - those are the same, but I've added the "astropy" text either below or above the logo. I also changed the sidebar color to match the RTD theme, because the default made it hard to see the text. Note that I am *not* suggesting that color scheme for the sidebar, as it's obviously painfully difficult to read the text - I'm just including it so you can see better what it would look like with the RTD theme background.
<author>eteq</author> | |
Oh, and I think I favor page5, but I'm also ok with page3 (or page4, for that matter) | |
<author>astrofrog</author> | |
I personally prefer the page3 version. If people prefer a version with text, I also prefer page5 to page4. | |
<author>jiffyclub</author> | |
I like the color scheme on 4 & 5, and prefer astropy text below the logo. | |
<author>mdboom</author> | |
When we use the RTD background, the foreground text colors also change right? Your changing the background color there was just to illustrate how the logo looks against the background, not how the theme would look in general, right? | |
I like the text underneath the logo (page 4) (EDITED TYPO) | |
<author>astrofrog</author> | |
@mdboom - page 5 has the text above the logo - did you mean that one? | |
<author>mdboom</author> | |
Oops. Mistyped. I meant page 4. | |
<author>eteq</author> | |
Alright, it sounds like 4 or 5 is preferred over 3... but 4 and 5 have 2 votes each. Anyone (@taldcroft or @kbarbary, perhaps?) want to break the tie (or if you both prefer 3, that would simply make it a multi-level tie :) ? | |
@mdboom - yes, I meant the color change just to indicate how the logo looks against the RTD background, because it's a bit hard-to-read with the default background. | |
<author>astrofrog</author> | |
@eteq - I'm willing to change my vote to 4 if you put more space between the logo and the text, it's a little too close at the moment. | |
<author>eteq</author> | |
Good point, @astrofrog - have a look at http://eteq.github.com/astropy/page6/ - that one has better spacing between the text and logo. (this also goes back to the default background color) | |
<author>astrofrog</author> | |
@eteq - the spacing looks good in page6. I say just go for that, and if anyone wants to tweak it further, they should feel free to open a PR. | |
<author>taldcroft</author> | |
@eteq @astrofrog - page6 looks good to me. The orange logo on green is good, and moving back to the dark sidebar background makes the white text actually readable. | |
<author>astrofrog</author> | |
@eteq - just to make sure we're on the same page, you are not suggesting putting the green theme back on RTD, right? My understanding is that we're only changing the logo, not the themes here, is that correct? | |
<author>eteq</author> | |
@astrofrog - yep, that's correct - only the logo is changing here. | |
Alright, I will merge this in shortly. | |
<author>astrofrog</author> | |
@eteq - ok, sounds good! | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
Have the latest 'stable' docs be part of the Astropy website | |
<author>astrofrog</author> | |
This issue is just to keep track of the option to try having the stable docs be part of the Astropy website rather than just being in RTD. This does not mean we *should* do it, but is just a reminder that we should try it out. | |
<author>eteq</author> | |
@astrofrog - do we still want to do this? I think RTD is a good solution (especially with the docs.astropy.org redirect set up), so I don't necessarily see the need for this. We can revisit this if down the road something goes bad with RTD, but I would say we should close this issue. | |
<author>mdboom</author> | |
Just as a sanity check -- since I don't know much about the details here -- will this also allow us to have multiple versions of the docs concurrently available (as the Python docs are, for example)? | |
<author>eteq</author> | |
@mdboom - yes, readthedocs.org allows us to have a build for every tag (or branch). The plan is to add a page with all the released versions on the astropy web site (once we have more than one). | |
<author>kbarbary</author> | |
Sounds good to me. | |
<author>astrofrog</author> | |
So shall we close this? | |
<author>taldcroft</author> | |
I'm OK with closing this. | |
</issue> | |
<issue> | |
<author>astrofrog</author> | |
_generate_all_config_items imports non-existent function | |
<author>astrofrog</author> | |
I tried running ``_generate_all_config_items``, but it tries to import a function (``find_module``) from ``pkgutil`` that does not exist:
In [1]: from astropy.config.configuration import _generate_all_config_items | |
In [2]: _generate_all_config_items('astropy') | |
--------------------------------------------------------------------------- | |
ImportError Traceback (most recent call last) | |
/Users/tom/<ipython-input-2-8983cb45a7a4> in <module>() | |
----> 1 _generate_all_config_items('astropy') | |
/Users/tom/Library/Python/2.7/lib/python/site-packages/astropy-0.0.dev992-py2.7-macosx-10.6-x86_64.egg/astropy/config/configuration.pyc in _generate_all_config_items(pkgornm, reset_to_default) | |
435 this, though - this might not always be what you want. | |
436 """ | |
--> 437 from pkgutil import find_module, walk_packages | |
438 from types import ModuleType | |
439 | |
ImportError: cannot import name find_module | |
<author>astrofrog</author> | |
It seems to work with: | |
from pkgutil import get_loader, walk_packages
from types import ModuleType

from ..utils import find_current_module

if pkgornm is None:
    pkgornm = find_current_module(1).__name__.split('.')[0]

if isinstance(pkgornm, basestring):
    package = get_loader(pkgornm).load_module(pkgornm)
elif isinstance(pkgornm, ModuleType) and '__init__' in pkgornm.__file__:
    package = pkgornm
else:
    msg = '_generate_all_config_items was not given a package/package name'
    raise TypeError(msg)
I tried attaching the code to the pull request, but hub tells me I need to set a token for the API, and github doesn't want to tell me my token since they are deprecating it... | |
<author>astrofrog</author> | |
I managed to get the ``hub`` command to work, so I've attached my changes as a pull request. I noticed a few other issues with generating config files, so I'll add fixes for those too.
<author>astrofrog</author> | |
All right, that's all I can manage. One thing I can't figure out is how to add blank lines before the section titles, e.g. before ``[io.fits]`` in:
# Set to True to load system pytest. This item will *not* be obeyed if using | |
# setup.py. In that case the environment variable ASTROPY_USE_SYSTEM_TEST | |
# must be used | |
# boolean | |
use_system_pytest = False | |
[io.fits] | |
@eteq - I'll let you decide whether you want to merge the current commits in, since you developed the original system. I know we still have to tie the _generate function into setup.py (which might be problematic since we can't import ``config`` at build time...), but in the meantime this fix at least allows us to run the function manually.
<author>astrofrog</author> | |
By the way, is the cfgtype really needed? If you just require strings to have quotes around them in the config file, then there would never be any ambiguity regarding what the type is. Is it not worth requiring quotes so as not to have to give the type explicitly?
<author>astrofrog</author> | |
To make it clearer what I mean, I feel like the following is cumbersome: | |
# The file to log messages to | |
# string | |
log_file_path = ~/.astropy/astropy.log | |
If we instead required strings to have quotes, then we could do away with the cfgtype in the file entirely: | |
# The file to log messages to | |
log_file_path = '~/.astropy/astropy.log' | |
Then it's easy enough, with a series of ``try...except`` blocks, to figure out the type, and there should not be any ambiguity for int/float/bool/string. This would make the config file a lot cleaner.
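Just to sketch what I mean (``_infer_type`` is made up for illustration, not something that exists in the config code):
```
def _infer_type(value):
    # value is the raw right-hand side of a config entry, as a string
    if value.startswith(("'", '"')) and value.endswith(value[0]):
        return value[1:-1]  # explicitly quoted, so treat it as a string
    if value in ('True', 'False'):
        return value == 'True'  # boolean
    try:
        return int(value)
    except ValueError:
        pass
    try:
        return float(value)
    except ValueError:
        return value  # fall back to treating it as a bare string
```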
<author>mdboom</author> | |
@astrofrog: `_generate` should not hook up into `setup.py`. The user installing astropy is not necessarily the user running it. `_generate` needs to run on first non-build import if the config file is missing. | |
<author>mdboom</author> | |
I also agree -- we should try to avoid putting `# string` in the file if possible. | |
<author>astrofrog</author> | |
@mdboom - I agree about it being generated at (non-build) import time. If we want to do away with e.g. '#string', is there any reason *not* to use ``eval(...)`` on the right hand side content to convert it to the right type, e.g.: | |
In [8]: eval('"hello"') | |
Out[8]: 'hello' | |
In [9]: eval('1.2') | |
Out[9]: 1.2 | |
In [10]: eval('1') | |
Out[10]: 1 | |
In [11]: eval('False') | |
Out[11]: False | |
(in a ``try...except`` of course) | |
<author>astrofrog</author> | |
Just another thought relating to this, since I've had a chance to look into the code - we need a function in ``configuration.py`` that returns the name of the configuration file and that ``get_config`` could use, instead of it effectively doing:
packageormod = find_current_module(2).__name__
packageormodspl = packageormod.split('.')
rootname = packageormodspl[0]
cfgpath = os.path.join(get_config_dir(), rootname + '.cfg')
i.e. we probably want ``get_config_file`` to exist. Furthermore, I was wondering whether, in the same way as we have ``_ASTROPY_SETUP_``, we might want a ``_PACKAGE_NAME_`` variable to be set globally, to make it easier for code across Astropy to know whether we are in an affiliated package or in the main package? We could simply set ``_PACKAGE_NAME_`` to 'astropy' in the ``__init__``, and require affiliated packages to change that value?
(edit: the reason I am suggesting this is that we also need the filename to figure out whether the file exists, and hence whether to call ``_generate_all_config_items``)
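To make it concrete, I'm imagining something along these lines (``get_config_file`` is hypothetical, and the imports are only my guess at where the pieces live):
```
import os

from .paths import get_config_dir
from ..utils import find_current_module


def get_config_file(rootname=None):
    """Return the path to the configuration file for ``rootname``.

    If ``rootname`` is None, it is inferred from the calling package,
    the same way ``get_config`` effectively does it now.
    """
    if rootname is None:
        rootname = find_current_module(2).__name__.split('.')[0]
    return os.path.join(get_config_dir(), rootname + '.cfg')
```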
<author>eteq</author> | |
First, regarding `cfgtype` - I'm not sure what you're saying, @astrofrog - the point of `cfgtype` is that it provides the type for configobj's validator mechanism (which I only use internally, as that part of configobj is rather confusing). So at least *something* has to know the correct type for the value. The ``# string`` part is not actually used by the `ConfigurationItem` system, though - the type is determined by either the `cfgtype` argument or the default value's type. The ``# string`` in the config file only serves as documentation (along with the description) of what the type should be, so that anyone reading the configuration file doesn't need to guess from context.
That is, the configuration file itself doesn't do any evaluating - doing `eval()` on the rhs of the configuration file is actually a significant security risk, because then anyone could introduce a configuration file that executes arbitrary Python code on a user's computer - so the config files only allow simple key-value pairs.
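To make the role of `cfgtype` a bit more concrete: internally it basically turns into a configspec entry that configobj's validator checks the raw value against. A quick illustrative check (using the standalone configobj/validate packages, and a made-up item):
```
from configobj import ConfigObj
from validate import Validator

spec = ["log_file_path = string(default='~/.astropy/astropy.log')"]
c = ConfigObj(['log_file_path = ~/.astropy/astropy.log'], configspec=spec)
# returns True, and the value comes back as a plain string
print(c.validate(Validator()))
```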
Or am I just completely misunderstanding what you're saying here? | |
<author>eteq</author> | |
@astrofrog - I'm not clear on why you're saying we need `get_config_file`? The way I envisioned this being used is that the user runs a script/calls a function (or perhaps this happens on first import, as @mdboom was saying) that goes and populates everything. If the user has already set a value for some of the items, it will take those values and save them when it writes out all of the configuration files. So there's not supposed to be any need to figure out which files need to be generated - the configobj state should always reflect the most recent values in the files, and saving will just put those same values back in the file (along with the defaults for the others).
Or, again, am I missing some subtlety you're pointing out (this configuration stuff is hard!)? | |
<author>astrofrog</author> | |
@eteq - first, regarding cfgtype: I think I misunderstood the purpose of the commented type. I had assumed it was used by the configuration system to convert the value to the right Python type, but of course I am wrong - the type is set when the ``ConfigurationItem`` is initialized. So scrap my comment about ``eval``. My original motivation was more that the explicit type looks a bit untidy in the config file, so I was wondering whether a better way to show the type would be to use quotes for strings - then it should be obvious from the default value what type is required (and of course, the user will get an error if the type is wrong), because the configuration item assignment would look like Python code. If you agree we don't need to explicitly list the types, I can add a commit to this pull request to remove that.
<author>astrofrog</author> | |
@eteq - regarding the file, what I mean is more: how do you know whether it's the first import and whether to populate everything? I was thinking that the presence (or absence) of the file would be the test. In that case, we need to be able to call a function from ``astropy/config/__init__.py`` that checks whether the config file already exists.
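i.e. something along these lines in ``astropy/config/__init__.py`` (just a sketch; ``get_config_file`` here is the hypothetical helper from my earlier comment):
```
import os

from .configuration import get_config_file, _generate_all_config_items

# if there is no config file yet, treat this as the first import and
# write out a config file populated with the defaults
if not os.path.exists(get_config_file('astropy')):
    _generate_all_config_items('astropy')
```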
<author>eteq</author> | |
@astrofrog - re:cfgtype - I see your point here about it looking untidy - and I've confirmed that configobj will do as you expect here (e.g. the ``'`` and ``"`` characters are not included in the string as long as they are matched pairs). The only concern is that it might be confusing if users go and change some configuration items that are supposed to be strings and *don't* leave in the ``'`` or ``"``... it will still work fine, but the type is less clear. But I'm willing to live with that, so feel free to remove the line that puts in the type as long as it adds the necessary ``'`` or ``"`` characters. | |
@astrofrog - re:`get_config_file` - My idea was that you only auto-run the initializer if there is *no* astropy configuration present at all. We leave the function/script available to write *all* of the files if the user explicitly requests it, but I don't think it's necessary (and probably a bit confusing) to have the configuration files get written on an as-needed *per-module* basis. Does that seem reasonable to you? | |
<author>astrofrog</author> | |
@eteq - just a clarification regarding creating the configuration file: do you mean it would get created if there was no ``.astropy``, or no ``.astropy/config/astropy.cfg``? And how do you propose to implement this? Maybe I can just leave this as an open issue, and let you implement it? | |
<author>eteq</author> | |
@astrofrog - I hadn't decided for sure, but I was thinking that it would make the most sense to do it at the same time as the ``config`` directory is created... As #196 indicates, this needs to be fixed, anyway. | |
I'd say it makes sense to merge this PR now so that `_generate_all_config_items` is working (after you have removed the line that adds the type), and just recognize that this is the intended solution to #87, since that issue already exists. I'll make a note there that this is the intended behavior.
<author>astrofrog</author> | |
@eteq - I've removed the type from the output, but there is one small issue - it looks like it's impossible to get configobj to write out strings with quotes! (If you force-quote the strings, it ends up outputting '"string"'...) However, I don't think either the quotes or the explicit type are really needed in the end. It should be pretty clear from the description and default values what is required, and configuration file options should be described in the documentation anyway. Let me know if you agree, and I can go ahead and merge this.
<author>eteq</author> | |
An excellent example of a package trying to be too smart for its own good... if you add trailing or leading whitespace, it includes the quotes, but if not, it automatically always removes them. Oh well, as you say, it should ordinarily be clear from the description. If not, it really just means the description isn't clear enough. | |
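For the record, this is the behaviour I mean (a quick interactive check; the details may depend on the configobj version):
```
from configobj import ConfigObj

c = ConfigObj()
c['a'] = 'plain'      # written out as:  a = plain
c['b'] = ' padded '   # written out as:  b = ' padded '  (quotes added)
print('\n'.join(c.write()))
```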
Before you merge, though: I just ran the tests on this and three of them fail now. Are you seeing the same thing? | |
<author>eteq</author> | |
See astrofrog/astropy#9 for fixes that make the tests pass now. | |
One thing that this reminded me of, though: there is a possibility of choosing from a list of options in configobj, and in that case, it might be more useful to have the list of options in the comment above the configuration item... what do you think? Should it be put back in when an option-list is the type? | |
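For instance, the generated file could look something like this for an option-list item (the item and options here are invented, just to illustrate):
# The default unit system to use
# Options: 'si', 'cgs'
unit_system = si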
<author>astrofrog</author> | |
@eteq - thanks for the code to fix the tests. I added back the printing of options (albeit in a more 'human-readable' format) and updated the tests accordingly. Are we good to merge? | |
<author>astrofrog</author> | |
One thing I couldn't figure out - how to put a blank line before a section title. Any ideas? | |
<author>astrofrog</author> | |
Regarding my last question, if there is no other way, I can port the (minor) changes from the branch mentioned here into the bundled configobj files:
http://code.google.com/p/configobj/issues/detail?id=8
<author>eteq</author> | |
To put a blank line before a section title, append an empty string to the list in the ConfigObj's `comments` entry for the section in question. For example, if you want to take a configobj that has no blank lines and add them, you'd do this:
```
c = ConfigObj(cfgfilename)
for sname in c.sections:
    # comments is a dictionary mapping names to a list of strings
    c.comments[sname].append("")
with open(cfgfilename, 'w') as f:
    c.write(f)
```
That actually is almost exactly what that branch you noted does, now that I look at it. | |
<author>eteq</author> | |
See astrofrog/astropy#10 for an implementation of this - I'm not sure if this is exactly what you want, but hopefully it's something you can work from to get the right blank lines. | |
<author>astrofrog</author> | |
@eteq - I incorporated your fix (and modified it to prevent writing a blank line on the first line). Is this ready to merge? | |
<author>eteq</author> | |
Whoops, sorry - looks like you need to rebase because of a change I just made in master (I forgot this was still pending). It shouldn't actually conflict, though - it just added a new function and associated test that's not related to this PR. | |
Otherwise, looks good to me. | |
</issue> | |
<issue> | |
<author>mdboom</author> | |
Make "import astropy" faster | |
<author>mdboom</author> | |
I was noticing that "import astropy" on my machine takes more than a second. That seemed a little crazy. It turns out that `py.test` was being imported unconditionally. This changes it so that it is only imported when testing actually happens. `python setup.py test` still works, as does `python -c "import astropy; astropy.test()"`.
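The gist of the change is simply to defer the import until the test runner is invoked, roughly like this (a simplified sketch, not the literal diff):
```
def test(args=None):
    """Run the astropy test suite."""
    # only import py.test when tests are actually run, so that a plain
    # ``import astropy`` doesn't pay the import cost
    import pytest
    return pytest.main(args or [])
```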
<author>embray</author> | |
Whoops! Yeah, this should be fixed. I've noticed too that it's a little slow, but hadn't given it much thought. | |
<author>astrofrog</author> | |
Good catch! Feel free to merge (maybe push to staging first just to be sure?). | |
<author>eteq</author> | |
@astrofrog - something like this should also be applied to the package-template, right?
<author>astrofrog</author> | |
@eteq - indeed, I will try and do that this evening (unless you get there first) | |
</issue> | |
<issue> | |
<author>kbarbary</author> | |
bootstrap theme for docs - initial layout | |
<author>kbarbary</author> | |
This is my take on adapting the bootstrap theme to the docs, viewable here: | |
http://kbarbary.github.com/astropy-docs | |
Let me know what you think. There are a couple more things I'd like to do if this is merged:
* Make the "breadcrumbs" and prev/next bar part of the topnav bar so that it stays visible at all times | |
* Add a link to astropy.org | |
<author>eteq</author> | |
Hmm... It is nice that this is more consistent with the web page. I also like the "astropy:docs" logo bit at the top. However, I have some concerns (that may or may not be fixable) with this layout. | |
1. I really don't like how narrow this is. It looks fine for a web site, but when I want to read documentation I don't want most of my (widescreen) real estate wasted. This also causes weirdness in a variety of places - for example, the summary tables in the configuration system documentation (which you can take as a template of what most of them will eventually look like) break in odd places because of the narrowness and don't look as much like tables. Also, some of the documented functions have weird justification because the narrowness forces line-breaks that leave a lot of white space.
2. I'm a bit put off by the "contents" and "page" drop-downs. When browsing documentation, I usually find myself jumping around looking for specific things, and this effectively doubles the number of clicks I need to do that, compared to the sidebar style. Is there a reasonable way to get the sidebar while still preserving the bootstrap look-and-feel? (I'm not sure whether that would be better, but it would be nice to at least see it in order to compare.)
3. The landing page has only the TOC, and no general info. People will sometimes be landing at the docs without ever looking at the web page, so there should be some of the basic info here (although that can be updated later, I suppose). | |
<author>astrofrog</author> | |
Thanks for trying this out! I do think that pages that have more of a narrative look great, e.g. | |
http://kbarbary.github.com/astropy-docs/configs.html | |
http://kbarbary.github.com/astropy-docs/nddata/convolution.html | |
and I like the use of the bar just below the navbar. On the other hand, I find API and more technical pages (e.g. tables) hard to read, and I think these would require more work. Take for example: | |
http://kbarbary.g |
Ohh, @dfm, you've tricked me. This came up as one of 7 results in a google search, and it took me a minute to remember what this was.