@MSeifert04
Created August 8, 2016 22:05
C:\-\astroquery>python setup.py test --remote-data
astroquery\sdss\setup_package.py:2: RuntimeWarning: Parent module 'astroquery.sdss' not found while handling absolute import
import os
running test
running build
running build_py
running egg_info
writing requirements to astroquery.egg-info\requires.txt
writing astroquery.egg-info\PKG-INFO
writing top-level names to astroquery.egg-info\top_level.txt
writing dependency_links to astroquery.egg-info\dependency_links.txt
writing entry points to astroquery.egg-info\entry_points.txt
reading manifest file 'astroquery.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'CHANGES.rst'
warning: no files found matching '*.pyx' under directory 'astroquery'
warning: no files found matching '*.c' under directory 'astroquery'
warning: no files found matching '*.c' under directory '*.pyx'
warning: no files found matching '*.pxd' under directory '*.pyx'
warning: no files found matching '*' under directory 'cextern'
warning: no files found matching '*' under directory 'scripts'
no previously-included directories found matching 'build'
no previously-included directories found matching 'docs\_build'
no previously-included directories found matching 'docs\api'
warning: no files found matching '*.pyx' under directory 'astropy_helpers\astropy_helpers'
warning: no files found matching '*.h' under directory 'astropy_helpers\astropy_helpers'
no previously-included directories found matching 'astropy_helpers\build'
warning: no previously-included files matching '*.o' found anywhere in distribution
writing manifest file 'astroquery.egg-info\SOURCES.txt'
============================= test session starts =============================
platform win32 -- Python 2.7.12, pytest-2.8.3, py-1.4.30, pluggy-0.3.1
benchmark: 3.0.0 (defaults: timer=time.clock disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
Running tests with astroquery version 0.3.3.dev3339.
Running tests in lib.win-amd64-2.7\astroquery docs.
Date: 2016-08-08T23:42:30
Platform: Windows-10-10.0.10586
Executable: C:\-\python.exe
Full Python Version:
2.7.12 |Continuum Analytics, Inc.| (default, Jun 29 2016, 11:07:13) [MSC v.1500 64 bit (AMD64)]
encodings: sys: ascii, locale: cp1252, filesystem: mbcs, unicode bits: 15
byteorder: little
float info: dig: 15, mant_dig: 15
Numpy: 1.11.1
Matplotlib: 1.5.1
Pandas: 0.18.1
Astropy: 1.2.1
APLpy: not available
pyregion: not available
Using Astropy options: remote_data.
rootdir: c:\-\appdata\local\temp\astroquery-test-khi1xp, inifile: setup.cfg
plugins: benchmark-3.0.0, pep8-1.0.6, cov-2.2.1
collected 780 items
astroquery\alfalfa\tests\test_alfalfa.py ...
astroquery\alma\tests\test_alma.py ......
astroquery\alma\tests\test_alma_remote.py ..FFFFFF
astroquery\alma\tests\test_alma_utils.py s.s
astroquery\atomic\tests\test_atomic.py ....
astroquery\atomic\tests\test_atomic_remote.py ....
astroquery\besancon\tests\test_besancon.py ....
astroquery\data\README.rst .
astroquery\eso\tests\test_eso.py ..
astroquery\eso\tests\test_eso_remote.py F..F..ss.......................
astroquery\fermi\tests\test_fermi.py ....
astroquery\gama\tests\test_gama.py ..
astroquery\gama\tests\test_gama_remote.py ..
astroquery\heasarc\tests\test_heasarc_remote.py .
astroquery\ibe\tests\test_ibe.py ......
astroquery\ibe\tests\test_ibe_remote.py ..
astroquery\irsa\tests\test_irsa.py .............................
astroquery\irsa\tests\test_irsa_remote.py ......
astroquery\irsa_dust\tests\test_irsa_dust.py ..................................................................
astroquery\irsa_dust\tests\test_irsa_dust_remote.py ..........................................
astroquery\lamda\tests\test_lamda.py .
astroquery\lamda\tests\test_lamda_remote.py .
astroquery\lcogt\tests\test_lcogt.py .............................
astroquery\lcogt\tests\test_lcogt_remote.py FFFFFFFF
astroquery\magpis\tests\test_magpis.py ...x
astroquery\magpis\tests\test_magpis_remote.py ..
astroquery\nasa_ads\tests\test_nasaads.py F
astroquery\ned\tests\test_ned.py ..........................
astroquery\ned\tests\test_ned_remote.py ....................
astroquery\nist\tests\test_nist.py ...
astroquery\nist\tests\test_nist_remote.py ..
astroquery\nrao\tests\test_nrao.py ..
astroquery\nrao\tests\test_nrao_remote.py ..
astroquery\nvas\tests\test_nvas.py ........
astroquery\nvas\tests\test_nvas_remote.py ...
astroquery\ogle\tests\test_ogle.py ...
astroquery\open_exoplanet_catalogue\utils.py ..
astroquery\open_exoplanet_catalogue\tests\test_open_exoplanet_catalogue_local.py .
astroquery\open_exoplanet_catalogue\tests\test_open_exoplanet_catalogue_remote.py .
astroquery\sdss\tests\test_sdss.py ...............................................................................................................................................................
astroquery\sdss\tests\test_sdss_remote.py FFFFFFF.FFFFxxFF
astroquery\sha\tests\test_sha.py .....
astroquery\simbad\core.py ....
astroquery\simbad\tests\test_simbad.py ..............F........FF.F.F.F....FFFF.FF..FF...FF....F
astroquery\simbad\tests\test_simbad_remote.py .........................
astroquery\skyview\tests\test_skyview.py ..
astroquery\skyview\tests\test_skyview_remote.py ..FFFFFFFFFFFFFFFFFF.
astroquery\splatalogue\tests\test_splatalogue.py ..........
astroquery\splatalogue\tests\test_utils.py ...
astroquery\template_module\tests\test_module.py .
astroquery\template_module\tests\test_module_remote.py .
astroquery\tests\test_internet.py .
astroquery\ukidss\tests\test_ukidss.py ...........
astroquery\ukidss\tests\test_ukidss_remote.py ......
astroquery\utils\commons.py .
astroquery\utils\url_helpers.py .
astroquery\utils\tests\test_url_helpers.py .
astroquery\utils\tests\test_utils.py .......................
astroquery\vizier\tests\test_vizier.py ............................
astroquery\vizier\tests\test_vizier_remote.py ..............
astroquery\xmatch\tests\test_xmatch.py .....
astroquery\xmatch\tests\test_xmatch_remote.py ....
..\docs\api.rst .
..\docs\gallery.rst .
..\docs\index.rst .
..\docs\query.rst .
..\docs\release_notice_v0.2.rst .
..\docs\template.rst .
..\docs\testing.rst .
..\docs\utils.rst .
..\docs\alfalfa\alfalfa.rst .
..\docs\alma\alma.rst .
..\docs\atomic\atomic.rst .
..\docs\besancon\besancon.rst .
..\docs\cosmosim\cosmosim.rst .
..\docs\eso\eso.rst .
..\docs\fermi\fermi.rst .
..\docs\gama\gama.rst .
..\docs\heasarc\heasarc.rst .
..\docs\ibe\ibe.rst .
..\docs\irsa\irsa.rst .
..\docs\irsa\irsa_dust.rst .
..\docs\lamda\lamda.rst .
..\docs\magpis\magpis.rst .
..\docs\nasa_ads\nasa_ads.rst .
..\docs\ned\ned.rst .
..\docs\nist\nist.rst .
..\docs\nrao\nrao.rst .
..\docs\nvas\nvas.rst .
..\docs\ogle\ogle.rst .
..\docs\open_exoplanet_catalogue\open_exoplanet_catalogue.rst .
..\docs\sdss\sdss.rst .
..\docs\sha\sha.rst .
..\docs\simbad\simbad.rst .
..\docs\skyview\skyview.rst .
..\docs\splatalogue\splatalogue.rst .
..\docs\ukidss\ukidss.rst .
..\docs\vizier\vizier.rst .
..\docs\xmatch\xmatch.rst .
================================== FAILURES ===================================
______________________________ TestAlma.test_m83 ______________________________
self = <astroquery.alma.tests.test_alma_remote.TestAlma instance at 0x000000000A57F2C8>
temp_dir = 'c:\\-\\appdata\\local\\temp\\tmpyvnjel'
def test_m83(self, temp_dir):
alma = Alma()
alma.cache_location = temp_dir
m83_data = alma.query_object('M83')
uids = np.unique(m83_data['Member ous id'])
> link_list = alma.stage_data(uids)
astroquery\alma\tests\test_alma_remote.py:68:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <astroquery.alma.core.AlmaClass object at 0x0000000009F4D8D0>
uids = <MaskedColumn name='Member ous id' dtype='str32' length=8>
uid://A001/X122/.../X144/X156
uid://A001/X144/X15a
uid://A001/X144/X15e
uid://A002/X3216af/X31
def stage_data(self, uids):
"""
Stage ALMA data
Parameters
----------
uids : list or str
A list of valid UIDs or a single UID.
UIDs should have the form: 'uid://A002/X391d0b/X7b'
Returns
-------
data_file_table : Table
A table containing 3 columns: the UID, the file URL (for future
downloading), and the file size
"""
"""
With log.set_level(10)
INFO: Staging files... [astroquery.alma.core]
DEBUG: First request URL: https://almascience.eso.org/rh/submission [astroquery.alma.core]
DEBUG: First request payload: {'dataset': [u'ALMA+uid___A002_X3b3400_X90f']} [astroquery.alma.core]
DEBUG: First response URL: https://almascience.eso.org/rh/checkAuthenticationStatus/3f98de33-197e-4692-9afa-496842032ea9/submission [astroquery.alma.core]
DEBUG: Request ID: 3f98de33-197e-4692-9afa-496842032ea9 [astroquery.alma.core]
DEBUG: Submission URL: https://almascience.eso.org/rh/submission/3f98de33-197e-4692-9afa-496842032ea9 [astroquery.alma.core]
.DEBUG: Data list URL: https://almascience.eso.org/rh/requests/anonymous/786823226 [astroquery.alma.core]
"""
if isinstance(uids, six.string_types + (np.bytes_,)):
uids = [uids]
if not isinstance(uids, (list, tuple, np.ndarray)):
raise TypeError("Datasets must be given as a list of strings.")
log.info("Staging files...")
self._get_dataarchive_url()
url = urljoin(self.dataarchive_url, 'rh/submission')
log.debug("First request URL: {0}".format(url))
# 'ALMA+uid___A002_X391d0b_X7b'
payload = {'dataset': ['ALMA+' + clean_uid(uid) for uid in uids]}
log.debug("First request payload: {0}".format(payload))
self._staging_log = {'first_post_url': url}
# Request staging for the UIDs
# This component cannot be cached, since the returned data can change
# if new data are uploaded
response = self._request('POST', url, data=payload,
timeout=self.TIMEOUT, cache=False)
self._staging_log['initial_response'] = response
log.debug("First response URL: {0}".format(response.url))
if response.status_code == 405:
raise HTTPError("Received an error 405: this may indicate you "
"have already staged the data. Try downloading "
"the file URLs directly with download_files.")
response.raise_for_status()
if 'j_spring_cas_security_check' in response.url:
time.sleep(1)
# CANNOT cache this stage: it not a real data page! results in
# infinite loops
response = self._request('POST', url, data=payload,
timeout=self.TIMEOUT, cache=False)
self._staging_log['initial_response'] = response
if 'j_spring_cas_security_check' in response.url:
log.warn("Staging request was not successful. Try again?")
response.raise_for_status()
if 'j_spring_cas_security_check' in response.url:
> raise RemoteServiceError("Could not access data. This error "
"can arise if the data are private and "
"you do not have access rights or are "
"not logged in.")
E RemoteServiceError: Could not access data. This error can arise if the data are private and you do not have access rights or are not logged in.
astroquery\alma\core.py:254: RemoteServiceError
---------------------------- Captured stdout call -----------------------------
INFO: Staging files... [astroquery.alma.core]
---------------------------- Captured stderr call -----------------------------
WARNING: Staging request was not successful. Try again? [astroquery.alma.core]
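The RemoteServiceError above says the data may be proprietary; if that is the cause, authenticating before staging should help. A minimal sketch of the same query with a login step (the username is a placeholder; Alma.login() prompts for the password interactively):

    import numpy as np
    from astroquery.alma import Alma

    alma = Alma()
    alma.login('my_alma_username')  # placeholder account name
    m83_data = alma.query_object('M83')
    uids = np.unique(m83_data['Member ous id'])
    link_list = alma.stage_data(uids)  # staging can now see proprietary data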
__________________________ TestAlma.test_stage_data ___________________________
self = <astroquery.alma.tests.test_alma_remote.TestAlma instance at 0x000000000A778208>
temp_dir = 'c:\\-\\appdata\\local\\temp\\tmpkdybmi'
def test_stage_data(self, temp_dir):
alma = Alma()
alma.cache_location = temp_dir
result_s = alma.query_object('Sgr A*')
assert b'2011.0.00217.S' in result_s['Project code']
> assert b'uid://A002/X47ed8e/X3cd' in result_s['Asdm uid']
E assert 'uid://A002/X47ed8e/X3cd' in <MaskedColumn name='Asdm uid' dtype='str32' description=u'UID of the ASDM cont...X8a4\n uid://A002/X836a4d/X8a4\n uid://A002/X835491/X817\n uid://A002/X835491/X817
astroquery\alma\tests\test_alma_remote.py:88: AssertionError
__________________________ TestAlma.test_doc_example __________________________
self = <astroquery.alma.tests.test_alma_remote.TestAlma instance at 0x000000000A8B58C8>
temp_dir = 'c:\\-\\appdata\\local\\temp\\tmprvnsoi'
def test_doc_example(self, temp_dir):
alma = Alma()
alma.cache_location = temp_dir
alma2 = Alma()
alma2.cache_location = temp_dir
m83_data = alma.query_object('M83')
# the order can apparently sometimes change
> assert set(m83_data.colnames) == set(all_colnames)
E assert set(['Asdm ui...ous id', ...]) == set(['Asdm uid...ration', ...])
E Extra items in the left set:
E 'Group ous id'
E 'Pub'
E Extra items in the right set:
E 'QA0 Status'
E 'Project abstract'
E Use -v to get the full diff
astroquery\alma\tests\test_alma_remote.py:112: AssertionError
_____________________________ TestAlma.test_query _____________________________
self = <astroquery.alma.tests.test_alma_remote.TestAlma instance at 0x000000000B241FC8>
temp_dir = 'c:\\-\\appdata\\local\\temp\\tmp_kndni'
def test_query(self, temp_dir):
alma = Alma()
alma.cache_location = temp_dir
result = alma.query(payload={'start_date': '<11-11-2011'},
public=False, science=True)
# now 535?
> assert len(result) == 621
E assert 159 == 621
E + where 159 = len(<[UnicodeDecodeError("'utf8' codec can't decode byte 0xf8 in position 0: invalid start byte") raised in repr()] SafeRepr object at 0xaab8c88>)
astroquery\alma\tests\test_alma_remote.py:143: AssertionError
____________________________ TestAlma.test_cycle1 _____________________________
self = <astroquery.alma.tests.test_alma_remote.TestAlma instance at 0x000000000AB0CA48>
temp_dir = 'c:\\-\\appdata\\local\\temp\\tmptksqbm'
@pytest.mark.bigdata
def test_cycle1(self, temp_dir):
# About 500 MB
alma = Alma()
alma.cache_location = temp_dir
target = 'NGC4945'
project_code = '2012.1.00912.S'
payload = {'project_code': project_code,
'source_name_alma': target, }
result = alma.query(payload=payload)
assert len(result) == 1
# Need new Alma() instances each time
a1 = alma()
uid_url_table_mous = a1.stage_data(result['Member ous id'])
a2 = alma()
uid_url_table_asdm = a2.stage_data(result['Asdm uid'])
# I believe the fixes as part of #495 have resulted in removal of a
# redundancy in the table creation, so a 1-row table is OK here.
# A 2-row table may not be OK any more, but that's what it used to
# be...
assert len(uid_url_table_asdm) == 1
> assert len(uid_url_table_mous) == 2
E assert 3 == 2
E + where 3 = len(<Table length=3>\n ...uid___A002_X5ca961_X17f\2012.1.00912.S_uid___A002_X5ca961_X17f.asdm.sdm.tar ...)
astroquery\alma\tests\test_alma_remote.py:169: AssertionError
---------------------------- Captured stdout call -----------------------------
INFO: Staging files... [astroquery.alma.core]
..INFO: Staging files... [astroquery.alma.core]
.
____________________________ TestAlma.test_cycle0 _____________________________
self = <astroquery.alma.tests.test_alma_remote.TestAlma instance at 0x000000000B5B3688>
temp_dir = 'c:\\-\\appdata\\local\\temp\\tmpcr05lz'
def test_cycle0(self, temp_dir):
# About 20 MB
alma = Alma()
alma.cache_location = temp_dir
target = 'NGC4945'
project_code = '2011.0.00121.S'
payload = {'project_code': project_code,
'source_name_alma': target, }
result = alma.query(payload=payload)
assert len(result) == 1
alma1 = alma()
alma2 = alma()
uid_url_table_mous = alma1.stage_data(result['Member ous id'])
uid_url_table_asdm = alma2.stage_data(result['Asdm uid'])
assert len(uid_url_table_asdm) == 1
assert len(uid_url_table_mous) == 32
> assert uid_url_table_mous[0]['URL'].split("/")[-1] == '2011.0.00121.S_2012-08-16_001_of_002.tar'
E assert 'ALMA\2011.0....01_of_002.tar' == '2011.0.00121....01_of_002.tar'
E - ALMA\2011.0.00121.S_2012-08-16_001_of_002.tar\2011.0.00121.S_2012-08-16_001_of_002.tar
E + 2011.0.00121.S_2012-08-16_001_of_002.tar
astroquery\alma\tests\test_alma_remote.py:215: AssertionError
---------------------------- Captured stdout call -----------------------------
INFO: Staging files... [astroquery.alma.core]
..INFO: Staging files... [astroquery.alma.core]
.
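The expected and actual values in test_cycle0 differ only in that the actual 'URL' carries backslash-separated components, so splitting on '/' alone keeps the whole Windows-joined tail. A separator-agnostic sketch of the basename extraction (an illustration, not the astroquery fix):

    import ntpath

    def url_basename(url):
        # ntpath.basename splits on both '\\' and '/', so it recovers the
        # final path component even when backslashes have leaked into the
        # URL on Windows.
        return ntpath.basename(url)

    bad = 'ALMA\\2011.0.00121.S_2012-08-16_001_of_002.tar\\2011.0.00121.S_2012-08-16_001_of_002.tar'
    assert url_basename(bad) == '2011.0.00121.S_2012-08-16_001_of_002.tar'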
____________________________ TestEso.test_SgrAstar ____________________________
self = <astroquery.eso.tests.test_eso_remote.TestEso instance at 0x000000000A1AC408>
temp_dir = 'c:\\-\\appdata\\local\\temp\\tmpm2gdwv'
def test_SgrAstar(self, temp_dir):
eso = Eso()
eso.cache_location = temp_dir
instruments = eso.list_instruments(cache=False)
# in principle, we should run both of these tests
# result_i = eso.query_instrument('midi', target='Sgr A*')
# Equivalent, does not depend on SESAME:
result_i = eso.query_instrument('midi', coord1=266.41681662,
coord2=-29.00782497, cache=False)
> surveys = eso.list_surveys(cache=False)
astroquery\eso\tests\test_eso_remote.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <astroquery.eso.core.EsoClass object at 0x000000000A7B5F28>
cache = False
def list_surveys(self, cache=True):
""" List all the available surveys (phase 3) in the ESO archive.
Returns
-------
survey_list : list of strings
cache : bool
Cache the response for faster subsequent retrieval
"""
if self._survey_list is None:
survey_list_response = self._request(
"GET", "http://archive.eso.org/wdb/wdb/adp/phase3_main/form",
cache=cache)
root = BeautifulSoup(survey_list_response.content, 'html5lib')
self._survey_list = []
collections_table = root.find('table', id='collections_table')
other_collections = root.find('select', id='collection_name_option')
> for element in (collections_table.findAll('input', type='checkbox') +
other_collections.findAll('option')):
E AttributeError: 'NoneType' object has no attribute 'findAll'
astroquery\eso\core.py:272: AttributeError
__________________________ TestEso.test_empty_return __________________________
self = <astroquery.eso.tests.test_eso_remote.TestEso instance at 0x000000000B3DDE88>
def test_empty_return(self):
# test for empty return with an object from the North
eso = Eso()
> surveys = eso.list_surveys(cache=False)
astroquery\eso\tests\test_eso_remote.py:88:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <astroquery.eso.core.EsoClass object at 0x000000000A0589E8>
cache = False
[list_surveys source omitted: identical to the traceback for TestEso.test_SgrAstar above]
> for element in (collections_table.findAll('input', type='checkbox') +
other_collections.findAll('option')):
E AttributeError: 'NoneType' object has no attribute 'findAll'
astroquery\eso\core.py:272: AttributeError
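Both ESO failures come from the same spot: BeautifulSoup's find() returned None because the form page no longer contains the expected table or select element, and the code calls findAll on the result unconditionally. A None-guarded sketch of the parsing step (the traceback does not show what the loop body does with each element, so this just collects the matches):

    from bs4 import BeautifulSoup

    def parse_survey_form(html):
        # Guard each find() result: a changed page layout yields None
        # instead of a tag, which is what raises the AttributeError above.
        root = BeautifulSoup(html, 'html5lib')
        collections_table = root.find('table', id='collections_table')
        other_collections = root.find('select', id='collection_name_option')
        checkboxes = (collections_table.findAll('input', type='checkbox')
                      if collections_table is not None else [])
        options = (other_collections.findAll('option')
                   if other_collections is not None else [])
        return checkboxes + options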
______________________ TestLcogt.test_query_object_meta _______________________
self = <astroquery.lcogt.tests.test_lcogt_remote.TestLcogt instance at 0x000000000A778888>
def test_query_object_meta(self):
> response = lcogt.core.Lcogt.query_object_async('M1', catalog='lco_img')
astroquery\lcogt\tests\test_lcogt_remote.py:25:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\lcogt\core.py:135: in query_object_async
cache=cache)
astroquery\query.py:188: in _request
auth=auth)
astroquery\query.py:60: in request
stream=stream, auth=auth)
C:\-\lib\site-packages\requests\sessions.py:475: in request
resp = self.send(prep, **send_kwargs)
C:\-\lib\site-packages\requests\sessions.py:585: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0x0000000008F316A0>
request = <PreparedRequest [GET]>, stream = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0x000000000AB12748>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E ConnectionError: HTTPConnectionPool(host='lcogtarchive.ipac.caltech.edu', port=80): Max retries exceeded with url: /cgi-bin/Gator/nph-query?catalog=lco_img&outfmt=3&outrows=500&objstr=M1 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x000000000AB12198>: Failed to establish a new connection: [Errno 10061] No connection could be made because the target machine actively refused it',))
C:\-\lib\site-packages\requests\adapters.py:467: ConnectionError
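All eight Lcogt failures are the same connection-refused error ([Errno 10061]) against lcogtarchive.ipac.caltech.edu:80, which points at the service being down or blocked rather than at a client-side bug. A quick reachability probe to run before retrying the remote tests (a sketch, not part of the test suite):

    import socket

    def is_reachable(host, port=80, timeout=5):
        # A plain TCP connect: 'connection refused' here reproduces the
        # NewConnectionError independently of requests/astroquery.
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return True
        except socket.error:
            return False

    print(is_reachable('lcogtarchive.ipac.caltech.edu'))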
______________________ TestLcogt.test_query_object_phot _______________________
self = <astroquery.lcogt.tests.test_lcogt_remote.TestLcogt instance at 0x000000000B3ABA08>
def test_query_object_phot(self):
> response = lcogt.core.Lcogt.query_object_async('M1', catalog='lco_cat')
astroquery\lcogt\tests\test_lcogt_remote.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\lcogt\core.py:135: in query_object_async
cache=cache)
astroquery\query.py:188: in _request
auth=auth)
astroquery\query.py:60: in request
stream=stream, auth=auth)
C:\-\lib\site-packages\requests\sessions.py:475: in request
resp = self.send(prep, **send_kwargs)
C:\-\lib\site-packages\requests\sessions.py:585: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0x0000000008F316A0>
request = <PreparedRequest [GET]>, stream = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0x000000000B2C5D30>
verify = True, cert = None, proxies = OrderedDict()
[requests HTTPAdapter.send source omitted: identical to the traceback for TestLcogt.test_query_object_meta above]
> raise ConnectionError(e, request=request)
E ConnectionError: HTTPConnectionPool(host='lcogtarchive.ipac.caltech.edu', port=80): Max retries exceeded with url: /cgi-bin/Gator/nph-query?catalog=lco_cat&outfmt=3&outrows=500&objstr=M1 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x000000000B69C3C8>: Failed to establish a new connection: [Errno 10061] No connection could be made because the target machine actively refused it',))
C:\-\lib\site-packages\requests\adapters.py:467: ConnectionError
___________________ TestLcogt.test_query_region_cone_async ____________________
self = <astroquery.lcogt.tests.test_lcogt_remote.TestLcogt instance at 0x0000000009795D08>
def test_query_region_cone_async(self):
response = lcogt.core.Lcogt.query_region_async(
> 'm31', catalog='lco_img', spatial='Cone', radius=2 * u.arcmin)
astroquery\lcogt\tests\test_lcogt_remote.py:34:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\lcogt\core.py:203: in query_region_async
cache=cache)
astroquery\query.py:188: in _request
auth=auth)
astroquery\query.py:60: in request
stream=stream, auth=auth)
C:\-\lib\site-packages\requests\sessions.py:475: in request
resp = self.send(prep, **send_kwargs)
C:\-\lib\site-packages\requests\sessions.py:585: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0x0000000008F316A0>
request = <PreparedRequest [GET]>, stream = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0x000000000B66AF98>
verify = True, cert = None, proxies = OrderedDict()
[requests HTTPAdapter.send source omitted: identical to the traceback for TestLcogt.test_query_object_meta above]
> raise ConnectionError(e, request=request)
E ConnectionError: HTTPConnectionPool(host='lcogtarchive.ipac.caltech.edu', port=80): Max retries exceeded with url: /cgi-bin/Gator/nph-query?radunits=arcmin&outfmt=3&objstr=m31&catalog=lco_img&outrows=500&spatial=Cone&radius=2.0 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x000000000A89ADD8>: Failed to establish a new connection: [Errno 10061] No connection could be made because the target machine actively refused it',))
C:\-\lib\site-packages\requests\adapters.py:467: ConnectionError
______________________ TestLcogt.test_query_region_cone _______________________
self = <astroquery.lcogt.tests.test_lcogt_remote.TestLcogt instance at 0x000000000A1AA2C8>
def test_query_region_cone(self):
result = lcogt.core.Lcogt.query_region(
> 'm31', catalog='lco_img', spatial='Cone', radius=2 * u.arcmin)
astroquery\lcogt\tests\test_lcogt_remote.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\utils\class_or_instance.py:25: in f
return self.fn(obj, *args, **kwds)
astroquery\utils\process_asyncs.py:26: in newmethod
response = getattr(self, async_method_name)(*args, **kwargs)
astroquery\lcogt\core.py:203: in query_region_async
cache=cache)
astroquery\query.py:188: in _request
auth=auth)
astroquery\query.py:60: in request
stream=stream, auth=auth)
C:\-\lib\site-packages\requests\sessions.py:475: in request
resp = self.send(prep, **send_kwargs)
C:\-\lib\site-packages\requests\sessions.py:585: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0x0000000008F316A0>
request = <PreparedRequest [GET]>, stream = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0x000000000B39B5C0>
verify = True, cert = None, proxies = OrderedDict()
[requests HTTPAdapter.send source omitted: identical to the traceback for TestLcogt.test_query_object_meta above]
> raise ConnectionError(e, request=request)
E ConnectionError: HTTPConnectionPool(host='lcogtarchive.ipac.caltech.edu', port=80): Max retries exceeded with url: /cgi-bin/Gator/nph-query?radunits=arcmin&outfmt=3&objstr=m31&catalog=lco_img&outrows=500&spatial=Cone&radius=2.0 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x000000000B39B2E8>: Failed to establish a new connection: [Errno 10061] No connection could be made because the target machine actively refused it',))
C:\-\lib\site-packages\requests\adapters.py:467: ConnectionError
____________________ TestLcogt.test_query_region_box_async ____________________
self = <astroquery.lcogt.tests.test_lcogt_remote.TestLcogt instance at 0x000000000B767DC8>
def test_query_region_box_async(self):
response = lcogt.core.Lcogt.query_region_async(
"00h42m44.330s +41d16m07.50s", catalog='lco_img', spatial='Box',
> width=2 * u.arcmin)
astroquery\lcogt\tests\test_lcogt_remote.py:46:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\lcogt\core.py:203: in query_region_async
cache=cache)
astroquery\query.py:188: in _request
auth=auth)
astroquery\query.py:60: in request
stream=stream, auth=auth)
C:\-\lib\site-packages\requests\sessions.py:475: in request
resp = self.send(prep, **send_kwargs)
C:\-\lib\site-packages\requests\sessions.py:585: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0x0000000008F316A0>
request = <PreparedRequest [GET]>, stream = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0x000000000A7B0F60>
verify = True, cert = None, proxies = OrderedDict()
[requests HTTPAdapter.send source omitted: identical to the traceback for TestLcogt.test_query_object_meta above]
> raise ConnectionError(e, request=request)
E ConnectionError: HTTPConnectionPool(host='lcogtarchive.ipac.caltech.edu', port=80): Max retries exceeded with url: /cgi-bin/Gator/nph-query?catalog=lco_img&outfmt=3&outrows=500&objstr=10.6847083333+%2B41.26875&spatial=Box&size=120.0 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x000000000A7B0DA0>: Failed to establish a new connection: [Errno 10061] No connection could be made because the target machine actively refused it',))
C:\-\lib\site-packages\requests\adapters.py:467: ConnectionError
_______________________ TestLcogt.test_query_region_box _______________________
self = <astroquery.lcogt.tests.test_lcogt_remote.TestLcogt instance at 0x000000000AB22E48>
def test_query_region_box(self):
result = lcogt.core.Lcogt.query_region(
"00h42m44.330s +41d16m07.50s", catalog='lco_img', spatial='Box',
> width=2 * u.arcmin)
astroquery\lcogt\tests\test_lcogt_remote.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\utils\class_or_instance.py:25: in f
return self.fn(obj, *args, **kwds)
astroquery\utils\process_asyncs.py:26: in newmethod
response = getattr(self, async_method_name)(*args, **kwargs)
astroquery\lcogt\core.py:203: in query_region_async
cache=cache)
astroquery\query.py:188: in _request
auth=auth)
astroquery\query.py:60: in request
stream=stream, auth=auth)
C:\-\lib\site-packages\requests\sessions.py:475: in request
resp = self.send(prep, **send_kwargs)
C:\-\lib\site-packages\requests\sessions.py:585: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0x0000000008F316A0>
request = <PreparedRequest [GET]>, stream = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0x000000000A7B5898>
verify = True, cert = None, proxies = OrderedDict()
[requests HTTPAdapter.send source omitted: identical to the traceback for TestLcogt.test_query_object_meta above]
> raise ConnectionError(e, request=request)
E ConnectionError: HTTPConnectionPool(host='lcogtarchive.ipac.caltech.edu', port=80): Max retries exceeded with url: /cgi-bin/Gator/nph-query?catalog=lco_img&outfmt=3&outrows=500&objstr=10.6847083333+%2B41.26875&spatial=Box&size=120.0 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x000000000A7B51D0>: Failed to establish a new connection: [Errno 10061] No connection could be made because the target machine actively refused it',))
C:\-\lib\site-packages\requests\adapters.py:467: ConnectionError
__________________ TestLcogt.test_query_region_async_polygon __________________
self = <astroquery.lcogt.tests.test_lcogt_remote.TestLcogt instance at 0x000000000B285B88>
def test_query_region_async_polygon(self):
polygon = [SkyCoord(ra=10.1, dec=10.1, unit=(u.deg, u.deg),
frame='icrs'),
SkyCoord(ra=10.0, dec=10.1, unit=(u.deg, u.deg),
frame='icrs'),
SkyCoord(ra=10.0, dec=10.0, unit=(u.deg, u.deg),
frame='icrs')]
response = lcogt.core.Lcogt.query_region_async(
> "m31", catalog="lco_img", spatial="Polygon", polygon=polygon)
astroquery\lcogt\tests\test_lcogt_remote.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\lcogt\core.py:203: in query_region_async
cache=cache)
astroquery\query.py:188: in _request
auth=auth)
astroquery\query.py:60: in request
stream=stream, auth=auth)
C:\-\lib\site-packages\requests\sessions.py:475: in request
resp = self.send(prep, **send_kwargs)
C:\-\lib\site-packages\requests\sessions.py:585: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0x0000000008F316A0>
request = <PreparedRequest [GET]>, stream = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0x000000000A892DD8>
verify = True, cert = None, proxies = OrderedDict()
[requests HTTPAdapter.send source omitted: identical to the traceback for TestLcogt.test_query_object_meta above]
> raise ConnectionError(e, request=request)
E ConnectionError: HTTPConnectionPool(host='lcogtarchive.ipac.caltech.edu', port=80): Max retries exceeded with url: /cgi-bin/Gator/nph-query?catalog=lco_img&outfmt=3&outrows=500&polygon=10.1+%2B10.1%2C10.0+%2B10.1%2C10.0+%2B10.0&objstr=m31&spatial=Polygon (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x000000000B2C5C50>: Failed to establish a new connection: [Errno 10061] No connection could be made because the target machine actively refused it',))
C:\-\lib\site-packages\requests\adapters.py:467: ConnectionError
_____________________ TestLcogt.test_query_region_polygon _____________________
self = <astroquery.lcogt.tests.test_lcogt_remote.TestLcogt instance at 0x000000000BAE8F88>
def test_query_region_polygon(self):
polygon = [(10.1, 10.1), (10.0, 10.1), (10.0, 10.0)]
result = lcogt.core.Lcogt.query_region(
> "m31", catalog="lco_img", spatial="Polygon", polygon=polygon)
astroquery\lcogt\tests\test_lcogt_remote.py:69:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\utils\class_or_instance.py:25: in f
return self.fn(obj, *args, **kwds)
astroquery\utils\process_asyncs.py:26: in newmethod
response = getattr(self, async_method_name)(*args, **kwargs)
astroquery\lcogt\core.py:203: in query_region_async
cache=cache)
astroquery\query.py:188: in _request
auth=auth)
astroquery\query.py:60: in request
stream=stream, auth=auth)
C:\-\lib\site-packages\requests\sessions.py:475: in request
resp = self.send(prep, **send_kwargs)
C:\-\lib\site-packages\requests\sessions.py:585: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0x0000000008F316A0>
request = <PreparedRequest [GET]>, stream = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0x000000000A45AE48>
verify = True, cert = None, proxies = OrderedDict()
[requests HTTPAdapter.send source omitted: identical to the traceback for TestLcogt.test_query_object_meta above]
> raise ConnectionError(e, request=request)
E ConnectionError: HTTPConnectionPool(host='lcogtarchive.ipac.caltech.edu', port=80): Max retries exceeded with url: /cgi-bin/Gator/nph-query?catalog=lco_img&outfmt=3&outrows=500&polygon=10.1+%2B10.1%2C10.0+%2B10.1%2C10.0+%2B10.0&objstr=m31&spatial=Polygon (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x000000000A45AE80>: Failed to establish a new connection: [Errno 10061] No connection could be made because the target machine actively refused it',))
C:\-\lib\site-packages\requests\adapters.py:467: ConnectionError
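The LCOGT failure above is a plain connection refusal on port 80, not an astroquery bug: the archive host could not be reached from this machine. A minimal sketch, assuming only the public requests API (the helper name is invented here), of probing a service before running its remote-data tests:

    import requests

    def service_reachable(url, timeout=10):
        # requests.ConnectionError covers DNS failures and refused
        # connections such as the [Errno 10061] above.
        try:
            requests.head(url, timeout=timeout)
            return True
        except (requests.ConnectionError, requests.Timeout):
            return False

    # e.g. service_reachable('http://lcogtarchive.ipac.caltech.edu')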
_________________________________ test_simple _________________________________
@remote_data
def test_simple():
x = nasa_ads.ADS.query_simple(
> "^Persson Origin of water around deeply embedded low-mass protostars")
astroquery\nasa_ads\tests\test_nasaads.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\utils\class_or_instance.py:25: in f
return self.fn(obj, *args, **kwds)
astroquery\nasa_ads\core.py:51: in query_simple
response.raise_for_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Response [504]>
def raise_for_status(self):
"""Raises stored :class:`HTTPError`, if one occurred."""
http_error_msg = ''
if 400 <= self.status_code < 500:
http_error_msg = '%s Client Error: %s for url: %s' % (self.status_code, self.reason, self.url)
elif 500 <= self.status_code < 600:
http_error_msg = '%s Server Error: %s for url: %s' % (self.status_code, self.reason, self.url)
if http_error_msg:
> raise HTTPError(http_error_msg, response=self)
E HTTPError: 504 Server Error: Gateway Time-out for url: http://adswww.harvard.edu/cgi-bin/basic_connect
C:\-\lib\site-packages\requests\models.py:844: HTTPError
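The ADS failure is a gateway timeout surfaced by response.raise_for_status() inside query_simple, so the service itself was overloaded or down. A sketch of retrying a flaky 5xx endpoint with plain requests; the retry count and backoff are arbitrary illustration values, not astroquery behaviour:

    import time
    import requests

    def get_with_retries(url, retries=3, backoff=2.0, **kwargs):
        # Retry on 5xx responses like the 504 above; 4xx responses
        # and the final failed attempt still raise HTTPError.
        for attempt in range(retries):
            response = requests.get(url, **kwargs)
            if response.status_code < 500:
                response.raise_for_status()
                return response
            if attempt == retries - 1:
                response.raise_for_status()
            time.sleep(backoff * (attempt + 1))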
_____________________________ test_images_timeout _____________________________
@remote_data
def test_images_timeout():
"""
An independent timeout test to verify that test_images_timeout in the
TestSDSSRemote class should be working. Consider this a regression test.
"""
coords = coordinates.SkyCoord('0h8m05.63s +14d50m23.3s')
xid = sdss.SDSS.query_region(coords)
> assert len(xid) == 18
E assert 2 == 18
E + where 2 = len(<Table length=2>\n titleSkyserver_Erro...COLOR=pink><font color=red><br>An error has occurred.</font></H3></BODY></HTML>)
astroquery\sdss\tests\test_sdss_remote.py:17: AssertionError
_____________________ TestSDSSRemote.test_images_timeout ______________________
self = <astroquery.sdss.tests.test_sdss_remote.TestSDSSRemote instance at 0x000000000FE3FC88>
def test_images_timeout(self):
"""
This test *must* be run before `test_sdss_image` because that query
caches!
"""
xid = sdss.SDSS.query_region(self.coords)
> assert len(xid) == 18
E assert 2 == 18
E + where 2 = len(<Table length=2>\n titleSkyserver_Erro...COLOR=pink><font color=red><br>An error has occurred.</font></H3></BODY></HTML>)
astroquery\sdss\tests\test_sdss_remote.py:34: AssertionError
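Both test_images_timeout failures received a 2-row table whose only column is titleSkyserver_Errortitle: the SkyServer HTML error page was parsed as if it were query output, so the length assertion never stood a chance. A hedged guard one could apply before trusting such a result (the helper is hypothetical; the column name is copied from the trace):

    def looks_like_skyserver_error(table):
        # The parsed error page surfaces as a single bogus column
        # instead of the expected query columns.
        return table is None or 'titleSkyserver_Errortitle' in table.colnames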
______________________ TestSDSSRemote.test_sdss_spectrum ______________________
self = <astroquery.sdss.tests.test_sdss_remote.TestSDSSRemote instance at 0x000000000FDD8748>
def test_sdss_spectrum(self):
xid = sdss.SDSS.query_region(self.coords, spectro=True)
assert isinstance(xid, Table)
> sp = sdss.SDSS.get_spectra(matches=xid)
astroquery\sdss\tests\test_sdss_remote.py:42:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\sdss\core.py:606: in get_spectra
data_release=data_release)
astroquery\sdss\core.py:581: in get_spectra_async
instrument=row['instrument'].decode().lower(),
C:\-\lib\site-packages\astropy\table\row.py:46: in __getitem__
return self._table.columns[item][self._index]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <TableColumns names=('titleSkyserver_Errortitle')>, item = 'instrument'
def __getitem__(self, item):
"""Get items from a TableColumns object.
::
tc = TableColumns(cols=[Column(name='a'), Column(name='b'), Column(name='c')])
tc['a'] # Column('a')
tc[1] # Column('b')
tc['a', 'b'] # <TableColumns names=('a', 'b')>
tc[1:3] # <TableColumns names=('b', 'c')>
"""
if isinstance(item, six.string_types):
> return OrderedDict.__getitem__(self, item)
E KeyError: u'instrument'
C:\-\lib\site-packages\astropy\table\table.py:98: KeyError
____________________ TestSDSSRemote.test_sdss_spectrum_mjd ____________________
self = <astroquery.sdss.tests.test_sdss_remote.TestSDSSRemote instance at 0x000000000FE28EC8>
def test_sdss_spectrum_mjd(self):
> sp = sdss.SDSS.get_spectra(plate=2345, fiberID=572)
astroquery\sdss\tests\test_sdss_remote.py:45:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\sdss\core.py:606: in get_spectra
data_release=data_release)
astroquery\sdss\core.py:581: in get_spectra_async
instrument=row['instrument'].decode().lower(),
C:\-\lib\site-packages\astropy\table\row.py:46: in __getitem__
return self._table.columns[item][self._index]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <TableColumns names=('titleSkyserver_Errortitle')>, item = 'instrument'
def __getitem__(self, item):
"""Get items from a TableColumns object.
::
tc = TableColumns(cols=[Column(name='a'), Column(name='b'), Column(name='c')])
tc['a'] # Column('a')
tc[1] # Column('b')
tc['a', 'b'] # <TableColumns names=('a', 'b')>
tc[1:3] # <TableColumns names=('b', 'c')>
"""
if isinstance(item, six.string_types):
> return OrderedDict.__getitem__(self, item)
E KeyError: u'instrument'
C:\-\lib\site-packages\astropy\table\table.py:98: KeyError
__________________ TestSDSSRemote.test_sdss_spectrum_coords ___________________
self = <astroquery.sdss.tests.test_sdss_remote.TestSDSSRemote instance at 0x000000000FDBCC48>
def test_sdss_spectrum_coords(self):
> sp = sdss.SDSS.get_spectra(self.coords)
astroquery\sdss\tests\test_sdss_remote.py:48:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\sdss\core.py:606: in get_spectra
data_release=data_release)
astroquery\sdss\core.py:581: in get_spectra_async
instrument=row['instrument'].decode().lower(),
C:\-\lib\site-packages\astropy\table\row.py:46: in __getitem__
return self._table.columns[item][self._index]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <TableColumns names=('titleSkyserver_Errortitle')>, item = 'instrument'
def __getitem__(self, item):
"""Get items from a TableColumns object.
::
tc = TableColumns(cols=[Column(name='a'), Column(name='b'), Column(name='c')])
tc['a'] # Column('a')
tc[1] # Column('b')
tc['a', 'b'] # <TableColumns names=('a', 'b')>
tc[1:3] # <TableColumns names=('b', 'c')>
"""
if isinstance(item, six.string_types):
> return OrderedDict.__getitem__(self, item)
E KeyError: u'instrument'
C:\-\lib\site-packages\astropy\table\table.py:98: KeyError
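All three get_spectra failures die on the same line of astroquery\sdss\core.py: get_spectra_async reads row['instrument'] from a table that only contains the error column. A sketch of failing fast with a readable message instead of a bare KeyError; only 'instrument' is confirmed by the trace, so any further required names are assumptions:

    def validate_matches(table, required=('instrument',)):
        # Check the match table before handing it to get_spectra.
        missing = [name for name in required if name not in table.colnames]
        if missing:
            raise ValueError('match table is missing columns: %s'
                             % ', '.join(missing))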
________________________ TestSDSSRemote.test_sdss_sql _________________________
self = <astroquery.sdss.tests.test_sdss_remote.TestSDSSRemote instance at 0x000000000FDB3E08>
def test_sdss_sql(self):
query = """
select top 10
z, ra, dec, bestObjID
from
specObj
where
class = 'galaxy'
and z > 0.3
and zWarning = 0
"""
> xid = sdss.SDSS.query_sql(query)
astroquery\sdss\tests\test_sdss_remote.py:61:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\utils\class_or_instance.py:25: in f
return self.fn(obj, *args, **kwds)
astroquery\utils\process_asyncs.py:29: in newmethod
result = self._parse_result(response, verbose=verbose)
astroquery\sdss\core.py:843: in _parse_result
comments='#'))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
fname = <_io.BytesIO object at 0x000000000A0838E0>, dtype = None, comments = '#'
delimiter = ',', skip_header = 1, skip_footer = 0
converters = [<numpy.lib._iotools.StringConverter object at 0x000000000AA3F9E8>]
missing_values = [['']], filling_values = [None], usecols = None
names = ['DOCTYPE_html_PUBLIC_W3CDTD_XHTML_10_TransitionalEN_httpwwww3orgTRxhtml1DTDxhtml1transitionaldtd']
excludelist = None, deletechars = None, replace_space = '_', autostrip = False
case_sensitive = True, defaultfmt = 'f%i', unpack = None, usemask = False
loose = True, invalid_raise = True, max_rows = None
def genfromtxt(fname, dtype=float, comments='#', delimiter=None,
skip_header=0, skip_footer=0, converters=None,
missing_values=None, filling_values=None, usecols=None,
names=None, excludelist=None, deletechars=None,
replace_space='_', autostrip=False, case_sensitive=True,
defaultfmt="f%i", unpack=None, usemask=False, loose=True,
invalid_raise=True, max_rows=None):
"""
Load data from a text file, with missing values handled as specified.
Each line past the first `skip_header` lines is split at the `delimiter`
character, and characters following the `comments` character are discarded.
Parameters
----------
fname : file, str, list of str, generator
File, filename, list, or generator to read. If the filename
extension is `.gz` or `.bz2`, the file is first decompressed. Note
that generators must return byte strings in Python 3k. The strings
in a list or produced by a generator are treated as lines.
dtype : dtype, optional
Data type of the resulting array.
If None, the dtypes will be determined by the contents of each
column, individually.
comments : str, optional
The character used to indicate the start of a comment.
All the characters occurring on a line after a comment are discarded
delimiter : str, int, or sequence, optional
The string used to separate values. By default, any consecutive
whitespaces act as delimiter. An integer or sequence of integers
can also be provided as width(s) of each field.
skiprows : int, optional
`skiprows` was removed in numpy 1.10. Please use `skip_header` instead.
skip_header : int, optional
The number of lines to skip at the beginning of the file.
skip_footer : int, optional
The number of lines to skip at the end of the file.
converters : variable, optional
The set of functions that convert the data of a column to a value.
The converters can also be used to provide a default value
for missing data: ``converters = {3: lambda s: float(s or 0)}``.
missing : variable, optional
`missing` was removed in numpy 1.10. Please use `missing_values`
instead.
missing_values : variable, optional
The set of strings corresponding to missing data.
filling_values : variable, optional
The set of values to be used as default when the data are missing.
usecols : sequence, optional
Which columns to read, with 0 being the first. For example,
``usecols = (1, 4, 5)`` will extract the 2nd, 5th and 6th columns.
names : {None, True, str, sequence}, optional
If `names` is True, the field names are read from the first valid line
after the first `skip_header` lines.
If `names` is a sequence or a single-string of comma-separated names,
the names will be used to define the field names in a structured dtype.
If `names` is None, the names of the dtype fields will be used, if any.
excludelist : sequence, optional
A list of names to exclude. This list is appended to the default list
['return','file','print']. Excluded names are appended an underscore:
for example, `file` would become `file_`.
deletechars : str, optional
A string combining invalid characters that must be deleted from the
names.
defaultfmt : str, optional
A format used to define default field names, such as "f%i" or "f_%02i".
autostrip : bool, optional
Whether to automatically strip white spaces from the variables.
replace_space : char, optional
Character(s) used in replacement of white spaces in the variables
names. By default, use a '_'.
case_sensitive : {True, False, 'upper', 'lower'}, optional
If True, field names are case sensitive.
If False or 'upper', field names are converted to upper case.
If 'lower', field names are converted to lower case.
unpack : bool, optional
If True, the returned array is transposed, so that arguments may be
unpacked using ``x, y, z = loadtxt(...)``
usemask : bool, optional
If True, return a masked array.
If False, return a regular array.
loose : bool, optional
If True, do not raise errors for invalid values.
invalid_raise : bool, optional
If True, an exception is raised if an inconsistency is detected in the
number of columns.
If False, a warning is emitted and the offending lines are skipped.
max_rows : int, optional
The maximum number of rows to read. Must not be used with skip_footer
at the same time. If given, the value must be at least 1. Default is
to read the entire file.
.. versionadded:: 1.10.0
Returns
-------
out : ndarray
Data read from the text file. If `usemask` is True, this is a
masked array.
See Also
--------
numpy.loadtxt : equivalent function when no data is missing.
Notes
-----
* When spaces are used as delimiters, or when no delimiter has been given
as input, there should not be any missing data between two fields.
* When the variables are named (either by a flexible dtype or with `names`),
there must not be any header in the file (else a ValueError
exception is raised).
* Individual values are not stripped of spaces by default.
When using a custom converter, make sure the function does remove spaces.
References
----------
.. [1] Numpy User Guide, section `I/O with Numpy
<http://docs.scipy.org/doc/numpy/user/basics.io.genfromtxt.html>`_.
Examples
--------
>>> from io import StringIO
>>> import numpy as np
Comma delimited file with mixed dtype
>>> s = StringIO("1,1.3,abcde")
>>> data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'),
... ('mystring','S5')], delimiter=",")
>>> data
array((1, 1.3, 'abcde'),
dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', '|S5')])
Using dtype = None
>>> s.seek(0) # needed for StringIO example only
>>> data = np.genfromtxt(s, dtype=None,
... names = ['myint','myfloat','mystring'], delimiter=",")
>>> data
array((1, 1.3, 'abcde'),
dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', '|S5')])
Specifying dtype and names
>>> s.seek(0)
>>> data = np.genfromtxt(s, dtype="i8,f8,S5",
... names=['myint','myfloat','mystring'], delimiter=",")
>>> data
array((1, 1.3, 'abcde'),
dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', '|S5')])
An example with fixed-width columns
>>> s = StringIO("11.3abcde")
>>> data = np.genfromtxt(s, dtype=None, names=['intvar','fltvar','strvar'],
... delimiter=[1,3,5])
>>> data
array((1, 1.3, 'abcde'),
dtype=[('intvar', '<i8'), ('fltvar', '<f8'), ('strvar', '|S5')])
"""
if max_rows is not None:
if skip_footer:
raise ValueError(
"The keywords 'skip_footer' and 'max_rows' can not be "
"specified at the same time.")
if max_rows < 1:
raise ValueError("'max_rows' must be at least 1.")
# Py3 data conversions to bytes, for convenience
if comments is not None:
comments = asbytes(comments)
if isinstance(delimiter, unicode):
delimiter = asbytes(delimiter)
if isinstance(missing_values, (unicode, list, tuple)):
missing_values = asbytes_nested(missing_values)
#
if usemask:
from numpy.ma import MaskedArray, make_mask_descr
# Check the input dictionary of converters
user_converters = converters or {}
if not isinstance(user_converters, dict):
raise TypeError(
"The input argument 'converter' should be a valid dictionary "
"(got '%s' instead)" % type(user_converters))
# Initialize the filehandle, the LineSplitter and the NameValidator
own_fhd = False
try:
if isinstance(fname, basestring):
if sys.version_info[0] == 2:
fhd = iter(np.lib._datasource.open(fname, 'rbU'))
else:
fhd = iter(np.lib._datasource.open(fname, 'rb'))
own_fhd = True
else:
fhd = iter(fname)
except TypeError:
raise TypeError(
"fname must be a string, filehandle, list of strings, "
"or generator. Got %s instead." % type(fname))
split_line = LineSplitter(delimiter=delimiter, comments=comments,
autostrip=autostrip)._handyman
validate_names = NameValidator(excludelist=excludelist,
deletechars=deletechars,
case_sensitive=case_sensitive,
replace_space=replace_space)
# Skip the first `skip_header` rows
for i in range(skip_header):
next(fhd)
# Keep on until we find the first valid values
first_values = None
try:
while not first_values:
first_line = next(fhd)
if names is True:
if comments in first_line:
first_line = (
asbytes('').join(first_line.split(comments)[1:]))
first_values = split_line(first_line)
except StopIteration:
# return an empty array if the datafile is empty
first_line = asbytes('')
first_values = []
warnings.warn('genfromtxt: Empty input file: "%s"' % fname)
# Should we take the first values as names ?
if names is True:
fval = first_values[0].strip()
if fval in comments:
del first_values[0]
# Check the columns to use: make sure `usecols` is a list
if usecols is not None:
try:
usecols = [_.strip() for _ in usecols.split(",")]
except AttributeError:
try:
usecols = list(usecols)
except TypeError:
usecols = [usecols, ]
nbcols = len(usecols or first_values)
# Check the names and overwrite the dtype.names if needed
if names is True:
names = validate_names([_bytes_to_name(_.strip())
for _ in first_values])
first_line = asbytes('')
elif _is_string_like(names):
names = validate_names([_.strip() for _ in names.split(',')])
elif names:
names = validate_names(names)
# Get the dtype
if dtype is not None:
dtype = easy_dtype(dtype, defaultfmt=defaultfmt, names=names,
excludelist=excludelist,
deletechars=deletechars,
case_sensitive=case_sensitive,
replace_space=replace_space)
# Make sure the names is a list (for 2.5)
if names is not None:
names = list(names)
if usecols:
for (i, current) in enumerate(usecols):
# if usecols is a list of names, convert to a list of indices
if _is_string_like(current):
usecols[i] = names.index(current)
elif current < 0:
usecols[i] = current + len(first_values)
# If the dtype is not None, make sure we update it
if (dtype is not None) and (len(dtype) > nbcols):
descr = dtype.descr
dtype = np.dtype([descr[_] for _ in usecols])
names = list(dtype.names)
# If `names` is not None, update the names
elif (names is not None) and (len(names) > nbcols):
names = [names[_] for _ in usecols]
elif (names is not None) and (dtype is not None):
names = list(dtype.names)
# Process the missing values ...............................
# Rename missing_values for convenience
user_missing_values = missing_values or ()
# Define the list of missing_values (one column: one list)
missing_values = [list([asbytes('')]) for _ in range(nbcols)]
# We have a dictionary: process it field by field
if isinstance(user_missing_values, dict):
# Loop on the items
for (key, val) in user_missing_values.items():
# Is the key a string ?
if _is_string_like(key):
try:
# Transform it into an integer
key = names.index(key)
except ValueError:
# We couldn't find it: the name must have been dropped
continue
# Redefine the key as needed if it's a column number
if usecols:
try:
key = usecols.index(key)
except ValueError:
pass
# Transform the value as a list of string
if isinstance(val, (list, tuple)):
val = [str(_) for _ in val]
else:
val = [str(val), ]
# Add the value(s) to the current list of missing
if key is None:
# None acts as default
for miss in missing_values:
miss.extend(val)
else:
missing_values[key].extend(val)
# We have a sequence : each item matches a column
elif isinstance(user_missing_values, (list, tuple)):
for (value, entry) in zip(user_missing_values, missing_values):
value = str(value)
if value not in entry:
entry.append(value)
# We have a string : apply it to all entries
elif isinstance(user_missing_values, bytes):
user_value = user_missing_values.split(asbytes(","))
for entry in missing_values:
entry.extend(user_value)
# We have something else: apply it to all entries
else:
for entry in missing_values:
entry.extend([str(user_missing_values)])
# Process the filling_values ...............................
# Rename the input for convenience
user_filling_values = filling_values
if user_filling_values is None:
user_filling_values = []
# Define the default
filling_values = [None] * nbcols
# We have a dictionary : update each entry individually
if isinstance(user_filling_values, dict):
for (key, val) in user_filling_values.items():
if _is_string_like(key):
try:
# Transform it into an integer
key = names.index(key)
except ValueError:
# We couldn't find it: the name must have been dropped,
continue
# Redefine the key if it's a column number and usecols is defined
if usecols:
try:
key = usecols.index(key)
except ValueError:
pass
# Add the value to the list
filling_values[key] = val
# We have a sequence : update on a one-to-one basis
elif isinstance(user_filling_values, (list, tuple)):
n = len(user_filling_values)
if (n <= nbcols):
filling_values[:n] = user_filling_values
else:
filling_values = user_filling_values[:nbcols]
# We have something else : use it for all entries
else:
filling_values = [user_filling_values] * nbcols
# Initialize the converters ................................
if dtype is None:
# Note: we can't use a [...]*nbcols, as we would have 3 times the same
# ... converter, instead of 3 different converters.
converters = [StringConverter(None, missing_values=miss, default=fill)
for (miss, fill) in zip(missing_values, filling_values)]
else:
dtype_flat = flatten_dtype(dtype, flatten_base=True)
# Initialize the converters
if len(dtype_flat) > 1:
# Flexible type : get a converter from each dtype
zipit = zip(dtype_flat, missing_values, filling_values)
converters = [StringConverter(dt, locked=True,
missing_values=miss, default=fill)
for (dt, miss, fill) in zipit]
else:
# Set to a default converter (but w/ different missing values)
zipit = zip(missing_values, filling_values)
converters = [StringConverter(dtype, locked=True,
missing_values=miss, default=fill)
for (miss, fill) in zipit]
# Update the converters to use the user-defined ones
uc_update = []
for (j, conv) in user_converters.items():
# If the converter is specified by column names, use the index instead
if _is_string_like(j):
try:
j = names.index(j)
i = j
except ValueError:
continue
elif usecols:
try:
i = usecols.index(j)
except ValueError:
# Unused converter specified
continue
else:
i = j
# Find the value to test - first_line is not filtered by usecols:
if len(first_line):
testing_value = first_values[j]
else:
testing_value = None
converters[i].update(conv, locked=True,
testing_value=testing_value,
default=filling_values[i],
missing_values=missing_values[i],)
uc_update.append((i, conv))
# Make sure we have the corrected keys in user_converters...
user_converters.update(uc_update)
# Fixme: possible error as following variable never used.
#miss_chars = [_.missing_values for _ in converters]
# Initialize the output lists ...
# ... rows
rows = []
append_to_rows = rows.append
# ... masks
if usemask:
masks = []
append_to_masks = masks.append
# ... invalid
invalid = []
append_to_invalid = invalid.append
# Parse each line
for (i, line) in enumerate(itertools.chain([first_line, ], fhd)):
values = split_line(line)
nbvalues = len(values)
# Skip an empty line
if nbvalues == 0:
continue
if usecols:
# Select only the columns we need
try:
values = [values[_] for _ in usecols]
except IndexError:
append_to_invalid((i + skip_header + 1, nbvalues))
continue
elif nbvalues != nbcols:
append_to_invalid((i + skip_header + 1, nbvalues))
continue
# Store the values
append_to_rows(tuple(values))
if usemask:
append_to_masks(tuple([v.strip() in m
for (v, m) in zip(values,
missing_values)]))
if len(rows) == max_rows:
break
if own_fhd:
fhd.close()
# Upgrade the converters (if needed)
if dtype is None:
for (i, converter) in enumerate(converters):
current_column = [itemgetter(i)(_m) for _m in rows]
try:
converter.iterupgrade(current_column)
except ConverterLockError:
errmsg = "Converter #%i is locked and cannot be upgraded: " % i
current_column = map(itemgetter(i), rows)
for (j, value) in enumerate(current_column):
try:
converter.upgrade(value)
except (ConverterError, ValueError):
errmsg += "(occurred line #%i for value '%s')"
errmsg %= (j + 1 + skip_header, value)
raise ConverterError(errmsg)
# Check that we don't have invalid values
nbinvalid = len(invalid)
if nbinvalid > 0:
nbrows = len(rows) + nbinvalid - skip_footer
# Construct the error message
template = " Line #%%i (got %%i columns instead of %i)" % nbcols
if skip_footer > 0:
nbinvalid_skipped = len([_ for _ in invalid
if _[0] > nbrows + skip_header])
invalid = invalid[:nbinvalid - nbinvalid_skipped]
skip_footer -= nbinvalid_skipped
#
# nbrows -= skip_footer
# errmsg = [template % (i, nb)
# for (i, nb) in invalid if i < nbrows]
# else:
errmsg = [template % (i, nb)
for (i, nb) in invalid]
if len(errmsg):
errmsg.insert(0, "Some errors were detected !")
errmsg = "\n".join(errmsg)
# Raise an exception ?
if invalid_raise:
> raise ValueError(errmsg)
E ValueError: Some errors were detected !
E Line #60 (got 4 columns instead of 1)
E Line #123 (got 2 columns instead of 1)
E Line #124 (got 2 columns instead of 1)
E Line #129 (got 4 columns instead of 1)
E Line #131 (got 4 columns instead of 1)
E Line #132 (got 2 columns instead of 1)
E Line #134 (got 2 columns instead of 1)
C:\-\lib\site-packages\numpy\lib\npyio.py:1769: ValueError
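The test_sdss_sql failure is the same wrong-payload problem in another guise: the inferred column name above is the mangled XHTML DOCTYPE of an error page, and the complaints about lines with 2 or 4 columns are HTML markup being fed through genfromtxt. A hypothetical sniff test on the raw response bytes before parsing (not astroquery's actual _parse_result logic):

    def is_html_payload(raw_bytes):
        # SkyServer error pages begin with an XHTML doctype rather
        # than a '#'-commented CSV header.
        head = raw_bytes.lstrip()[:100].lower()
        return head.startswith(b'<!doctype') or head.startswith(b'<html')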
_______________________ TestSDSSRemote.test_sdss_image ________________________
self = <astroquery.sdss.tests.test_sdss_remote.TestSDSSRemote instance at 0x000000000B2A8348>
def test_sdss_image(self):
xid = sdss.SDSS.query_region(self.coords)
assert isinstance(xid, Table)
> img = sdss.SDSS.get_images(matches=xid)
astroquery\sdss\tests\test_sdss_remote.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\sdss\core.py:749: in get_images
band=band, timeout=timeout, get_query_payload=get_query_payload)
astroquery\sdss\core.py:722: in get_images_async
link = linkstr.format(base=conf.sas_baseurl, run=row['run'],
C:\-\lib\site-packages\astropy\table\row.py:46: in __getitem__
return self._table.columns[item][self._index]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <TableColumns names=('titleSkyserver_Errortitle')>, item = 'run'
def __getitem__(self, item):
"""Get items from a TableColumns object.
::
tc = TableColumns(cols=[Column(name='a'), Column(name='b'), Column(name='c')])
tc['a'] # Column('a')
tc[1] # Column('b')
tc['a', 'b'] # <TableColumns names=('a', 'b')>
tc[1:3] # <TableColumns names=('b', 'c')>
"""
if isinstance(item, six.string_types):
> return OrderedDict.__getitem__(self, item)
E KeyError: u'run'
C:\-\lib\site-packages\astropy\table\table.py:98: KeyError
_____________________ TestSDSSRemote.test_sdss_image_run ______________________
self = <astroquery.sdss.tests.test_sdss_remote.TestSDSSRemote instance at 0x0000000009932CC8>
def test_sdss_image_run(self):
> img = sdss.SDSS.get_images(run=1904, camcol=3, field=164)
astroquery\sdss\tests\test_sdss_remote.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\sdss\core.py:749: in get_images
band=band, timeout=timeout, get_query_payload=get_query_payload)
astroquery\sdss\core.py:710: in get_images_async
matches = self._parse_result(r)
astroquery\sdss\core.py:843: in _parse_result
comments='#'))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
fname = <_io.BytesIO object at 0x000000000B27FF10>, dtype = None, comments = '#'
delimiter = ',', skip_header = 1, skip_footer = 0
converters = [<numpy.lib._iotools.StringConverter object at 0x000000000AA56898>]
missing_values = [['']], filling_values = [None], usecols = None
names = ['DOCTYPE_html_PUBLIC_W3CDTD_XHTML_10_TransitionalEN_httpwwww3orgTRxhtml1DTDxhtml1transitionaldtd']
excludelist = None, deletechars = None, replace_space = '_', autostrip = False
case_sensitive = True, defaultfmt = 'f%i', unpack = None, usemask = False
loose = True, invalid_raise = True, max_rows = None
[numpy genfromtxt source context identical to the test_sdss_sql traceback above; elided]
> raise ValueError(errmsg)
E ValueError: Some errors were detected !
E Line #60 (got 4 columns instead of 1)
E Line #123 (got 2 columns instead of 1)
E Line #124 (got 2 columns instead of 1)
E Line #129 (got 4 columns instead of 1)
E Line #131 (got 4 columns instead of 1)
E Line #132 (got 2 columns instead of 1)
E Line #134 (got 2 columns instead of 1)
C:\-\lib\site-packages\numpy\lib\npyio.py:1769: ValueError
____________________ TestSDSSRemote.test_sdss_image_coord _____________________
self = <astroquery.sdss.tests.test_sdss_remote.TestSDSSRemote instance at 0x000000000AA04988>
def test_sdss_image_coord(self):
> img = sdss.SDSS.get_images(self.coords)
astroquery\sdss\tests\test_sdss_remote.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\sdss\core.py:749: in get_images
band=band, timeout=timeout, get_query_payload=get_query_payload)
astroquery\sdss\core.py:710: in get_images_async
matches = self._parse_result(r)
astroquery\sdss\core.py:843: in _parse_result
comments='#'))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
fname = <_io.BytesIO object at 0x000000000B23B150>, dtype = None, comments = '#'
delimiter = ',', skip_header = 1, skip_footer = 0
converters = [<numpy.lib._iotools.StringConverter object at 0x000000000A5A9B70>]
missing_values = [['']], filling_values = [None], usecols = None
names = ['DOCTYPE_html_PUBLIC_W3CDTD_XHTML_10_TransitionalEN_httpwwww3orgTRxhtml1DTDxhtml1transitionaldtd']
excludelist = None, deletechars = None, replace_space = '_', autostrip = False
case_sensitive = True, defaultfmt = 'f%i', unpack = None, usemask = False
loose = True, invalid_raise = True, max_rows = None
def genfromtxt(fname, dtype=float, comments='#', delimiter=None,
skip_header=0, skip_footer=0, converters=None,
missing_values=None, filling_values=None, usecols=None,
names=None, excludelist=None, deletechars=None,
replace_space='_', autostrip=False, case_sensitive=True,
defaultfmt="f%i", unpack=None, usemask=False, loose=True,
invalid_raise=True, max_rows=None):
"""
Load data from a text file, with missing values handled as specified.
Each line past the first `skip_header` lines is split at the `delimiter`
character, and characters following the `comments` character are discarded.
Parameters
----------
fname : file, str, list of str, generator
File, filename, list, or generator to read. If the filename
extension is `.gz` or `.bz2`, the file is first decompressed. Mote
that generators must return byte strings in Python 3k. The strings
in a list or produced by a generator are treated as lines.
dtype : dtype, optional
Data type of the resulting array.
If None, the dtypes will be determined by the contents of each
column, individually.
comments : str, optional
The character used to indicate the start of a comment.
All the characters occurring on a line after a comment are discarded
delimiter : str, int, or sequence, optional
The string used to separate values. By default, any consecutive
whitespaces act as delimiter. An integer or sequence of integers
can also be provided as width(s) of each field.
skiprows : int, optional
`skiprows` was removed in numpy 1.10. Please use `skip_header` instead.
skip_header : int, optional
The number of lines to skip at the beginning of the file.
skip_footer : int, optional
The number of lines to skip at the end of the file.
converters : variable, optional
The set of functions that convert the data of a column to a value.
The converters can also be used to provide a default value
for missing data: ``converters = {3: lambda s: float(s or 0)}``.
missing : variable, optional
`missing` was removed in numpy 1.10. Please use `missing_values`
instead.
missing_values : variable, optional
The set of strings corresponding to missing data.
filling_values : variable, optional
The set of values to be used as default when the data are missing.
usecols : sequence, optional
Which columns to read, with 0 being the first. For example,
``usecols = (1, 4, 5)`` will extract the 2nd, 5th and 6th columns.
names : {None, True, str, sequence}, optional
If `names` is True, the field names are read from the first valid line
after the first `skip_header` lines.
If `names` is a sequence or a single-string of comma-separated names,
the names will be used to define the field names in a structured dtype.
If `names` is None, the names of the dtype fields will be used, if any.
excludelist : sequence, optional
A list of names to exclude. This list is appended to the default list
['return','file','print']. Excluded names are appended an underscore:
for example, `file` would become `file_`.
deletechars : str, optional
A string combining invalid characters that must be deleted from the
names.
defaultfmt : str, optional
A format used to define default field names, such as "f%i" or "f_%02i".
autostrip : bool, optional
Whether to automatically strip white spaces from the variables.
replace_space : char, optional
Character(s) used in replacement of white spaces in the variables
names. By default, use a '_'.
case_sensitive : {True, False, 'upper', 'lower'}, optional
If True, field names are case sensitive.
If False or 'upper', field names are converted to upper case.
If 'lower', field names are converted to lower case.
unpack : bool, optional
If True, the returned array is transposed, so that arguments may be
unpacked using ``x, y, z = loadtxt(...)``
usemask : bool, optional
If True, return a masked array.
If False, return a regular array.
loose : bool, optional
If True, do not raise errors for invalid values.
invalid_raise : bool, optional
If True, an exception is raised if an inconsistency is detected in the
number of columns.
If False, a warning is emitted and the offending lines are skipped.
max_rows : int, optional
The maximum number of rows to read. Must not be used with skip_footer
at the same time. If given, the value must be at least 1. Default is
to read the entire file.
.. versionadded:: 1.10.0
Returns
-------
out : ndarray
Data read from the text file. If `usemask` is True, this is a
masked array.
See Also
--------
numpy.loadtxt : equivalent function when no data is missing.
Notes
-----
* When spaces are used as delimiters, or when no delimiter has been given
as input, there should not be any missing data between two fields.
* When the variables are named (either by a flexible dtype or with `names`,
there must not be any header in the file (else a ValueError
exception is raised).
* Individual values are not stripped of spaces by default.
When using a custom converter, make sure the function does remove spaces.
References
----------
.. [1] Numpy User Guide, section `I/O with Numpy
<http://docs.scipy.org/doc/numpy/user/basics.io.genfromtxt.html>`_.
Examples
---------
>>> from io import StringIO
>>> import numpy as np
Comma delimited file with mixed dtype
>>> s = StringIO("1,1.3,abcde")
>>> data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'),
... ('mystring','S5')], delimiter=",")
>>> data
array((1, 1.3, 'abcde'),
dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', '|S5')])
Using dtype = None
>>> s.seek(0) # needed for StringIO example only
>>> data = np.genfromtxt(s, dtype=None,
... names = ['myint','myfloat','mystring'], delimiter=",")
>>> data
array((1, 1.3, 'abcde'),
dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', '|S5')])
Specifying dtype and names
>>> s.seek(0)
>>> data = np.genfromtxt(s, dtype="i8,f8,S5",
... names=['myint','myfloat','mystring'], delimiter=",")
>>> data
array((1, 1.3, 'abcde'),
dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', '|S5')])
An example with fixed-width columns
>>> s = StringIO("11.3abcde")
>>> data = np.genfromtxt(s, dtype=None, names=['intvar','fltvar','strvar'],
... delimiter=[1,3,5])
>>> data
array((1, 1.3, 'abcde'),
dtype=[('intvar', '<i8'), ('fltvar', '<f8'), ('strvar', '|S5')])
"""
if max_rows is not None:
if skip_footer:
raise ValueError(
"The keywords 'skip_footer' and 'max_rows' can not be "
"specified at the same time.")
if max_rows < 1:
raise ValueError("'max_rows' must be at least 1.")
# Py3 data conversions to bytes, for convenience
if comments is not None:
comments = asbytes(comments)
if isinstance(delimiter, unicode):
delimiter = asbytes(delimiter)
if isinstance(missing_values, (unicode, list, tuple)):
missing_values = asbytes_nested(missing_values)
#
if usemask:
from numpy.ma import MaskedArray, make_mask_descr
# Check the input dictionary of converters
user_converters = converters or {}
if not isinstance(user_converters, dict):
raise TypeError(
"The input argument 'converter' should be a valid dictionary "
"(got '%s' instead)" % type(user_converters))
# Initialize the filehandle, the LineSplitter and the NameValidator
own_fhd = False
try:
if isinstance(fname, basestring):
if sys.version_info[0] == 2:
fhd = iter(np.lib._datasource.open(fname, 'rbU'))
else:
fhd = iter(np.lib._datasource.open(fname, 'rb'))
own_fhd = True
else:
fhd = iter(fname)
except TypeError:
raise TypeError(
"fname must be a string, filehandle, list of strings, "
"or generator. Got %s instead." % type(fname))
split_line = LineSplitter(delimiter=delimiter, comments=comments,
autostrip=autostrip)._handyman
validate_names = NameValidator(excludelist=excludelist,
deletechars=deletechars,
case_sensitive=case_sensitive,
replace_space=replace_space)
# Skip the first `skip_header` rows
for i in range(skip_header):
next(fhd)
# Keep on until we find the first valid values
first_values = None
try:
while not first_values:
first_line = next(fhd)
if names is True:
if comments in first_line:
first_line = (
asbytes('').join(first_line.split(comments)[1:]))
first_values = split_line(first_line)
except StopIteration:
# return an empty array if the datafile is empty
first_line = asbytes('')
first_values = []
warnings.warn('genfromtxt: Empty input file: "%s"' % fname)
# Should we take the first values as names ?
if names is True:
fval = first_values[0].strip()
if fval in comments:
del first_values[0]
# Check the columns to use: make sure `usecols` is a list
if usecols is not None:
try:
usecols = [_.strip() for _ in usecols.split(",")]
except AttributeError:
try:
usecols = list(usecols)
except TypeError:
usecols = [usecols, ]
nbcols = len(usecols or first_values)
# Check the names and overwrite the dtype.names if needed
if names is True:
names = validate_names([_bytes_to_name(_.strip())
for _ in first_values])
first_line = asbytes('')
elif _is_string_like(names):
names = validate_names([_.strip() for _ in names.split(',')])
elif names:
names = validate_names(names)
# Get the dtype
if dtype is not None:
dtype = easy_dtype(dtype, defaultfmt=defaultfmt, names=names,
excludelist=excludelist,
deletechars=deletechars,
case_sensitive=case_sensitive,
replace_space=replace_space)
# Make sure the names is a list (for 2.5)
if names is not None:
names = list(names)
if usecols:
for (i, current) in enumerate(usecols):
# if usecols is a list of names, convert to a list of indices
if _is_string_like(current):
usecols[i] = names.index(current)
elif current < 0:
usecols[i] = current + len(first_values)
# If the dtype is not None, make sure we update it
if (dtype is not None) and (len(dtype) > nbcols):
descr = dtype.descr
dtype = np.dtype([descr[_] for _ in usecols])
names = list(dtype.names)
# If `names` is not None, update the names
elif (names is not None) and (len(names) > nbcols):
names = [names[_] for _ in usecols]
elif (names is not None) and (dtype is not None):
names = list(dtype.names)
# Process the missing values ...............................
# Rename missing_values for convenience
user_missing_values = missing_values or ()
# Define the list of missing_values (one column: one list)
missing_values = [list([asbytes('')]) for _ in range(nbcols)]
# We have a dictionary: process it field by field
if isinstance(user_missing_values, dict):
# Loop on the items
for (key, val) in user_missing_values.items():
# Is the key a string ?
if _is_string_like(key):
try:
# Transform it into an integer
key = names.index(key)
except ValueError:
# We couldn't find it: the name must have been dropped
continue
# Redefine the key as needed if it's a column number
if usecols:
try:
key = usecols.index(key)
except ValueError:
pass
            # Transform the value into a list of strings
if isinstance(val, (list, tuple)):
val = [str(_) for _ in val]
else:
val = [str(val), ]
# Add the value(s) to the current list of missing
if key is None:
# None acts as default
for miss in missing_values:
miss.extend(val)
else:
missing_values[key].extend(val)
# We have a sequence : each item matches a column
elif isinstance(user_missing_values, (list, tuple)):
for (value, entry) in zip(user_missing_values, missing_values):
value = str(value)
if value not in entry:
entry.append(value)
# We have a string : apply it to all entries
elif isinstance(user_missing_values, bytes):
user_value = user_missing_values.split(asbytes(","))
for entry in missing_values:
entry.extend(user_value)
# We have something else: apply it to all entries
else:
for entry in missing_values:
entry.extend([str(user_missing_values)])
# Process the filling_values ...............................
# Rename the input for convenience
user_filling_values = filling_values
if user_filling_values is None:
user_filling_values = []
# Define the default
filling_values = [None] * nbcols
# We have a dictionary : update each entry individually
if isinstance(user_filling_values, dict):
for (key, val) in user_filling_values.items():
if _is_string_like(key):
try:
# Transform it into an integer
key = names.index(key)
except ValueError:
# We couldn't find it: the name must have been dropped,
continue
# Redefine the key if it's a column number and usecols is defined
if usecols:
try:
key = usecols.index(key)
except ValueError:
pass
# Add the value to the list
filling_values[key] = val
# We have a sequence : update on a one-to-one basis
elif isinstance(user_filling_values, (list, tuple)):
n = len(user_filling_values)
if (n <= nbcols):
filling_values[:n] = user_filling_values
else:
filling_values = user_filling_values[:nbcols]
# We have something else : use it for all entries
else:
filling_values = [user_filling_values] * nbcols
# Initialize the converters ................................
if dtype is None:
        # Note: we can't use [...] * nbcols, as that would give nbcols
        # references to the same converter instead of nbcols distinct ones.
converters = [StringConverter(None, missing_values=miss, default=fill)
for (miss, fill) in zip(missing_values, filling_values)]
else:
dtype_flat = flatten_dtype(dtype, flatten_base=True)
# Initialize the converters
if len(dtype_flat) > 1:
# Flexible type : get a converter from each dtype
zipit = zip(dtype_flat, missing_values, filling_values)
converters = [StringConverter(dt, locked=True,
missing_values=miss, default=fill)
for (dt, miss, fill) in zipit]
else:
# Set to a default converter (but w/ different missing values)
zipit = zip(missing_values, filling_values)
converters = [StringConverter(dtype, locked=True,
missing_values=miss, default=fill)
for (miss, fill) in zipit]
# Update the converters to use the user-defined ones
uc_update = []
for (j, conv) in user_converters.items():
# If the converter is specified by column names, use the index instead
if _is_string_like(j):
try:
j = names.index(j)
i = j
except ValueError:
continue
elif usecols:
try:
i = usecols.index(j)
except ValueError:
# Unused converter specified
continue
else:
i = j
# Find the value to test - first_line is not filtered by usecols:
if len(first_line):
testing_value = first_values[j]
else:
testing_value = None
converters[i].update(conv, locked=True,
testing_value=testing_value,
default=filling_values[i],
missing_values=missing_values[i],)
uc_update.append((i, conv))
# Make sure we have the corrected keys in user_converters...
user_converters.update(uc_update)
# Fixme: possible error as following variable never used.
#miss_chars = [_.missing_values for _ in converters]
# Initialize the output lists ...
# ... rows
rows = []
append_to_rows = rows.append
# ... masks
if usemask:
masks = []
append_to_masks = masks.append
# ... invalid
invalid = []
append_to_invalid = invalid.append
# Parse each line
for (i, line) in enumerate(itertools.chain([first_line, ], fhd)):
values = split_line(line)
nbvalues = len(values)
# Skip an empty line
if nbvalues == 0:
continue
if usecols:
# Select only the columns we need
try:
values = [values[_] for _ in usecols]
except IndexError:
append_to_invalid((i + skip_header + 1, nbvalues))
continue
elif nbvalues != nbcols:
append_to_invalid((i + skip_header + 1, nbvalues))
continue
# Store the values
append_to_rows(tuple(values))
if usemask:
append_to_masks(tuple([v.strip() in m
for (v, m) in zip(values,
missing_values)]))
if len(rows) == max_rows:
break
if own_fhd:
fhd.close()
# Upgrade the converters (if needed)
if dtype is None:
for (i, converter) in enumerate(converters):
current_column = [itemgetter(i)(_m) for _m in rows]
try:
converter.iterupgrade(current_column)
except ConverterLockError:
errmsg = "Converter #%i is locked and cannot be upgraded: " % i
current_column = map(itemgetter(i), rows)
for (j, value) in enumerate(current_column):
try:
converter.upgrade(value)
except (ConverterError, ValueError):
errmsg += "(occurred line #%i for value '%s')"
errmsg %= (j + 1 + skip_header, value)
raise ConverterError(errmsg)
# Check that we don't have invalid values
nbinvalid = len(invalid)
if nbinvalid > 0:
nbrows = len(rows) + nbinvalid - skip_footer
# Construct the error message
template = " Line #%%i (got %%i columns instead of %i)" % nbcols
if skip_footer > 0:
nbinvalid_skipped = len([_ for _ in invalid
if _[0] > nbrows + skip_header])
invalid = invalid[:nbinvalid - nbinvalid_skipped]
skip_footer -= nbinvalid_skipped
#
# nbrows -= skip_footer
# errmsg = [template % (i, nb)
# for (i, nb) in invalid if i < nbrows]
# else:
errmsg = [template % (i, nb)
for (i, nb) in invalid]
if len(errmsg):
errmsg.insert(0, "Some errors were detected !")
errmsg = "\n".join(errmsg)
# Raise an exception ?
if invalid_raise:
> raise ValueError(errmsg)
E ValueError: Some errors were detected !
E Line #60 (got 4 columns instead of 1)
E Line #123 (got 2 columns instead of 1)
E Line #124 (got 2 columns instead of 1)
E Line #129 (got 4 columns instead of 1)
E Line #131 (got 4 columns instead of 1)
E Line #132 (got 2 columns instead of 1)
E Line #134 (got 2 columns instead of 1)
C:\-\lib\site-packages\numpy\lib\npyio.py:1769: ValueError
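The ValueError above is genfromtxt's normal reaction to ragged input: with
invalid_raise=True (the default), every line whose field count differs from
the header's is collected and reported at the end, exactly as in the parsing
loop shown above. A minimal standalone sketch (illustrative input, not the
actual SDSS response) that reproduces the same error path:

    from io import BytesIO
    import numpy as np

    # The header advertises one column; two later rows split into more fields.
    raw = BytesIO(b"colA\n1\n2,3\n4,5,6,7\n")
    try:
        np.genfromtxt(raw, delimiter=",", names=True, dtype=None)
    except ValueError as exc:
        print(exc)
    # Some errors were detected !
    #     Line #3 (got 2 columns instead of 1)
    #     Line #4 (got 4 columns instead of 1)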
______________________ TestSDSSRemote.test_sdss_specobj _______________________
self = <astroquery.sdss.tests.test_sdss_remote.TestSDSSRemote instance at 0x000000000B7478C8>
def test_sdss_specobj(self):
colnames = ['ra', 'dec', 'objid', 'run', 'rerun', 'camcol', 'field',
'z', 'plate', 'mjd', 'fiberID', 'specobjid', 'run2d',
'instrument']
dtypes = [float, float, int, int, int, int, int, float, int, int, int,
int, int, bytes]
data = [
[46.8390680395307, 5.16972676625711, 1237670015125750016, 5714,
301, 2, 185, -0.0006390358, 2340, 53733, 291, 2634685834112034816,
26, 'SDSS'],
[46.8705377929765, 5.42458826592292, 1237670015662621224, 5714,
301, 3, 185, 0, 2340, 53733, 3, 2634606669274834944, 26, 'SDSS'],
[46.8899751105478, 5.09432755808192, 1237670015125815346, 5714,
301, 2, 186, -4.898809E-05, 2340, 53733, 287, 2634684734600407040,
26, 'SDSS'],
[46.8954031261838, 5.9739184644185, 1237670016199491831, 5714,
301, 4, 185, 0, 2340, 53733, 329, 2634696279472498688, 26,
'SDSS'],
[46.9155836662379, 5.50671723824944, 1237670015662686398, 5714,
301, 3, 186, 0, 2340, 53733, 420, 2634721293362030592, 26,
'SDSS']]
table = Table(data=[x for x in zip(*data)],
> names=colnames, dtype=dtypes)
astroquery\sdss\tests\test_sdss_remote.py:100:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\-\lib\site-packages\astropy\table\table.py:371: in __init__
init_func(data, names, dtype, n_cols, copy)
C:\-\lib\site-packages\astropy\table\table.py:628: in _init_from_list
copy=copy, copy_indices=self._init_indices)
C:\-\lib\site-packages\astropy\table\column.py:706: in __new__
copy=copy, copy_indices=copy_indices)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'astropy.table.column.Column'>
data = (1237670015125750016L, 1237670015662621224L, 1237670015125815346L, 1237670016199491831L, 1237670015662686398L)
name = 'objid', dtype = <type 'int'>, shape = (), length = 0, description = None
unit = None, format = None, meta = None, copy = True, copy_indices = True
def __new__(cls, data=None, name=None,
dtype=None, shape=(), length=0,
description=None, unit=None, format=None, meta=None,
copy=False, copy_indices=True):
if data is None:
dtype = (np.dtype(dtype).str, shape)
self_data = np.zeros(length, dtype=dtype)
elif isinstance(data, BaseColumn) and hasattr(data, '_name'):
# When unpickling a MaskedColumn, ``data`` will be a bare
# BaseColumn with none of the expected attributes. In this case
# do NOT execute this block which initializes from ``data``
# attributes.
self_data = np.array(data.data, dtype=dtype, copy=copy)
if description is None:
description = data.description
if unit is None:
unit = unit or data.unit
if format is None:
format = data.format
if meta is None:
meta = deepcopy(data.meta)
if name is None:
name = data.name
elif isinstance(data, Quantity):
if unit is None:
self_data = np.array(data, dtype=dtype, copy=copy)
unit = data.unit
else:
self_data = np.array(data.to(unit), dtype=dtype, copy=copy)
if description is None:
description = data.info.description
if format is None:
format = data.info.format
if meta is None:
meta = deepcopy(data.info.meta)
else:
> self_data = np.array(data, dtype=dtype, copy=copy)
E OverflowError: Python int too large to convert to C long
C:\-\lib\site-packages\astropy\table\column.py:140: OverflowError
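The OverflowError is a Windows dtype pitfall rather than a remote failure:
the test builds the objid column with dtype=int, numpy maps the builtin int
dtype to a C long, and a C long is only 32 bits on 64-bit Windows, so the
19-digit SDSS objids cannot be stored. A standalone sketch of the mechanism
(assuming the NumPy 1.x behaviour of this run; on Linux/macOS, and on
NumPy 2.x even on Windows, dtype=int is already 64-bit and the first call
succeeds):

    import numpy as np

    objid = 1237670015125750016              # ~1.2e18, needs 64 bits
    try:
        np.array([objid], dtype=int)         # int -> C long -> int32 on win64
    except OverflowError as exc:
        print(exc)                           # Python int too large to convert to C long

    col = np.array([objid], dtype=np.int64)  # an explicit 64-bit dtype holds it
    print(col)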
______________________ TestSDSSRemote.test_sdss_photoobj ______________________
self = <astroquery.sdss.tests.test_sdss_remote.TestSDSSRemote instance at 0x000000000A8C5848>
def test_sdss_photoobj(self):
colnames = ['ra', 'dec', 'objid', 'run', 'rerun', 'camcol', 'field']
dtypes = [float, float, int, int, int, int, int]
data = [
[2.01401566011947, 14.9014376776107, 1237653651835846751,
1904, 301, 3, 164],
[2.01643436080644, 14.8109761280994, 1237653651835846753,
1904, 301, 3, 164],
[2.03003450430003, 14.7653903655885, 1237653651835846845,
1904, 301, 3, 164],
[2.01347376262532, 14.8681488509887, 1237653651835846661,
1904, 301, 3, 164],
[2.18077144165426, 14.8482787058708, 1237653651835847302,
1904, 301, 3, 164]]
table = Table(data=[x for x in zip(*data)], names=colnames,
> dtype=dtypes)
astroquery\sdss\tests\test_sdss_remote.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\-\lib\site-packages\astropy\table\table.py:371: in __init__
init_func(data, names, dtype, n_cols, copy)
C:\-\lib\site-packages\astropy\table\table.py:628: in _init_from_list
copy=copy, copy_indices=self._init_indices)
C:\-\lib\site-packages\astropy\table\column.py:706: in __new__
copy=copy, copy_indices=copy_indices)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'astropy.table.column.Column'>
data = (1237653651835846751L, 1237653651835846753L, 1237653651835846845L, 1237653651835846661L, 1237653651835847302L)
name = 'objid', dtype = <type 'int'>, shape = (), length = 0, description = None
unit = None, format = None, meta = None, copy = True, copy_indices = True
    ... (astropy.table.column.Column.__new__ source elided; identical to the listing in the previous failure) ...
> self_data = np.array(data, dtype=dtype, copy=copy)
E OverflowError: Python int too large to convert to C long
C:\-\lib\site-packages\astropy\table\column.py:140: OverflowError
_________________ TestSDSSRemote.test_query_non_default_field _________________
self = <astroquery.sdss.tests.test_sdss_remote.TestSDSSRemote instance at 0x000000000B838DC8>
def test_query_non_default_field(self):
# A regression test for #469
> query1 = sdss.SDSS.query_region(self.coords, fields=['r'])
astroquery\sdss\tests\test_sdss_remote.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\utils\class_or_instance.py:25: in f
return self.fn(obj, *args, **kwds)
astroquery\utils\process_asyncs.py:29: in newmethod
result = self._parse_result(response, verbose=verbose)
astroquery\sdss\core.py:843: in _parse_result
comments='#'))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
fname = <_io.BytesIO object at 0x000000000B27B8E0>, dtype = None, comments = '#'
delimiter = ',', skip_header = 1, skip_footer = 0
converters = [<numpy.lib._iotools.StringConverter object at 0x000000000B66A470>]
missing_values = [['']], filling_values = [None], usecols = None
names = ['DOCTYPE_html_PUBLIC_W3CDTD_XHTML_10_TransitionalEN_httpwwww3orgTRxhtml1DTDxhtml1transitionaldtd']
excludelist = None, deletechars = None, replace_space = '_', autostrip = False
case_sensitive = True, defaultfmt = 'f%i', unpack = None, usemask = False
loose = True, invalid_raise = True, max_rows = None
... (numpy.lib.npyio.genfromtxt source elided; identical to the genfromtxt listing in the earlier failure) ...
> raise ValueError(errmsg)
E ValueError: Some errors were detected !
E Line #60 (got 4 columns instead of 1)
E Line #123 (got 2 columns instead of 1)
E Line #124 (got 2 columns instead of 1)
E Line #129 (got 4 columns instead of 1)
E Line #131 (got 4 columns instead of 1)
E Line #132 (got 2 columns instead of 1)
E Line #134 (got 2 columns instead of 1)
C:\-\lib\site-packages\numpy\lib\npyio.py:1769: ValueError
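Unlike the dtype failures, the locals above show the real culprit: the single
mangled column name is the DOCTYPE declaration of an XHTML page, i.e. SDSS
answered this query with an HTML error page and _parse_result fed it straight
to genfromtxt as CSV. A hedged sketch of a pre-parse sanity check
(looks_like_html is a hypothetical helper, not astroquery API):

    def looks_like_html(raw):
        """Heuristic: the service returned an HTML/XHTML page, not a table."""
        head = raw.lstrip()[:9].lower()
        return head.startswith(b"<!doctype") or head.startswith(b"<html")

    # Usage sketch inside a hypothetical _parse_result guard:
    # if looks_like_html(response.content):
    #     raise RemoteServiceError("SDSS returned an error page, not a table")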
______________________ TestSDSSRemote.test_query_crossid ______________________
self = <astroquery.sdss.tests.test_sdss_remote.TestSDSSRemote instance at 0x000000000B5F9288>
def test_query_crossid(self):
> query1 = sdss.SDSS.query_crossid(self.coords)
astroquery\sdss\tests\test_sdss_remote.py:151:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
astroquery\utils\class_or_instance.py:25: in f
return self.fn(obj, *args, **kwds)
astroquery\utils\process_asyncs.py:29: in newmethod
result = self._parse_result(response, verbose=verbose)
astroquery\sdss\core.py:843: in _parse_result
comments='#'))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
fname = <_io.BytesIO object at 0x0000000009F3B990>, dtype = None, comments = '#'
delimiter = ',', skip_header = 1, skip_footer = 0
converters = [<numpy.lib._iotools.StringConverter object at 0x000000000FDAEA20>]
missing_values = [['']], filling_values = [None], usecols = None
names = ['DOCTYPE_html_PUBLIC_W3CDTD_XHTML_10_TransitionalEN_httpwwww3orgTRxhtml1DTDxhtml1transitionaldtd']
excludelist = None, deletechars = None, replace_space = '_', autostrip = False
case_sensitive = True, defaultfmt = 'f%i', unpack = None, usemask = False
loose = True, invalid_raise = True, max_rows = None
... (numpy.lib.npyio.genfromtxt source elided; identical to the genfromtxt listing in the earlier failure) ...
> raise ValueError(errmsg)
E ValueError: Some errors were detected !
E Line #60 (got 4 columns instead of 1)
E Line #123 (got 2 columns instead of 1)
E Line #124 (got 2 columns instead of 1)
E Line #129 (got 4 columns instead of 1)
E Line #131 (got 4 columns instead of 1)
E Line #132 (got 2 columns instead of 1)
E Line #134 (got 2 columns instead of 1)
C:\-\lib\site-packages\numpy\lib\npyio.py:1769: ValueError
______________________________ test_parse_result ______________________________
def test_parse_result():
result1 = simbad.core.Simbad._parse_result(
MockResponseSimbad('query id '), simbad.core.SimbadVOTableResult)
> assert isinstance(result1, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:114: AssertionError
__________________________ test_query_bibcode_class ___________________________
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x00000000096DAD48>
def test_query_bibcode_class(patch_post):
result1 = simbad.core.Simbad.query_bibcode("2006ApJ*", wildcard=True)
> assert isinstance(result1, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:182: AssertionError
_________________________ test_query_bibcode_instance _________________________
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x00000000094FE408>
def test_query_bibcode_instance(patch_post):
S = simbad.core.Simbad()
result2 = S.query_bibcode("2006ApJ*", wildcard=True)
> assert isinstance(result2, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:188: AssertionError
____________________________ test_query_objectids _____________________________
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x000000000B5FDA88>
def test_query_objectids(patch_post):
result1 = simbad.core.Simbad.query_objectids('Polaris')
result2 = simbad.core.Simbad().query_objectids('Polaris')
> assert isinstance(result1, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:201: AssertionError
______________________________ test_query_bibobj ______________________________
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x0000000008853808>
def test_query_bibobj(patch_post):
result1 = simbad.core.Simbad.query_bibobj('2005A&A.430.165F')
result2 = simbad.core.Simbad().query_bibobj('2005A&A.430.165F')
> assert isinstance(result1, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:215: AssertionError
_____________________________ test_query_catalog ______________________________
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x000000000D982FC8>
def test_query_catalog(patch_post):
result1 = simbad.core.Simbad.query_catalog('m')
result2 = simbad.core.Simbad().query_catalog('m')
> assert isinstance(result1, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:229: AssertionError
_______________ test_query_region[coordinates0-None-None-None] ________________
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x000000000B76BBC8>
coordinates = <SkyCoord (ICRS): (ra, dec) in deg
(83.82208333, -80.86666667)>
radius = None, equinox = None, epoch = None
@pytest.mark.parametrize(('coordinates', 'radius', 'equinox', 'epoch'),
[(ICRS_COORDS, None, None, None),
(GALACTIC_COORDS, 5 * u.deg, 2000.0, 'J2000'),
(FK4_COORDS, '5d0m0s', None, None),
(FK5_COORDS, None, None, None)
])
def test_query_region(patch_post, coordinates, radius, equinox, epoch):
result1 = simbad.core.Simbad.query_region(coordinates, radius=radius,
equinox=equinox, epoch=epoch)
result2 = simbad.core.Simbad().query_region(coordinates, radius=radius,
equinox=equinox, epoch=epoch)
> assert isinstance(result1, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:261: AssertionError
____________ test_query_region[coordinates1-radius1-2000.0-J2000] _____________
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x000000000A89BE08>
coordinates = <SkyCoord (Galactic): (l, b) in deg
(292.97916, -29.75447)>
radius = <Quantity 5.0 deg>, equinox = 2000.0, epoch = 'J2000'
@pytest.mark.parametrize(('coordinates', 'radius', 'equinox', 'epoch'),
[(ICRS_COORDS, None, None, None),
(GALACTIC_COORDS, 5 * u.deg, 2000.0, 'J2000'),
(FK4_COORDS, '5d0m0s', None, None),
(FK5_COORDS, None, None, None)
])
def test_query_region(patch_post, coordinates, radius, equinox, epoch):
result1 = simbad.core.Simbad.query_region(coordinates, radius=radius,
equinox=equinox, epoch=epoch)
result2 = simbad.core.Simbad().query_region(coordinates, radius=radius,
equinox=equinox, epoch=epoch)
> assert isinstance(result1, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:261: AssertionError
______________ test_query_region[coordinates2-5d0m0s-None-None] _______________
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x000000000B3AB308>
coordinates = <SkyCoord (FK4: equinox=B1950.000, obstime=B1950.000): (ra, dec) in deg
(84.90759, -80.89403)>
radius = '5d0m0s', equinox = None, epoch = None
@pytest.mark.parametrize(('coordinates', 'radius', 'equinox', 'epoch'),
[(ICRS_COORDS, None, None, None),
(GALACTIC_COORDS, 5 * u.deg, 2000.0, 'J2000'),
(FK4_COORDS, '5d0m0s', None, None),
(FK5_COORDS, None, None, None)
])
def test_query_region(patch_post, coordinates, radius, equinox, epoch):
result1 = simbad.core.Simbad.query_region(coordinates, radius=radius,
equinox=equinox, epoch=epoch)
result2 = simbad.core.Simbad().query_region(coordinates, radius=radius,
equinox=equinox, epoch=epoch)
> assert isinstance(result1, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:261: AssertionError
_______________ test_query_region[coordinates3-None-None-None] ________________
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x000000000BCA9788>
coordinates = <SkyCoord (FK5: equinox=J2000.000): (ra, dec) in deg
(83.82207, -80.86667)>
radius = None, equinox = None, epoch = None
@pytest.mark.parametrize(('coordinates', 'radius', 'equinox', 'epoch'),
[(ICRS_COORDS, None, None, None),
(GALACTIC_COORDS, 5 * u.deg, 2000.0, 'J2000'),
(FK4_COORDS, '5d0m0s', None, None),
(FK5_COORDS, None, None, None)
])
def test_query_region(patch_post, coordinates, radius, equinox, epoch):
result1 = simbad.core.Simbad.query_region(coordinates, radius=radius,
equinox=equinox, epoch=epoch)
result2 = simbad.core.Simbad().query_region(coordinates, radius=radius,
equinox=equinox, epoch=epoch)
> assert isinstance(result1, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:261: AssertionError
__________ test_query_region_small_radius[coordinates0-0d-None-None] __________
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x000000000BCA93C8>
coordinates = <SkyCoord (ICRS): (ra, dec) in deg
(83.82208333, -80.86666667)>
radius = '0d', equinox = None, epoch = None
@pytest.mark.parametrize(('coordinates', 'radius', 'equinox', 'epoch'),
[(ICRS_COORDS, "0d", None, None),
(GALACTIC_COORDS, 1.0 * u.marcsec, 2000.0, 'J2000')
])
def test_query_region_small_radius(patch_post, coordinates, radius,
equinox, epoch):
result1 = simbad.core.Simbad.query_region(coordinates, radius=radius,
equinox=equinox, epoch=epoch)
result2 = simbad.core.Simbad().query_region(coordinates, radius=radius,
equinox=equinox, epoch=epoch)
> assert isinstance(result1, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:288: AssertionError
______ test_query_region_small_radius[coordinates1-radius1-2000.0-J2000] ______
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x000000000AAFCC48>
coordinates = <SkyCoord (Galactic): (l, b) in deg
(292.97916, -29.75447)>
radius = <Quantity 1.0 marcsec>, equinox = 2000.0, epoch = 'J2000'
@pytest.mark.parametrize(('coordinates', 'radius', 'equinox', 'epoch'),
[(ICRS_COORDS, "0d", None, None),
(GALACTIC_COORDS, 1.0 * u.marcsec, 2000.0, 'J2000')
])
def test_query_region_small_radius(patch_post, coordinates, radius,
equinox, epoch):
result1 = simbad.core.Simbad.query_region(coordinates, radius=radius,
equinox=equinox, epoch=epoch)
result2 = simbad.core.Simbad().query_region(coordinates, radius=radius,
equinox=equinox, epoch=epoch)
> assert isinstance(result1, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:288: AssertionError
_________________________ test_query_object[m1-None] __________________________
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x000000000B95EB48>
object_name = 'm1', wildcard = None
@pytest.mark.parametrize(('object_name', 'wildcard'),
[("m1", None),
("m [0-9]", True),
])
def test_query_object(patch_post, object_name, wildcard):
result1 = simbad.core.Simbad.query_object(object_name,
wildcard=wildcard)
result2 = simbad.core.Simbad().query_object(object_name,
wildcard=wildcard)
> assert isinstance(result1, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:314: AssertionError
_______________________ test_query_object[m [0-9]-True] _______________________
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x000000000A9F8E88>
object_name = 'm [0-9]', wildcard = True
@pytest.mark.parametrize(('object_name', 'wildcard'),
[("m1", None),
("m [0-9]", True),
])
def test_query_object(patch_post, object_name, wildcard):
result1 = simbad.core.Simbad.query_object(object_name,
wildcard=wildcard)
result2 = simbad.core.Simbad().query_object(object_name,
wildcard=wildcard)
> assert isinstance(result1, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:314: AssertionError
____________________________ test_query_criteria1 _____________________________
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x000000000BAE88C8>
def test_query_criteria1(patch_post):
result = simbad.core.Simbad.query_criteria(
"region(box, GAL, 49.89 -0.3, 0.5d 0.5d)", otype='HII')
> assert isinstance(result, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:356: AssertionError
____________________________ test_query_criteria2 _____________________________
patch_post = <_pytest.monkeypatch.monkeypatch instance at 0x0000000009284E88>
def test_query_criteria2(patch_post):
S = simbad.core.Simbad()
S.add_votable_fields('ra(d)', 'dec(d)')
S.remove_votable_fields('coordinates')
assert S.get_votable_fields() == ['main_id', 'ra(d)', 'dec(d)']
result = S.query_criteria(otype='SNR')
> assert isinstance(result, Table)
E assert isinstance(None, Table)
astroquery\simbad\tests\test_simbad.py:365: AssertionError
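
Triage note: every SIMBAD failure above has the same shape -- under the patch_post fixture both the class-level and the instance-level query calls return None where an astropy Table is expected, so the isinstance(result1, Table) assertions fail identically across all parametrizations. The fixture pattern involved looks roughly like the sketch below; apart from patch_post and MockResponseSimbad (names visible in the tracebacks), everything here is illustrative, not astroquery's actual test code.

import pytest
import requests

class CannedResponse(object):
    # Minimal stand-in for the MockResponseSimbad object in the tracebacks:
    # it only carries the payload the parser will later try to read.
    def __init__(self, content):
        self.content = content

@pytest.fixture
def patch_post(monkeypatch):
    # Replace requests.post so the query helpers never touch the network;
    # they parse the canned payload instead of a live SIMBAD response.
    def fake_post(url, data=None, **kwargs):
        return CannedResponse(b'...canned VOTable bytes...')
    monkeypatch.setattr(requests, 'post', fake_post)

If the canned payload cannot be parsed on this platform (see the decoding hint in test_regression_issue388 below), a parse step that catches exceptions and returns None would produce exactly this wall of assertion failures.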
__________________________ test_regression_issue388 ___________________________
def test_regression_issue388():
# This is a python-3 issue: content needs to be decoded?
response = MockResponseSimbad('\nvotable {main_id,coordinates}\nvotable '
'open\nquery id m1 \nvotable close')
with open(data_path('m1.data'), "rb") as f:
response.content = f.read()
parsed_table = simbad.Simbad._parse_result(response,
simbad.core.SimbadVOTableResult)
> assert parsed_table['MAIN_ID'][0] == b'M 1'
E TypeError: 'NoneType' object has no attribute '__getitem__'
astroquery\simbad\tests\test_simbad.py:419: TypeError
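
This TypeError is the most informative of the SIMBAD failures: _parse_result evidently handed back None instead of a table, and the test's own comment points at bytes-vs-text decoding. One way to check whether the canned VOTable itself is readable on this platform is to parse the raw bytes directly with astropy, bypassing astroquery's error handling entirely. A minimal sketch -- the file path is assumed from the data_path('m1.data') call in the traceback, and treating the fixture as a plain VOTable is itself an assumption:

import io
from astropy.io.votable import parse_single_table

# Read the canned SIMBAD response used by test_regression_issue388.
with open('astroquery/simbad/tests/data/m1.data', 'rb') as f:
    content = f.read()

# Parse the raw bytes directly; if this raises, the failure is in the
# data or its decoding, not in astroquery's query plumbing.
table = parse_single_table(io.BytesIO(content)).to_table()
print(table['MAIN_ID'][0])

If this snippet parses cleanly, the bug is more likely in how the response content is decoded before it reaches the VOTable parser.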
__________ TestSkyviewRemote.test_survey[DiffuseX-ray-survey_data0] ___________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000D808FD0>
survey = 'DiffuseX-ray'
survey_data = ['RASS Background 1', 'RASS Background 2', 'RASS Background 3', 'RASS Background 4', 'RASS Background 5', 'RASS Background 6', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
print(self.SkyView.survey_dict[survey] == survey_data, survey)
> print("Canned reference return", self.__class__.survey_dict['Radio'])
E KeyError: 'Radio'
astroquery\skyview\tests\test_skyview_remote.py:41: KeyError
---------------------------- Captured stdout call -----------------------------
(True, u'DiffuseX-ray')
_______________ TestSkyviewRemote.test_survey[UV-survey_data1] ________________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000D81E320>
survey = 'UV'
survey_data = ['GALEX Near UV', 'GALEX Far UV', 'ROSAT WFC F1', 'ROSAT WFC F2', 'EUVE 83 A', 'EUVE 171 A', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
print(self.SkyView.survey_dict[survey] == survey_data, survey)
> print("Canned reference return", self.__class__.survey_dict['Radio'])
E KeyError: 'Radio'
astroquery\skyview\tests\test_skyview_remote.py:41: KeyError
---------------------------- Captured stdout call -----------------------------
(True, u'UV')
_________ TestSkyviewRemote.test_survey[InfraredHighRes-survey_data2] _________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000A90BE80>
survey = 'InfraredHighRes'
survey_data = ['2MASS-J', '2MASS-H', '2MASS-K', 'UKIDSS-Y', 'UKIDSS-J', 'UKIDSS-H', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
> print(self.SkyView.survey_dict[survey] == survey_data, survey)
E KeyError: u'InfraredHighRes'
astroquery\skyview\tests\test_skyview_remote.py:40: KeyError
____________ TestSkyviewRemote.test_survey[WMAP/COBE-survey_data3] ____________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000D81E898>
survey = 'WMAP/COBE'
survey_data = ['WMAP ILC', 'WMAP Ka', 'WMAP K', 'WMAP Q', 'WMAP V', 'WMAP W', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
> print(self.SkyView.survey_dict[survey] == survey_data, survey)
E KeyError: u'WMAP/COBE'
astroquery\skyview\tests\test_skyview_remote.py:40: KeyError
___ TestSkyviewRemote.test_survey[GOODS/HDF/CDF(Allwavebands)-survey_data4] ___
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000D81ED68>
survey = 'GOODS/HDF/CDF(Allwavebands)'
survey_data = ['GOODS: Chandra ACIS HB', 'GOODS: Chandra ACIS FB', 'GOODS: Chandra ACIS SB', 'GOODS: VLT VIMOS U', 'GOODS: VLT VIMOS R', 'GOODS: HST ACS B', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
> print(self.SkyView.survey_dict[survey] == survey_data, survey)
E KeyError: u'GOODS/HDF/CDF(Allwavebands)'
astroquery\skyview\tests\test_skyview_remote.py:40: KeyError
____________ TestSkyviewRemote.test_survey[GammaRay-survey_data5] _____________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000D81EFD0>
survey = 'GammaRay'
survey_data = ['Fermi 5', 'Fermi 4', 'Fermi 3', 'Fermi 2', 'Fermi 1', 'EGRET (3D)', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
print(self.SkyView.survey_dict[survey] == survey_data, survey)
> print("Canned reference return", self.__class__.survey_dict['Radio'])
E KeyError: 'Radio'
astroquery\skyview\tests\test_skyview_remote.py:41: KeyError
---------------------------- Captured stdout call -----------------------------
(True, u'GammaRay')
___________ TestSkyviewRemote.test_survey[Optical:DSS-survey_data6] ___________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000C1650F0>
survey = 'Optical:DSS'
survey_data = ['DSS', 'DSS1 Blue', 'DSS1 Red', 'DSS2 Red', 'DSS2 Blue', 'DSS2 IR']
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
print(self.SkyView.survey_dict[survey] == survey_data, survey)
> print("Canned reference return", self.__class__.survey_dict['Radio'])
E KeyError: 'Radio'
astroquery\skyview\tests\test_skyview_remote.py:41: KeyError
---------------------------- Captured stdout call -----------------------------
(True, u'Optical:DSS')
__________ TestSkyviewRemote.test_survey[Optical:SDSS-survey_data7] ___________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000C165A90>
survey = 'Optical:SDSS'
survey_data = ['SDSSg', 'SDSSi', 'SDSSr', 'SDSSu', 'SDSSz', 'SDSSdr7g', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
print(self.SkyView.survey_dict[survey] == survey_data, survey)
> print("Canned reference return", self.__class__.survey_dict['Radio'])
E KeyError: 'Radio'
astroquery\skyview\tests\test_skyview_remote.py:41: KeyError
---------------------------- Captured stdout call -----------------------------
(True, u'Optical:SDSS')
____________ TestSkyviewRemote.test_survey[HardX-ray-survey_data8] ____________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000D81E710>
survey = 'HardX-ray'
survey_data = ['INT GAL 17-35 Flux', 'INT GAL 17-60 Flux', 'INT GAL 35-80 Flux', 'INTEGRAL/SPI GC', 'GRANAT/SIGMA', 'RXTE Allsky 3-8keV Flux', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
print(self.SkyView.survey_dict[survey] == survey_data, survey)
> print("Canned reference return", self.__class__.survey_dict['Radio'])
E KeyError: 'Radio'
astroquery\skyview\tests\test_skyview_remote.py:41: KeyError
---------------------------- Captured stdout call -----------------------------
(True, u'HardX-ray')
__________ TestSkyviewRemote.test_survey[OtherOptical-survey_data9] ___________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000C165550>
survey = 'OtherOptical'
survey_data = ['Mellinger Red', 'Mellinger Green', 'Mellinger Blue', 'NEAT', 'H-Alpha Comp', 'SHASSA H', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
print(self.SkyView.survey_dict[survey] == survey_data, survey)
> print("Canned reference return", self.__class__.survey_dict['Radio'])
E KeyError: 'Radio'
astroquery\skyview\tests\test_skyview_remote.py:41: KeyError
---------------------------- Captured stdout call -----------------------------
(True, u'OtherOptical')
_____________ TestSkyviewRemote.test_survey[Radio-survey_data10] ______________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000D81EF28>
survey = 'Radio'
survey_data = ['GB6 (4850MHz)', 'VLA FIRST (1.4 GHz)', 'NVSS', 'Stripe82VLA', '1420MHz (Bonn)', 'EBHIS', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
> print(self.SkyView.survey_dict[survey] == survey_data, survey)
E KeyError: u'Radio'
astroquery\skyview\tests\test_skyview_remote.py:40: KeyError
__________ TestSkyviewRemote.test_survey[overlay_blue-survey_data11] __________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000D81ECC0>
survey = 'overlay_blue'
survey_data = ['Fermi 5', 'Fermi 4', 'Fermi 3', 'Fermi 2', 'Fermi 1', 'EGRET (3D)', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
print(self.SkyView.survey_dict[survey] == survey_data, survey)
> print("Canned reference return", self.__class__.survey_dict['Radio'])
E KeyError: 'Radio'
astroquery\skyview\tests\test_skyview_remote.py:41: KeyError
---------------------------- Captured stdout call -----------------------------
(False, u'overlay_blue')
__________ TestSkyviewRemote.test_survey[overlay_red-survey_data12] ___________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000C1657B8>
survey = 'overlay_red'
survey_data = ['None
Fermi 5Fermi 4Fermi 3Fermi 2Fermi 1EGRET (3D)EGRET <100 MeVEGRET >100 MeVCOMPTELINT GAL 17-35 FluxINT GAL 17-...erschel 350GOODS: Herschel 500CDFS: LESSGOODS: VLA North
', 'Fermi 5', 'Fermi 4', 'Fermi 3', 'Fermi 2', 'Fermi 1', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
print(self.SkyView.survey_dict[survey] == survey_data, survey)
> print("Canned reference return", self.__class__.survey_dict['Radio'])
E KeyError: 'Radio'
astroquery\skyview\tests\test_skyview_remote.py:41: KeyError
---------------------------- Captured stdout call -----------------------------
(False, u'overlay_red')
______________ TestSkyviewRemote.test_survey[IRAS-survey_data13] ______________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000D81E5F8>
survey = 'IRAS'
survey_data = ['IRIS 12', 'IRIS 25', 'IRIS 60', 'IRIS 100', 'SFD100m', 'SFD Dust Map', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
> print(self.SkyView.survey_dict[survey] == survey_data, survey)
E KeyError: u'IRAS'
astroquery\skyview\tests\test_skyview_remote.py:40: KeyError
___________ TestSkyviewRemote.test_survey[SoftX-ray-survey_data14] ____________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000C1658D0>
survey = 'SoftX-ray'
survey_data = ['RASS-Cnt Soft', 'RASS-Cnt Hard', 'RASS-Cnt Broad', 'PSPC 2.0 Deg-Int', 'PSPC 1.0 Deg-Int', 'PSPC 0.6 Deg-Int', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
print(self.SkyView.survey_dict[survey] == survey_data, survey)
> print("Canned reference return", self.__class__.survey_dict['Radio'])
E KeyError: 'Radio'
astroquery\skyview\tests\test_skyview_remote.py:41: KeyError
---------------------------- Captured stdout call -----------------------------
(True, u'SoftX-ray')
____________ TestSkyviewRemote.test_survey[SwiftBAT-survey_data15] ____________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000D81E9B0>
survey = 'SwiftBAT'
survey_data = ['BAT SNR 14-195', 'BAT SNR 14-20', 'BAT SNR 20-24', 'BAT SNR 24-35', 'BAT SNR 35-50', 'BAT SNR 50-75', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
> print(self.SkyView.survey_dict[survey] == survey_data, survey)
E KeyError: u'SwiftBAT'
astroquery\skyview\tests\test_skyview_remote.py:40: KeyError
_____________ TestSkyviewRemote.test_survey[Planck-survey_data16] _____________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000C1654A8>
survey = 'Planck'
survey_data = ['Planck 857', 'Planck 545', 'Planck 353', 'Planck 217', 'Planck 143', 'Planck 100', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
> print(self.SkyView.survey_dict[survey] == survey_data, survey)
E KeyError: u'Planck'
astroquery\skyview\tests\test_skyview_remote.py:40: KeyError
_________ TestSkyviewRemote.test_survey[overlay_green-survey_data17] __________
self = <astroquery.skyview.tests.test_skyview_remote.TestSkyviewRemote object at 0x000000000C1860F0>
survey = 'overlay_green'
survey_data = ['None
Fermi 5Fermi 4Fermi 3Fermi 2Fermi 1EGRET (3D)EGRET <100 MeVEGRET >100 MeVCOMPTELINT GAL 17-35 FluxINT GAL 17-...erschel 350GOODS: Herschel 500CDFS: LESSGOODS: VLA North
', 'Fermi 5', 'Fermi 4', 'Fermi 3', 'Fermi 2', 'Fermi 1', ...]
@pytest.mark.parametrize(('survey',
'survey_data'),
zip(survey_dict.keys(), survey_dict.values()))
def test_survey(self, survey, survey_data):
print(self.SkyView.survey_dict[survey] == survey_data, survey)
> print("Canned reference return", self.__class__.survey_dict['Radio'])
E KeyError: 'Radio'
astroquery\skyview\tests\test_skyview_remote.py:41: KeyError
---------------------------- Captured stdout call -----------------------------
(False, u'overlay_green')
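
The 18 SkyView failures come in two flavours, both KeyError rather than a real comparison failure: for seven parametrized keys (u'Radio', u'InfraredHighRes', u'WMAP/COBE', u'GOODS/HDF/CDF(Allwavebands)', u'IRAS', u'SwiftBAT', u'Planck') the lookup on line 40 of the test finds no such entry, and in every remaining block the debug print of survey_dict['Radio'] on line 41 dies the same way. Both lookups appear to hit a survey listing that no longer contains these keys, suggesting the live listing fetched from SkyView has drifted from the canned reference used for parametrization. Diffing the key sets up front would turn these 18 KeyErrors into one readable report; a hedged sketch (only the survey_dict name comes from the test module, the rest is illustrative):

def compare_survey_dicts(live, canned):
    # Report key drift between the live SkyView survey listing and the
    # canned reference before any per-key lookup can raise KeyError.
    missing = sorted(set(canned) - set(live))
    extra = sorted(set(live) - set(canned))
    if missing:
        print('In canned reference but missing live: %s' % missing)
    if extra:
        print('Live but absent from canned reference: %s' % extra)
    # Only compare values for keys both sides actually have.
    for key in sorted(set(canned) & set(live)):
        if live[key] != canned[key]:
            print('Value drift for key: %s' % key)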
======== 65 failed, 708 passed, 4 skipped, 3 xfailed in 981.41 seconds ========
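For iterating on any single failing subpackage without repeating the full 981-second run, the astropy-helpers test command of this era accepted a package selector; assuming this checkout's version supports it (not verified here), that would look like:

C:\-\astroquery>python setup.py test --remote-data -P simbad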