============================= test session starts ==============================
platform darwin -- Python 2.7.11, pytest-2.8.5, py-1.4.31, pluggy-0.3.1 -- /Users/mdurant/Documents/anaconda/envs/cluster/bin/python
cachedir: .cache
rootdir: /Users/mdurant/Downloads/distributed, inifile:
collecting ... collected 299 items / 2 skipped
distributed/cli/tests/test_dscheduler.py::test_defaults PASSED
distributed/diagnostics/tests/test_plugin.py::test_diagnostic <- distributed/utils_test.py PASSED
distributed/diagnostics/tests/test_progress.py::test_dependent_keys PASSED
distributed/diagnostics/tests/test_progress.py::test_many_Progresss <- distributed/utils_test.py PASSED
distributed/diagnostics/tests/test_progress.py::test_multiprogress <- distributed/utils_test.py PASSED
distributed/diagnostics/tests/test_progress.py::test_robust_to_bad_plugin <- distributed/utils_test.py PASSED
distributed/diagnostics/tests/test_progressbar.py::test_text_progressbar PASSED
distributed/diagnostics/tests/test_progressbar.py::test_TextProgressBar_empty PASSED
distributed/diagnostics/tests/test_progressbar.py::test_progress_function PASSED
distributed/diagnostics/tests/test_progressbar.py::test_TextProgressBar_error <- distributed/utils_test.py PASSED
distributed/diagnostics/tests/test_resource_monitor.py::test_occupancy PASSED
distributed/diagnostics/tests/test_scheduler_diagnostic_widgets.py::test_scheduler_status_widget PASSED
distributed/diagnostics/tests/test_scheduler_diagnostic_widgets.py::test_scheduler_status <- distributed/utils_test.py PASSED
distributed/diagnostics/tests/test_widgets.py::test_values PASSED
distributed/diagnostics/tests/test_widgets.py::test_progressbar_done PASSED
distributed/diagnostics/tests/test_widgets.py::test_fast PASSED
distributed/diagnostics/tests/test_widgets.py::test_multi_progressbar_widget_after_close <- distributed/utils_test.py PASSED
distributed/diagnostics/tests/test_widgets.py::test_progressbar_widget <- distributed/utils_test.py PASSED
distributed/diagnostics/tests/test_widgets.py::test_multi_progressbar_widget <- distributed/utils_test.py PASSED
distributed/diagnostics/tests/test_widgets.py::test_multibar_complete <- distributed/utils_test.py PASSED
distributed/http/tests/test_scheduler_http.py::test_services_with_port <- distributed/utils_test.py PASSED
distributed/http/tests/test_scheduler_http.py::test_services <- distributed/utils_test.py PASSED
distributed/http/tests/test_scheduler_http.py::test_with_status <- distributed/utils_test.py PASSED
distributed/http/tests/test_scheduler_http.py::test_with_data <- distributed/utils_test.py PASSED
distributed/http/tests/test_scheduler_http.py::test_broadcast <- distributed/utils_test.py PASSED
distributed/http/tests/test_scheduler_http.py::test_proxy <- distributed/utils_test.py PASSED
distributed/http/tests/test_scheduler_http.py::test_simple <- distributed/utils_test.py PASSED
distributed/http/tests/test_scheduler_http.py::test_processing <- distributed/utils_test.py PASSED
distributed/http/tests/test_worker_http.py::test_services_port <- distributed/utils_test.py PASSED
distributed/http/tests/test_worker_http.py::test_services <- distributed/utils_test.py PASSED
distributed/http/tests/test_worker_http.py::test_simple <- distributed/utils_test.py PASSED
distributed/tests/test_center.py::test_metadata <- distributed/utils_test.py PASSED
distributed/tests/test_client.py::test_scatter_delete PASSED
distributed/tests/test_client.py::test_gather_with_missing_worker PASSED
distributed/tests/test_client.py::test_pack_data PASSED
distributed/tests/test_client.py::test_pack_data_with_key_mapping PASSED
distributed/tests/test_client.py::test_gather_errors_voluminously PASSED
distributed/tests/test_client.py::test_gather_scatter SKIPPED
distributed/tests/test_client.py::test_clear PASSED
distributed/tests/test_client.py::test_scatter_round_robins_between_calls PASSED
distributed/tests/test_client.py::test_broadcast_to_workers <- distributed/utils_test.py PASSED
distributed/tests/test_cluster.py::test_cluster SKIPPED
distributed/tests/test_collections.py::test_futures_to_dask_dataframe PASSED
distributed/tests/test_collections.py::test_futures_to_dask_arrays PASSED
distributed/tests/test_collections.py::test_futures_to_dask_bag FAILED
distributed/tests/test_collections.py::test_futures_to_dask_array SKIPPED
distributed/tests/test_collections.py::test__futures_to_dask_array <- distributed/utils_test.py PASSED
distributed/tests/test_collections.py::test_no_divisions <- distributed/utils_test.py PASSED
distributed/tests/test_collections.py::test__future_to_dask_array <- distributed/utils_test.py PASSED
distributed/tests/test_collections.py::test__futures_to_dask_bag <- distributed/utils_test.py PASSED
distributed/tests/test_collections.py::test__futures_to_dask_arrays <- distributed/utils_test.py PASSED
distributed/tests/test_collections.py::test__futures_to_dask_dataframe <- distributed/utils_test.py PASSED
distributed/tests/test_collections.py::test_dataframes <- distributed/utils_test.py PASSED
distributed/tests/test_collections.py::test__futures_to_collection <- distributed/utils_test.py PASSED
distributed/tests/test_collections.py::test__dask_array_collections <- distributed/utils_test.py PASSED
distributed/tests/test_core.py::test_server PASSED
distributed/tests/test_core.py::test_rpc PASSED
distributed/tests/test_core.py::test_rpc_inputs PASSED
distributed/tests/test_core.py::test_rpc_with_many_connections PASSED
distributed/tests/test_core.py::test_large_packets SKIPPED
distributed/tests/test_core.py::test_identity PASSED
distributed/tests/test_core.py::test_ports PASSED
distributed/tests/test_core.py::test_errors PASSED
distributed/tests/test_core.py::test_coerce_to_rpc PASSED
distributed/tests/test_executor.py::test_thread PASSED
distributed/tests/test_executor.py::test_sync_exceptions PASSED
distributed/tests/test_executor.py::test_gather_sync PASSED
distributed/tests/test_executor.py::test_get_sync PASSED
distributed/tests/test_executor.py::test_submit_errors PASSED
distributed/tests/test_executor.py::test_as_completed PASSED
distributed/tests/test_executor.py::test_wait_sync PASSED
distributed/tests/test_executor.py::test_stress_gc[func0-100] PASSED
distributed/tests/test_executor.py::test_stress_gc[func1-1000] PASSED
distributed/tests/test_executor.py::test_submit_after_failed_worker PASSED
distributed/tests/test_executor.py::test_gather_after_failed_worker PASSED
distributed/tests/test_executor.py::test_gather_then_submit_after_failed_workers SKIPPED
distributed/tests/test_executor.py::test_global_executors PASSED
distributed/tests/test_executor.py::test_get_with_error_sync PASSED
distributed/tests/test_executor.py::test_directed_scatter_sync PASSED
distributed/tests/test_executor.py::test_iterator_scatter PASSED
distributed/tests/test_executor.py::test_queue_scatter PASSED
distributed/tests/test_executor.py::test_queue_gather PASSED
distributed/tests/test_executor.py::test_iterator_gather PASSED
distributed/tests/test_executor.py::test_traceback_sync PASSED
distributed/tests/test_executor.py::test_restart_sync_no_center PASSED
distributed/tests/test_executor.py::test_restart_sync PASSED
distributed/tests/test_executor.py::test_restart_fast PASSED
distributed/tests/test_executor.py::test_upload_file_sync PASSED
distributed/tests/test_executor.py::test_upload_file_exception_sync PASSED
distributed/tests/test_executor.py::test_sync_compute PASSED
distributed/tests/test_executor.py::test_start_is_idempotent PASSED
distributed/tests/test_executor.py::test_executor_with_scheduler PASSED
distributed/tests/test_executor.py::test_bad_address SKIPPED
distributed/tests/test_executor.py::test_badly_serialized_input_stderr xfail
distributed/tests/test_executor.py::test_repr_sync PASSED
distributed/tests/test_executor.py::test_broadcast PASSED
distributed/tests/test_executor.py::test_cancel PASSED
distributed/tests/test_executor.py::test_map_iterator_sync PASSED
distributed/tests/test_executor.py::test_Future_exception_sync PASSED
distributed/tests/test_executor.py::test_persist PASSED
distributed/tests/test_executor.py::test_futures_of PASSED
distributed/tests/test_executor.py::test_run_sync PASSED
distributed/tests/test_executor.py::test_diagnostic_ui
[### HANGS here — run interrupted with Ctrl-C; see KeyboardInterrupt below ###]
=================================== FAILURES ===================================
___________________________ test_futures_to_dask_bag ___________________________
loop = <tornado.platform.kqueue.KQueueIOLoop object at 0x116f0c050>
    def test_futures_to_dask_bag(loop):
        import dask.bag as db
        with cluster() as (c, [a, b]):
            with Executor(('127.0.0.1', c['port']), loop=loop) as e:
                data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
                futures = e.scatter(data)
                b = futures_to_dask_bag(futures)
                assert isinstance(b, db.Bag)
>               assert b.map(lambda x: x + 1).sum().compute(get=e.get) == sum(range(2, 11))

distributed/tests/test_collections.py:241:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../Documents/anaconda/envs/cluster/lib/python2.7/site-packages/dask/base.py:37: in compute
    return compute(self, **kwargs)[0]
../../Documents/anaconda/envs/cluster/lib/python2.7/site-packages/dask/base.py:110: in compute
    results = get(dsk, keys, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[StreamClosedError("Stream is closed") raised in repr()] SafeRepr object at 0x114d10440>
dsk = {'sum-aggregate-fae945675e7ff4283a77834ad0b836820': (<built-in function sum>, [('sum-part-fae945675e7ff4283a77834ad0b8...rtools.imap'>, <function <lambda> at 0x116ece9b0>, <Future: status: finished, key: 0c5d22771e3e76a169f41169b3fce912>))}
keys = [['sum-aggregate-fae945675e7ff4283a77834ad0b836820']], kwargs = {}
status = 'error', result = StreamClosedError('Stream is closed',)
    def get(self, dsk, keys, **kwargs):
        """ Compute dask graph

        Parameters
        ----------
        dsk: dict
        keys: object, or nested lists of objects
        restrictions: dict (optional)
            A mapping of {key: {set of worker hostnames}} that restricts where
            jobs can take place

        Examples
        --------
        >>> from operator import add # doctest: +SKIP
        >>> e = Executor('127.0.0.1:8787') # doctest: +SKIP
        >>> e.get({'x': (add, 1, 2)}, 'x') # doctest: +SKIP
        3

        See Also
        --------
        Executor.compute: Compute asynchronous collections
        """
        status, result = sync(self.loop, self._get, dsk, keys,
                              raise_on_error=False, **kwargs)
        if status == 'error':
>           raise result
E           StreamClosedError: Stream is closed

distributed/executor.py:964: StreamClosedError
----------------------------- Captured stdout call -----------------------------
Setting global dask scheduler to use distributed
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Start Scheduler at: 192.168.0.13:56139
distributed.worker - INFO - Start worker at: 127.0.0.1:56141
distributed.worker - INFO - Waiting to connect to: 127.0.0.1:56139
distributed.worker - INFO - Start worker at: 127.0.0.1:56142
distributed.worker - INFO - Waiting to connect to: 127.0.0.1:56139
distributed.core - INFO - Connection from 127.0.0.1:56143 to Scheduler
distributed.core - INFO - Connection from 127.0.0.1:56144 to Scheduler
distributed.scheduler - INFO - Register 127.0.0.1:56141
distributed.scheduler - INFO - Register 127.0.0.1:56142
distributed.worker - INFO - Registered to: 127.0.0.1:56139
distributed.worker - INFO - Registered to: 127.0.0.1:56139
distributed.core - INFO - Connection from 127.0.0.1:56145 to Scheduler
distributed.core - INFO - Connection from 127.0.0.1:56146 to Scheduler
distributed.core - INFO - Connection from 127.0.0.1:56147 to Scheduler
distributed.scheduler - INFO - Connection to Scheduler, 639c3923-e486-11e5-bd2b-acbc32a9ff1d
distributed.core - INFO - Connection from 127.0.0.1:56148 to Worker
distributed.core - INFO - Connection from 127.0.0.1:56149 to Worker
distributed.core - INFO - Close connection from 127.0.0.1:56148 to Worker
distributed.core - INFO - Close connection from 127.0.0.1:56149 to Worker
distributed.core - INFO - Connection from 127.0.0.1:56150 to Worker
distributed.core - INFO - Connection from 127.0.0.1:56151 to Worker
distributed.worker - INFO - gather 0 keys from peers: {}
distributed.worker - INFO - gather 0 keys from peers: {}
distributed.worker - INFO - Start job 61: execute_task - ('sum-part-fae945675e7ff4283a77834ad0b83682', 2)
distributed.worker - INFO - Start job 61: execute_task - ('sum-part-fae945675e7ff4283a77834ad0b83682', 1)
distributed.worker - INFO - Finish job 61: execute_task - ('sum-part-fae945675e7ff4283a77834ad0b83682', 2)
distributed.worker - INFO - Finish job 61: execute_task - ('sum-part-fae945675e7ff4283a77834ad0b83682', 1)
distributed.worker - INFO - gather 0 keys from peers: {}
distributed.worker - INFO - Start job 62: execute_task - ('sum-part-fae945675e7ff4283a77834ad0b83682', 0)
distributed.worker - INFO - Finish job 62: execute_task - ('sum-part-fae945675e7ff4283a77834ad0b83682', 0)
distributed.worker - INFO - gather 1 keys from peers: {"('sum-part-fae945675e7ff4283a77834ad0b83682', 1)": set([u'127.0.0.1:56141'])}
distributed.core - INFO - Connection from 127.0.0.1:56152 to Worker
distributed.core - INFO - Close connection from 127.0.0.1:56152 to Worker
distributed.worker - INFO - Start job 63: sum - sum-aggregate-fae945675e7ff4283a77834ad0b836820
distributed.worker - INFO - Finish job 63: sum - sum-aggregate-fae945675e7ff4283a77834ad0b836820
distributed.core - INFO - Lost connection: ('127.0.0.1', 56150)
distributed.core - INFO - Close connection from 127.0.0.1:56150 to Worker
distributed.utils - ERROR - fd 49 already registered
Traceback (most recent call last):
  File "/Users/mdurant/Downloads/distributed/distributed/utils.py", line 229, in log_errors
    yield
  File "/Users/mdurant/Downloads/distributed/distributed/client.py", line 118, in gather_from_workers
    StreamClosedError)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/utils.py", line 57, in ignore_exceptions
    result = yield wait_iterator.next()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/core.py", line 383, in send_recv_from_rpc
    stream = yield self.live_stream()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/core.py", line 368, in live_stream
    stream = yield connect(self.ip, self.port, timeout=self.timeout)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/core.py", line 239, in connect
    stream = yield gen.with_timeout(timedelta(seconds=timeout), future)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1017, in run
    yielded = self.gen.send(value)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/tcpclient.py", line 168, in connect
    af, addr, stream = yield connector.start()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/tcpclient.py", line 81, in start
    self.try_connect(iter(self.primary_addrs))
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/tcpclient.py", line 96, in try_connect
    future = self.connect(af, addr)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/tcpclient.py", line 183, in _create_stream
    return stream.connect(addr)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/iostream.py", line 1088, in connect
    self._add_io_state(self.io_loop.WRITE)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/iostream.py", line 931, in _add_io_state
    self.fileno(), self._handle_events, self._state)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/ioloop.py", line 721, in add_handler
    self._impl.register(fd, events | self.ERROR)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/platform/kqueue.py", line 40, in register
    raise IOError("fd %s already registered" % fd)
IOError: fd 49 already registered
distributed.core - ERROR - fd 49 already registered
Traceback (most recent call last):
  File "/Users/mdurant/Downloads/distributed/distributed/core.py", line 177, in handle_stream
    result = yield gen.maybe_future(handler(stream, **msg))
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/scheduler.py", line 1120, in gather
    data = yield gather_from_workers(who_has, deserialize=False)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/client.py", line 118, in gather_from_workers
    StreamClosedError)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/utils.py", line 57, in ignore_exceptions
    result = yield wait_iterator.next()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/core.py", line 383, in send_recv_from_rpc
    stream = yield self.live_stream()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/core.py", line 368, in live_stream
    stream = yield connect(self.ip, self.port, timeout=self.timeout)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/core.py", line 239, in connect
    stream = yield gen.with_timeout(timedelta(seconds=timeout), future)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1017, in run
    yielded = self.gen.send(value)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/tcpclient.py", line 168, in connect
    af, addr, stream = yield connector.start()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/tcpclient.py", line 81, in start
    self.try_connect(iter(self.primary_addrs))
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/tcpclient.py", line 96, in try_connect
    future = self.connect(af, addr)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/tcpclient.py", line 183, in _create_stream
    return stream.connect(addr)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/iostream.py", line 1088, in connect
    self._add_io_state(self.io_loop.WRITE)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/iostream.py", line 931, in _add_io_state
    self.fileno(), self._handle_events, self._state)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/ioloop.py", line 721, in add_handler
    self._impl.register(fd, events | self.ERROR)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/platform/kqueue.py", line 40, in register
    raise IOError("fd %s already registered" % fd)
IOError: fd 49 already registered
distributed.scheduler - INFO - Remove client 639c3923-e486-11e5-bd2b-acbc32a9ff1d
distributed.core - INFO - Close connection from 127.0.0.1:56151 to Worker
distributed.scheduler - INFO - Remove client None
distributed.scheduler - INFO - Close connection to Scheduler, 639c3923-e486-11e5-bd2b-acbc32a9ff1d
distributed.core - INFO - Connection from 127.0.0.1:56187 to Scheduler
distributed.core - INFO - Lost connection: ('127.0.0.1', 56147)
distributed.core - INFO - Close connection from 127.0.0.1:56147 to Scheduler
distributed.scheduler - INFO - Worker failed from closed stream: 127.0.0.1:56142
distributed.core - INFO - Close connection from 127.0.0.1:56187 to Scheduler
distributed.core - INFO - Connection from 127.0.0.1:56188 to Worker
distributed.scheduler - CRITICAL - Lost all workers
distributed.core - ERROR - '24d2a54a9853a79e8214ecc3b16e82df'
Traceback (most recent call last):
  File "/Users/mdurant/Downloads/distributed/distributed/core.py", line 177, in handle_stream
    result = yield gen.maybe_future(handler(stream, **msg))
  File "/Users/mdurant/Downloads/distributed/distributed/scheduler.py", line 613, in remove_worker
    self.heal_state()
  File "/Users/mdurant/Downloads/distributed/distributed/scheduler.py", line 838, in heal_state
    self.ready)
  File "/Users/mdurant/Downloads/distributed/distributed/scheduler.py", line 1529, in heal
    make_accessible(key)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/toolz/functoolz.py", line 353, in memof
    result = func(*args, **kwargs)
  File "/Users/mdurant/Downloads/distributed/distributed/scheduler.py", line 1519, in make_accessible
    for dep in dependencies[key]:
KeyError: '24d2a54a9853a79e8214ecc3b16e82df'
distributed.core - ERROR - Stream is closed
Traceback (most recent call last):
  File "/Users/mdurant/Downloads/distributed/distributed/core.py", line 177, in handle_stream
    result = yield gen.maybe_future(handler(stream, **msg))
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/worker.py", line 193, in terminate
    yield self._close(report=report)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/worker.py", line 179, in _close
    self.center.unregister(address=(self.ip, self.port)))
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/core.py", line 384, in send_recv_from_rpc
    result = yield send_recv(stream=stream, op=key, **kwargs)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/core.py", line 281, in send_recv
    response = yield read(stream)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/Users/mdurant/Downloads/distributed/distributed/core.py", line 204, in read
    msg = yield stream.read_until(sentinel)
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "<string>", line 3, in raise_exc_info
StreamClosedError: Stream is closed
distributed.core - INFO - Connection from 127.0.0.1:56193 to Worker
distributed.worker - INFO - Deleted 2 keys
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! KeyboardInterrupt !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
/Users/mdurant/Documents/anaconda/envs/cluster/lib/python2.7/multiprocessing/queues.py:117: KeyboardInterrupt
========= 1 failed, 94 passed, 8 skipped, 1 xfailed in 535.57 seconds ==========
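Note: the StreamClosedError above is raised inside Executor.get while the
scheduler gathers the aggregated result from a worker whose stream has
already closed. A minimal standalone sketch of the failing pattern follows,
assuming futures_to_dask_bag is importable from distributed.collections
(hypothetical path, inferred from the test module name) and cluster() from
distributed.utils_test:

# Minimal sketch of the pattern exercised by test_futures_to_dask_bag.
import dask.bag as db
from distributed import Executor
from distributed.collections import futures_to_dask_bag  # assumed path
from distributed.utils_test import cluster                # assumed path

with cluster() as (c, [a, b]):                 # scheduler plus two workers
    with Executor(('127.0.0.1', c['port'])) as e:
        # Scatter three lists; each resulting future becomes one partition
        # of the dask bag.
        futures = e.scatter([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
        bag = futures_to_dask_bag(futures)
        assert isinstance(bag, db.Bag)
        # The StreamClosedError above is raised from inside this compute()
        # call, while the scheduler gathers the final sum from a worker.
        print(bag.map(lambda x: x + 1).sum().compute(get=e.get))  # expect 54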
=================================== FAILURES ===================================
_____________________________ test_coerce_address ______________________________
    def test_func():
        IOLoop.clear_instance()
        loop = IOLoop()
        loop.make_current()
        cor = gen.coroutine(func)
        try:
>           loop.run_sync(cor, timeout=timeout)

distributed/utils_test.py:342:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <tornado.platform.kqueue.KQueueIOLoop object at 0x12452beb8>
func = <function test_coerce_address at 0x123fb6bf8>, timeout = 10
    def run_sync(self, func, timeout=None):
        """Starts the `IOLoop`, runs the given function, and stops the loop.

        The function must return either a yieldable object or
        ``None``. If the function returns a yieldable object, the
        `IOLoop` will run until the yieldable is resolved (and
        `run_sync()` will return the yieldable's result). If it raises
        an exception, the `IOLoop` will stop and the exception will be
        re-raised to the caller.

        The keyword-only argument ``timeout`` may be used to set
        a maximum duration for the function. If the timeout expires,
        a `TimeoutError` is raised.

        This method is useful in conjunction with `tornado.gen.coroutine`
        to allow asynchronous calls in a ``main()`` function::

            @gen.coroutine
            def main():
                # do stuff...

            if __name__ == '__main__':
                IOLoop.current().run_sync(main)

        .. versionchanged:: 4.3
           Returning a non-``None``, non-yieldable value is now an error.
        """
        future_cell = [None]

        def run():
            try:
                result = func()
                if result is not None:
                    from tornado.gen import convert_yielded
                    result = convert_yielded(result)
            except Exception:
                future_cell[0] = TracebackFuture()
                future_cell[0].set_exc_info(sys.exc_info())
            else:
                if is_future(result):
                    future_cell[0] = result
                else:
                    future_cell[0] = TracebackFuture()
                    future_cell[0].set_result(result)
            self.add_future(future_cell[0], lambda future: self.stop())
        self.add_callback(run)
        if timeout is not None:
            timeout_handle = self.add_timeout(self.time() + timeout, self.stop)
        self.start()
        if timeout is not None:
            self.remove_timeout(timeout_handle)
        if not future_cell[0].done():
>           raise TimeoutError('Operation timed out after %s seconds' % timeout)
E           tornado.ioloop.TimeoutError: Operation timed out after 10 seconds

../../Documents/anaconda/lib/python3.4/site-packages/tornado/ioloop.py:452: TimeoutError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Start Scheduler at: 192.168.0.13:54849
distributed.worker - INFO - Start worker at: 192.168.0.13:54850
distributed.worker - INFO - Waiting to connect to: 192.168.0.13:54849
distributed.worker - INFO - Start worker at: 192.168.0.13:54852
distributed.worker - INFO - Waiting to connect to: 192.168.0.13:54849
distributed.worker - INFO - Start worker at: 127.0.0.2:54854
distributed.worker - INFO - Waiting to connect to: 192.168.0.13:54849
distributed.core - INFO - Connection from 192.168.0.13:54851 to Scheduler
distributed.core - INFO - Connection from 192.168.0.13:54853 to Scheduler
distributed.core - INFO - Connection from 192.168.0.13:54855 to Scheduler
distributed.scheduler - INFO - Register 127.0.0.2:54854
distributed.scheduler - INFO - Register 192.168.0.13:54850
distributed.scheduler - INFO - Register 192.168.0.13:54852
distributed.worker - INFO - Registered to: 192.168.0.13:54849
distributed.worker - INFO - Registered to: 192.168.0.13:54849
distributed.worker - INFO - Registered to: 192.168.0.13:54849
distributed.scheduler - INFO - Remove client None
distributed.core - INFO - Connection from 192.168.0.13:54864 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54865 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54866 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54867 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54868 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54869 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54870 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54871 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54872 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54873 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54874 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54875 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54876 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54877 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54878 to Worker
distributed.core - INFO - Connection from 192.168.0.13:54879 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54865 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54866 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54867 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54868 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54869 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54870 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54871 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54872 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54873 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54874 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54875 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54876 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54877 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54878 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54879 to Worker
distributed.core - INFO - Close connection from 192.168.0.13:54864 to Worker
distributed.scheduler - INFO - Worker failed from closed stream: 127.0.0.2:54854
======== 1 failed, 282 passed, 17 skipped, 1 xfailed in 134.26 seconds =========
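Note: this second failure report comes from a separate run (note the
python3.4 paths), and is a plain harness timeout: utils_test.py drives each
coroutine test through Tornado's IOLoop.run_sync with a ten-second limit. A
minimal sketch of that pattern, using only the documented Tornado 4.x API
(slow_test is a hypothetical stand-in for the test body):

# Reproduces the run_sync-with-timeout behaviour seen above.
from tornado import gen
from tornado.ioloop import IOLoop

@gen.coroutine
def slow_test():
    yield gen.sleep(15)          # outlives the 10-second limit below

IOLoop.clear_instance()          # fresh loop per test, as in test_func above
loop = IOLoop()
loop.make_current()
try:
    # Raises tornado.ioloop.TimeoutError('Operation timed out after 10
    # seconds'), matching the failure reported above.
    loop.run_sync(slow_test, timeout=10)
finally:
    loop.close()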