@csbond007
Created March 30, 2017 21:12
Postgres_Errors
17/03/30 20:55:38 ERROR JobScheduler: Error running job streaming job 1490905450000 ms.0
org.apache.spark.SparkException: An exception was raised by Python:
Traceback (most recent call last):
File "/home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/streaming/util.py", line 65, in call
r = self.func(t, *rdds)
File "/home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/streaming/dstream.py", line 159, in <lambd
a>
func = lambda t, rdd: old_func(rdd)
File "/home/centos/release1_5/ihealthdata/consumer/kaustav_ihealth_anomaly_detection.py", line 1235, in <lambda>
self.clean.foreachRDD(lambda rdd: self.empty_rdd() if rdd.count() == 0 else self.process_rdd(rdd))
File "/home/centos/release1_5/ihealthdata/consumer/kaustav_ihealth_anomaly_detection.py", line 1221, in process_rdd
self.streamrdd_to_df(rdd)
File "/home/centos/release1_5/ihealthdata/consumer/kaustav_ihealth_anomaly_detection.py", line 1214, in streamrdd_to_d
f
actual_activity_id, activity_id, current_time, heartrate, seqno)
File "/home/centos/release1_5/ihealthdata/consumer/kaustav_ihealth_anomaly_detection.py", line 614, in Numenta_Operati
ons
anomalyScore)
File "/home/centos/release1_5/ihealthdata/persistence/pgsql_connector.py", line 170, in insert_cardiac_exception
con, meta = self.connect(self.user, self.password, self.db, self.host)
File "/home/centos/release1_5/ihealthdata/persistence/pgsql_connector.py", line 40, in connect
meta = sqlalchemy.MetaData(bind=conn, reflect=True)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", line 3561, in __init__
self.reflect()
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", line 3766, in reflect
with bind.connect() as conn:
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2082, in connect
return self._connection_cls(self, **kwargs)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 90, in __init__
if connection is not None else engine.raw_connection()
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2168, in raw_connection
self.pool.unique_connection, _connection)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2142, in _wrap_pool_connect
e, dialect, self)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1456, in _handle_dbapi_exce
ption_noconnection
exc_info
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2138, in _wrap_pool_connect
return fn()
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 328, in unique_connection
return _ConnectionFairy._checkout(self)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 766, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 516, in checkout
rec = pool._do_get()
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 1138, in _do_get
self._dec_overflow()
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 1135, in _do_get
return self._create_connection()
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 333, in _create_connection
return _ConnectionRecord(self)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 461, in __init__
self.__connect(first_connect_check=True)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 651, in __connect
connection = pool._invoke_creator(self)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 105, in connect
return dialect.connect(*cargs, **cparams)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 393, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/home/centos/release1_5/lib/python2.7/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
OperationalError: (psycopg2.OperationalError) FATAL: sorry, too many clients already
at org.apache.spark.streaming.api.python.TransformFunction.callPythonTransformFunction(PythonDStream.scala:95)
at org.apache.spark.streaming.api.python.TransformFunction.apply(PythonDStream.scala:78)
at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:179)
at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:179)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:247)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:247)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:247)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:246)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
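
The root cause here is the psycopg2 OperationalError above: FATAL: sorry, too many clients already. Each call to insert_cardiac_exception (pgsql_connector.py, line 170) runs self.connect(), and connect() builds sqlalchemy.MetaData(bind=conn, reflect=True) (line 40), which opens a fresh connection and reflects the entire schema. Inside foreachRDD this runs on every streaming batch, so if those connections are never disposed they accumulate until Postgres hits its max_connections limit (100 by default). Below is a minimal sketch of one way to bound this, assuming SQLAlchemy 1.x and psycopg2 as in the traceback; get_engine, _engine, and the connection URL are illustrative names, not code from the original pgsql_connector.py.

import sqlalchemy

_engine = None  # one engine (and one connection pool) per Python process

def get_engine(user, password, db, host):
    # Create the engine once; later calls reuse the same bounded pool
    # instead of opening a brand-new Postgres connection per insert.
    global _engine
    if _engine is None:
        url = 'postgresql+psycopg2://%s:%s@%s/%s' % (user, password, host, db)
        _engine = sqlalchemy.create_engine(
            url,
            pool_size=5,        # steady-state connections for this process
            max_overflow=2,     # temporary burst headroom
            pool_recycle=3600,  # recycle connections older than an hour
        )
    return _engine

def insert_cardiac_exception(engine, table_name, row):
    # Reflect only the one table needed instead of the whole schema;
    # the Table object could also be cached next to the engine.
    meta = sqlalchemy.MetaData()
    meta.reflect(bind=engine, only=[table_name])
    table = meta.tables[table_name]
    with engine.connect() as conn:  # connection returns to the pool on exit
        conn.execute(table.insert(), row)

With one pooled engine per Python worker, the total connection count stays near pool_size times the number of workers instead of growing with every batch.
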
17/03/30 20:55:38 INFO BlockManager: Removing RDD 6824
17/03/30 20:55:38 INFO ReceivedBlockTracker: Deleting batches:
17/03/30 20:55:38 INFO InputInfoTracker: remove old batch metadata: 1490905430000 ms
Traceback (most recent call last):
File "/home/centos/release1_5/ihealthdata/consumer/kaustav_ihealth_anomaly_detection.py", line 1249, in <module>
c.trigger_stream()
File "/home/centos/release1_5/ihealthdata/consumer/kaustav_ihealth_anomaly_detection.py", line 1238, in trigger_stream
self.ssc.awaitTermination() # Wait for the computation to terminate
File "/home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/streaming/context.py", line 206, in awaitT
ermination
File "/home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py", line 1133, in __cal
l__
File "/home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
File "/home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip/py4j/protocol.py", line 319, in get_return
_value
py4j.protocol.Py4JJavaError: An error occurred while calling o32.awaitTermination.
: org.apache.spark.SparkException: An exception was raised by Python:
Traceback (most recent call last):
File "/home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/streaming/util.py", line 65, in call
r = self.func(t, *rdds)
File "/home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/streaming/dstream.py", line 159, in <lambd
a>
func = lambda t, rdd: old_func(rdd)
File "/home/centos/release1_5/ihealthdata/consumer/kaustav_ihealth_anomaly_detection.py", line 1235, in <lambda>
self.clean.foreachRDD(lambda rdd: self.empty_rdd() if rdd.count() == 0 else self.process_rdd(rdd))
File "/home/centos/release1_5/ihealthdata/consumer/kaustav_ihealth_anomaly_detection.py", line 1221, in process_rdd
self.streamrdd_to_df(rdd)
File "/home/centos/release1_5/ihealthdata/consumer/kaustav_ihealth_anomaly_detection.py", line 1214, in streamrdd_to_d
f
actual_activity_id, activity_id, current_time, heartrate, seqno)
File "/home/centos/release1_5/ihealthdata/consumer/kaustav_ihealth_anomaly_detection.py", line 614, in Numenta_Operati
ons
anomalyScore)
File "/home/centos/release1_5/ihealthdata/persistence/pgsql_connector.py", line 170, in insert_cardiac_exception
con, meta = self.connect(self.user, self.password, self.db, self.host)
File "/home/centos/release1_5/ihealthdata/persistence/pgsql_connector.py", line 40, in connect
meta = sqlalchemy.MetaData(bind=conn, reflect=True)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", line 3561, in __init__
self.reflect()
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", line 3766, in reflect
with bind.connect() as conn:
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2082, in connect
return self._connection_cls(self, **kwargs)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 90, in __init__
if connection is not None else engine.raw_connection()
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2168, in raw_connection
self.pool.unique_connection, _connection)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2142, in _wrap_pool_connect
e, dialect, self)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1456, in _handle_dbapi_exce
ption_noconnection
exc_info
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2138, in _wrap_pool_connect
return fn()
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 328, in unique_connection
return _ConnectionFairy._checkout(self)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 766, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 516, in checkout
rec = pool._do_get()
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 1138, in _do_get
self._dec_overflow()
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 1135, in _do_get
return self._create_connection()
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 333, in _create_connection
return _ConnectionRecord(self)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 461, in __init__
self.__connect(first_connect_check=True)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/pool.py", line 651, in __connect
connection = pool._invoke_creator(self)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 105, in connect
return dialect.connect(*cargs, **cparams)
File "/home/centos/release1_5/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 393, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/home/centos/release1_5/lib/python2.7/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
OperationalError: (psycopg2.OperationalError) FATAL: sorry, too many clients already
at org.apache.spark.streaming.api.python.TransformFunction.callPythonTransformFunction(PythonDStream.scala:95)
at org.apache.spark.streaming.api.python.TransformFunction.apply(PythonDStream.scala:78)
at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:179)
at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:179)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:247)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:247)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:247)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:246)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
17/03/30 20:55:38 INFO SparkContext: Starting job: call at /home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py:2230
17/03/30 20:55:38 INFO DAGScheduler: Got job 6580 (call at /home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py:2230) with 3 output partitions
17/03/30 20:55:38 INFO DAGScheduler: Final stage: ResultStage 6616 (call at /home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py:2230)
17/03/30 20:55:38 INFO DAGScheduler: Parents of final stage: List()
17/03/30 20:55:38 INFO DAGScheduler: Missing parents: List()
17/03/30 20:55:38 INFO DAGScheduler: Submitting ResultStage 6616 (PythonRDD[20789] at call at /home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py:2230), which has no missing parents
17/03/30 20:55:38 INFO MemoryStore: Block broadcast_6634 stored as values in memory (estimated size 14.7 KB, free 365.7 MB)
17/03/30 20:55:38 INFO MemoryStore: Block broadcast_6634_piece0 stored as bytes in memory (estimated size 5.5 KB, free 365.7 MB)
17/03/30 20:55:38 INFO BlockManagerInfo: Added broadcast_6634_piece0 in memory on 10.0.0.11:42585 (size: 5.5 KB, free: 366.1 MB)
17/03/30 20:55:38 INFO SparkContext: Created broadcast 6634 from broadcast at DAGScheduler.scala:1012
17/03/30 20:55:38 INFO DAGScheduler: Submitting 3 missing tasks from ResultStage 6616 (PythonRDD[20789] at call at /home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py:2230)
17/03/30 20:55:38 INFO TaskSchedulerImpl: Adding task set 6616.0 with 3 tasks
17/03/30 20:55:38 INFO TaskSetManager: Starting task 0.0 in stage 6616.0 (TID 39930, ip-10-0-0-15.us-west-2.compute.internal, partition 0, NODE_LOCAL, 10500 bytes)
17/03/30 20:55:38 INFO TaskSetManager: Starting task 1.0 in stage 6616.0 (TID 39931, ip-10-0-0-15.us-west-2.compute.internal, partition 1, NODE_LOCAL, 10500 bytes)
17/03/30 20:55:38 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 39930 on executor id: 1 hostname: ip-10-0-0-15.us-west-2.compute.internal.
17/03/30 20:55:38 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 39931 on executor id: 1 hostname: ip-10-0-0-15.us-west-2.compute.internal.
17/03/30 20:55:38 INFO BlockManagerInfo: Added broadcast_6634_piece0 in memory on ip-10-0-0-15.us-west-2.compute.internal:51246 (size: 5.5 KB, free: 2004.5 MB)
17/03/30 20:55:39 INFO StreamingContext: Invoking stop(stopGracefully=false) from shutdown hook
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905460000 ms.0 from job set of time 1490905460000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1879.011 s for time 1490905460000 ms (execution: 0.306 s)
17/03/30 20:55:39 INFO PythonRDD: Removing RDD 6957 from persistence list
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905470000 ms.0 from job set of time 1490905470000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905470000 ms.0 from job set of time 1490905470000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1869.017 s for time 1490905470000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO BlockManager: Removing RDD 6957
17/03/30 20:55:39 INFO PythonRDD: Removing RDD 6956 from persistence list
17/03/30 20:55:39 INFO BlockManager: Removing RDD 6956
17/03/30 20:55:39 INFO JobGenerator: Stopping JobGenerator immediately
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905480000 ms.0 from job set of time 1490905480000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905480000 ms.0 from job set of time 1490905480000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1859.021 s for time 1490905480000 ms (execution: 0.003 s)
17/03/30 20:55:39 INFO PythonRDD: Removing RDD 6955 from persistence list
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905490000 ms.0 from job set of time 1490905490000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905490000 ms.0 from job set of time 1490905490000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1849.022 s for time 1490905490000 ms (execution: 0.001 s)
17/03/30 20:55:39 INFO BlockManager: Removing RDD 6955
17/03/30 20:55:39 INFO KafkaRDD: Removing RDD 6954 from persistence list
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905500000 ms.0 from job set of time 1490905500000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905500000 ms.0 from job set of time 1490905500000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1839.023 s for time 1490905500000 ms (execution: 0.001 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905510000 ms.0 from job set of time 1490905510000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905510000 ms.0 from job set of time 1490905510000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1829.024 s for time 1490905510000 ms (execution: 0.001 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905520000 ms.0 from job set of time 1490905520000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905520000 ms.0 from job set of time 1490905520000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1819.024 s for time 1490905520000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO RecurringTimer: Stopped timer for JobGenerator after time 1490907330000
17/03/30 20:55:39 INFO ReceivedBlockTracker: Deleting batches:
17/03/30 20:55:39 INFO InputInfoTracker: remove old batch metadata: 1490905440000 ms
17/03/30 20:55:39 INFO PythonRDD: Removing RDD 7078 from persistence list
17/03/30 20:55:39 INFO BlockManager: Removing RDD 6954
17/03/30 20:55:39 INFO BlockManager: Removing RDD 7078
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905530000 ms.0 from job set of time 1490905530000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905530000 ms.0 from job set of time 1490905530000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1809.032 s for time 1490905530000 ms (execution: 0.007 s)
17/03/30 20:55:39 INFO JobGenerator: Stopped JobGenerator
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905540000 ms.0 from job set of time 1490905540000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905540000 ms.0 from job set of time 1490905540000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1799.036 s for time 1490905540000 ms (execution: 0.004 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905550000 ms.0 from job set of time 1490905550000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905550000 ms.0 from job set of time 1490905550000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1789.036 s for time 1490905550000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905560000 ms.0 from job set of time 1490905560000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905560000 ms.0 from job set of time 1490905560000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1779.047 s for time 1490905560000 ms (execution: 0.010 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905570000 ms.0 from job set of time 1490905570000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905570000 ms.0 from job set of time 1490905570000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1769.049 s for time 1490905570000 ms (execution: 0.002 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905580000 ms.0 from job set of time 1490905580000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905580000 ms.0 from job set of time 1490905580000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1759.049 s for time 1490905580000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905590000 ms.0 from job set of time 1490905590000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905590000 ms.0 from job set of time 1490905590000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1749.050 s for time 1490905590000 ms (execution: 0.001 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905600000 ms.0 from job set of time 1490905600000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905600000 ms.0 from job set of time 1490905600000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1739.050 s for time 1490905600000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905610000 ms.0 from job set of time 1490905610000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905610000 ms.0 from job set of time 1490905610000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1729.051 s for time 1490905610000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905620000 ms.0 from job set of time 1490905620000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905620000 ms.0 from job set of time 1490905620000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1719.051 s for time 1490905620000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905630000 ms.0 from job set of time 1490905630000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905630000 ms.0 from job set of time 1490905630000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1709.052 s for time 1490905630000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905640000 ms.0 from job set of time 1490905640000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905640000 ms.0 from job set of time 1490905640000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1699.052 s for time 1490905640000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905650000 ms.0 from job set of time 1490905650000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905650000 ms.0 from job set of time 1490905650000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1689.053 s for time 1490905650000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905660000 ms.0 from job set of time 1490905660000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905660000 ms.0 from job set of time 1490905660000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1679.053 s for time 1490905660000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905670000 ms.0 from job set of time 1490905670000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905670000 ms.0 from job set of time 1490905670000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1669.054 s for time 1490905670000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905680000 ms.0 from job set of time 1490905680000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905680000 ms.0 from job set of time 1490905680000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1659.054 s for time 1490905680000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905690000 ms.0 from job set of time 1490905690000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905690000 ms.0 from job set of time 1490905690000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1649.054 s for time 1490905690000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905700000 ms.0 from job set of time 1490905700000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905700000 ms.0 from job set of time 1490905700000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1639.055 s for time 1490905700000 ms (execution: 0.000 s)
17/03/30 20:55:39 ERROR JobScheduler: Error running job streaming job 1490905460000 ms.0
py4j.Py4JException: Error while sending a command.
at py4j.CallbackClient.sendCommand(CallbackClient.java:357)
at py4j.CallbackClient.sendCommand(CallbackClient.java:316)
at py4j.reflection.PythonProxyHandler.invoke(PythonProxyHandler.java:103)
at com.sun.proxy.$Proxy19.call(Unknown Source)
at org.apache.spark.streaming.api.python.TransformFunction.callPythonTransformFunction(PythonDStream.scala:92)
at org.apache.spark.streaming.api.python.TransformFunction.apply(PythonDStream.scala:78)
at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:179)
at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:179)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:247)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:247)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:247)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:246)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: py4j.Py4JNetworkException
at py4j.CallbackConnection.sendCommand(CallbackConnection.java:138)
at py4j.CallbackClient.sendCommand(CallbackClient.java:344)
... 24 more
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905710000 ms.0 from job set of time 1490905710000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905710000 ms.0 from job set of time 1490905710000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1629.056 s for time 1490905710000 ms (execution: 0.001 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905720000 ms.0 from job set of time 1490905720000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905720000 ms.0 from job set of time 1490905720000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1619.056 s for time 1490905720000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905730000 ms.0 from job set of time 1490905730000 ms
17/03/30 20:55:39 INFO JobScheduler: Finished job streaming job 1490905730000 ms.0 from job set of time 1490905730000 ms
17/03/30 20:55:39 INFO JobScheduler: Total delay: 1609.056 s for time 1490905730000 ms (execution: 0.000 s)
17/03/30 20:55:39 INFO JobScheduler: Starting job streaming job 1490905740000 ms.0 from job set of time 1490905740000 ms
17/03/30 20:55:39 ERROR JobScheduler: Error running job streaming job 1490905470000 ms.0
py4j.Py4JException: Cannot obtain a new communication channel
at py4j.CallbackClient.sendCommand(CallbackClient.java:340)
at py4j.CallbackClient.sendCommand(CallbackClient.java:316)
at py4j.reflection.PythonProxyHandler.invoke(PythonProxyHandler.java:103)
at com.sun.proxy.$Proxy19.call(Unknown Source)
at org.apache.spark.streaming.api.python.TransformFunction.callPythonTransformFunction(PythonDStream.scala:92)
at org.apache.spark.streaming.api.python.TransformFunction.apply(PythonDStream.scala:78)
at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:179)
at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:179)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:247)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:247)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:247)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:246)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
17/03/30 20:55:39 ERROR PythonDStream$$anon$1: Cannot connect to Python process. It's probably dead. Stopping StreamingContext.
py4j.Py4JException: Cannot obtain a new communication channel
at py4j.CallbackClient.sendCommand(CallbackClient.java:340)
at py4j.CallbackClient.sendCommand(CallbackClient.java:316)
at py4j.reflection.PythonProxyHandler.invoke(PythonProxyHandler.java:103)
at com.sun.proxy.$Proxy19.call(Unknown Source)
at org.apache.spark.streaming.api.python.TransformFunction.callPythonTransformFunction(PythonDStream.scala:92)
at org.apache.spark.streaming.api.python.TransformFunction.apply(PythonDStream.scala:78)
at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:179)
at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:179)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:247)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:247)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:247)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:246)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
17/03/30 20:55:39 INFO JobScheduler: Stopped JobScheduler
17/03/30 20:55:39 INFO TaskSetManager: Starting task 2.0 in stage 6616.0 (TID 39932, ip-10-0-0-15.us-west-2.compute.internal, partition 2, NODE_LOCAL, 10500 bytes)
17/03/30 20:55:39 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 39932 on executor id: 1 hostname: ip-10-0-0-15.us-west-2.compute.internal.
17/03/30 20:55:39 INFO TaskSetManager: Finished task 0.0 in stage 6616.0 (TID 39930) in 167 ms on ip-10-0-0-15.us-west-2.compute.internal (1/3)
17/03/30 20:55:39 INFO StreamingContext: StreamingContext stopped successfully
17/03/30 20:55:39 ERROR DAGScheduler: Failed to update accumulators for task 0
org.apache.spark.SparkException: EOF reached before Python server acknowledged
at org.apache.spark.api.python.PythonAccumulatorParam.addInPlace(PythonRDD.scala:914)
at org.apache.spark.api.python.PythonAccumulatorParam.addInPlace(PythonRDD.scala:872)
at org.apache.spark.util.LegacyAccumulatorWrapper.merge(AccumulatorV2.scala:494)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$updateAccumulators$1.apply(DAGScheduler.scala:1101)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$updateAccumulators$1.apply(DAGScheduler.scala:1093)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.updateAccumulators(DAGScheduler.scala:1093)
at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1169)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1664)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1622)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
17/03/30 20:55:39 INFO SparkContext: Invoking stop() from shutdown hook
17/03/30 20:55:39 INFO TaskSetManager: Finished task 1.0 in stage 6616.0 (TID 39931) in 168 ms on ip-10-0-0-15.us-west-2.compute.internal (2/3)
17/03/30 20:55:39 WARN StreamingContext: StreamingContext has already been stopped
17/03/30 20:55:39 ERROR DAGScheduler: Failed to update accumulators for task 1
java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.spark.api.python.PythonAccumulatorParam.addInPlace(PythonRDD.scala:910)
at org.apache.spark.api.python.PythonAccumulatorParam.addInPlace(PythonRDD.scala:872)
at org.apache.spark.util.LegacyAccumulatorWrapper.merge(AccumulatorV2.scala:494)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$updateAccumulators$1.apply(DAGScheduler.scala:1101)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$updateAccumulators$1.apply(DAGScheduler.scala:1093)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.updateAccumulators(DAGScheduler.scala:1093)
at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1169)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1664)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1622)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
17/03/30 20:55:39 INFO SparkUI: Stopped Spark web UI at http://10.0.0.11:4040
17/03/30 20:55:39 INFO DAGScheduler: Job 6580 failed: call at /home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py:2230, took 0.187174 s
17/03/30 20:55:39 INFO DAGScheduler: ResultStage 6616 (call at /home/centos/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py:2230) failed in 0.185 s
17/03/30 20:55:39 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@60a28560)
17/03/30 20:55:39 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(6580,1490907339090,JobFailed(org.apache.spark.SparkException: Job 6580 cancelled because SparkContext was shut down))
17/03/30 20:55:39 INFO MesosCoarseGrainedSchedulerBackend: Shutting down all executors
17/03/30 20:55:39 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
17/03/30 20:55:39 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 4 is now TASK_FINISHED
17/03/30 20:55:39 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 1 is now TASK_FINISHED
17/03/30 20:55:39 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 2 is now TASK_FINISHED
17/03/30 20:55:39 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 3 is now TASK_FINISHED
17/03/30 20:55:39 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 0 is now TASK_FINISHED
I0330 20:55:39.595566 16771 sched.cpp:1995] Asked to stop the driver
I0330 20:55:39.595631 30629 sched.cpp:1187] Stopping framework 509dddcf-620b-4b87-a81c-138be21343b7-0207
17/03/30 20:55:39 INFO MesosCoarseGrainedSchedulerBackend: driver.run() returned with code DRIVER_STOPPED
17/03/30 20:55:39 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/03/30 20:55:39 INFO MemoryStore: MemoryStore cleared
17/03/30 20:55:39 INFO BlockManager: BlockManager stopped
17/03/30 20:55:39 INFO BlockManagerMaster: BlockManagerMaster stopped
17/03/30 20:55:39 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/03/30 20:55:39 INFO SparkContext: Successfully stopped SparkContext
17/03/30 20:55:39 INFO ShutdownHookManager: Shutdown hook called
17/03/30 20:55:39 INFO ShutdownHookManager: Deleting directory /tmp/spark-16b52805-ccab-41c3-8839-9e6a7d192ba2/pyspark-7bd07a37-f668-48d9-9018-7f07968a92a6
17/03/30 20:55:39 INFO ShutdownHookManager: Deleting directory /tmp/spark-16b52805-ccab-41c3-8839-9e6a7d192ba2
(release1_5)
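
On the server side, it helps to confirm how close Postgres is to its limit while the stream runs. A minimal diagnostic sketch, again using psycopg2 (already a dependency in the traceback); the host, database name, and credentials are placeholders. Note that once the limit is reached even this check may fail unless it connects as a superuser, which keeps a few reserved slots (superuser_reserved_connections, default 3).

import psycopg2

# Placeholder DSN values; substitute the real host/db/user/password.
conn = psycopg2.connect(host='localhost', dbname='ihealth',
                        user='postgres', password='secret')
cur = conn.cursor()
cur.execute('SELECT count(*) FROM pg_stat_activity')  # live backends
print('open connections: %s' % cur.fetchone()[0])
cur.execute('SHOW max_connections')                   # server-wide cap
print('max_connections: %s' % cur.fetchone()[0])
cur.close()
conn.close()

Raising max_connections in postgresql.conf buys headroom, but the durable fix is keeping the client-side connection count bounded, as sketched after the first traceback above.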