@mooperd
Last active October 29, 2016 17:15

DC/OS v.1.8.4
Spark 2.0.1
Python Version: 3.4.3 (default, Sep 14 2016, 12:36:27)
[cqlsh 5.0.1 | Cassandra 2.2.8 | CQL spec 3.3.1 | Native protocol v4]

I have some data stored in a single-node Cassandra instance which I would like to access from Apache Spark running on DC/OS. I'm afraid I've become rather stuck on this problem.

cqlsh:dev> select count(*) from time;

 count
-------
 10000

(1 rows)

Warnings :
Aggregation query used without partition key

cqlsh:dev> select * from time limit 10;

 timestamp | day | hour | minute | month | second | timezone | year
-----------+-----+------+--------+-------+--------+----------+------
      4317 |  01 |   01 |     11 |    01 |     57 |   -00:00 | 1970
      3372 |  01 |   00 |     56 |    01 |     12 |   -00:00 | 1970
      1584 |  01 |   00 |     26 |    01 |     24 |   -00:00 | 1970
      7034 |  01 |   01 |     57 |    01 |     14 |   -00:00 | 1970
      9892 |  01 |   02 |     44 |    01 |     52 |   -00:00 | 1970
      9640 |  01 |   02 |     40 |    01 |     40 |   -00:00 | 1970
      9067 |  01 |   02 |     31 |    01 |     07 |   -00:00 | 1970
      4830 |  01 |   01 |     20 |    01 |     30 |   -00:00 | 1970
      2731 |  01 |   00 |     45 |    01 |     31 |   -00:00 | 1970
      5056 |  01 |   01 |     24 |    01 |     16 |   -00:00 | 1970

(10 rows)

I'm trying to read this data into Spark using the following program (squeeze.py), which I execute with:

dcos spark run --submit-args="https://raw.githubusercontent.com/mooperd/fun-functions/master/spark-from-cassandra/squeeze.py"

import sys
from pyspark.sql import SQLContext
from pyspark import SparkContext, SparkConf

sconf = SparkConf().set("spark.cassandra.connection.host", "192.168.65.1")
sc = SparkContext(conf=sconf)

version = sys.version
log4jLogger = sc._jvm.org.apache.log4j
LOGGER = log4jLogger.LogManager.getLogger(__name__)
LOGGER.info("pyspark script logger initialized")
LOGGER.info("Python Version: " + version)

sqlContext = SQLContext(sc)

df = sqlContext.read\
    .format("org.apache.spark.sql.cassandra")\
    .options(table="time", keyspace="dev")\
    .load()  # keep the DataFrame; chaining .show() here would leave df as None

df.show()
df.printSchema()
LOGGER.info("Row count: " + str(df.count()))  # count() returns an int, so it has no .show()

sc.stop()
exit()
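
For reference, spark-submit can resolve the connector from Maven at submit time with --packages, and dcos spark run passes such flags through --submit-args (the driver log below shows --submit-args being expanded into a ./bin/spark-submit invocation). A minimal sketch of that approach, assuming the Scala 2.11 build of spark-cassandra-connector 2.0.0-M3 is the appropriate artifact for this cluster; the coordinate and version are my assumption, not something verified here:

# sketch only, not a verified fix: resolve the connector at submit time
# (coordinate and version are assumptions; they must match the cluster's Spark/Scala build)
dcos spark run --submit-args="--packages com.datastax.spark:spark-cassandra-connector_2.11:2.0.0-M3 \
  --conf spark.cassandra.connection.host=192.168.65.1 \
  https://raw.githubusercontent.com/mooperd/fun-functions/master/spark-from-cassandra/squeeze.py"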

The program fails, giving this output:

stdout

--container="mesos-55156f29-842a-4396-aae1-dcc8e16f965e-S13.e1de2155-1e48-465c-b232-0c146de774cc" --docker="docker" --docker_socket="/var/run/docker.sock" --help="false" --initialize_driver_logging="true" --launcher_dir="/opt/mesosphere/packages/mesos--19a545facb66e57dfe2bb905a001a58b7eaf6004/libexec/mesos" --logbufsecs="0" --logging_level="INFO" --mapped_directory="/mnt/mesos/sandbox" --quiet="false" --sandbox_directory="/var/lib/mesos/slave/slaves/55156f29-842a-4396-aae1-dcc8e16f965e-S13/frameworks/55156f29-842a-4396-aae1-dcc8e16f965e-0002/executors/driver-20161029163237-0025/runs/e1de2155-1e48-465c-b232-0c146de774cc" --stop_timeout="20secs"
--container="mesos-55156f29-842a-4396-aae1-dcc8e16f965e-S13.e1de2155-1e48-465c-b232-0c146de774cc" --docker="docker" --docker_socket="/var/run/docker.sock" --help="false" --initialize_driver_logging="true" --launcher_dir="/opt/mesosphere/packages/mesos--19a545facb66e57dfe2bb905a001a58b7eaf6004/libexec/mesos" --logbufsecs="0" --logging_level="INFO" --mapped_directory="/mnt/mesos/sandbox" --quiet="false" --sandbox_directory="/var/lib/mesos/slave/slaves/55156f29-842a-4396-aae1-dcc8e16f965e-S13/frameworks/55156f29-842a-4396-aae1-dcc8e16f965e-0002/executors/driver-20161029163237-0025/runs/e1de2155-1e48-465c-b232-0c146de774cc" --stop_timeout="20secs"
Registered docker executor on 192.168.65.161
Starting task driver-20161029163237-0025
Traceback (most recent call last):
  File "/mnt/mesos/sandbox/squeeze.py", line 18, in <module>
    .options(table="time", keyspace="dev")\
  File "/opt/spark/dist/python/lib/pyspark.zip/pyspark/sql/dataframe.py", line 287, in show
  File "/opt/spark/dist/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py", line 933, in __call__
  File "/opt/spark/dist/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/opt/spark/dist/python/lib/py4j-0.10.1-src.zip/py4j/protocol.py", line 312, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o33.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 192.168.65.121): java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
	at com.datastax.spark.connector.util.CountingIterator.<init>(CountingIterator.scala:4)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD.compute(CassandraTableScanRDD.scala:336)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
	at org.apache.spark.scheduler.Task.run(Task.scala:85)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 19 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1437)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1437)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
	at scala.Option.foreach(Option.scala:257)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1659)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1618)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1607)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1871)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1884)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1897)
	at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:347)
	at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:39)
	at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2183)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
	at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2532)
	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2182)
	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2189)
	at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1925)
	at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1924)
	at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2562)
	at org.apache.spark.sql.Dataset.head(Dataset.scala:1924)
	at org.apache.spark.sql.Dataset.take(Dataset.scala:2139)
	at org.apache.spark.sql.Dataset.showString(Dataset.scala:239)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:280)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:211)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
	at com.datastax.spark.connector.util.CountingIterator.<init>(CountingIterator.scala:4)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD.compute(CassandraTableScanRDD.scala:336)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
	at org.apache.spark.scheduler.Task.run(Task.scala:85)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	... 1 more
Caused by: java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 19 more

stderr

I1029 09:32:37.685833 31606 logging.cpp:194] INFO level logging started!
I1029 09:32:37.686082 31606 fetcher.cpp:498] Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/55156f29-842a-4396-aae1-dcc8e16f965e-S13","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"https:\/\/raw.githubusercontent.com\/mooperd\/fun-functions\/master\/spark-from-cassandra\/squeeze.py"}}],"sandbox_directory":"\/var\/lib\/mesos\/slave\/slaves\/55156f29-842a-4396-aae1-dcc8e16f965e-S13\/frameworks\/55156f29-842a-4396-aae1-dcc8e16f965e-0002\/executors\/driver-20161029163237-0025\/runs\/e1de2155-1e48-465c-b232-0c146de774cc"}
I1029 09:32:37.687400 31606 fetcher.cpp:409] Fetching URI 'https://raw.githubusercontent.com/mooperd/fun-functions/master/spark-from-cassandra/squeeze.py'
I1029 09:32:37.687413 31606 fetcher.cpp:250] Fetching directly into the sandbox directory
I1029 09:32:37.687424 31606 fetcher.cpp:187] Fetching URI 'https://raw.githubusercontent.com/mooperd/fun-functions/master/spark-from-cassandra/squeeze.py'
I1029 09:32:37.687433 31606 fetcher.cpp:134] Downloading resource from 'https://raw.githubusercontent.com/mooperd/fun-functions/master/spark-from-cassandra/squeeze.py' to '/var/lib/mesos/slave/slaves/55156f29-842a-4396-aae1-dcc8e16f965e-S13/frameworks/55156f29-842a-4396-aae1-dcc8e16f965e-0002/executors/driver-20161029163237-0025/runs/e1de2155-1e48-465c-b232-0c146de774cc/squeeze.py'
W1029 09:32:37.840898 31606 fetcher.cpp:289] Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: https://raw.githubusercontent.com/mooperd/fun-functions/master/spark-from-cassandra/squeeze.py
I1029 09:32:37.841080 31606 fetcher.cpp:547] Fetched 'https://raw.githubusercontent.com/mooperd/fun-functions/master/spark-from-cassandra/squeeze.py' to '/var/lib/mesos/slave/slaves/55156f29-842a-4396-aae1-dcc8e16f965e-S13/frameworks/55156f29-842a-4396-aae1-dcc8e16f965e-0002/executors/driver-20161029163237-0025/runs/e1de2155-1e48-465c-b232-0c146de774cc/squeeze.py'
I1029 09:32:37.995684 31616 exec.cpp:161] Version: 1.0.1
I1029 09:32:37.998252 31621 exec.cpp:236] Executor registered on agent 55156f29-842a-4396-aae1-dcc8e16f965e-S13
I1029 09:32:37.998963 31621 docker.cpp:815] Running docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 1073741824 -e SPARK_EXECUTOR_OPTS=-Dspark.master=mesos://localhost:38024 -Dspark.app.name=squeeze.py -Dspark.driver.supervise=false -Dspark.submit.deployMode=cluster -Dspark.mesos.executor.docker.image=otternetworks/otter-magic:v5 -e SPARK_SCALA_VERSION=2.10 -e SPARK_JAVA_OPTS=-Dspark.mesos.executor.docker.image=otternetworks/otter-magic:v5  -e SPARK_SUBMIT_OPTS= -Dspark.mesos.driver.frameworkId=55156f29-842a-4396-aae1-dcc8e16f965e-0002-driver-20161029163237-0025  -e LIBPROCESS_IP=192.168.65.161 -e MESOS_SANDBOX=/mnt/mesos/sandbox -e MESOS_CONTAINER_NAME=mesos-55156f29-842a-4396-aae1-dcc8e16f965e-S13.e1de2155-1e48-465c-b232-0c146de774cc -v /var/lib/mesos/slave/slaves/55156f29-842a-4396-aae1-dcc8e16f965e-S13/frameworks/55156f29-842a-4396-aae1-dcc8e16f965e-0002/executors/driver-20161029163237-0025/runs/e1de2155-1e48-465c-b232-0c146de774cc:/mnt/mesos/sandbox --net host --entrypoint /bin/sh --name mesos-55156f29-842a-4396-aae1-dcc8e16f965e-S13.e1de2155-1e48-465c-b232-0c146de774cc otternetworks/otter-magic:v5 -c ./bin/spark-submit --name squeeze.py --master mesos://zk://master.mesos:2181/mesos --driver-cores 1.0 --driver-memory 1024M --conf spark.app.name=squeeze.py --conf spark.driver.supervise=false --conf spark.mesos.executor.docker.image=otternetworks/otter-magic:v5 $MESOS_SANDBOX/squeeze.py 
16/10/29 16:32:38 INFO SparkContext: Running Spark version 2.0.0
16/10/29 16:32:39 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/10/29 16:32:39 WARN SparkConf: 
SPARK_JAVA_OPTS was detected (set to '-Dspark.mesos.executor.docker.image=otternetworks/otter-magic:v5 ').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with conf/spark-defaults.conf to set defaults for an application
 - ./spark-submit with --driver-java-options to set -X options for a driver
 - spark.executor.extraJavaOptions to set -X options for executors
 - SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)
        
16/10/29 16:32:39 WARN SparkConf: Setting 'spark.executor.extraJavaOptions' to '-Dspark.mesos.executor.docker.image=otternetworks/otter-magic:v5 ' as a work-around.
16/10/29 16:32:39 WARN SparkConf: Setting 'spark.driver.extraJavaOptions' to '-Dspark.mesos.executor.docker.image=otternetworks/otter-magic:v5 ' as a work-around.
16/10/29 16:32:39 INFO SecurityManager: Changing view acls to: root
16/10/29 16:32:39 INFO SecurityManager: Changing modify acls to: root
16/10/29 16:32:39 INFO SecurityManager: Changing view acls groups to: 
16/10/29 16:32:39 INFO SecurityManager: Changing modify acls groups to: 
16/10/29 16:32:39 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
16/10/29 16:32:39 INFO Utils: Successfully started service 'sparkDriver' on port 41817.
16/10/29 16:32:39 INFO SparkEnv: Registering MapOutputTracker
16/10/29 16:32:39 INFO SparkEnv: Registering BlockManagerMaster
16/10/29 16:32:39 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-05de6998-ee7b-4f8b-89b7-913de98cb469
16/10/29 16:32:39 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
16/10/29 16:32:39 INFO SparkEnv: Registering OutputCommitCoordinator
16/10/29 16:32:39 INFO log: Logging initialized @1442ms
16/10/29 16:32:39 INFO Server: jetty-9.2.z-SNAPSHOT
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@20609709{/jobs,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@4fb22323{/jobs/json,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5f8f2ed4{/jobs/job,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@631e3e31{/jobs/job/json,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@47e22584{/stages,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@1997ad60{/stages/json,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3fc8d908{/stages/stage,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2d359776{/stages/stage/json,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7b2d4951{/stages/pool,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5f7517f9{/stages/pool/json,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@268c9586{/storage,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@27033806{/storage/json,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@36759a76{/storage/rdd,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@41c8899e{/storage/rdd/json,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3aa0484{/environment,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3d57cfff{/environment/json,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@276b4409{/executors,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@22eaad03{/executors/json,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@713713fe{/executors/threadDump,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@729c1210{/executors/threadDump/json,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@f2a5b46{/static,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@785f7d15{/,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@31be3903{/api,null,AVAILABLE}
16/10/29 16:32:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2320a1b6{/stages/stage/kill,null,AVAILABLE}
16/10/29 16:32:39 INFO ServerConnector: Started ServerConnector@30758970{HTTP/1.1}{192.168.65.161:4040}
16/10/29 16:32:39 INFO Server: Started @1533ms
16/10/29 16:32:39 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/10/29 16:32:39 INFO SparkUI: Bound SparkUI to 192.168.65.161, and started at http://192.168.65.161:4040
16/10/29 16:32:39 INFO Utils: Copying /mnt/mesos/sandbox/squeeze.py to /tmp/spark-4d35a997-6c8a-401b-9e86-1081960417f9/userFiles-d89a0af5-3216-4a2d-942f-4e512292a101/squeeze.py
16/10/29 16:32:39 INFO SparkContext: Added file file:/mnt/mesos/sandbox/squeeze.py at spark://192.168.65.161:41817/files/squeeze.py with timestamp 1477758759750
I1029 16:32:39.898113    90 sched.cpp:226] Version: 1.0.0
2016-10-29 16:32:39,899:6(0x7f1946232700):ZOO_INFO@log_env@726: Client environment:zookeeper.version=zookeeper C client 3.4.8
2016-10-29 16:32:39,899:6(0x7f1946232700):ZOO_INFO@log_env@730: Client environment:host.name=a6.dcos
2016-10-29 16:32:39,899:6(0x7f1946232700):ZOO_INFO@log_env@737: Client environment:os.name=Linux
2016-10-29 16:32:39,899:6(0x7f1946232700):ZOO_INFO@log_env@738: Client environment:os.arch=3.10.0-327.36.1.el7.x86_64
2016-10-29 16:32:39,899:6(0x7f1946232700):ZOO_INFO@log_env@739: Client environment:os.version=#1 SMP Sun Sep 18 13:04:29 UTC 2016
2016-10-29 16:32:39,899:6(0x7f1946232700):ZOO_INFO@log_env@747: Client environment:user.name=(null)
2016-10-29 16:32:39,899:6(0x7f1946232700):ZOO_INFO@log_env@755: Client environment:user.home=/root
2016-10-29 16:32:39,899:6(0x7f1946232700):ZOO_INFO@log_env@767: Client environment:user.dir=/opt/spark/dist
2016-10-29 16:32:39,899:6(0x7f1946232700):ZOO_INFO@zookeeper_init@800: Initiating client connection, host=master.mesos:2181 sessionTimeout=10000 watcher=0x7f195300f340 sessionId=0 sessionPasswd=<null> context=0x7f198c00b810 flags=0
2016-10-29 16:32:39,902:6(0x7f1944728700):ZOO_INFO@check_events@1728: initiated connection to server [192.168.65.90:2181]
2016-10-29 16:32:39,905:6(0x7f1944728700):ZOO_INFO@check_events@1775: session establishment complete on server [192.168.65.90:2181], sessionId=0x157ccbfe58b774a, negotiated timeout=10000
I1029 16:32:39.905381    87 group.cpp:349] Group process (group(1)@192.168.65.161:40775) connected to ZooKeeper
I1029 16:32:39.905431    87 group.cpp:837] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I1029 16:32:39.905454    87 group.cpp:427] Trying to create path '/mesos' in ZooKeeper
I1029 16:32:39.911047    87 detector.cpp:152] Detected a new leader: (id='2')
I1029 16:32:39.911146    87 group.cpp:706] Trying to get '/mesos/json.info_0000000002' in ZooKeeper
I1029 16:32:39.912006    87 zookeeper.cpp:259] A new leading master (UPID=master@192.168.65.90:5050) is detected
I1029 16:32:39.912111    87 sched.cpp:330] New master detected at master@192.168.65.90:5050
I1029 16:32:39.912370    87 sched.cpp:341] No credentials provided. Attempting to register without authentication
I1029 16:32:39.915365    87 sched.cpp:743] Framework registered with 55156f29-842a-4396-aae1-dcc8e16f965e-0002-driver-20161029163237-0025
16/10/29 16:32:39 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41569.
16/10/29 16:32:39 INFO NettyBlockTransferService: Server created on 192.168.65.161:41569
16/10/29 16:32:39 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.65.161, 41569)
16/10/29 16:32:39 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.65.161:41569 with 366.3 MB RAM, BlockManagerId(driver, 192.168.65.161, 41569)
16/10/29 16:32:39 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.65.161, 41569)
16/10/29 16:32:40 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@1d0687dd{/metrics/json,null,AVAILABLE}
16/10/29 16:32:40 INFO MesosCoarseGrainedSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/10/29 16:32:40 INFO __main__: pyspark script logger initialized
16/10/29 16:32:40 INFO __main__: Python Version: 3.4.3 (default, Sep 14 2016, 12:36:27) 
[GCC 4.8.4]
16/10/29 16:32:40 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@386c80f5{/SQL,null,AVAILABLE}
16/10/29 16:32:40 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7f119339{/SQL/json,null,AVAILABLE}
16/10/29 16:32:40 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@25d18f4d{/SQL/execution,null,AVAILABLE}
16/10/29 16:32:40 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@183e10be{/SQL/execution/json,null,AVAILABLE}
16/10/29 16:32:40 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@52187ed6{/static/sql,null,AVAILABLE}
16/10/29 16:32:40 INFO SharedState: Warehouse path is 'file:/opt/spark/dist/spark-warehouse'.
16/10/29 16:32:40 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 4 is now TASK_RUNNING
16/10/29 16:32:40 INFO NettyUtil: Found Netty's native epoll transport in the classpath, using it
16/10/29 16:32:40 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 3 is now TASK_RUNNING
16/10/29 16:32:40 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 0 is now TASK_RUNNING
16/10/29 16:32:41 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 1 is now TASK_RUNNING
16/10/29 16:32:41 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 2 is now TASK_RUNNING
16/10/29 16:32:41 INFO Cluster: New Cassandra host /192.168.65.1:9042 added
16/10/29 16:32:41 INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
16/10/29 16:32:43 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.65.151:33624) with ID 2
16/10/29 16:32:43 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.65.111:58462) with ID 0
16/10/29 16:32:43 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.65.151:40535 with 366.3 MB RAM, BlockManagerId(2, 192.168.65.151, 40535)
16/10/29 16:32:43 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.65.111:43733 with 366.3 MB RAM, BlockManagerId(0, 192.168.65.111, 43733)
16/10/29 16:32:43 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.65.161:45768) with ID 4
16/10/29 16:32:43 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.65.141:39178) with ID 1
16/10/29 16:32:43 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.65.161:42447 with 366.3 MB RAM, BlockManagerId(4, 192.168.65.161, 42447)
16/10/29 16:32:43 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.65.141:43257 with 366.3 MB RAM, BlockManagerId(1, 192.168.65.141, 43257)
16/10/29 16:32:43 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.65.121:59668) with ID 3
16/10/29 16:32:43 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.65.121:41066 with 366.3 MB RAM, BlockManagerId(3, 192.168.65.121, 41066)
16/10/29 16:32:43 INFO CassandraSourceRelation: Input Predicates: []
16/10/29 16:32:43 INFO CassandraSourceRelation: Input Predicates: []
16/10/29 16:32:44 INFO CodeGenerator: Code generated in 157.548159 ms
16/10/29 16:32:44 INFO SparkContext: Starting job: showString at NativeMethodAccessorImpl.java:-2
16/10/29 16:32:44 INFO DAGScheduler: Got job 0 (showString at NativeMethodAccessorImpl.java:-2) with 1 output partitions
16/10/29 16:32:44 INFO DAGScheduler: Final stage: ResultStage 0 (showString at NativeMethodAccessorImpl.java:-2)
16/10/29 16:32:44 INFO DAGScheduler: Parents of final stage: List()
16/10/29 16:32:44 INFO DAGScheduler: Missing parents: List()
16/10/29 16:32:44 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[4] at showString at NativeMethodAccessorImpl.java:-2), which has no missing parents
16/10/29 16:32:44 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 13.2 KB, free 366.3 MB)
16/10/29 16:32:44 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 6.5 KB, free 366.3 MB)
16/10/29 16:32:44 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.65.161:41569 (size: 6.5 KB, free: 366.3 MB)
16/10/29 16:32:44 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1012
16/10/29 16:32:44 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[4] at showString at NativeMethodAccessorImpl.java:-2)
16/10/29 16:32:44 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
16/10/29 16:32:44 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.65.141, partition 0, ANY, 7950 bytes)
16/10/29 16:32:44 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 0 on executor id: 1 hostname: 192.168.65.141.
16/10/29 16:32:44 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.65.141:43257 (size: 6.5 KB, free: 366.3 MB)
16/10/29 16:32:46 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 192.168.65.141): java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
	at com.datastax.spark.connector.util.CountingIterator.<init>(CountingIterator.scala:4)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD.compute(CassandraTableScanRDD.scala:336)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
	at org.apache.spark.scheduler.Task.run(Task.scala:85)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 19 more

16/10/29 16:32:46 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 1, 192.168.65.141, partition 0, ANY, 7950 bytes)
16/10/29 16:32:46 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 1 on executor id: 1 hostname: 192.168.65.141.
16/10/29 16:32:46 WARN TransportChannelHandler: Exception in connection from /192.168.65.141:39178
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
	at java.lang.Thread.run(Thread.java:745)
16/10/29 16:32:46 ERROR TaskSchedulerImpl: Lost executor 1 on 192.168.65.141: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/10/29 16:32:46 WARN TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1, 192.168.65.141): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/10/29 16:32:46 INFO DAGScheduler: Executor lost: 1 (epoch 0)
16/10/29 16:32:46 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 2, 192.168.65.151, partition 0, ANY, 7950 bytes)
16/10/29 16:32:46 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
16/10/29 16:32:46 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 2 on executor id: 2 hostname: 192.168.65.151.
16/10/29 16:32:46 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(1, 192.168.65.141, 43257)
16/10/29 16:32:46 INFO BlockManagerMaster: Removed 1 successfully in removeExecutor
16/10/29 16:32:46 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 1 is now TASK_FAILED
16/10/29 16:32:46 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
16/10/29 16:32:46 INFO BlockManagerMaster: Removal of executor 1 requested
16/10/29 16:32:46 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 1
16/10/29 16:32:46 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.65.151:40535 (size: 6.5 KB, free: 366.3 MB)
16/10/29 16:32:48 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 2) on executor 192.168.65.151: java.lang.NoClassDefFoundError (scala/collection/GenTraversableOnce$class) [duplicate 1]
16/10/29 16:32:48 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 3, 192.168.65.121, partition 0, ANY, 7950 bytes)
16/10/29 16:32:48 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 3 on executor id: 3 hostname: 192.168.65.121.
16/10/29 16:32:48 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 5 is now TASK_RUNNING
16/10/29 16:32:48 ERROR TaskSchedulerImpl: Lost executor 2 on 192.168.65.151: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/10/29 16:32:48 INFO DAGScheduler: Executor lost: 2 (epoch 1)
16/10/29 16:32:48 INFO BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
16/10/29 16:32:48 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(2, 192.168.65.151, 40535)
16/10/29 16:32:48 INFO BlockManagerMaster: Removed 2 successfully in removeExecutor
16/10/29 16:32:48 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 2 is now TASK_FAILED
16/10/29 16:32:48 INFO BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
16/10/29 16:32:48 INFO BlockManagerMaster: Removal of executor 2 requested
16/10/29 16:32:48 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 2
16/10/29 16:32:49 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.65.141:39194) with ID 5
16/10/29 16:32:49 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.65.141:35218 with 366.3 MB RAM, BlockManagerId(5, 192.168.65.141, 35218)
16/10/29 16:32:49 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 6 is now TASK_RUNNING
16/10/29 16:32:51 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.65.151:33640) with ID 6
16/10/29 16:32:51 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.65.151:42933 with 366.3 MB RAM, BlockManagerId(6, 192.168.65.151, 42933)
16/10/29 16:32:51 INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
2016-10-29 16:32:56,612:6(0x7f1944728700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 17ms
2016-10-29 16:32:59,958:6(0x7f1944728700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 11ms
16/10/29 16:33:03 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.65.121:41066 (size: 6.5 KB, free: 366.3 MB)
16/10/29 16:33:05 WARN TaskSetManager: Lost task 0.3 in stage 0.0 (TID 3, 192.168.65.121): java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
	at com.datastax.spark.connector.util.CountingIterator.<init>(CountingIterator.scala:4)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD.compute(CassandraTableScanRDD.scala:336)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
	at org.apache.spark.scheduler.Task.run(Task.scala:85)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 19 more

16/10/29 16:33:05 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
16/10/29 16:33:05 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/10/29 16:33:05 INFO TaskSchedulerImpl: Cancelling stage 0
16/10/29 16:33:05 INFO DAGScheduler: ResultStage 0 (showString at NativeMethodAccessorImpl.java:-2) failed in 20.866 s
16/10/29 16:33:05 INFO DAGScheduler: Job 0 failed: showString at NativeMethodAccessorImpl.java:-2, took 21.003918 s
16/10/29 16:33:05 INFO SparkContext: Invoking stop() from shutdown hook
16/10/29 16:33:05 INFO SerialShutdownHooks: Successfully executed shutdown hook: Clearing session cache for C* connector
16/10/29 16:33:05 INFO ServerConnector: Stopped ServerConnector@30758970{HTTP/1.1}{192.168.65.161:4040}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@2320a1b6{/stages/stage/kill,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@31be3903{/api,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@785f7d15{/,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@f2a5b46{/static,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@729c1210{/executors/threadDump/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@713713fe{/executors/threadDump,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@22eaad03{/executors/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@276b4409{/executors,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@3d57cfff{/environment/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@3aa0484{/environment,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@41c8899e{/storage/rdd/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@36759a76{/storage/rdd,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@27033806{/storage/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@268c9586{/storage,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@5f7517f9{/stages/pool/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@7b2d4951{/stages/pool,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@2d359776{/stages/stage/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@3fc8d908{/stages/stage,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@1997ad60{/stages/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@47e22584{/stages,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@631e3e31{/jobs/job/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@5f8f2ed4{/jobs/job,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@4fb22323{/jobs/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@20609709{/jobs,null,UNAVAILABLE}
16/10/29 16:33:05 INFO SparkUI: Stopped Spark web UI at http://192.168.65.161:4040
16/10/29 16:33:05 INFO MesosCoarseGrainedSchedulerBackend: Shutting down all executors
16/10/29 16:33:05 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
16/10/29 16:33:05 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 6 is now TASK_FINISHED
16/10/29 16:33:05 WARN NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(6,Executor finished with state FINISHED)] in 1 attempts
org.apache.spark.SparkException: Exception thrown in awaitResult
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
	at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:418)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.executorTerminated(MesosCoarseGrainedSchedulerBackend.scala:596)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.statusUpdate(MesosCoarseGrainedSchedulerBackend.scala:533)
Caused by: org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
	at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:152)
	at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:127)
	at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:225)
	at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:508)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
	... 4 more
16/10/29 16:33:05 WARN TransportChannelHandler: Exception in connection from /192.168.65.121:59668
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
	at java.lang.Thread.run(Thread.java:745)
16/10/29 16:33:08 WARN NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(6,Executor finished with state FINISHED)] in 2 attempts
org.apache.spark.SparkException: Exception thrown in awaitResult
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
	at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:418)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.executorTerminated(MesosCoarseGrainedSchedulerBackend.scala:596)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.statusUpdate(MesosCoarseGrainedSchedulerBackend.scala:533)
Caused by: org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
	at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:152)
	at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:127)
	at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:225)
	at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:508)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
	... 4 more
16/10/29 16:33:11 WARN NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(6,Executor finished with state FINISHED)] in 3 attempts
org.apache.spark.SparkException: Exception thrown in awaitResult
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
	at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:418)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.executorTerminated(MesosCoarseGrainedSchedulerBackend.scala:596)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.statusUpdate(MesosCoarseGrainedSchedulerBackend.scala:533)
Caused by: org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
	at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:152)
	at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:127)
	at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:225)
	at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:508)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
	... 4 more
Exception in thread "Thread-40" org.apache.spark.SparkException: Error notifying standalone scheduler's driver endpoint
	at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:421)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.executorTerminated(MesosCoarseGrainedSchedulerBackend.scala:596)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.statusUpdate(MesosCoarseGrainedSchedulerBackend.scala:533)
Caused by: org.apache.spark.SparkException: Error sending message [message = RemoveExecutor(6,Executor finished with state FINISHED)]
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:119)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
	at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:418)
	... 2 more
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
	... 4 more
Caused by: org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
	at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:152)
	at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:127)
	at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:225)
	at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:508)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
	... 4 more
I1029 16:33:11.446992    84 sched.cpp:2021] Asked to abort the driver
I1029 16:33:11.447266    84 sched.cpp:1217] Aborting framework '55156f29-842a-4396-aae1-dcc8e16f965e-0002-driver-20161029163237-0025'
16/10/29 16:33:11 INFO MesosCoarseGrainedSchedulerBackend: driver.run() returned with code DRIVER_ABORTED
2016-10-29 16:33:13,317:6(0x7f1944728700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 12ms
16/10/29 16:33:15 WARN MesosCoarseGrainedSchedulerBackend: Timed out waiting for 5 remaining executors to terminate within 10000 ms. This may leave temporary files on the mesos nodes.
I1029 16:33:15.285848   138 sched.cpp:1987] Asked to stop the driver
I1029 16:33:15.286048    82 sched.cpp:1187] Stopping framework '55156f29-842a-4396-aae1-dcc8e16f965e-0002-driver-20161029163237-0025'
16/10/29 16:33:15 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/10/29 16:33:15 INFO MemoryStore: MemoryStore cleared
16/10/29 16:33:15 INFO BlockManager: BlockManager stopped
16/10/29 16:33:15 INFO BlockManagerMaster: BlockManagerMaster stopped
16/10/29 16:33:15 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/10/29 16:33:15 INFO SparkContext: Successfully stopped SparkContext
16/10/29 16:33:15 INFO ShutdownHookManager: Shutdown hook called
16/10/29 16:33:15 INFO ShutdownHookManager: Deleting directory /tmp/spark-4d35a997-6c8a-401b-9e86-1081960417f9/pyspark-407a7204-2eea-4d03-9bbe-bc0c1cc3fc60
16/10/29 16:33:15 INFO ShutdownHookManager: Deleting directory /tmp/spark-4d35a997-6c8a-401b-9e86-1081960417f9
16/10/29 16:32:44 INFO DAGScheduler: Parents of final stage: List()
16/10/29 16:32:44 INFO DAGScheduler: Missing parents: List()
16/10/29 16:32:44 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[4] at showString at NativeMethodAccessorImpl.java:-2), which has no missing parents
16/10/29 16:32:44 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 13.2 KB, free 366.3 MB)
16/10/29 16:32:44 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 6.5 KB, free 366.3 MB)
16/10/29 16:32:44 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.65.161:41569 (size: 6.5 KB, free: 366.3 MB)
16/10/29 16:32:44 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1012
16/10/29 16:32:44 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[4] at showString at NativeMethodAccessorImpl.java:-2)
16/10/29 16:32:44 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
16/10/29 16:32:44 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.65.141, partition 0, ANY, 7950 bytes)
16/10/29 16:32:44 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 0 on executor id: 1 hostname: 192.168.65.141.
16/10/29 16:32:44 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.65.141:43257 (size: 6.5 KB, free: 366.3 MB)
16/10/29 16:32:46 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 192.168.65.141): java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
	at com.datastax.spark.connector.util.CountingIterator.<init>(CountingIterator.scala:4)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD.compute(CassandraTableScanRDD.scala:336)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
	at org.apache.spark.scheduler.Task.run(Task.scala:85)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 19 more

16/10/29 16:32:46 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 1, 192.168.65.141, partition 0, ANY, 7950 bytes)
16/10/29 16:32:46 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 1 on executor id: 1 hostname: 192.168.65.141.
16/10/29 16:32:46 WARN TransportChannelHandler: Exception in connection from /192.168.65.141:39178
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
	at java.lang.Thread.run(Thread.java:745)
16/10/29 16:32:46 ERROR TaskSchedulerImpl: Lost executor 1 on 192.168.65.141: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/10/29 16:32:46 WARN TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1, 192.168.65.141): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/10/29 16:32:46 INFO DAGScheduler: Executor lost: 1 (epoch 0)
16/10/29 16:32:46 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 2, 192.168.65.151, partition 0, ANY, 7950 bytes)
16/10/29 16:32:46 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
16/10/29 16:32:46 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 2 on executor id: 2 hostname: 192.168.65.151.
16/10/29 16:32:46 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(1, 192.168.65.141, 43257)
16/10/29 16:32:46 INFO BlockManagerMaster: Removed 1 successfully in removeExecutor
16/10/29 16:32:46 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 1 is now TASK_FAILED
16/10/29 16:32:46 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
16/10/29 16:32:46 INFO BlockManagerMaster: Removal of executor 1 requested
16/10/29 16:32:46 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 1
16/10/29 16:32:46 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.65.151:40535 (size: 6.5 KB, free: 366.3 MB)
16/10/29 16:32:48 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 2) on executor 192.168.65.151: java.lang.NoClassDefFoundError (scala/collection/GenTraversableOnce$class) [duplicate 1]
16/10/29 16:32:48 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 3, 192.168.65.121, partition 0, ANY, 7950 bytes)
16/10/29 16:32:48 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 3 on executor id: 3 hostname: 192.168.65.121.
16/10/29 16:32:48 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 5 is now TASK_RUNNING
16/10/29 16:32:48 ERROR TaskSchedulerImpl: Lost executor 2 on 192.168.65.151: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/10/29 16:32:48 INFO DAGScheduler: Executor lost: 2 (epoch 1)
16/10/29 16:32:48 INFO BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
16/10/29 16:32:48 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(2, 192.168.65.151, 40535)
16/10/29 16:32:48 INFO BlockManagerMaster: Removed 2 successfully in removeExecutor
16/10/29 16:32:48 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 2 is now TASK_FAILED
16/10/29 16:32:48 INFO BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
16/10/29 16:32:48 INFO BlockManagerMaster: Removal of executor 2 requested
16/10/29 16:32:48 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 2
16/10/29 16:32:49 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.65.141:39194) with ID 5
16/10/29 16:32:49 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.65.141:35218 with 366.3 MB RAM, BlockManagerId(5, 192.168.65.141, 35218)
16/10/29 16:32:49 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 6 is now TASK_RUNNING
16/10/29 16:32:51 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.65.151:33640) with ID 6
16/10/29 16:32:51 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.65.151:42933 with 366.3 MB RAM, BlockManagerId(6, 192.168.65.151, 42933)
16/10/29 16:32:51 INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
2016-10-29 16:32:56,612:6(0x7f1944728700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 17ms
2016-10-29 16:32:59,958:6(0x7f1944728700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 11ms
16/10/29 16:33:03 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.65.121:41066 (size: 6.5 KB, free: 366.3 MB)
16/10/29 16:33:05 WARN TaskSetManager: Lost task 0.3 in stage 0.0 (TID 3, 192.168.65.121): java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
	at com.datastax.spark.connector.util.CountingIterator.<init>(CountingIterator.scala:4)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD.compute(CassandraTableScanRDD.scala:336)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
	at org.apache.spark.scheduler.Task.run(Task.scala:85)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 19 more

16/10/29 16:33:05 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
16/10/29 16:33:05 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/10/29 16:33:05 INFO TaskSchedulerImpl: Cancelling stage 0
16/10/29 16:33:05 INFO DAGScheduler: ResultStage 0 (showString at NativeMethodAccessorImpl.java:-2) failed in 20.866 s
16/10/29 16:33:05 INFO DAGScheduler: Job 0 failed: showString at NativeMethodAccessorImpl.java:-2, took 21.003918 s
16/10/29 16:33:05 INFO SparkContext: Invoking stop() from shutdown hook
16/10/29 16:33:05 INFO SerialShutdownHooks: Successfully executed shutdown hook: Clearing session cache for C* connector
16/10/29 16:33:05 INFO ServerConnector: Stopped ServerConnector@30758970{HTTP/1.1}{192.168.65.161:4040}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@2320a1b6{/stages/stage/kill,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@31be3903{/api,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@785f7d15{/,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@f2a5b46{/static,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@729c1210{/executors/threadDump/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@713713fe{/executors/threadDump,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@22eaad03{/executors/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@276b4409{/executors,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@3d57cfff{/environment/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@3aa0484{/environment,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@41c8899e{/storage/rdd/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@36759a76{/storage/rdd,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@27033806{/storage/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@268c9586{/storage,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@5f7517f9{/stages/pool/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@7b2d4951{/stages/pool,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@2d359776{/stages/stage/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@3fc8d908{/stages/stage,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@1997ad60{/stages/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@47e22584{/stages,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@631e3e31{/jobs/job/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@5f8f2ed4{/jobs/job,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@4fb22323{/jobs/json,null,UNAVAILABLE}
16/10/29 16:33:05 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@20609709{/jobs,null,UNAVAILABLE}
16/10/29 16:33:05 INFO SparkUI: Stopped Spark web UI at http://192.168.65.161:4040
16/10/29 16:33:05 INFO MesosCoarseGrainedSchedulerBackend: Shutting down all executors
16/10/29 16:33:05 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
16/10/29 16:33:05 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 6 is now TASK_FINISHED
16/10/29 16:33:05 WARN NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(6,Executor finished with state FINISHED)] in 1 attempts
org.apache.spark.SparkException: Exception thrown in awaitResult
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
	at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:418)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.executorTerminated(MesosCoarseGrainedSchedulerBackend.scala:596)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.statusUpdate(MesosCoarseGrainedSchedulerBackend.scala:533)
Caused by: org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
	at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:152)
	at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:127)
	at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:225)
	at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:508)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
	... 4 more
16/10/29 16:33:05 WARN TransportChannelHandler: Exception in connection from /192.168.65.121:59668
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
	at java.lang.Thread.run(Thread.java:745)
16/10/29 16:33:08 WARN NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(6,Executor finished with state FINISHED)] in 2 attempts
org.apache.spark.SparkException: Exception thrown in awaitResult
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
	at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:418)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.executorTerminated(MesosCoarseGrainedSchedulerBackend.scala:596)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.statusUpdate(MesosCoarseGrainedSchedulerBackend.scala:533)
Caused by: org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
	at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:152)
	at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:127)
	at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:225)
	at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:508)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
	... 4 more
16/10/29 16:33:11 WARN NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(6,Executor finished with state FINISHED)] in 3 attempts
org.apache.spark.SparkException: Exception thrown in awaitResult
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
	at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:418)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.executorTerminated(MesosCoarseGrainedSchedulerBackend.scala:596)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.statusUpdate(MesosCoarseGrainedSchedulerBackend.scala:533)
Caused by: org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
	at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:152)
	at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:127)
	at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:225)
	at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:508)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
	... 4 more
Exception in thread "Thread-40" org.apache.spark.SparkException: Error notifying standalone scheduler's driver endpoint
	at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:421)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.executorTerminated(MesosCoarseGrainedSchedulerBackend.scala:596)
	at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.statusUpdate(MesosCoarseGrainedSchedulerBackend.scala:533)
Caused by: org.apache.spark.SparkException: Error sending message [message = RemoveExecutor(6,Executor finished with state FINISHED)]
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:119)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
	at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:418)
	... 2 more
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
	... 4 more
Caused by: org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
	at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:152)
	at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:127)
	at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:225)
	at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:508)
	at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
	... 4 more
I1029 16:33:11.446992    84 sched.cpp:2021] Asked to abort the driver
I1029 16:33:11.447266    84 sched.cpp:1217] Aborting framework '55156f29-842a-4396-aae1-dcc8e16f965e-0002-driver-20161029163237-0025'
16/10/29 16:33:11 INFO MesosCoarseGrainedSchedulerBackend: driver.run() returned with code DRIVER_ABORTED
2016-10-29 16:33:13,317:6(0x7f1944728700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 12ms
16/10/29 16:33:15 WARN MesosCoarseGrainedSchedulerBackend: Timed out waiting for 5 remaining executors to terminate within 10000 ms. This may leave temporary files on the mesos nodes.
I1029 16:33:15.285848   138 sched.cpp:1987] Asked to stop the driver
I1029 16:33:15.286048    82 sched.cpp:1187] Stopping framework '55156f29-842a-4396-aae1-dcc8e16f965e-0002-driver-20161029163237-0025'
16/10/29 16:33:15 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/10/29 16:33:15 INFO MemoryStore: MemoryStore cleared
16/10/29 16:33:15 INFO BlockManager: BlockManager stopped
16/10/29 16:33:15 INFO BlockManagerMaster: BlockManagerMaster stopped
16/10/29 16:33:15 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/10/29 16:33:15 INFO SparkContext: Successfully stopped SparkContext
16/10/29 16:33:15 INFO ShutdownHookManager: Shutdown hook called
16/10/29 16:33:15 INFO ShutdownHookManager: Deleting directory /tmp/spark-4d35a997-6c8a-401b-9e86-1081960417f9/pyspark-407a7204-2eea-4d03-9bbe-bc0c1cc3fc60
16/10/29 16:33:15 INFO ShutdownHookManager: Deleting directory /tmp/spark-4d35a997-6c8a-401b-9e86-1081960417f9
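
For what it's worth, the repeated `java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class` on every executor looks like a Scala binary-version mismatch rather than anything Cassandra-specific: Spark 2.0.1 is built against Scala 2.11, and this missing class is typically what you see when a jar compiled for Scala 2.10 (here, apparently the spark-cassandra-connector loaded by the executors) runs on a 2.11 runtime. A possible fix, sketched below, is to resubmit with a connector built for Scala 2.11 pulled in via `--packages` - assuming `--submit-args` passes that flag straight through to spark-submit and the driver container can reach Maven Central; the exact connector version shown is only an example I have not verified against this cluster:

    # resubmit with a Scala 2.11 build of the connector so the executors load
    # classes compiled against the same Scala version as Spark 2.0.1
    dcos spark run --submit-args="--packages com.datastax.spark:spark-cassandra-connector_2.11:2.0.0-M3 <URL of squeeze.py>"

If Maven Central isn't reachable from the cluster, shipping a 2.11-built connector jar with `--jars` instead should have the same effect.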