Created October 25, 2016 01:10
[ksaha@mesos101 SampleApp]$ spark-submit --class "SampleApp" --master mesos://zk://10.10.40.138:2181/mesos --jars lib/spark-cassandra-connector-1.6.1-s_2.10.jar,lib/cassandra-driver-core-3.1.1.jar, target/scala-2.10/sampleapp_2.10-1.0.jar
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/10/24 21:07:33 INFO SparkContext: Running Spark version 1.6.2
16/10/24 21:07:33 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/10/24 21:07:33 INFO SecurityManager: Changing view acls to: ksaha
16/10/24 21:07:33 INFO SecurityManager: Changing modify acls to: ksaha
16/10/24 21:07:33 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ksaha); users with modify permissions: Set(ksaha)
16/10/24 21:07:34 INFO Utils: Successfully started service 'sparkDriver' on port 46646.
16/10/24 21:07:34 INFO Slf4jLogger: Slf4jLogger started
16/10/24 21:07:34 INFO Remoting: Starting remoting
16/10/24 21:07:34 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.10.40.138:35362]
16/10/24 21:07:34 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 35362.
16/10/24 21:07:34 INFO SparkEnv: Registering MapOutputTracker
16/10/24 21:07:34 INFO SparkEnv: Registering BlockManagerMaster
16/10/24 21:07:34 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-aa49d065-8ae2-411e-aa5c-ceb507fe4086
16/10/24 21:07:34 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/10/24 21:07:34 INFO SparkEnv: Registering OutputCommitCoordinator
16/10/24 21:07:34 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/10/24 21:07:34 INFO SparkUI: Started SparkUI at http://10.10.40.138:4040
16/10/24 21:07:34 INFO HttpFileServer: HTTP File server directory is /tmp/spark-0063f507-51be-4f5f-9a4a-277abd3a0bfc/httpd-d9d3d1cc-a9d6-478e-89f5-cece291b7495
16/10/24 21:07:34 INFO HttpServer: Starting HTTP Server
16/10/24 21:07:34 INFO Utils: Successfully started service 'HTTP file server' on port 44765.
16/10/24 21:07:34 INFO SparkContext: Added JAR file:/home/ksaha/spark_sbt_eclipse_cassandra/SampleApp/lib/spark-cassandra-connector-1.6.1-s_2.10.jar at http://10.10.40.138:44765/jars/spark-cassandra-connector-1.6.1-s_2.10.jar with timestamp 1477357654864
16/10/24 21:07:34 INFO SparkContext: Added JAR file:/home/ksaha/spark_sbt_eclipse_cassandra/SampleApp/lib/cassandra-driver-core-3.1.1.jar at http://10.10.40.138:44765/jars/cassandra-driver-core-3.1.1.jar with timestamp 1477357654866
16/10/24 21:07:34 INFO SparkContext: Added JAR file:/home/ksaha/spark_sbt_eclipse_cassandra/SampleApp/target/scala-2.10/sampleapp_2.10-1.0.jar at http://10.10.40.138:44765/jars/sampleapp_2.10-1.0.jar with timestamp 1477357654866
I1024 21:07:34.958196 29277 sched.cpp:226] Version: 1.0.1
2016-10-24 21:07:34,958:29173(0x7f92ea6ce700):ZOO_INFO@log_env@726: Client environment:zookeeper.version=zookeeper C client 3.4.8
2016-10-24 21:07:34,958:29173(0x7f92ea6ce700):ZOO_INFO@log_env@730: Client environment:host.name=mesos101.itp.objectfrontier.com
2016-10-24 21:07:34,958:29173(0x7f92ea6ce700):ZOO_INFO@log_env@737: Client environment:os.name=Linux
2016-10-24 21:07:34,958:29173(0x7f92ea6ce700):ZOO_INFO@log_env@738: Client environment:os.arch=3.10.0-327.36.1.el7.x86_64
2016-10-24 21:07:34,958:29173(0x7f92ea6ce700):ZOO_INFO@log_env@739: Client environment:os.version=#1 SMP Sun Sep 18 13:04:29 UTC 2016
2016-10-24 21:07:34,958:29173(0x7f92ea6ce700):ZOO_INFO@log_env@747: Client environment:user.name=ksaha
2016-10-24 21:07:34,958:29173(0x7f92ea6ce700):ZOO_INFO@log_env@755: Client environment:user.home=/home/ksaha
2016-10-24 21:07:34,958:29173(0x7f92ea6ce700):ZOO_INFO@log_env@767: Client environment:user.dir=/home/ksaha/spark_sbt_eclipse_cassandra/SampleApp
2016-10-24 21:07:34,958:29173(0x7f92ea6ce700):ZOO_INFO@zookeeper_init@800: Initiating client connection, host=10.10.40.138:2181 sessionTimeout=10000 watcher=0x7f92f26bc300 sessionId=0 sessionPasswd=<null> context=0x7f9370002f80 flags=0
2016-10-24 21:07:34,959:29173(0x7f92e7dc8700):ZOO_INFO@check_events@1728: initiated connection to server [10.10.40.138:2181]
2016-10-24 21:07:34,971:29173(0x7f92e7dc8700):ZOO_INFO@check_events@1775: session establishment complete on server [10.10.40.138:2181], sessionId=0x157f59e05bd008b, negotiated timeout=10000
I1024 21:07:34.972098 29268 group.cpp:349] Group process (group(1)@10.10.40.138:39934) connected to ZooKeeper
I1024 21:07:34.972151 29268 group.cpp:837] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I1024 21:07:34.972187 29268 group.cpp:427] Trying to create path '/mesos' in ZooKeeper
I1024 21:07:34.980566 29269 detector.cpp:152] Detected a new leader: (id='9')
I1024 21:07:34.980662 29268 group.cpp:706] Trying to get '/mesos/json.info_0000000009' in ZooKeeper
I1024 21:07:34.981081 29269 zookeeper.cpp:259] A new leading master (UPID=master@10.10.40.138:5050) is detected
I1024 21:07:34.981155 29274 sched.cpp:330] New master detected at master@10.10.40.138:5050
I1024 21:07:34.981537 29274 sched.cpp:341] No credentials provided. Attempting to register without authentication
I1024 21:07:34.982522 29274 sched.cpp:743] Framework registered with 33ea2954-5fd5-494e-b4ad-8f1cb77fde51-0072
16/10/24 21:07:34 INFO CoarseMesosSchedulerBackend: Registered as framework ID 33ea2954-5fd5-494e-b4ad-8f1cb77fde51-0072
16/10/24 21:07:34 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39144.
16/10/24 21:07:34 INFO NettyBlockTransferService: Server created on 39144
16/10/24 21:07:34 INFO BlockManagerMaster: Trying to register BlockManager
16/10/24 21:07:34 INFO BlockManagerMasterEndpoint: Registering block manager 10.10.40.138:39144 with 511.1 MB RAM, BlockManagerId(driver, 10.10.40.138, 39144)
16/10/24 21:07:34 INFO BlockManagerMaster: Registered BlockManager
16/10/24 21:07:35 INFO CoarseMesosSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/10/24 21:07:35 INFO SparkContext: Starting job: reduce at SampleApp.scala:24
16/10/24 21:07:35 INFO DAGScheduler: Got job 0 (reduce at SampleApp.scala:24) with 2 output partitions
16/10/24 21:07:35 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SampleApp.scala:24)
16/10/24 21:07:35 INFO DAGScheduler: Parents of final stage: List()
16/10/24 21:07:35 INFO DAGScheduler: Missing parents: List()
16/10/24 21:07:35 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SampleApp.scala:20), which has no missing parents
16/10/24 21:07:35 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1856.0 B, free 1856.0 B)
16/10/24 21:07:35 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1210.0 B, free 3.0 KB)
16/10/24 21:07:35 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.10.40.138:39144 (size: 1210.0 B, free: 511.1 MB)
16/10/24 21:07:35 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
16/10/24 21:07:35 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SampleApp.scala:20)
16/10/24 21:07:35 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/10/24 21:07:37 INFO CoarseMesosSchedulerBackend: Mesos task 0 is now TASK_RUNNING
16/10/24 21:07:38 INFO CoarseMesosSchedulerBackend: Mesos task 1 is now TASK_RUNNING
16/10/24 21:07:39 INFO CoarseMesosSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (mesos103.itp.objectfrontier.com:41908) with ID 440de647-93a5-4474-80ce-b3b60f10a459-S4/0
16/10/24 21:07:39 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, mesos103.itp.objectfrontier.com, partition 0,PROCESS_LOCAL, 2296 bytes)
16/10/24 21:07:39 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, mesos103.itp.objectfrontier.com, partition 1,PROCESS_LOCAL, 2353 bytes)
16/10/24 21:07:39 INFO BlockManagerMasterEndpoint: Registering block manager mesos103.itp.objectfrontier.com:33300 with 511.1 MB RAM, BlockManagerId(440de647-93a5-4474-80ce-b3b60f10a459-S4/0, mesos103.itp.objectfrontier.com, 33300)
16/10/24 21:07:39 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on mesos103.itp.objectfrontier.com:33300 (size: 1210.0 B, free: 511.1 MB)
16/10/24 21:07:39 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 771 ms on mesos103.itp.objectfrontier.com (1/2)
16/10/24 21:07:39 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 795 ms on mesos103.itp.objectfrontier.com (2/2)
16/10/24 21:07:39 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/10/24 21:07:39 INFO DAGScheduler: ResultStage 0 (reduce at SampleApp.scala:24) finished in 4.455 s
16/10/24 21:07:39 INFO DAGScheduler: Job 0 finished: reduce at SampleApp.scala:24, took 4.619594 s
///////////////////// Pi is roughly 3.14194
//////////////////////////////////
1
//////////////////////////////// emr_labscorepopulated_data.count() ///////////////////////
/////////////////////////// firstRow.size ///////////////////////////////
///////////////////////////////////////////// firstRow.getString(admissionid)////////
16/10/24 21:07:40 INFO NettyUtil: Found Netty's native epoll transport in the classpath, using it
16/10/24 21:07:40 INFO Cluster: New Cassandra host /10.10.40.172:9042 added
16/10/24 21:07:40 INFO LocalNodeFirstLoadBalancingPolicy: Added host 10.10.40.172 (datacenter1)
16/10/24 21:07:40 INFO Cluster: New Cassandra host /10.10.40.138:9042 added
16/10/24 21:07:40 INFO Cluster: New Cassandra host /10.10.40.36:9042 added
16/10/24 21:07:40 INFO LocalNodeFirstLoadBalancingPolicy: Added host 10.10.40.36 (datacenter1)
16/10/24 21:07:40 INFO CassandraConnector: Connected to Cassandra cluster: HealthCare_Cluster_2
16/10/24 21:07:40 INFO SparkContext: Starting job: take at CassandraRDD.scala:121
16/10/24 21:07:40 INFO DAGScheduler: Got job 1 (take at CassandraRDD.scala:121) with 1 output partitions
16/10/24 21:07:40 INFO DAGScheduler: Final stage: ResultStage 1 (take at CassandraRDD.scala:121)
16/10/24 21:07:40 INFO DAGScheduler: Parents of final stage: List()
16/10/24 21:07:40 INFO DAGScheduler: Missing parents: List()
16/10/24 21:07:40 INFO DAGScheduler: Submitting ResultStage 1 (CassandraTableScanRDD[3] at RDD at CassandraRDD.scala:15), which has no missing parents
16/10/24 21:07:40 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 7.3 KB, free 10.3 KB)
16/10/24 21:07:40 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 3.8 KB, free 14.1 KB)
16/10/24 21:07:40 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.10.40.138:39144 (size: 3.8 KB, free: 511.1 MB)
16/10/24 21:07:40 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
16/10/24 21:07:40 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (CassandraTableScanRDD[3] at RDD at CassandraRDD.scala:15)
16/10/24 21:07:40 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
16/10/24 21:07:40 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, mesos103.itp.objectfrontier.com, partition 0,ANY, 4216 bytes)
16/10/24 21:07:40 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on mesos103.itp.objectfrontier.com:33300 (size: 3.8 KB, free: 511.1 MB)
16/10/24 21:07:40 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 2, mesos103.itp.objectfrontier.com): java.lang.NoClassDefFoundError: com/google/common/util/concurrent/AsyncFunction
	at com.datastax.spark.connector.cql.DefaultConnectionFactory$.clusterBuilder(CassandraConnectionFactory.scala:35)
	at com.datastax.spark.connector.cql.DefaultConnectionFactory$.createCluster(CassandraConnectionFactory.scala:92)
	at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:153)
	at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$3.apply(CassandraConnector.scala:148)
	at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$3.apply(CassandraConnector.scala:148)
	at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:31)
	at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56)
	at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:81)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD.compute(CassandraTableScanRDD.scala:325)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: com.google.common.util.concurrent.AsyncFunction
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 17 more
16/10/24 21:07:40 INFO TaskSetManager: Starting task 0.1 in stage 1.0 (TID 3, mesos103.itp.objectfrontier.com, partition 0,ANY, 4216 bytes)
16/10/24 21:07:41 WARN TransportChannelHandler: Exception in connection from mesos103.itp.objectfrontier.com/10.10.40.172:41908
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
	at java.lang.Thread.run(Thread.java:745)
16/10/24 21:07:41 ERROR TaskSchedulerImpl: Lost executor 440de647-93a5-4474-80ce-b3b60f10a459-S4/0 on mesos103.itp.objectfrontier.com: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/10/24 21:07:41 WARN TaskSetManager: Lost task 0.1 in stage 1.0 (TID 3, mesos103.itp.objectfrontier.com): ExecutorLostFailure (executor 440de647-93a5-4474-80ce-b3b60f10a459-S4/0 exited caused by one of the running tasks) Reason: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/10/24 21:07:41 INFO DAGScheduler: Executor lost: 440de647-93a5-4474-80ce-b3b60f10a459-S4/0 (epoch 0)
16/10/24 21:07:41 INFO BlockManagerMasterEndpoint: Trying to remove executor 440de647-93a5-4474-80ce-b3b60f10a459-S4/0 from BlockManagerMaster.
16/10/24 21:07:41 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(440de647-93a5-4474-80ce-b3b60f10a459-S4/0, mesos103.itp.objectfrontier.com, 33300)
16/10/24 21:07:41 INFO BlockManagerMaster: Removed 440de647-93a5-4474-80ce-b3b60f10a459-S4/0 successfully in removeExecutor
16/10/24 21:07:41 INFO CoarseMesosSchedulerBackend: Mesos task 0 is now TASK_FAILED
16/10/24 21:07:41 INFO CoarseMesosSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (mesos102.itp.objectfrontier.com:58530) with ID 440de647-93a5-4474-80ce-b3b60f10a459-S3/1
16/10/24 21:07:41 INFO TaskSetManager: Starting task 0.2 in stage 1.0 (TID 4, mesos102.itp.objectfrontier.com, partition 0,ANY, 4216 bytes)
16/10/24 21:07:41 INFO BlockManagerMasterEndpoint: Registering block manager mesos102.itp.objectfrontier.com:43917 with 511.1 MB RAM, BlockManagerId(440de647-93a5-4474-80ce-b3b60f10a459-S3/1, mesos102.itp.objectfrontier.com, 43917)
16/10/24 21:07:42 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on mesos102.itp.objectfrontier.com:43917 (size: 3.8 KB, free: 511.1 MB)
16/10/24 21:07:42 INFO TaskSetManager: Lost task 0.2 in stage 1.0 (TID 4) on executor mesos102.itp.objectfrontier.com: java.lang.NoClassDefFoundError (com/google/common/util/concurrent/AsyncFunction) [duplicate 1]
16/10/24 21:07:42 INFO TaskSetManager: Starting task 0.3 in stage 1.0 (TID 5, mesos102.itp.objectfrontier.com, partition 0,ANY, 4216 bytes)
16/10/24 21:07:42 ERROR TaskSchedulerImpl: Lost executor 440de647-93a5-4474-80ce-b3b60f10a459-S3/1 on mesos102.itp.objectfrontier.com: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/10/24 21:07:42 WARN TaskSetManager: Lost task 0.3 in stage 1.0 (TID 5, mesos102.itp.objectfrontier.com): ExecutorLostFailure (executor 440de647-93a5-4474-80ce-b3b60f10a459-S3/1 exited caused by one of the running tasks) Reason: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/10/24 21:07:42 ERROR TaskSetManager: Task 0 in stage 1.0 failed 4 times; aborting job
16/10/24 21:07:42 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
16/10/24 21:07:42 INFO TaskSchedulerImpl: Cancelling stage 1
16/10/24 21:07:42 INFO DAGScheduler: ResultStage 1 (take at CassandraRDD.scala:121) failed in 2.050 s
16/10/24 21:07:42 INFO DAGScheduler: Executor lost: 440de647-93a5-4474-80ce-b3b60f10a459-S3/1 (epoch 1)
16/10/24 21:07:42 INFO DAGScheduler: Job 1 failed: take at CassandraRDD.scala:121, took 2.067082 s
16/10/24 21:07:42 INFO BlockManagerMasterEndpoint: Trying to remove executor 440de647-93a5-4474-80ce-b3b60f10a459-S3/1 from BlockManagerMaster.
16/10/24 21:07:42 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(440de647-93a5-4474-80ce-b3b60f10a459-S3/1, mesos102.itp.objectfrontier.com, 43917)
16/10/24 21:07:42 INFO BlockManagerMaster: Removed 440de647-93a5-4474-80ce-b3b60f10a459-S3/1 successfully in removeExecutor
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 5, mesos102.itp.objectfrontier.com): ExecutorLostFailure (executor 440de647-93a5-4474-80ce-b3b60f10a459-S3/1 exited caused by one of the running tasks) Reason: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
	at scala.Option.foreach(Option.scala:236)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
	at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1328)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
	at org.apache.spark.rdd.RDD.take(RDD.scala:1302)
	at com.datastax.spark.connector.rdd.CassandraRDD.take(CassandraRDD.scala:121)
	at com.datastax.spark.connector.rdd.CassandraRDD.take(CassandraRDD.scala:122)
	at SampleApp$.main(SampleApp.scala:48)
	at SampleApp.main(SampleApp.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/10/24 21:07:42 INFO CoarseMesosSchedulerBackend: Mesos task 1 is now TASK_FAILED
16/10/24 21:07:43 INFO CoarseMesosSchedulerBackend: Mesos task 2 is now TASK_RUNNING
16/10/24 21:07:45 INFO CoarseMesosSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (mesos103.itp.objectfrontier.com:41920) with ID 440de647-93a5-4474-80ce-b3b60f10a459-S4/2
16/10/24 21:07:45 INFO BlockManagerMasterEndpoint: Registering block manager mesos103.itp.objectfrontier.com:37384 with 511.1 MB RAM, BlockManagerId(440de647-93a5-4474-80ce-b3b60f10a459-S4/2, mesos103.itp.objectfrontier.com, 37384)
16/10/24 21:07:47 INFO CoarseMesosSchedulerBackend: Mesos task 3 is now TASK_RUNNING
16/10/24 21:07:48 INFO CassandraConnector: Disconnected from Cassandra cluster: HealthCare_Cluster_2
16/10/24 21:07:48 INFO ContextCleaner: Cleaned accumulator 1
16/10/24 21:07:48 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 10.10.40.138:39144 in memory (size: 1210.0 B, free: 511.1 MB)
16/10/24 21:07:49 INFO SparkContext: Invoking stop() from shutdown hook
16/10/24 21:07:49 INFO SerialShutdownHooks: Successfully executed shutdown hook: Clearing session cache for C* connector
16/10/24 21:07:49 INFO SparkUI: Stopped Spark web UI at http://10.10.40.138:4040
16/10/24 21:07:49 INFO CoarseMesosSchedulerBackend: Shutting down all executors
16/10/24 21:07:49 INFO CoarseMesosSchedulerBackend: Asking each executor to shut down
I1024 21:07:49.098374 29331 sched.cpp:1987] Asked to stop the driver
I1024 21:07:49.098493 29270 sched.cpp:1187] Stopping framework '33ea2954-5fd5-494e-b4ad-8f1cb77fde51-0072'
16/10/24 21:07:49 INFO CoarseMesosSchedulerBackend: driver.run() returned with code DRIVER_STOPPED
16/10/24 21:07:49 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/10/24 21:07:49 INFO MemoryStore: MemoryStore cleared
16/10/24 21:07:49 INFO BlockManager: BlockManager stopped
16/10/24 21:07:49 INFO BlockManagerMaster: BlockManagerMaster stopped
16/10/24 21:07:49 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/10/24 21:07:49 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/10/24 21:07:49 INFO SparkContext: Successfully stopped SparkContext
16/10/24 21:07:49 INFO ShutdownHookManager: Shutdown hook called
16/10/24 21:07:49 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/10/24 21:07:49 INFO ShutdownHookManager: Deleting directory /tmp/spark-0063f507-51be-4f5f-9a4a-277abd3a0bfc/httpd-d9d3d1cc-a9d6-478e-89f5-cece291b7495
16/10/24 21:07:49 INFO ShutdownHookManager: Deleting directory /tmp/spark-0063f507-51be-4f5f-9a4a-277abd3a0bfc
[ksaha@mesos101 SampleApp]$
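
Note on the failure above: the local Pi job succeeds, but the Cassandra read dies on the executors with java.lang.NoClassDefFoundError: com/google/common/util/concurrent/AsyncFunction. That class belongs to Google Guava, which cassandra-driver-core 3.1.1 depends on but which is not among the jars shipped via --jars, so each executor crashes when the connector tries to open a session. One possible fix, sketched below, is to ship a recent Guava jar alongside the other dependencies; the filename lib/guava-16.0.1.jar is an assumption (any Guava version the driver supports would do), and an alternative is to submit the spark-cassandra-connector assembly jar, which bundles its dependencies.

```shell
# Hypothetical corrected submit command (assumes a Guava jar has been
# downloaded into lib/). Note the --jars value is a single comma-separated
# list with no spaces, followed by the application jar as a separate argument.
spark-submit --class "SampleApp" \
  --master mesos://zk://10.10.40.138:2181/mesos \
  --jars lib/spark-cassandra-connector-1.6.1-s_2.10.jar,lib/cassandra-driver-core-3.1.1.jar,lib/guava-16.0.1.jar \
  target/scala-2.10/sampleapp_2.10-1.0.jar
```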