yaseminn/Error

Created Jun 9, 2015
~/spark-1.3.1-bin-hadoop2.4$ bin/spark-submit --class JavaApiDemo --master local[4] cassandra.jar >> yasemin.txt
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/06/09 21:24:09 INFO SparkContext: Running Spark version 1.3.1
15/06/09 21:24:09 WARN Utils: Your hostname, inosens1 resolves to a loopback address: 127.0.1.1; using 192.168.2.40 instead (on interface wlan0)
15/06/09 21:24:09 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/06/09 21:24:09 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/09 21:24:09 INFO SecurityManager: Changing view acls to: inosens
15/06/09 21:24:09 INFO SecurityManager: Changing modify acls to: inosens
15/06/09 21:24:09 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(inosens); users with modify permissions: Set(inosens)
15/06/09 21:24:09 INFO Slf4jLogger: Slf4jLogger started
15/06/09 21:24:09 INFO Remoting: Starting remoting
15/06/09 21:24:10 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.2.40:50056]
15/06/09 21:24:10 INFO Utils: Successfully started service 'sparkDriver' on port 50056.
15/06/09 21:24:10 INFO SparkEnv: Registering MapOutputTracker
15/06/09 21:24:10 INFO SparkEnv: Registering BlockManagerMaster
15/06/09 21:24:10 INFO DiskBlockManager: Created local directory at /tmp/spark-17463b5c-92c3-4088-98f6-b1286ca14fe7/blockmgr-dfa60354-2cb6-4223-a175-752163c9c49b
15/06/09 21:24:10 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
15/06/09 21:24:10 INFO HttpFileServer: HTTP File server directory is /tmp/spark-ac237ee4-c65f-4c90-bf1e-88fac98dccd7/httpd-6e34c89b-8cc3-427d-8da4-8c6f4bd84498
15/06/09 21:24:10 INFO HttpServer: Starting HTTP Server
15/06/09 21:24:10 INFO Server: jetty-8.y.z-SNAPSHOT
15/06/09 21:24:10 INFO AbstractConnector: Started SocketConnector@0.0.0.0:55136
15/06/09 21:24:10 INFO Utils: Successfully started service 'HTTP file server' on port 55136.
15/06/09 21:24:10 INFO SparkEnv: Registering OutputCommitCoordinator
15/06/09 21:24:10 INFO Server: jetty-8.y.z-SNAPSHOT
15/06/09 21:24:10 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/06/09 21:24:10 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/06/09 21:24:10 INFO SparkUI: Started SparkUI at http://192.168.2.40:4040
15/06/09 21:24:13 INFO SparkContext: Added JAR file:/home/inosens/spark-1.3.1-bin-hadoop2.4/cassandra.jar at http://192.168.2.40:55136/jars/cassandra.jar with timestamp 1433874253619
15/06/09 21:24:13 INFO Executor: Starting executor ID <driver> on host localhost
15/06/09 21:24:13 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@192.168.2.40:50056/user/HeartbeatReceiver
15/06/09 21:24:13 INFO NettyBlockTransferService: Server created on 55131
15/06/09 21:24:13 INFO BlockManagerMaster: Trying to register BlockManager
15/06/09 21:24:13 INFO BlockManagerMasterActor: Registering block manager localhost:55131 with 265.4 MB RAM, BlockManagerId(<driver>, localhost, 55131)
15/06/09 21:24:13 INFO BlockManagerMaster: Registered BlockManager
15/06/09 21:24:15 INFO Cluster: New Cassandra host /127.0.0.1:9042 added
15/06/09 21:24:15 INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
15/06/09 21:24:15 INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
15/06/09 21:24:16 INFO Cluster: New Cassandra host /127.0.0.1:9042 added
15/06/09 21:24:16 INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
15/06/09 21:24:16 INFO SparkContext: Starting job: runJob at RDDFunctions.scala:36
15/06/09 21:24:16 INFO DAGScheduler: Got job 0 (runJob at RDDFunctions.scala:36) with 4 output partitions (allowLocal=false)
15/06/09 21:24:16 INFO DAGScheduler: Final stage: Stage 0(runJob at RDDFunctions.scala:36)
15/06/09 21:24:16 INFO DAGScheduler: Parents of final stage: List()
15/06/09 21:24:16 INFO DAGScheduler: Missing parents: List()
15/06/09 21:24:16 INFO DAGScheduler: Submitting Stage 0 (ParallelCollectionRDD[0] at parallelize at JavaApiDemo.java:28), which has no missing parents
15/06/09 21:24:16 INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
15/06/09 21:24:16 INFO MemoryStore: ensureFreeSpace(8208) called with curMem=0, maxMem=278302556
15/06/09 21:24:16 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 8.0 KB, free 265.4 MB)
15/06/09 21:24:16 INFO MemoryStore: ensureFreeSpace(5358) called with curMem=8208, maxMem=278302556
15/06/09 21:24:16 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 5.2 KB, free 265.4 MB)
15/06/09 21:24:16 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:55131 (size: 5.2 KB, free: 265.4 MB)
15/06/09 21:24:16 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0
15/06/09 21:24:16 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:839
15/06/09 21:24:16 INFO DAGScheduler: Submitting 4 missing tasks from Stage 0 (ParallelCollectionRDD[0] at parallelize at JavaApiDemo.java:28)
15/06/09 21:24:16 INFO TaskSchedulerImpl: Adding task set 0.0 with 4 tasks
15/06/09 21:24:17 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1316 bytes)
15/06/09 21:24:17 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, PROCESS_LOCAL, 1505 bytes)
15/06/09 21:24:17 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, localhost, PROCESS_LOCAL, 1316 bytes)
15/06/09 21:24:17 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, localhost, PROCESS_LOCAL, 1507 bytes)
15/06/09 21:24:17 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
15/06/09 21:24:17 INFO Executor: Running task 3.0 in stage 0.0 (TID 3)
15/06/09 21:24:17 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
15/06/09 21:24:17 INFO Executor: Running task 2.0 in stage 0.0 (TID 2)
15/06/09 21:24:17 INFO Executor: Fetching http://192.168.2.40:55136/jars/cassandra.jar with timestamp 1433874253619
15/06/09 21:24:17 INFO Utils: Fetching http://192.168.2.40:55136/jars/cassandra.jar to /tmp/spark-1937612e-b0eb-4648-a5a1-3353cfa8123a/userFiles-6f3dea7d-6fec-45a7-beff-2505ed317152/fetchFileTemp8303096213250327224.tmp
15/06/09 21:24:22 INFO Executor: Adding file:/tmp/spark-1937612e-b0eb-4648-a5a1-3353cfa8123a/userFiles-6f3dea7d-6fec-45a7-beff-2505ed317152/cassandra.jar to class loader
15/06/09 21:24:22 INFO Cluster: New Cassandra host /127.0.0.1:9042 added
15/06/09 21:24:22 INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
15/06/09 21:24:22 INFO Cluster: New Cassandra host /127.0.0.1:9042 added
15/06/09 21:24:22 INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
15/06/09 21:24:22 INFO Cluster: New Cassandra host /127.0.0.1:9042 added
15/06/09 21:24:22 INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
15/06/09 21:24:22 INFO Cluster: New Cassandra host /127.0.0.1:9042 added
15/06/09 21:24:22 INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
15/06/09 21:24:22 INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
15/06/09 21:24:22 INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
15/06/09 21:24:22 INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
15/06/09 21:24:22 INFO TableWriter: Wrote 0 rows to test.people in 0.594 s.
15/06/09 21:24:22 INFO TableWriter: Wrote 0 rows to test.people in 0.597 s.
15/06/09 21:24:22 INFO Executor: Finished task 2.0 in stage 0.0 (TID 2). 1612 bytes result sent to driver
15/06/09 21:24:22 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1612 bytes result sent to driver
15/06/09 21:24:22 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 5948 ms on localhost (1/4)
15/06/09 21:24:22 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 5933 ms on localhost (2/4)
15/06/09 21:24:22 INFO TableWriter: Wrote 1 rows to test.people in 0.656 s.
15/06/09 21:24:22 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 1612 bytes result sent to driver
15/06/09 21:24:22 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 5948 ms on localhost (3/4)
15/06/09 21:24:22 INFO TableWriter: Wrote 1 rows to test.people in 0.666 s.
15/06/09 21:24:22 INFO Executor: Finished task 3.0 in stage 0.0 (TID 3). 1612 bytes result sent to driver
15/06/09 21:24:22 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 5954 ms on localhost (4/4)
15/06/09 21:24:22 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/06/09 21:24:22 INFO DAGScheduler: Stage 0 (runJob at RDDFunctions.scala:36) finished in 5.994 s
15/06/09 21:24:22 INFO DAGScheduler: Job 0 finished: runJob at RDDFunctions.scala:36, took 6.190069 s
Exception in thread "main" java.io.IOException: Failed to open thrift connection to Cassandra at 127.0.0.1:9160
at com.datastax.spark.connector.cql.CassandraConnector.createThriftClient(CassandraConnector.scala:139)
at com.datastax.spark.connector.cql.CassandraConnector.createThriftClient(CassandraConnector.scala:145)
at com.datastax.spark.connector.cql.CassandraConnector.withCassandraClientDo(CassandraConnector.scala:151)
at com.datastax.spark.connector.rdd.partitioner.CassandraRDDPartitioner.partitions(CassandraRDDPartitioner.scala:131)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:120)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1512)
at org.apache.spark.rdd.RDD.count(RDD.scala:1006)
at org.apache.spark.api.java.JavaRDDLike$class.count(JavaRDDLike.scala:420)
at org.apache.spark.api.java.AbstractJavaRDDLike.count(JavaRDDLike.scala:46)
at JavaApiDemo.<init>(JavaApiDemo.java:45)
at JavaApiDemo.main(JavaApiDemo.java:88)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NoSuchMethodError: org.apache.cassandra.thrift.TFramedTransportFactory.openTransport(Ljava/lang/String;I)Lorg/apache/thrift/transport/TTransport;
at com.datastax.spark.connector.cql.DefaultConnectionFactory$.createThriftClient(CassandraConnectionFactory.scala:41)
at com.datastax.spark.connector.cql.CassandraConnector.createThriftClient(CassandraConnector.scala:134)
... 28 more
15/06/09 21:24:23 INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
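For context, below is a minimal sketch of the kind of driver program the stack trace points at (JavaApiDemo writing to and then counting the test.people table through the spark-cassandra-connector Java API). Only the class name, the table test.people, the parallelize and count calls, and the 127.0.0.1 connection host come from the log above; the Person bean, its fields, and the sample data are assumptions added for illustration. The final count() is the call at JavaApiDemo.java:45 where the connector's thrift-based partitioner fails. The underlying NoSuchMethodError on TFramedTransportFactory.openTransport typically means the cassandra-thrift classes packaged into cassandra.jar do not match the version the connector was compiled against, i.e. a classpath/version mismatch rather than a problem in the driver code itself.

// Hypothetical reconstruction of JavaApiDemo (names and schema are assumptions):
// writes a parallelized collection to test.people, then reads the table back and counts it.
import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapRowTo;
import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapToRow;

import java.io.Serializable;
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class JavaApiDemo implements Serializable {

    // Simple bean mapped to test.people (assumed schema: id int, name text).
    public static class Person implements Serializable {
        private Integer id;
        private String name;

        public Person() {}
        public Person(Integer id, String name) { this.id = id; this.name = name; }

        public Integer getId() { return id; }
        public void setId(Integer id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public JavaApiDemo() {
        SparkConf conf = new SparkConf()
                .setAppName("JavaApiDemo")
                .set("spark.cassandra.connection.host", "127.0.0.1");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Write a few rows to test.people
        // (the "parallelize at JavaApiDemo.java:28" stage in the log).
        JavaRDD<Person> people = sc.parallelize(Arrays.asList(
                new Person(1, "John"), new Person(2, "Anna")));
        javaFunctions(people)
                .writerBuilder("test", "people", mapToRow(Person.class))
                .saveToCassandra();

        // Read the table back and count it; this is where the thrift-based
        // partitioner throws the IOException shown above.
        long count = javaFunctions(sc)
                .cassandraTable("test", "people", mapRowTo(Person.class))
                .count();
        System.out.println("Rows in test.people: " + count);

        sc.stop();
    }

    public static void main(String[] args) {
        new JavaApiDemo();
    }
}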