@utsengar
Last active August 2, 2016 23:54
Spark 2.0 LogisticRegressionModel load failure
16/08/02 23:50:06 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 7401 ms on mesos-slave9-qa-uswest2.qasql.opentable.com (1/1)
16/08/02 23:50:06 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
16/08/02 23:50:06 INFO DAGScheduler: ResultStage 2 (parquet at GLMClassificationModel.scala:77) finished in 7.402 s
16/08/02 23:50:06 INFO DAGScheduler: Job 2 finished: parquet at GLMClassificationModel.scala:77, took 7.474796 s
Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: file://spark-warehouse, expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:647)
at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:466)
at org.apache.hadoop.fs.FilterFileSystem.makeQualified(FilterFileSystem.java:119)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.makeQualifiedPath(SessionCatalog.scala:116)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:145)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:89)
at org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:95)
at org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:95)
at org.apache.spark.sql.internal.SessionState$$anon$1.<init>(SessionState.scala:112)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:112)
at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:111)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:382)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:143)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:427)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:411)
at org.apache.spark.mllib.classification.impl.GLMClassificationModel$SaveLoadV1_0$.loadData(GLMClassificationModel.scala:77)
at org.apache.spark.mllib.classification.LogisticRegressionModel$.load(LogisticRegression.scala:183)
at org.apache.spark.mllib.classification.LogisticRegressionModel.load(LogisticRegression.scala)
at com.opentable.ds.batch.BulkSimulator.runSimulations(BulkSimulator.java:155)
at com.opentable.ds.batch.BulkSimulator.runSparkJob(BulkSimulator.java:140)
at com.opentable.ds.runner.SingleSparkRunner.main(SingleSparkRunner.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
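The failure happens while `SessionCatalog.makeQualifiedPath` qualifies the default warehouse location: the value `file://spark-warehouse` has two slashes after the scheme, so `spark-warehouse` is parsed as the URI *authority* (host) rather than as a path, and Hadoop's `FileSystem.checkPath` rejects it against the expected `file:///`. A minimal sketch of that URI parsing (the class name is hypothetical; only `java.net.URI` is used):

```java
import java.net.URI;

public class WrongFsDemo {
    public static void main(String[] args) {
        // "file://spark-warehouse": two slashes, so "spark-warehouse"
        // lands in the authority (host) slot and the path is empty.
        URI bad = URI.create("file://spark-warehouse");
        System.out.println(bad.getAuthority()); // spark-warehouse
        System.out.println(bad.getPath());      // "" (empty)

        // "file:///tmp/spark-warehouse": empty authority, real path --
        // the fully qualified form Hadoop expects.
        URI good = URI.create("file:///tmp/spark-warehouse");
        System.out.println(good.getAuthority()); // null
        System.out.println(good.getPath());      // /tmp/spark-warehouse
    }
}
```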
16/08/02 23:50:07 INFO SparkContext: Invoking stop() from shutdown hook
16/08/02 23:50:07 INFO ServerConnector: Stopped ServerConnector@7baf6acf{HTTP/1.1}{0.0.0.0:4040}
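A common workaround for the malformed default `spark.sql.warehouse.dir` in Spark 2.0.0 (tracked upstream as SPARK-15899) is to set it explicitly to a fully qualified URI. A sketch of the invocation, assuming a local warehouse path; the main class is taken from the stack trace above, and the jar name is a placeholder:

```shell
# Override the warehouse dir with a fully qualified file:/// URI
# so SessionCatalog can qualify it. "your-app.jar" is a placeholder.
spark-submit \
  --class com.opentable.ds.runner.SingleSparkRunner \
  --conf spark.sql.warehouse.dir=file:///tmp/spark-warehouse \
  your-app.jar
```

The same setting can be passed via `SparkSession.builder().config(...)` before the session is created.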