@ssimeonov
Created February 8, 2016 07:50
Gist: ssimeonov/89862967f87c5c497322

➜ spark git:(master) ✗ build/sbt sql/test
Using /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[info] Loading global plugins from /Users/sim/.sbt/0.13/plugins
[info] Loading project definition from /Users/sim/dev/spx/spark/project/project
[info] Loading project definition from /Users/sim/.sbt/0.13/staging/ad8e8574a5bcb2d22d23/sbt-pom-reader/project
[warn] Multiple resolvers having different access mechanism configured with same name 'sbt-plugin-releases'. To avoid conflict, Remove duplicate project resolvers (`resolvers`) or rename publishing resolver (`publishTo`).
[info] Loading project definition from /Users/sim/dev/spx/spark/project
[info] Set current project to spark-parent (in build file:/Users/sim/dev/spx/spark/)
[info] ANTLR: Grammar file 'org/apache/spark/sql/catalyst/parser/SparkSqlLexer.g' detected.
[info] ANTLR: Grammar file 'org/apache/spark/sql/catalyst/parser/SparkSqlParser.g' detected.
[warn] /Users/sim/dev/spx/spark/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:984: method getStackTraceString in class RichException is deprecated: Use Throwable#getStackTrace
[warn] abortStage(stage, s"Task creation failed: $e\n${e.getStackTraceString}", Some(e))
[warn]
[warn] /Users/sim/dev/spx/spark/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:1020: method getStackTraceString in class RichException is deprecated: Use Throwable#getStackTrace
[warn] abortStage(stage, s"Task serialization failed: $e\n${e.getStackTraceString}", Some(e))
[warn]
[warn] /Users/sim/dev/spx/spark/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:1047: method getStackTraceString in class RichException is deprecated: Use Throwable#getStackTrace
[warn] abortStage(stage, s"Task creation failed: $e\n${e.getStackTraceString}", Some(e))
[warn]
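An aside on the three getStackTraceString deprecations above: the compiler points at Throwable#getStackTrace, which returns an array of frames rather than a string, so these call sites need a small formatting shim. A minimal sketch of one possible migration (the abortStage call shape is quoted from the warnings; the helper below is illustrative, not Spark's):

    // Hypothetical helper reproducing what RichException.getStackTraceString
    // rendered: one "\tat frame" line per stack element.
    def stackTraceString(e: Throwable): String =
      e.getStackTrace.map(frame => s"\tat $frame").mkString("\n")

    // abortStage(stage, s"Task creation failed: $e\n${e.getStackTraceString}", Some(e))
    // becomes:
    // abortStage(stage, s"Task creation failed: $e\n${stackTraceString(e)}", Some(e))
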
[warn] /Users/sim/dev/spx/spark/core/src/main/scala/org/apache/spark/util/Utils.scala:1621: method lines_! in trait ProcessBuilder is deprecated: Use stream_! instead.
[warn] (linkCmd + src.getAbsolutePath() + " " + dst.getPath() + cmdSuffix) lines_!
[warn]
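The lines_! deprecation names stream_! as its replacement (the name exactly as this Scala 2.11 compiler reports it; treat it as the compiler's suggestion rather than a verified signature). Under that assumption, the Utils.scala call site would look roughly like:

    import scala.sys.process._

    // Was: (linkCmd + src.getAbsolutePath() + " " + dst.getPath() + cmdSuffix) lines_!
    // The _! variants run the command and stream stdout lines without throwing
    // on a non-zero exit code (paths here are placeholders):
    val output: Stream[String] = Process(Seq("ln", "-s", "src-path", "dst-path")).stream_!
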
[warn] /Users/sim/dev/spx/spark/core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala:385: value metrics in class ExceptionFailure is deprecated: use accumUpdates instead
[warn] (Some(e.toErrorString), e.metrics)
[warn]
[warn] /Users/sim/dev/spx/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala:661: method normalize in class TypeApi is deprecated: Use `dealias` or `etaExpand` instead
[warn] tag.in(mirror).tpe.normalize
[warn]
[warn] /Users/sim/dev/spx/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala:768: value nme in trait StandardNames is deprecated: Use `termNames` instead
[warn] val constructorSymbol = tpe.member(nme.CONSTRUCTOR)
[warn]
[warn] /Users/sim/dev/spx/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala:770: method paramss in trait MethodSymbolApi is deprecated: Use `paramLists` instead
[warn] constructorSymbol.asMethod.paramss
[warn]
[warn] /Users/sim/dev/spx/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala:778: method paramss in trait MethodSymbolApi is deprecated: Use `paramLists` instead
[warn] primaryConstructorSymbol.get.asMethod.paramss
[warn]
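The scala-reflect deprecations above (normalize, nme, paramss) all have direct 2.11 replacements named in the messages: dealias, termNames, and paramLists. A minimal sketch of the newer spellings against a throwaway type, rather than ScalaReflection's real context:

    import scala.reflect.runtime.universe._

    case class Point(x: Int, y: Int)

    val tpe: Type = typeOf[Point].dealias          // was: .normalize
    val ctor = tpe.member(termNames.CONSTRUCTOR)   // was: tpe.member(nme.CONSTRUCTOR)
    val params = ctor.asMethod.paramLists          // was: .paramss
    params.foreach(ps => println(ps.map(_.name).mkString(", ")))  // prints: x, y
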
[warn] /Users/sim/dev/spx/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala:749: method apply in object BigDecimal is deprecated: The default conversion from Float may not do what you want. Use BigDecimal.decimal for a String representation, or explicitly convert the Float with .toDouble.
[warn] BigDecimal(f).setScale(_scale, HALF_UP).toFloat
[warn]
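The BigDecimal(f) warning is about Float-to-BigDecimal conversion going through the Float's binary value; BigDecimal.decimal, suggested by the message, goes through the Float's decimal (String) form instead. A quick illustration of the difference:

    import scala.math.BigDecimal.RoundingMode.HALF_UP

    val f = 0.1f
    println(BigDecimal(f.toDouble))   // 0.10000000149011612 -- binary error term exposed
    println(BigDecimal.decimal(f))    // 0.1                 -- decimal representation
    // so the call site in mathExpressions.scala would become:
    // BigDecimal.decimal(f).setScale(_scale, HALF_UP).toFloat
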
[warn] /Users/sim/dev/spx/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/package.scala:54: class AbstractFileClassLoader in package interpreter is deprecated: Use `scala.tools.nsc.util.AbstractFileClassLoader`
[warn] .asInstanceOf[scala.tools.nsc.interpreter.AbstractFileClassLoader]
[warn]
[info] Compiling 17 Scala sources and 2 Java sources to /Users/sim/dev/spx/spark/core/target/scala-2.11/test-classes...
[warn] /Users/sim/dev/spx/spark/core/src/test/scala/org/apache/spark/ContextCleanerSuite.scala:445: trait SynchronizedSet in package mutable is deprecated: Synchronization via traits is deprecated as it is inherently unreliable. Consider java.util.concurrent.ConcurrentHashMap[A,Unit] as an alternative.
[warn] val toBeCleanedRDDIds = new HashSet[Int] with SynchronizedSet[Int] ++= rddIds
[warn] ^
[warn] /Users/sim/dev/spx/spark/core/src/test/scala/org/apache/spark/ContextCleanerSuite.scala:446: trait SynchronizedSet in package mutable is deprecated: Synchronization via traits is deprecated as it is inherently unreliable. Consider java.util.concurrent.ConcurrentHashMap[A,Unit] as an alternative.
[warn] val toBeCleanedShuffleIds = new HashSet[Int] with SynchronizedSet[Int] ++= shuffleIds
[warn] ^
[warn] /Users/sim/dev/spx/spark/core/src/test/scala/org/apache/spark/ContextCleanerSuite.scala:447: trait SynchronizedSet in package mutable is deprecated: Synchronization via traits is deprecated as it is inherently unreliable. Consider java.util.concurrent.ConcurrentHashMap[A,Unit] as an alternative.
[warn] val toBeCleanedBroadcstIds = new HashSet[Long] with SynchronizedSet[Long] ++= broadcastIds
[warn] ^
[warn] /Users/sim/dev/spx/spark/core/src/test/scala/org/apache/spark/ContextCleanerSuite.scala:448: trait SynchronizedSet in package mutable is deprecated: Synchronization via traits is deprecated as it is inherently unreliable. Consider java.util.concurrent.ConcurrentHashMap[A,Unit] as an alternative.
[warn] val toBeCheckpointIds = new HashSet[Long] with SynchronizedSet[Long] ++= checkpointIds
[warn] ^
[warn] four warnings found
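The four SynchronizedSet warnings in ContextCleanerSuite all follow the same pattern, and the deprecation message itself proposes a java.util.concurrent.ConcurrentHashMap-backed set. One idiomatic shape for that (rddIds here is a stand-in for the suite's real data):

    import java.util.Collections
    import java.util.concurrent.ConcurrentHashMap
    import scala.collection.JavaConverters._

    val rddIds: Seq[Int] = Seq(1, 2, 3)  // stand-in input

    // A thread-safe Set[Int] view over a ConcurrentHashMap, per the message:
    val toBeCleanedRDDIds: java.util.Set[Int] =
      Collections.newSetFromMap(new ConcurrentHashMap[Int, java.lang.Boolean]())
    toBeCleanedRDDIds.addAll(rddIds.asJava)
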
[info] Compiling 90 Scala sources to /Users/sim/dev/spx/spark/sql/catalyst/target/scala-2.11/test-classes...
[warn] /Users/sim/dev/spx/spark/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/LiteralExpressionSuite.scala:99: Octal escape literals are deprecated, use \u0000 instead.
[warn] checkEvaluation(Literal("\0"), "\0")
[warn] ^
[warn] /Users/sim/dev/spx/spark/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/LiteralExpressionSuite.scala:99: Octal escape literals are deprecated, use \u0000 instead.
[warn] checkEvaluation(Literal("\0"), "\0")
[warn] ^
[warn] /Users/sim/dev/spx/spark/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvalHelper.scala:85: method getStackTraceString in class RichException is deprecated: Use Throwable#getStackTrace
[warn] |${e.getStackTraceString}
[warn] ^
[warn] three warnings found
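The two octal-escape warnings are a mechanical fix: "\0" is the deprecated octal spelling of the NUL character and "\u0000" is the equivalent Unicode escape:

    val nul = "\u0000"  // was "\0"
    assert(nul.length == 1 && nul.charAt(0) == 0.toChar)
    // checkEvaluation(Literal("\u0000"), "\u0000")  // the migrated test line
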
[info] Compiling 131 Scala sources and 16 Java sources to /Users/sim/dev/spx/spark/sql/core/target/scala-2.11/test-classes...
[warn] /Users/sim/dev/spx/spark/sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/ColumnarTestUtils.scala:52: unreachable code
[warn] case STRING => UTF8String.fromString(Random.nextString(Random.nextInt(32)))
[warn] ^
[warn] one warning found
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128M; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=1g; support was removed in 8.0
[info] FilterNodeSuite:
[info] - empty (684 milliseconds)
[info] - basic (14 milliseconds)
[info] SemiJoinSuite:
02:43:33.912 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
02:43:33.978 WARN org.apache.spark.util.Utils: Your hostname, MacSim.home resolves to a loopback address: 127.0.0.1; using 192.168.1.4 instead (on interface en0)
02:43:33.979 WARN org.apache.spark.util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
[info] - basic test using LeftSemiJoinHash (1 second, 584 milliseconds)
[info] - basic test using BroadcastLeftSemiJoinHash (87 milliseconds)
[info] - basic test using LeftSemiJoinBNL (125 milliseconds)
[info] JoinSuite:
[info] - equi-join is hash-join (80 milliseconds)
[info] - join operator selection (2 seconds, 165 milliseconds)
[info] - broadcasted hash join operator selection (370 milliseconds)
[info] - broadcasted hash outer join operator selection (109 milliseconds)
[info] - multiple-key equi-join is hash-join (11 milliseconds)
[info] - inner join where, one match per row (958 milliseconds)
[info] - inner join ON, one match per row (125 milliseconds)
[info] - inner join, where, multiple matches (233 milliseconds)
[info] - inner join, no matches (352 milliseconds)
[info] - big inner join, 4 matches per row (706 milliseconds)
[info] - cartisian product join (137 milliseconds)
[info] - left outer join (1 second, 821 milliseconds)
[info] - right outer join (943 milliseconds)
[info] - full outer join (1 second, 75 milliseconds)
[info] - broadcasted left semi join operator selection (61 milliseconds)
[info] - cross join with broadcast (485 milliseconds)
[info] - left semi join (78 milliseconds)
[info] DataFrameJoinSuite:
[info] - join - join using (142 milliseconds)
[info] - join - join using multiple columns (137 milliseconds)
[info] - join - sorted columns not in join's outputSet (522 milliseconds)
[info] - join - join using multiple columns and specifying join type (914 milliseconds)
[info] - join - join using self join (93 milliseconds)
[info] - join - self join (183 milliseconds)
[info] - join - using aliases after self join (256 milliseconds)
02:43:49.855 WARN org.apache.spark.sql.Column: Constructing trivially true equals predicate, 'key#667 = key#667'. Perhaps you need to use aliases.
02:43:49.882 WARN org.apache.spark.sql.Column: Constructing trivially true equals predicate, 'key#667 = key#667'. Perhaps you need to use aliases.
02:43:49.919 WARN org.apache.spark.sql.Column: Constructing trivially true equals predicate, 'key#667 = key#667'. Perhaps you need to use aliases.
02:43:49.947 WARN org.apache.spark.sql.Column: Constructing trivially true equals predicate, 'key#667 = key#667'. Perhaps you need to use aliases.
02:43:49.993 WARN org.apache.spark.sql.Column: Constructing trivially true equals predicate, 'key#667 = key#667'. Perhaps you need to use aliases.
02:43:50.084 WARN org.apache.spark.sql.Column: Constructing trivially true equals predicate, 'key#667 = key#667'. Perhaps you need to use aliases.
02:43:50.144 WARN org.apache.spark.sql.Column: Constructing trivially true equals predicate, 'key#667 = key#667'. Perhaps you need to use aliases.
02:43:50.182 WARN org.apache.spark.sql.Column: Constructing trivially true equals predicate, 'key#667 = key#667'. Perhaps you need to use aliases.
[info] - [SPARK-6231] join - self join auto resolve ambiguity (472 milliseconds)
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[info] - broadcast join hint (1 second, 112 milliseconds)
[info] SQLQuerySuite:
[info] - having clause (112 milliseconds)
[info] - SPARK-8010: promote numeric to string (63 milliseconds)
[info] - show functions (27 milliseconds)
[info] - describe functions (9 milliseconds)
[info] - SPARK-6743: no columns from cache (152 milliseconds)
[info] - self join with aliases (128 milliseconds)
[info] - support table.star (87 milliseconds)
[info] - self join with alias in agg (251 milliseconds)
[info] - SPARK-8668 expr function (138 milliseconds)
[info] - SPARK-4625 support SORT BY in SimpleSQLParser & DSL (33 milliseconds)
[info] - SPARK-7158 collect and take return different results (72 milliseconds)
[info] - grouping on nested fields (180 milliseconds)
[info] - SPARK-6201 IN type conversion (61 milliseconds)
[info] - SPARK-11226 Skip empty line in json file (65 milliseconds)
[info] - SPARK-8828 sum should return null if all input values are null (61 milliseconds)
[info] - aggregation with codegen (1 second, 954 milliseconds)
[info] - Add Parser of SQL COALESCE() (79 milliseconds)
[info] - SPARK-3176 Added Parser of SQL LAST() (49 milliseconds)
[info] - SPARK-2041 column name equals tablename (25 milliseconds)
[info] - SQRT (33 milliseconds)
[info] - SQRT with automatic string casts (37 milliseconds)
[info] - SPARK-2407 Added Parser of SQL SUBSTR() (117 milliseconds)
[info] - SPARK-3173 Timestamp support in the parser (233 milliseconds)
[info] - index into array (69 milliseconds)
[info] - left semi greater than predicate (63 milliseconds)
[info] - left semi greater than predicate and equal operator (154 milliseconds)
[info] - index into array of arrays (65 milliseconds)
[info] - agg (65 milliseconds)
[info] - literal in agg grouping expressions (351 milliseconds)
[info] - aggregates with nulls (203 milliseconds)
[info] - select * (41 milliseconds)
[info] - simple select (29 milliseconds)
[info] - external sorting (725 milliseconds)
[info] - limit (110 milliseconds)
[info] - CTE feature (132 milliseconds)
[info] - Allow only a single WITH clause per query (6 milliseconds)
[info] - date row *** FAILED *** (35 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Limit 1
[info] +- 'Project [unresolvedalias(cast(2015-01-28 as date),None)]
[info] +- 'UnresolvedRelation `testData`, None
[info]
[info] == Analyzed Logical Plan ==
[info] _c0: date
[info] Limit 1
[info] +- Project [cast(2015-01-28 as date) AS _c0#1524]
[info] +- Subquery testData
[info] +- LogicalRDD [key#716,value#717], MapPartitionsRDD[3] at beforeAll at BeforeAndAfterAll.scala:187
[info]
[info] == Optimized Logical Plan ==
[info] Limit 1
[info] +- Project [16463 AS _c0#1524]
[info] +- LogicalRDD [key#716,value#717], MapPartitionsRDD[3] at beforeAll at BeforeAndAfterAll.scala:187
[info]
[info] == Physical Plan ==
[info] Limit 1
[info] +- WholeStageCodegen
[info] : +- Project [16463 AS _c0#1524]
[info] : +- INPUT
[info] +- Scan ExistingRDD[key#716,value#717]
[info] == Results ==
[info]
[info] == Results ==
[info] !== Correct Answer - 1 == == Spark Answer - 1 ==
[info] ![2015-01-28] [2015-01-27] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:149)
[info] at org.apache.spark.sql.SQLQuerySuite$$anonfun$37.apply$mcV$sp(SQLQuerySuite.scala:584)
[info] at org.apache.spark.sql.SQLQuerySuite$$anonfun$37.apply(SQLQuerySuite.scala:584)
[info] at org.apache.spark.sql.SQLQuerySuite$$anonfun$37.apply(SQLQuerySuite.scala:584)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.SQLQuerySuite.org$scalatest$BeforeAndAfterAll$$super$run(SQLQuerySuite.scala:36)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.SQLQuerySuite.run(SQLQuerySuite.scala:36)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
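A hedged note on this failure: the optimized plan folds the literal to 16463, Catalyst's days-since-epoch encoding of 2015-01-28, yet the answer comes back one day earlier. That off-by-one is the classic signature of an epoch-day to java.sql.Date conversion routed through the JVM default time zone (this host logs at 02:44 local, west of UTC). A self-contained illustration of the mechanism, independent of Spark's actual conversion code:

    import java.util.TimeZone

    val epochDays = 16463L                          // the folded literal above
    val millis = epochDays * 24L * 60 * 60 * 1000   // midnight UTC, 2015-01-28

    TimeZone.setDefault(TimeZone.getTimeZone("UTC"))
    println(new java.sql.Date(millis))  // 2015-01-28

    TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"))
    println(new java.sql.Date(millis))  // 2015-01-27 -- the shifted answer seen above
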
[info] - from follow multiple brackets (128 milliseconds)
[info] - average (45 milliseconds)
[info] - average overflow (85 milliseconds)
[info] - count (52 milliseconds)
[info] - count distinct (64 milliseconds)
[info] - approximate count distinct (116 milliseconds)
[info] - approximate count distinct with user provided standard deviation (111 milliseconds)
[info] - null count (269 milliseconds)
[info] - count of empty table (51 milliseconds)
[info] - inner join where, one match per row (103 milliseconds)
[info] - inner join ON, one match per row (63 milliseconds)
[info] - inner join, where, multiple matches (107 milliseconds)
[info] - inner join, no matches (91 milliseconds)
[info] - big inner join, 4 matches per row (243 milliseconds)
[info] - cartesian product join (26 milliseconds)
[info] - left outer join (68 milliseconds)
[info] - right outer join (95 milliseconds)
[info] - full outer join (121 milliseconds)
[info] - SPARK-11111 null-safe join should not use cartesian product (98 milliseconds)
[info] - SPARK-3349 partitioning after limit (233 milliseconds)
[info] - mixed-case keywords (99 milliseconds)
[info] - select with table name as qualifier (21 milliseconds)
[info] - inner join ON with table name as qualifier (57 milliseconds)
[info] - qualified select with inner join ON with table name as qualifier (88 milliseconds)
[info] - system function upper() (47 milliseconds)
[info] - system function lower() (45 milliseconds)
[info] - UNION (238 milliseconds)
[info] - UNION with column mismatches (217 milliseconds)
[info] - EXCEPT (167 milliseconds)
[info] - INTERSECT (187 milliseconds)
[info] - SET commands semantics using sql() (20 milliseconds)
02:44:01.055 WARN org.apache.spark.sql.execution.SetCommand: Property mapred.reduce.tasks is deprecated, automatically converted to spark.sql.shuffle.partitions instead.
02:44:01.056 WARN org.apache.spark.sql.execution.SetCommand: Property mapred.reduce.tasks is deprecated, automatically converted to spark.sql.shuffle.partitions instead.
02:44:01.057 WARN org.apache.spark.sql.execution.SetCommand: Property mapred.reduce.tasks is deprecated, automatically converted to spark.sql.shuffle.partitions instead.
[info] - SET commands with illegal or inappropriate argument (4 milliseconds)
[info] - apply schema (143 milliseconds)
[info] - SPARK-3423 BETWEEN (89 milliseconds)
[info] - cast boolean to string (23 milliseconds)
[info] - metadata is propagated correctly (15 milliseconds)
[info] - SPARK-3371 Renaming a function expression with group by gives error (78 milliseconds)
[info] - SPARK-3813 CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END (73 milliseconds)
[info] - SPARK-3813 CASE WHEN a THEN b [WHEN c THEN d]* [ELSE e] END (52 milliseconds)
[info] - throw errors for non-aggregate attributes with aggregation (11 milliseconds)
[info] - Test to check we can use Long.MinValue (77 milliseconds)
[info] - Floating point number format (68 milliseconds)
[info] - Auto cast integer type (60 milliseconds)
[info] - Test to check we can apply sign to expression (423 milliseconds)
[info] - Multiple join (607 milliseconds)
[info] - SPARK-3483 Special chars in column names (15 milliseconds)
[info] - SPARK-3814 Support Bitwise & operator (32 milliseconds)
[info] - SPARK-3814 Support Bitwise | operator (25 milliseconds)
[info] - SPARK-3814 Support Bitwise ^ operator (26 milliseconds)
[info] - SPARK-3814 Support Bitwise ~ operator (27 milliseconds)
[info] - SPARK-4120 Join of multiple tables does not work in SparkSQL (123 milliseconds)
[info] - SPARK-4154 Query does not work if it has 'not between' in Spark SQL and HQL (61 milliseconds)
[info] - SPARK-4207 Query which has syntax like 'not like' is not working in Spark SQL (60 milliseconds)
[info] - SPARK-4322 Grouping field with struct field as sub expression (129 milliseconds)
[info] - SPARK-4432 Fix attribute reference resolution error when using ORDER BY (70 milliseconds)
[info] - oder by asc by default when not specify ascending and descending (69 milliseconds)
[info] - Supporting relational operator '<=>' in Spark SQL (104 milliseconds)
[info] - Multi-column COUNT(DISTINCT ...) (68 milliseconds)
[info] - SPARK-4699 case sensitivity SQL query (26 milliseconds)
[info] - SPARK-6145: ORDER BY test for nested fields (341 milliseconds)
[info] - SPARK-6145: special cases (130 milliseconds)
[info] - SPARK-6898: complete support for special chars in column names (46 milliseconds)
[info] - SPARK-6583 order by aggregated function (793 milliseconds)
[info] - SPARK-7952: fix the equality check between boolean and numeric types (66 milliseconds)
[info] - SPARK-7067: order by queries for complex ExtractValue chain (73 milliseconds)
[info] - SPARK-8782: ORDER BY NULL (51 milliseconds)
[info] - SPARK-8837: use keyword in column name (45 milliseconds)
[info] - SPARK-8753: add interval type (27 milliseconds)
[info] - SPARK-8945: add and subtract expressions for interval type (60 milliseconds)
[info] - aggregation with codegen updates peak execution memory (60 milliseconds)
[info] - decimal precision with multiply/division (178 milliseconds)
[info] - SPARK-10215 Div of Decimal returns null (159 milliseconds)
[info] - precision smaller than scale (109 milliseconds)
[info] - external sorting updates peak execution memory (54 milliseconds)
[info] - SPARK-9511: error with table starting with number (50 milliseconds)
[info] - specifying database name for a temporary table is not allowed (216 milliseconds)
[info] - SPARK-10130 type coercion for IF should have children resolved first (29 milliseconds)
[info] - SPARK-10389: order by non-attribute grouping expression on Aggregate (212 milliseconds)
[info] - run sql directly on files (410 milliseconds)
[info] - SortMergeJoin returns wrong results when using UnsafeRows (389 milliseconds)
[info] - SPARK-11032: resolve having correctly (113 milliseconds)
[info] - SPARK-11303: filter should not be pushed down into sample (318 milliseconds)
[info] - Struct Star Expansion (1 second, 363 milliseconds)
[info] - Struct Star Expansion - Name conflict (70 milliseconds)
[info] - Common subexpression elimination (857 milliseconds)
[info] - SPARK-10707: nullability should be correctly propagated through set operations (1) (75 milliseconds)
[info] - SPARK-10707: nullability should be correctly propagated through set operations (2) (64 milliseconds)
[info] - rollup (278 milliseconds)
[info] - cube (157 milliseconds)
[info] - SPARK-13056: Null in map value causes NPE (81 milliseconds)
[info] - hash function (30 milliseconds)
[info] - natural join (453 milliseconds)
[info] RowSuite:
[info] - create row (1 millisecond)
[info] - SpecificMutableRow.update with null (2 milliseconds)
[info] - serialize w/ kryo (26 milliseconds)
[info] - get values by field name on Row created via .toDF (9 milliseconds)
[info] - float NaN == NaN (0 milliseconds)
[info] - double NaN == NaN (0 milliseconds)
[info] - equals and hashCode (1 millisecond)
[info] PartitionedWriteSuite:
[info] - write many partitions (1 second, 386 milliseconds)
[info] - write many partitions with repeats (1 second, 622 milliseconds)
[info] - partitioned columns should appear at the end of schema (88 milliseconds)
[info] HashJoinNodeSuite:
[info] - BuildLeft: empty (196 milliseconds)
[info] - BuildLeft: no matches (231 milliseconds)
[info] - BuildLeft: partial matches (7 milliseconds)
[info] - BuildLeft: full matches (8 milliseconds)
[info] - BuildRight: empty (6 milliseconds)
[info] - BuildRight: no matches (251 milliseconds)
[info] - BuildRight: partial matches (7 milliseconds)
[info] - BuildRight: full matches (6 milliseconds)
[info] UnsafeFixedWidthAggregationMapSuite:
[info] - supported schemas (1 millisecond)
[info] - empty map (1 millisecond)
[info] - updating values for a single key (13 milliseconds)
[info] - inserting large random keys (60 milliseconds)
[info] - test external sorting (36 milliseconds)
[info] - test external sorting with an empty map (20 milliseconds)
[info] - test external sorting with empty records (32 milliseconds)
[info] - convert to external sorter under memory pressure (SPARK-10474) (9 milliseconds)
[info] DataFrameComplexTypeSuite:
[info] - UDF on struct (31 milliseconds)
[info] - UDF on named_struct (27 milliseconds)
[info] - UDF on array (37 milliseconds)
[info] - SPARK-12477 accessing null element in array field (40 milliseconds)
[info] DDLTestSuite:
[info] - describe ddlPeople (18 milliseconds)
[info] - SPARK-7686 DescribeCommand should have correct physical plan output attributes (3 milliseconds)
[info] ExpandNodeSuite:
[info] - empty (10 milliseconds)
[info] - basic (3 milliseconds)
[info] JDBCSuite:
[info] - SELECT * (36 milliseconds)
[info] - SELECT * WHERE (simple predicates) (239 milliseconds)
[info] - SELECT * WHERE (quoted strings) (15 milliseconds)
[info] - SELECT first field (12 milliseconds)
[info] - SELECT first field when fetchSize is two (15 milliseconds)
[info] - SELECT second field (16 milliseconds)
[info] - SELECT second field when fetchSize is two (10 milliseconds)
[info] - SELECT * partitioned (14 milliseconds)
[info] - SELECT WHERE (simple predicates) partitioned (43 milliseconds)
[info] - SELECT second field partitioned (11 milliseconds)
[info] - Register JDBC query with renamed fields (8 milliseconds)
[info] - Basic API (9 milliseconds)
[info] - Basic API with FetchSize (8 milliseconds)
[info] - Partitioning via JDBCPartitioningInfo API (9 milliseconds)
[info] - Partitioning via list-of-where-clauses API (15 milliseconds)
[info] - H2 integral types (18 milliseconds)
[info] - H2 null entries (14 milliseconds)
[info] - H2 string types (12 milliseconds)
[info] - H2 time types *** FAILED *** (31 milliseconds)
[info] 1995 did not equal 1996 (JDBCSuite.scala:347)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:466)
[info] at org.apache.spark.sql.jdbc.JDBCSuite$$anonfun$23.apply$mcV$sp(JDBCSuite.scala:347)
[info] at org.apache.spark.sql.jdbc.JDBCSuite$$anonfun$23.apply(JDBCSuite.scala:339)
[info] at org.apache.spark.sql.jdbc.JDBCSuite$$anonfun$23.apply(JDBCSuite.scala:339)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.org$scalatest$BeforeAndAfter$$super$runTest(JDBCSuite.scala:38)
[info] at org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:200)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.runTest(JDBCSuite.scala:38)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.org$scalatest$BeforeAndAfter$$super$run(JDBCSuite.scala:38)
[info] at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:241)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.org$scalatest$BeforeAndAfterAll$$super$run(JDBCSuite.scala:38)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.run(JDBCSuite.scala:38)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
[info] - test DATE types *** FAILED *** (40 milliseconds)
[info] 1995-12-31 did not equal 1996-01-01 (JDBCSuite.scala:365)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:466)
[info] at org.apache.spark.sql.jdbc.JDBCSuite$$anonfun$24.apply$mcV$sp(JDBCSuite.scala:365)
[info] at org.apache.spark.sql.jdbc.JDBCSuite$$anonfun$24.apply(JDBCSuite.scala:360)
[info] at org.apache.spark.sql.jdbc.JDBCSuite$$anonfun$24.apply(JDBCSuite.scala:360)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.org$scalatest$BeforeAndAfter$$super$runTest(JDBCSuite.scala:38)
[info] at org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:200)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.runTest(JDBCSuite.scala:38)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.org$scalatest$BeforeAndAfter$$super$run(JDBCSuite.scala:38)
[info] at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:241)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.org$scalatest$BeforeAndAfterAll$$super$run(JDBCSuite.scala:38)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.run(JDBCSuite.scala:38)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
[info] - test DATE types in cache *** FAILED *** (27 milliseconds)
[info] 1995-12-31 did not equal 1996-01-01 (JDBCSuite.scala:375)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:466)
[info] at org.apache.spark.sql.jdbc.JDBCSuite$$anonfun$25.apply$mcV$sp(JDBCSuite.scala:375)
[info] at org.apache.spark.sql.jdbc.JDBCSuite$$anonfun$25.apply(JDBCSuite.scala:370)
[info] at org.apache.spark.sql.jdbc.JDBCSuite$$anonfun$25.apply(JDBCSuite.scala:370)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.org$scalatest$BeforeAndAfter$$super$runTest(JDBCSuite.scala:38)
[info] at org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:200)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.runTest(JDBCSuite.scala:38)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.org$scalatest$BeforeAndAfter$$super$run(JDBCSuite.scala:38)
[info] at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:241)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.org$scalatest$BeforeAndAfterAll$$super$run(JDBCSuite.scala:38)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.run(JDBCSuite.scala:38)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
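The three H2 date/time failures above show the same one-unit shift (1995-12-31 vs 1996-01-01, and a 1995 vs 1996 year component) as the SQLQuerySuite date failure, again consistent with time-zone-sensitive conversion rather than wrong data. At the plain JDBC level, the standard way to pin the zone is the Calendar overload of the getters; a sketch against a hypothetical result set (the column label "B" is illustrative):

    import java.sql.ResultSet
    import java.util.{Calendar, TimeZone}

    // Assumes rs is positioned on a row with a DATE column labeled "B".
    def readDateUtc(rs: ResultSet): java.sql.Date = {
      val utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"))
      // getDate(label, cal) interprets the stored value in the given zone
      // instead of the JVM default, avoiding midnight boundary shifts.
      rs.getDate("B", utc)
    }
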
[info] - test types for null value (11 milliseconds)
[info] - H2 floating-point types (45 milliseconds)
[info] - SQL query as table name (17 milliseconds)
[info] - Pass extra properties via OPTIONS (3 milliseconds)
[info] - Remap types via JdbcDialects (11 milliseconds)
[info] - Default jdbc dialect registration (1 millisecond)
[info] - quote column names by jdbc dialect (2 milliseconds)
[info] - compile filters (7 milliseconds)
[info] - Dialect unregister (0 milliseconds)
[info] - Aggregated dialects (3 milliseconds)
[info] - DB2Dialect type mapping (2 milliseconds)
[info] - PostgresDialect type mapping (2 milliseconds)
[info] - DerbyDialect jdbc type mapping (1 millisecond)
[info] - table exists query by jdbc dialect (1 millisecond)
[info] - Test DataFrame.where for Date and Timestamp *** FAILED *** (16 milliseconds)
[info] 1995-12-31 did not equal 1996-01-01 (JDBCSuite.scala:554)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:466)
[info] at org.apache.spark.sql.jdbc.JDBCSuite$$anonfun$41.apply$mcV$sp(JDBCSuite.scala:554)
[info] at org.apache.spark.sql.jdbc.JDBCSuite$$anonfun$41.apply(JDBCSuite.scala:548)
[info] at org.apache.spark.sql.jdbc.JDBCSuite$$anonfun$41.apply(JDBCSuite.scala:548)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.org$scalatest$BeforeAndAfter$$super$runTest(JDBCSuite.scala:38)
[info] at org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:200)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.runTest(JDBCSuite.scala:38)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.org$scalatest$BeforeAndAfter$$super$run(JDBCSuite.scala:38)
[info] at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:241)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.org$scalatest$BeforeAndAfterAll$$super$run(JDBCSuite.scala:38)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.jdbc.JDBCSuite.run(JDBCSuite.scala:38)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
[info] - test credentials in the properties are not in plan output (8 milliseconds)
[info] - test credentials in the connection url are not in the plan output (4 milliseconds)
[info] NullableColumnBuilderSuite:
[info] - BOOLEAN column builder: empty column (2 milliseconds)
[info] - BOOLEAN column builder: buffer size auto growth (4 milliseconds)
[info] - BOOLEAN column builder: null values (2 milliseconds)
[info] - BYTE column builder: empty column (0 milliseconds)
[info] - BYTE column builder: buffer size auto growth (0 milliseconds)
[info] - BYTE column builder: null values (1 millisecond)
[info] - SHORT column builder: empty column (0 milliseconds)
[info] - SHORT column builder: buffer size auto growth (0 milliseconds)
[info] - SHORT column builder: null values (0 milliseconds)
[info] - INT column builder: empty column (1 millisecond)
[info] - INT column builder: buffer size auto growth (0 milliseconds)
[info] - INT column builder: null values (0 milliseconds)
[info] - LONG column builder: empty column (0 milliseconds)
[info] - LONG column builder: buffer size auto growth (1 millisecond)
[info] - LONG column builder: null values (0 milliseconds)
[info] - FLOAT column builder: empty column (1 millisecond)
[info] - FLOAT column builder: buffer size auto growth (0 milliseconds)
[info] - FLOAT column builder: null values (1 millisecond)
[info] - DOUBLE column builder: empty column (0 milliseconds)
[info] - DOUBLE column builder: buffer size auto growth (1 millisecond)
[info] - DOUBLE column builder: null values (0 milliseconds)
[info] - STRING column builder: empty column (0 milliseconds)
[info] - STRING column builder: buffer size auto growth (1 millisecond)
[info] - STRING column builder: null values (1 millisecond)
[info] - BINARY column builder: empty column (0 milliseconds)
[info] - BINARY column builder: buffer size auto growth (1 millisecond)
[info] - BINARY column builder: null values (1 millisecond)
[info] - COMPACT_DECIMAL column builder: empty column (0 milliseconds)
[info] - COMPACT_DECIMAL column builder: buffer size auto growth (0 milliseconds)
[info] - COMPACT_DECIMAL column builder: null values (1 millisecond)
[info] - LARGE_DECIMAL column builder: empty column (1 millisecond)
[info] - LARGE_DECIMAL column builder: buffer size auto growth (0 milliseconds)
[info] - LARGE_DECIMAL column builder: null values (1 millisecond)
[info] - STRUCT column builder: empty column (1 millisecond)
[info] - STRUCT column builder: buffer size auto growth (0 milliseconds)
[info] - STRUCT column builder: null values (0 milliseconds)
[info] - ARRAY column builder: empty column (1 millisecond)
[info] - ARRAY column builder: buffer size auto growth (1 millisecond)
[info] - ARRAY column builder: null values (1 millisecond)
[info] - MAP column builder: empty column (0 milliseconds)
[info] - MAP column builder: buffer size auto growth (2 milliseconds)
[info] - MAP column builder: null values (1 millisecond)
[info] SQLListenerSuite:
[info] - basic (37 milliseconds)
[info] - onExecutionEnd happens before onJobEnd(JobSucceeded) (4 milliseconds)
[info] - onExecutionEnd happens before multiple onJobEnd(JobSucceeded)s (3 milliseconds)
[info] - onExecutionEnd happens before onJobEnd(JobFailed) (4 milliseconds)
[info] - SPARK-11126: no memory leak when running non SQL jobs (58 milliseconds)
[info] - SPARK-13055: history listener only tracks SQL metrics (52 milliseconds)
[info] SQLListenerMemoryLeakSuite:
[info] - no memory leak (797 milliseconds)
[info] DatasetCacheSuite:
[info] - persist and unpersist (125 milliseconds)
[info] - persist and then rebind right encoder when join 2 datasets (211 milliseconds)
[info] - persist and then groupBy columns asKey, map (174 milliseconds)
[info] ExtraStrategiesSuite:
[info] - insert an extraStrategy (58 milliseconds)
[info] StringFunctionsSuite:
[info] - string concat (83 milliseconds)
[info] - string concat_ws (97 milliseconds)
[info] - string Levenshtein distance (54 milliseconds)
[info] - string regex_replace / regex_extract (102 milliseconds)
[info] - string ascii function (41 milliseconds)
[info] - string base64/unbase64 function (53 milliseconds)
[info] - string / binary substring function (48 milliseconds)
[info] - string encode/decode function (51 milliseconds)
[info] - string translate (35 milliseconds)
[info] - string trim functions (45 milliseconds)
[info] - string formatString function (48 milliseconds)
[info] - soundex function (40 milliseconds)
[info] - string instr function (39 milliseconds)
[info] - string substring_index function (40 milliseconds)
[info] - string locate function (58 milliseconds)
[info] - string padding functions (73 milliseconds)
[info] - string repeat function (42 milliseconds)
[info] - string reverse function (57 milliseconds)
[info] - string space function (18 milliseconds)
[info] - string split function (39 milliseconds)
[info] - string / binary length function (57 milliseconds)
[info] - initcap function (44 milliseconds)
[info] - number format function (233 milliseconds)
[info] ColumnExpressionSuite:
[info] - column names with space (69 milliseconds)
[info] - column names with dot (148 milliseconds)
[info] - alias (4 milliseconds)
[info] - as propagates metadata (0 milliseconds)
[info] - single explode (51 milliseconds)
[info] - explode and other columns (83 milliseconds)
[info] - aliased explode (98 milliseconds)
[info] - explode on map (52 milliseconds)
[info] - explode on map with aliases (38 milliseconds)
02:44:21.122 WARN org.apache.spark.sql.Column: Constructing trivially true equals predicate, 'i#7951 = i#7951'. Perhaps you need to use aliases.
02:44:21.162 WARN org.apache.spark.sql.Column: Constructing trivially true equals predicate, 'i#7951 = i#7951'. Perhaps you need to use aliases.
[info] - self join explode (105 milliseconds)
[info] - collect on column produced by a binary operator (40 milliseconds)
[info] - star (29 milliseconds)
[info] - star qualified by data frame object (43 milliseconds)
[info] - star qualified by table name (27 milliseconds)
[info] - + (59 milliseconds)
[info] - - (56 milliseconds)
[info] - * (67 milliseconds)
[info] - / (58 milliseconds)
[info] - % (56 milliseconds)
[info] - unary - (27 milliseconds)
[info] - unary ! (61 milliseconds)
[info] - isNull (47 milliseconds)
[info] - isNotNull (78 milliseconds)
[info] - isNaN (71 milliseconds)
[info] - nanvl (113 milliseconds)
[info] - === (61 milliseconds)
[info] - <=> (51 milliseconds)
[info] - !== (113 milliseconds)
[info] - > (50 milliseconds)
[info] - >= (50 milliseconds)
[info] - < (53 milliseconds)
[info] - <= (52 milliseconds)
[info] - between (60 milliseconds)
[info] - in (180 milliseconds)
[info] - && (60 milliseconds)
[info] - || (64 milliseconds)
[info] - SPARK-7321 when conditional statements (57 milliseconds)
[info] - sqrt (170 milliseconds)
[info] - upper (93 milliseconds)
[info] - lower (83 milliseconds)
[info] - monotonicallyIncreasingId (65 milliseconds)
[info] - spark_partition_id (31 milliseconds)
[info] - input_file_name (128 milliseconds)
[info] - columns can be compared (1 millisecond)
[info] - alias with metadata (1 millisecond)
[info] - rand (45 milliseconds)
[info] - randn (18 milliseconds)
[info] - bitwiseAND (57 milliseconds)
[info] - bitwiseOR (58 milliseconds)
[info] - bitwiseXOR (85 milliseconds)
[info] ColumnarBatchSuite:
[info] - Null Apis (8 milliseconds)
[info] - Byte Apis (3 milliseconds)
[info] - Int Apis (5 milliseconds)
[info] - Long Apis (6 milliseconds)
[info] - Double APIs (5 milliseconds)
[info] - String APIs (6 milliseconds)
[info] - Int Array (2 milliseconds)
[info] - Struct Column (2 milliseconds)
[info] - ColumnarBatch basic (5 milliseconds)
[info] - Convert rows (4 milliseconds)
[info] - Random flat schema (3 seconds, 736 milliseconds)
[info] - Random nested schema (14 seconds, 249 milliseconds)
[info] OuterJoinSuite:
[info] - basic left outer join using BroadcastHashOuterJoin (55 milliseconds)
[info] - basic left outer join using SortMergeOuterJoin (40 milliseconds)
[info] - basic right outer join using BroadcastHashOuterJoin (19 milliseconds)
[info] - basic right outer join using SortMergeOuterJoin (29 milliseconds)
[info] - basic full outer join using SortMergeOuterJoin (27 milliseconds)
[info] - left outer join with both inputs empty using BroadcastHashOuterJoin (22 milliseconds)
[info] - left outer join with both inputs empty using SortMergeOuterJoin (24 milliseconds)
[info] - right outer join with both inputs empty using BroadcastHashOuterJoin (25 milliseconds)
[info] - right outer join with both inputs empty using SortMergeOuterJoin (25 milliseconds)
[info] - full outer join with both inputs empty using SortMergeOuterJoin (22 milliseconds)
[info] ParquetPartitionDiscoverySuite:
[info] - column type inference (3 milliseconds)
[info] - parse invalid partitioned directories (10 milliseconds)
[info] - parse partition (2 milliseconds)
[info] - parse partitions (3 milliseconds)
[info] - parse partitions with type inference disabled (2 milliseconds)
[info] - read partitioned table - normal case (475 milliseconds)
[info] - read partitioned table - partition key included in Parquet file (475 milliseconds)
[info] - read partitioned table - with nulls (448 milliseconds)
[info] - read partitioned table - with nulls and partition keys are included in Parquet file (389 milliseconds)
[info] - read partitioned table - merging compatible schemas (186 milliseconds)
[info] - SPARK-7749 Non-partitioned table should have empty partition spec (75 milliseconds)
[info] - SPARK-7847: Dynamic partition directory path escaping and unescaping (230 milliseconds)
[info] - Various partition value types *** FAILED *** (322 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [unresolvedalias(cast('p_0 as tinyint),None),unresolvedalias(cast('p_1 as smallint),None),unresolvedalias(cast('p_2 as int),None),unresolvedalias(cast('p_3 as bigint),None),unresolvedalias(cast('p_4 as float),None),unresolvedalias(cast('p_5 as double),None),unresolvedalias(cast('p_6 as decimal(10,5)),None),unresolvedalias(cast('p_7 as decimal(38,18)),None),unresolvedalias(cast('p_8 as date),None),unresolvedalias(cast('p_9 as timestamp),None),unresolvedalias(cast('p_10 as string),None),unresolvedalias(cast('i as string),None)]
[info] +- Relation[i#8492,p_0#8493,p_1#8494,p_2#8495,p_3#8496L,p_4#8497,p_5#8498,p_6#8499,p_7#8500,p_8#8501,p_9#8502,p_10#8503] ParquetRelation
[info]
[info] == Analyzed Logical Plan ==
[info] p_0: tinyint, p_1: smallint, p_2: int, p_3: bigint, p_4: float, p_5: double, p_6: decimal(10,5), p_7: decimal(38,18), p_8: date, p_9: timestamp, p_10: string, i: string
[info] Project [cast(p_0#8493 as tinyint) AS p_0#8504,cast(p_1#8494 as smallint) AS p_1#8505,cast(p_2#8495 as int) AS p_2#8506,cast(p_3#8496L as bigint) AS p_3#8507L,cast(p_4#8497 as float) AS p_4#8508,cast(p_5#8498 as double) AS p_5#8509,cast(p_6#8499 as decimal(10,5)) AS p_6#8510,cast(p_7#8500 as decimal(38,18)) AS p_7#8511,cast(p_8#8501 as date) AS p_8#8512,cast(p_9#8502 as timestamp) AS p_9#8513,cast(p_10#8503 as string) AS p_10#8514,cast(i#8492 as string) AS i#8515]
[info] +- Relation[i#8492,p_0#8493,p_1#8494,p_2#8495,p_3#8496L,p_4#8497,p_5#8498,p_6#8499,p_7#8500,p_8#8501,p_9#8502,p_10#8503] ParquetRelation
[info]
[info] == Optimized Logical Plan ==
[info] Project [cast(p_0#8493 as tinyint) AS p_0#8504,cast(p_1#8494 as smallint) AS p_1#8505,p_2#8495 AS p_2#8506,p_3#8496L AS p_3#8507L,cast(p_4#8497 as float) AS p_4#8508,p_5#8498 AS p_5#8509,cast(p_6#8499 as decimal(10,5)) AS p_6#8510,cast(p_7#8500 as decimal(38,18)) AS p_7#8511,cast(p_8#8501 as date) AS p_8#8512,cast(p_9#8502 as timestamp) AS p_9#8513,p_10#8503 AS p_10#8514,i#8492 AS i#8515]
[info] +- Relation[i#8492,p_0#8493,p_1#8494,p_2#8495,p_3#8496L,p_4#8497,p_5#8498,p_6#8499,p_7#8500,p_8#8501,p_9#8502,p_10#8503] ParquetRelation
[info]
[info] == Physical Plan ==
[info] WholeStageCodegen
[info] : +- Project [cast(p_0#8493 as tinyint) AS p_0#8504,cast(p_1#8494 as smallint) AS p_1#8505,p_2#8495 AS p_2#8506,p_3#8496L AS p_3#8507L,cast(p_4#8497 as float) AS p_4#8508,p_5#8498 AS p_5#8509,cast(p_6#8499 as decimal(10,5)) AS p_6#8510,cast(p_7#8500 as decimal(38,18)) AS p_7#8511,cast(p_8#8501 as date) AS p_8#8512,cast(p_9#8502 as timestamp) AS p_9#8513,p_10#8503 AS p_10#8514,i#8492 AS i#8515]
[info] : +- INPUT
[info] +- Scan ParquetRelation[i#8492,p_8#8501,p_0#8493,p_3#8496L,p_5#8498,p_6#8499,p_10#8503,p_1#8494,p_4#8497,p_7#8500,p_2#8495,p_9#8502] InputPaths: file:/Users/sim/dev/spx/spark/target/tmp/spark-0e59eda3-6acd-488e-87f8-79e35fde663e
[info] == Results ==
[info] !== Correct Answer - 1 == == Spark Answer - 1 ==
[info] ![100,-25536,2147483647,9223372036854775807,1.5,4.5,2.12500,2.125,2015-05-23,1969-12-31 16:00:00.0,This is a string, /[]?=:,This is not a partition column] [100,-25536,2147483647,9223372036854775807,1.5,4.5,2.12500,2.125000000000000000,2015-05-22,1969-12-31 16:00:00.0,This is a string, /[]?=:,This is not a partition column] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:149)
[info] at org.apache.spark.sql.execution.datasources.parquet.ParquetPartitionDiscoverySuite$$anonfun$13$$anonfun$apply$mcV$sp$42.apply(ParquetPartitionDiscoverySuite.scala:623)
[info] at org.apache.spark.sql.execution.datasources.parquet.ParquetPartitionDiscoverySuite$$anonfun$13$$anonfun$apply$mcV$sp$42.apply(ParquetPartitionDiscoverySuite.scala:620)
[info] at org.apache.spark.sql.test.SQLTestUtils$class.withTempPath(SQLTestUtils.scala:125)
[info] at org.apache.spark.sql.execution.datasources.parquet.ParquetPartitionDiscoverySuite.withTempPath(ParquetPartitionDiscoverySuite.scala:43)
[info] at org.apache.spark.sql.execution.datasources.parquet.ParquetPartitionDiscoverySuite$$anonfun$13.apply$mcV$sp(ParquetPartitionDiscoverySuite.scala:620)
[info] at org.apache.spark.sql.execution.datasources.parquet.ParquetPartitionDiscoverySuite$$anonfun$13.apply(ParquetPartitionDiscoverySuite.scala:582)
[info] at org.apache.spark.sql.execution.datasources.parquet.ParquetPartitionDiscoverySuite$$anonfun$13.apply(ParquetPartitionDiscoverySuite.scala:582)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.execution.datasources.parquet.ParquetPartitionDiscoverySuite.org$scalatest$BeforeAndAfterAll$$super$run(ParquetPartitionDiscoverySuite.scala:43)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.execution.datasources.parquet.ParquetPartitionDiscoverySuite.run(ParquetPartitionDiscoverySuite.scala:43)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
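
Two things differ in the diff above: the decimal(38,18) cell differs only in rendered scale (2.125 vs 2.125000000000000000), while the date cell is off by a full calendar day (2015-05-23 vs 2015-05-22). The date shift looks timezone-dependent: 16578 is the days-since-epoch encoding of 2015-05-23, and java.sql.Date renders the corresponding UTC midnight instant in the JVM default timezone. A minimal sketch, plain JVM with no Spark involved (America/New_York is just an example of a zone west of UTC):

    import java.util.TimeZone

    object DateOffByOne extends App {
      val millis = 16578L * 24 * 60 * 60 * 1000   // 2015-05-23T00:00:00Z

      TimeZone.setDefault(TimeZone.getTimeZone("UTC"))
      println(new java.sql.Date(millis))          // 2015-05-23

      // Any zone west of UTC shows the same instant one calendar day earlier:
      TimeZone.setDefault(TimeZone.getTimeZone("America/New_York"))
      println(new java.sql.Date(millis))          // 2015-05-22
    }
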
[info] - SPARK-8037: Ignores files whose name starts with dot (277 milliseconds)
[info] - SPARK-11678: Partition discovery stops at the root path of the dataset (438 milliseconds)
[info] - use basePath to specify the root dir of a partitioned table. (171 milliseconds)
[info] - listConflictingPartitionColumns (7 milliseconds)
[info] - Parallel partition discovery (444 milliseconds)
[info] ColumnStatsSuite:
[info] - BooleanColumnStats: empty (1 millisecond)
[info] - BooleanColumnStats: non-empty (4 milliseconds)
[info] - ByteColumnStats: empty (0 milliseconds)
[info] - ByteColumnStats: non-empty (1 millisecond)
[info] - ShortColumnStats: empty (0 milliseconds)
[info] - ShortColumnStats: non-empty (1 millisecond)
[info] - IntColumnStats: empty (0 milliseconds)
[info] - IntColumnStats: non-empty (0 milliseconds)
[info] - LongColumnStats: empty (0 milliseconds)
[info] - LongColumnStats: non-empty (0 milliseconds)
[info] - FloatColumnStats: empty (0 milliseconds)
[info] - FloatColumnStats: non-empty (1 millisecond)
[info] - DoubleColumnStats: empty (0 milliseconds)
[info] - DoubleColumnStats: non-empty (0 milliseconds)
[info] - StringColumnStats: empty (0 milliseconds)
[info] - StringColumnStats: non-empty (1 millisecond)
[info] - DecimalColumnStats: empty (1 millisecond)
[info] - DecimalColumnStats: non-empty (3 milliseconds)
[info] MathExpressionsSuite:
[info] - sin (50 milliseconds)
[info] - asin (107 milliseconds)
[info] - sinh (70 milliseconds)
[info] - cos (47 milliseconds)
[info] - acos (41 milliseconds)
[info] - cosh (45 milliseconds)
[info] - tan (39 milliseconds)
[info] - atan (31 milliseconds)
[info] - tanh (33 milliseconds)
[info] - toDegrees (64 milliseconds)
[info] - toRadians (55 milliseconds)
[info] - cbrt (30 milliseconds)
[info] - ceil and ceiling (56 milliseconds)
[info] - conv (171 milliseconds)
[info] - floor (31 milliseconds)
[info] - factorial (36 milliseconds)
[info] - rint (29 milliseconds)
[info] - round (55 milliseconds)
[info] - exp (30 milliseconds)
[info] - expm1 (33 milliseconds)
[info] - signum / sign (52 milliseconds)
[info] - pow / power (176 milliseconds)
[info] - hex (210 milliseconds)
[info] - unhex (88 milliseconds)
[info] - hypot (91 milliseconds)
[info] - atan2 (106 milliseconds)
[info] - log / ln (54 milliseconds)
[info] - log10 (18 milliseconds)
[info] - log1p (28 milliseconds)
[info] - shift left (94 milliseconds)
[info] - shift right (91 milliseconds)
[info] - shift right unsigned (83 milliseconds)
[info] - binary log (48 milliseconds)
[info] - abs (140 milliseconds)
[info] - log2 (44 milliseconds)
[info] - sqrt (57 milliseconds)
[info] - negative (22 milliseconds)
[info] - positive (53 milliseconds)
[info] NullableColumnAccessorSuite:
[info] - Nullable NULL column accessor: empty column (4 milliseconds)
[info] - Nullable NULL column accessor: access null values (5 milliseconds)
[info] - Nullable BOOLEAN column accessor: empty column (0 milliseconds)
[info] - Nullable BOOLEAN column accessor: access null values (4 milliseconds)
[info] - Nullable BYTE column accessor: empty column (0 milliseconds)
[info] - Nullable BYTE column accessor: access null values (4 milliseconds)
[info] - Nullable SHORT column accessor: empty column (0 milliseconds)
[info] - Nullable SHORT column accessor: access null values (4 milliseconds)
[info] - Nullable INT column accessor: empty column (0 milliseconds)
[info] - Nullable INT column accessor: access null values (0 milliseconds)
[info] - Nullable LONG column accessor: empty column (1 millisecond)
[info] - Nullable LONG column accessor: access null values (0 milliseconds)
[info] - Nullable FLOAT column accessor: empty column (0 milliseconds)
[info] - Nullable FLOAT column accessor: access null values (3 milliseconds)
[info] - Nullable DOUBLE column accessor: empty column (1 millisecond)
[info] - Nullable DOUBLE column accessor: access null values (0 milliseconds)
[info] - Nullable STRING column accessor: empty column (0 milliseconds)
[info] - Nullable STRING column accessor: access null values (12 milliseconds)
[info] - Nullable BINARY column accessor: empty column (0 milliseconds)
[info] - Nullable BINARY column accessor: access null values (1 millisecond)
[info] - Nullable COMPACT_DECIMAL column accessor: empty column (0 milliseconds)
[info] - Nullable COMPACT_DECIMAL column accessor: access null values (5 milliseconds)
[info] - Nullable LARGE_DECIMAL column accessor: empty column (0 milliseconds)
[info] - Nullable LARGE_DECIMAL column accessor: access null values (5 milliseconds)
[info] - Nullable STRUCT column accessor: empty column (0 milliseconds)
[info] - Nullable STRUCT column accessor: access null values (9 milliseconds)
[info] - Nullable ARRAY column accessor: empty column (1 millisecond)
[info] - Nullable ARRAY column accessor: access null values (6 milliseconds)
[info] - Nullable MAP column accessor: empty column (2 milliseconds)
[info] - Nullable MAP column accessor: access null values (8 milliseconds)
[info] DateFunctionsSuite:
[info] - function current_date (23 milliseconds)
[info] - function current_timestamp and now (144 milliseconds)
[info] - timestamp comparison with date strings (122 milliseconds)
[info] - date comparison with date strings *** FAILED *** (53 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Filter ('t <= 2014-06-01)
[info] +- Project [t#9255]
[info] +- Project [_1#9252 AS i#9254,_2#9253 AS t#9255]
[info] +- LocalRelation [_1#9252,_2#9253], [[1,16436],[2,16071]]
[info]
[info] == Analyzed Logical Plan ==
[info] t: date
[info] Filter (cast(t#9255 as string) <= 2014-06-01)
[info] +- Project [t#9255]
[info] +- Project [_1#9252 AS i#9254,_2#9253 AS t#9255]
[info] +- LocalRelation [_1#9252,_2#9253], [[1,16436],[2,16071]]
[info]
[info] == Optimized Logical Plan ==
[info] Project [_2#9253 AS t#9255]
[info] +- Filter (cast(_2#9253 as string) <= 2014-06-01)
[info] +- LocalRelation [_1#9252,_2#9253], [[1,16436],[2,16071]]
[info]
[info] == Physical Plan ==
[info] WholeStageCodegen
[info] : +- Project [_2#9253 AS t#9255]
[info] : +- Filter (cast(_2#9253 as string) <= 2014-06-01)
[info] : +- INPUT
[info] +- LocalTableScan [_1#9252,_2#9253], [[1,16436],[2,16071]]
[info] == Results ==
[info] !== Correct Answer - 1 == == Spark Answer - 1 ==
[info] ![2014-01-01] [2013-12-31] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$4.apply$mcV$sp(DateFunctionsSuite.scala:83)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$4.apply(DateFunctionsSuite.scala:78)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$4.apply(DateFunctionsSuite.scala:78)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.DateFunctionsSuite.org$scalatest$BeforeAndAfterAll$$super$run(DateFunctionsSuite.scala:28)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.DateFunctionsSuite.run(DateFunctionsSuite.scala:28)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
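
The analyzed plan shows the comparison as cast(t#9255 as string) <= 2014-06-01, so the row that survives the filter is determined by the string each stored day number renders to. The LocalRelation holds 16071 (2014-01-01 in UTC) and 16436 (2015-01-01 in UTC); rendered in a zone west of UTC, 16071 becomes 2013-12-31, which is exactly the mismatch shown. A Java 8 java.time sketch of that rendering (the zone choice is illustrative):

    import java.time.{Instant, ZoneId, ZoneOffset}

    object DateStringRendering extends App {
      // 16071 days after the epoch: the row the filter keeps
      val instant = Instant.EPOCH.plusSeconds(16071L * 86400)

      println(instant.atZone(ZoneOffset.UTC).toLocalDate)                // 2014-01-01
      println(instant.atZone(ZoneId.of("America/New_York")).toLocalDate) // 2013-12-31
    }
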
[info] - date format (102 milliseconds)
[info] - year (66 milliseconds)
[info] - quarter (62 milliseconds)
[info] - month (55 milliseconds)
[info] - dayofmonth (54 milliseconds)
[info] - dayofyear (52 milliseconds)
[info] - hour *** FAILED *** (28 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [hour('a) AS hour(a)#9370,hour('b) AS hour(b)#9371,hour('c) AS hour(c)#9372]
[info] +- Project [_1#9364 AS a#9367,_2#9365 AS b#9368,_3#9366 AS c#9369]
[info] +- LocalRelation [_1#9364,_2#9365,_3#9366], [[16533,2015-04-08 13:10:15,1365451815000000]]
[info]
[info] == Analyzed Logical Plan ==
[info] hour(a): int, hour(b): int, hour(c): int
[info] Project [hour(cast(a#9367 as timestamp)) AS hour(a)#9370,hour(cast(b#9368 as timestamp)) AS hour(b)#9371,hour(c#9369) AS hour(c)#9372]
[info] +- Project [_1#9364 AS a#9367,_2#9365 AS b#9368,_3#9366 AS c#9369]
[info] +- LocalRelation [_1#9364,_2#9365,_3#9366], [[16533,2015-04-08 13:10:15,1365451815000000]]
[info]
[info] == Optimized Logical Plan ==
[info] LocalRelation [hour(a)#9370,hour(b)#9371,hour(c)#9372], [[21,13,13]]
[info]
[info] == Physical Plan ==
[info] LocalTableScan [hour(a)#9370,hour(b)#9371,hour(c)#9372], [[21,13,13]]
[info] == Results ==
[info] !== Correct Answer - 1 == == Spark Answer - 1 ==
[info] ![0,13,13] [21,13,13] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:149)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$11.apply$mcV$sp(DateFunctionsSuite.scala:170)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$11.apply(DateFunctionsSuite.scala:167)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$11.apply(DateFunctionsSuite.scala:167)
[info] ... (remaining 41 frames identical to the "date comparison with date strings" stack trace above)
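
The optimizer has constant-folded hour(...) into the LocalTableScan, so [21,13,13] was computed at planning time using the JVM default timezone. The exact numbers depend on how the date-to-timestamp cast and hour() each apply that zone, but the mechanism is simply that the hour-of-day of a single instant varies with the zone used to read it, e.g.:

    import java.time.{Instant, ZoneId, ZoneOffset}

    object HourByZone extends App {
      val midnightUtc = Instant.parse("2015-04-08T00:00:00Z")

      println(midnightUtc.atZone(ZoneOffset.UTC).getHour)                // 0
      println(midnightUtc.atZone(ZoneId.of("America/New_York")).getHour) // 20
    }
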
[info] - minute (55 milliseconds)
[info] - second (53 milliseconds)
[info] - weekofyear (55 milliseconds)
[info] - function date_add *** FAILED *** (27 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [date_add('d,1) AS date_add(d,1)#9438]
[info] +- Project [_1#9430 AS t#9434,_2#9431 AS d#9435,_3#9432 AS s#9436,_4#9433 AS ss#9437]
[info] +- LocalRelation [_1#9430,_2#9431,_3#9432,_4#9433], [[1433187296000000,16587,2015-06-01,2015-06-01 12:34:56],[1433273696000000,16588,2015-06-02,2015-06-02 12:34:56]]
[info]
[info] == Analyzed Logical Plan ==
[info] date_add(d,1): date
[info] Project [date_add(d#9435,1) AS date_add(d,1)#9438]
[info] +- Project [_1#9430 AS t#9434,_2#9431 AS d#9435,_3#9432 AS s#9436,_4#9433 AS ss#9437]
[info] +- LocalRelation [_1#9430,_2#9431,_3#9432,_4#9433], [[1433187296000000,16587,2015-06-01,2015-06-01 12:34:56],[1433273696000000,16588,2015-06-02,2015-06-02 12:34:56]]
[info]
[info] == Optimized Logical Plan ==
[info] LocalRelation [date_add(d,1)#9438], [[16588],[16589]]
[info]
[info] == Physical Plan ==
[info] LocalTableScan [date_add(d,1)#9438], [[16588],[16589]]
[info] == Results ==
[info] !== Correct Answer - 2 == == Spark Answer - 2 ==
[info] ![2015-06-02] [2015-06-01]
[info] ![2015-06-03] [2015-06-02] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$15.apply$mcV$sp(DateFunctionsSuite.scala:225)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$15.apply(DateFunctionsSuite.scala:215)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$15.apply(DateFunctionsSuite.scala:215)
[info] ... (remaining 41 frames identical to the "date comparison with date strings" stack trace above)
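
Note that the optimized plan already holds the right day numbers: [[16588],[16589]] is [[16587],[16588]] plus one day, so date_add's arithmetic is correct and only the final days-to-string rendering shifts. A quick check against the UTC calendar:

    import java.time.LocalDate

    object DateAddCheck extends App {
      val epoch = LocalDate.of(1970, 1, 1)
      println(epoch.plusDays(16587))   // 2015-06-01, the input date above
      println(epoch.plusDays(16588))   // 2015-06-02, the expected date_add result
    }

The same one-day rendering shift accounts for the date_sub, time_add, time_sub, add_months, last_day, and next_day failures below; months_between fails for a related but distinct reason.
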
[info] - function date_sub *** FAILED *** (25 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [date_sub('d,1) AS date_sub(d,1)#9448]
[info] +- Project [_1#9440 AS t#9444,_2#9441 AS d#9445,_3#9442 AS s#9446,_4#9443 AS ss#9447]
[info] +- LocalRelation [_1#9440,_2#9441,_3#9442,_4#9443], [[1433187296000000,16587,2015-06-01,2015-06-01 12:34:56],[1433273696000000,16588,2015-06-02,2015-06-02 12:34:56]]
[info]
[info] == Analyzed Logical Plan ==
[info] date_sub(d,1): date
[info] Project [date_sub(d#9445,1) AS date_sub(d,1)#9448]
[info] +- Project [_1#9440 AS t#9444,_2#9441 AS d#9445,_3#9442 AS s#9446,_4#9443 AS ss#9447]
[info] +- LocalRelation [_1#9440,_2#9441,_3#9442,_4#9443], [[1433187296000000,16587,2015-06-01,2015-06-01 12:34:56],[1433273696000000,16588,2015-06-02,2015-06-02 12:34:56]]
[info]
[info] == Optimized Logical Plan ==
[info] LocalRelation [date_sub(d,1)#9448], [[16586],[16587]]
[info]
[info] == Physical Plan ==
[info] LocalTableScan [date_sub(d,1)#9448], [[16586],[16587]]
[info] == Results ==
[info] !== Correct Answer - 2 == == Spark Answer - 2 ==
[info] ![2015-05-31] [2015-05-30]
[info] ![2015-06-01] [2015-05-31] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$16.apply$mcV$sp(DateFunctionsSuite.scala:254)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$16.apply(DateFunctionsSuite.scala:244)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$16.apply(DateFunctionsSuite.scala:244)
[info] ... (remaining 41 frames identical to the "date comparison with date strings" stack trace above)
[info] - time_add *** FAILED *** (25 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [('d + interval 2 months 2 seconds) AS (d + interval 2 months 2 seconds)#9456]
[info] +- Project [_1#9450 AS n#9453,_2#9451 AS t#9454,_3#9452 AS d#9455]
[info] +- LocalRelation [_1#9450,_2#9451,_3#9452], [[1,1438412399000000,16647],[3,1451548800000000,16800]]
[info]
[info] == Analyzed Logical Plan ==
[info] (d + interval 2 months 2 seconds): date
[info] Project [cast(cast(d#9455 as timestamp) + interval 2 months 2 seconds as date) AS (d + interval 2 months 2 seconds)#9456]
[info] +- Project [_1#9450 AS n#9453,_2#9451 AS t#9454,_3#9452 AS d#9455]
[info] +- LocalRelation [_1#9450,_2#9451,_3#9452], [[1,1438412399000000,16647],[3,1451548800000000,16800]]
[info]
[info] == Optimized Logical Plan ==
[info] LocalRelation [(d + interval 2 months 2 seconds)#9456], [[16708],[16860]]
[info]
[info] == Physical Plan ==
[info] LocalTableScan [(d + interval 2 months 2 seconds)#9456], [[16708],[16860]]
[info] == Results ==
[info] !== Correct Answer - 2 == == Spark Answer - 2 ==
[info] ![2015-09-30] [2015-09-29]
[info] ![2016-02-29] [2016-02-28] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$17.apply$mcV$sp(DateFunctionsSuite.scala:282)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$17.apply(DateFunctionsSuite.scala:275)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$17.apply(DateFunctionsSuite.scala:275)
[info] ... (remaining 41 frames identical to the "date comparison with date strings" stack trace above)
[info] - time_sub *** FAILED *** (24 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [('d - interval 2 months 2 seconds) AS (d - interval 2 months 2 seconds)#9464]
[info] +- Project [_1#9458 AS n#9461,_2#9459 AS t#9462,_3#9460 AS d#9463]
[info] +- LocalRelation [_1#9458,_2#9459,_3#9460], [[1,1443682801000000,16708],[3,1456732802000000,16860]]
[info]
[info] == Analyzed Logical Plan ==
[info] (d - interval 2 months 2 seconds): date
[info] Project [cast(cast(d#9463 as timestamp) - interval 2 months 2 seconds as date) AS (d - interval 2 months 2 seconds)#9464]
[info] +- Project [_1#9458 AS n#9461,_2#9459 AS t#9462,_3#9460 AS d#9463]
[info] +- LocalRelation [_1#9458,_2#9459,_3#9460], [[1,1443682801000000,16708],[3,1456732802000000,16860]]
[info]
[info] == Optimized Logical Plan ==
[info] LocalRelation [(d - interval 2 months 2 seconds)#9464], [[16646],[16799]]
[info]
[info] == Physical Plan ==
[info] LocalTableScan [(d - interval 2 months 2 seconds)#9464], [[16646],[16799]]
[info] == Results ==
[info] !== Correct Answer - 2 == == Spark Answer - 2 ==
[info] ![2015-07-30] [2015-07-29]
[info] ![2015-12-30] [2015-12-29] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$18.apply$mcV$sp(DateFunctionsSuite.scala:298)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$18.apply(DateFunctionsSuite.scala:291)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$18.apply(DateFunctionsSuite.scala:291)
[info] ... (remaining 41 frames identical to the "date comparison with date strings" stack trace above)
[info] - function add_months *** FAILED *** (17 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [add_months('d,1) AS add_months(d,1)#9470]
[info] +- Project [_1#9466 AS n#9468,_2#9467 AS d#9469]
[info] +- LocalRelation [_1#9466,_2#9467], [[1,16678],[2,16494]]
[info]
[info] == Analyzed Logical Plan ==
[info] add_months(d,1): date
[info] Project [add_months(d#9469,1) AS add_months(d,1)#9470]
[info] +- Project [_1#9466 AS n#9468,_2#9467 AS d#9469]
[info] +- LocalRelation [_1#9466,_2#9467], [[1,16678],[2,16494]]
[info]
[info] == Optimized Logical Plan ==
[info] LocalRelation [add_months(d,1)#9470], [[16708],[16525]]
[info]
[info] == Physical Plan ==
[info] LocalTableScan [add_months(d,1)#9470], [[16708],[16525]]
[info] == Results ==
[info] !== Correct Answer - 2 == == Spark Answer - 2 ==
[info] ![2015-03-31] [2015-03-30]
[info] ![2015-09-30] [2015-09-29] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$19.apply$mcV$sp(DateFunctionsSuite.scala:311)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$19.apply(DateFunctionsSuite.scala:307)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$19.apply(DateFunctionsSuite.scala:307)
[info] ... (remaining 41 frames identical to the "date comparison with date strings" stack trace above)
[info] - function months_between *** FAILED *** (23 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [months_between('t,'d) AS months_between(t,d)#9478]
[info] +- Project [_1#9472 AS t#9475,_2#9473 AS d#9476,_3#9474 AS s#9477]
[info] +- LocalRelation [_1#9472,_2#9473,_3#9474], [[1412145000000000,16647,2014-09-15 11:30:00],[1442430000000000,16482,2015-10-01 00:00:00]]
[info]
[info] == Analyzed Logical Plan ==
[info] months_between(t,d): double
[info] Project [months_between(t#9475,cast(d#9476 as timestamp)) AS months_between(t,d)#9478]
[info] +- Project [_1#9472 AS t#9475,_2#9473 AS d#9476,_3#9474 AS s#9477]
[info] +- LocalRelation [_1#9472,_2#9473,_3#9474], [[1412145000000000,16647,2014-09-15 11:30:00],[1442430000000000,16482,2015-10-01 00:00:00]]
[info]
[info] == Optimized Logical Plan ==
[info] LocalRelation [months_between(t,d)#9478], [[-9.96438172],[7.0]]
[info]
[info] == Physical Plan ==
[info] LocalTableScan [months_between(t,d)#9478], [[-9.96438172],[7.0]]
[info] == Results ==
[info] !== Correct Answer - 2 == == Spark Answer - 2 ==
[info] ![-10.0] [-9.96438172]
[info] [7.0] [7.0] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$20.apply$mcV$sp(DateFunctionsSuite.scala:327)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$20.apply(DateFunctionsSuite.scala:319)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$20.apply(DateFunctionsSuite.scala:319)
[info] ... (remaining 41 frames identical to the "date comparison with date strings" stack trace above)
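
months_between is the one failure where the value itself changes, not just its rendering: it returns a whole number only when both operands fall on the same day of the month or both are the last day of their months. The stored timestamp 1412145000000000 µs is 2014-10-01T06:30:00Z. Read in a zone far enough west it stays on 2014-09-30, the last day of September, which pairs with 2015-07-31 (last day of July) to give the expected -10.0; read as 2014-10-01 02:30 (e.g. in a UTC-4 zone), the last-day shortcut no longer applies and the fractional formula kicks in. A rough arithmetic sketch that reproduces the observed value, assuming Hive's 31-day-month convention and that UTC-4 reading (both are assumptions, not taken from the log):

    object MonthsBetweenArith extends App {
      val monthDiff = (2014 - 2015) * 12 + (10 - 7)   // -9 whole months
      val dayDiff   = (1 + 2.5 / 24.0) - 31           // day-of-month 1 at 02:30 vs day 31
      println(monthDiff + dayDiff / 31.0)             // ≈ -9.96438172, the Spark answer above
    }
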
[info] - function last_day *** FAILED *** (18 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [last_day('d) AS last_day(d)#9488]
[info] +- Project [_1#9480 AS i#9482,_2#9481 AS d#9483]
[info] +- LocalRelation [_1#9480,_2#9481], [[1,2015-07-23],[2,2015-07-24]]
[info]
[info] == Analyzed Logical Plan ==
[info] last_day(d): date
[info] Project [last_day(cast(d#9483 as date)) AS last_day(d)#9488]
[info] +- Project [_1#9480 AS i#9482,_2#9481 AS d#9483]
[info] +- LocalRelation [_1#9480,_2#9481], [[1,2015-07-23],[2,2015-07-24]]
[info]
[info] == Optimized Logical Plan ==
[info] LocalRelation [last_day(d)#9488], [[16647],[16647]]
[info]
[info] == Physical Plan ==
[info] LocalTableScan [last_day(d)#9488], [[16647],[16647]]
[info] == Results ==
[info] !== Correct Answer - 2 == == Spark Answer - 2 ==
[info] ![2015-07-31] [2015-07-30]
[info] ![2015-07-31] [2015-07-30] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$21.apply$mcV$sp(DateFunctionsSuite.scala:334)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$21.apply(DateFunctionsSuite.scala:331)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$21.apply(DateFunctionsSuite.scala:331)
[info] ... (remaining 41 frames identical to the "date comparison with date strings" stack trace above)
[info] - function next_day *** FAILED *** (19 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [next_day('d,MONDAY) AS next_day(d,MONDAY)#9498]
[info] +- Project [_1#9490 AS dow#9492,_2#9491 AS d#9493]
[info] +- LocalRelation [_1#9490,_2#9491], [[mon,2015-07-23],[tuesday,2015-07-20]]
[info]
[info] == Analyzed Logical Plan ==
[info] next_day(d,MONDAY): date
[info] Project [next_day(cast(d#9493 as date),MONDAY) AS next_day(d,MONDAY)#9498]
[info] +- Project [_1#9490 AS dow#9492,_2#9491 AS d#9493]
[info] +- LocalRelation [_1#9490,_2#9491], [[mon,2015-07-23],[tuesday,2015-07-20]]
[info]
[info] == Optimized Logical Plan ==
[info] LocalRelation [next_day(d,MONDAY)#9498], [[16643],[16643]]
[info]
[info] == Physical Plan ==
[info] LocalTableScan [next_day(d,MONDAY)#9498], [[16643],[16643]]
[info]
[info] == Results ==
[info] !== Correct Answer - 2 == == Spark Answer - 2 ==
[info] ![2015-07-27] [2015-07-26]
[info] ![2015-07-27] [2015-07-26] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$22.apply$mcV$sp(DateFunctionsSuite.scala:345)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$22.apply(DateFunctionsSuite.scala:342)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$22.apply(DateFunctionsSuite.scala:342)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.DateFunctionsSuite.org$scalatest$BeforeAndAfterAll$$super$run(DateFunctionsSuite.scala:28)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.DateFunctionsSuite.run(DateFunctionsSuite.scala:28)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
[info] - function to_date *** FAILED *** (21 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [to_date('t) AS to_date(t)#9506]
[info] +- Project [_1#9500 AS d#9503,_2#9501 AS t#9504,_3#9502 AS s#9505]
[info] +- LocalRelation [_1#9500,_2#9501,_3#9502], [[16638,1437584400000000,2015-07-22 10:00:00],[16617,1420099199000000,2014-12-31]]
[info]
[info] == Analyzed Logical Plan ==
[info] to_date(t): date
[info] Project [to_date(cast(t#9504 as date)) AS to_date(t)#9506]
[info] +- Project [_1#9500 AS d#9503,_2#9501 AS t#9504,_3#9502 AS s#9505]
[info] +- LocalRelation [_1#9500,_2#9501,_3#9502], [[16638,1437584400000000,2015-07-22 10:00:00],[16617,1420099199000000,2014-12-31]]
[info]
[info] == Optimized Logical Plan ==
[info] LocalRelation [to_date(t)#9506], [[16638],[16436]]
[info]
[info] == Physical Plan ==
[info] LocalTableScan [to_date(t)#9506], [[16638],[16436]]
[info]
[info] == Results ==
[info] !== Correct Answer - 2 == == Spark Answer - 2 ==
[info] [2014-12-31] [2014-12-31]
[info] ![2015-07-22] [2015-07-21] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$23.apply$mcV$sp(DateFunctionsSuite.scala:362)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$23.apply(DateFunctionsSuite.scala:353)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$23.apply(DateFunctionsSuite.scala:353)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.DateFunctionsSuite.org$scalatest$BeforeAndAfterAll$$super$run(DateFunctionsSuite.scala:28)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.DateFunctionsSuite.run(DateFunctionsSuite.scala:28)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
[info] - function trunc *** FAILED *** (20 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [trunc('t,YY) AS trunc(t,YY)#9512]
[info] +- Project [_1#9508 AS i#9510,_2#9509 AS t#9511]
[info] +- LocalRelation [_1#9508,_2#9509], [[1,1437584400000000],[2,1420012800000000]]
[info]
[info] == Analyzed Logical Plan ==
[info] trunc(t,YY): date
[info] Project [trunc(cast(t#9511 as date),YY) AS trunc(t,YY)#9512]
[info] +- Project [_1#9508 AS i#9510,_2#9509 AS t#9511]
[info] +- LocalRelation [_1#9508,_2#9509], [[1,1437584400000000],[2,1420012800000000]]
[info]
[info] == Optimized Logical Plan ==
[info] LocalRelation [trunc(t,YY)#9512], [[16436],[16071]]
[info]
[info] == Physical Plan ==
[info] LocalTableScan [trunc(t,YY)#9512], [[16436],[16071]]
[info]
[info] == Results ==
[info] !== Correct Answer - 2 == == Spark Answer - 2 ==
[info] ![2014-01-01] [2013-12-31]
[info] ![2015-01-01] [2014-12-31] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$24.apply$mcV$sp(DateFunctionsSuite.scala:388)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$24.apply(DateFunctionsSuite.scala:383)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$24.apply(DateFunctionsSuite.scala:383)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.DateFunctionsSuite.org$scalatest$BeforeAndAfterAll$$super$run(DateFunctionsSuite.scala:28)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.DateFunctionsSuite.run(DateFunctionsSuite.scala:28)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
[info] - from_unixtime (100 milliseconds)
[info] - unix_timestamp *** FAILED *** (113 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [unix_timestamp('d,yyyy/MM/dd HH:mm:ss.S) AS unix_timestamp(d,yyyy/MM/dd HH:mm:ss.S)#9542]
[info] +- Project [_1#9530 AS d#9534,_2#9531 AS ts#9535,_3#9532 AS s#9536,_4#9533 AS ss#9537]
[info] +- LocalRelation [_1#9530,_2#9531,_3#9532,_4#9533], [[16640,1437757200300000,2015/07/24 10:00:00.5,2015-07-24 10:00:00],[16641,1437814922200000,2015/07/25 02:02:02.6,2015-07-25 02:02:02]]
[info]
[info] == Analyzed Logical Plan ==
[info] unix_timestamp(d,yyyy/MM/dd HH:mm:ss.S): bigint
[info] Project [unix_timestamp(d#9534,yyyy/MM/dd HH:mm:ss.S) AS unix_timestamp(d,yyyy/MM/dd HH:mm:ss.S)#9542L]
[info] +- Project [_1#9530 AS d#9534,_2#9531 AS ts#9535,_3#9532 AS s#9536,_4#9533 AS ss#9537]
[info] +- LocalRelation [_1#9530,_2#9531,_3#9532,_4#9533], [[16640,1437757200300000,2015/07/24 10:00:00.5,2015-07-24 10:00:00],[16641,1437814922200000,2015/07/25 02:02:02.6,2015-07-25 02:02:02]]
[info]
[info] == Optimized Logical Plan ==
[info] LocalRelation [unix_timestamp(d,yyyy/MM/dd HH:mm:ss.S)#9542L], [[1437710400],[1437796800]]
[info]
[info] == Physical Plan ==
[info] LocalTableScan [unix_timestamp(d,yyyy/MM/dd HH:mm:ss.S)#9542L], [[1437710400],[1437796800]]
[info]
[info] == Results ==
[info] !== Correct Answer - 2 == == Spark Answer - 2 ==
[info] ![1437721200] [1437710400]
[info] ![1437807600] [1437796800] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$26.apply$mcV$sp(DateFunctionsSuite.scala:439)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$26.apply(DateFunctionsSuite.scala:424)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$26.apply(DateFunctionsSuite.scala:424)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.DateFunctionsSuite.org$scalatest$BeforeAndAfterAll$$super$run(DateFunctionsSuite.scala:28)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.DateFunctionsSuite.run(DateFunctionsSuite.scala:28)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
[info] - to_unix_timestamp *** FAILED *** (90 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [unresolvedalias('to_unix_timestamp('d,yyyy/MM/dd HH:mm:ss.S),Some(to_unix_timestamp(d,yyyy/MM/dd HH:mm:ss.S)))]
[info] +- Project [_1#9544 AS d#9548,_2#9545 AS ts#9549,_3#9546 AS s#9550,_4#9547 AS ss#9551]
[info] +- LocalRelation [_1#9544,_2#9545,_3#9546,_4#9547], [[16640,1437757200300000,2015/07/24 10:00:00.5,2015-07-24 10:00:00],[16641,1437814922200000,2015/07/25 02:02:02.6,2015-07-25 02:02:02]]
[info]
[info] == Analyzed Logical Plan ==
[info] to_unix_timestamp(d,yyyy/MM/dd HH:mm:ss.S): bigint
[info] Project [to_unix_timestamp(d#9548,yyyy/MM/dd HH:mm:ss.S) AS to_unix_timestamp(d,yyyy/MM/dd HH:mm:ss.S)#9556L]
[info] +- Project [_1#9544 AS d#9548,_2#9545 AS ts#9549,_3#9546 AS s#9550,_4#9547 AS ss#9551]
[info] +- LocalRelation [_1#9544,_2#9545,_3#9546,_4#9547], [[16640,1437757200300000,2015/07/24 10:00:00.5,2015-07-24 10:00:00],[16641,1437814922200000,2015/07/25 02:02:02.6,2015-07-25 02:02:02]]
[info]
[info] == Optimized Logical Plan ==
[info] LocalRelation [to_unix_timestamp(d,yyyy/MM/dd HH:mm:ss.S)#9556L], [[1437710400],[1437796800]]
[info]
[info] == Physical Plan ==
[info] LocalTableScan [to_unix_timestamp(d,yyyy/MM/dd HH:mm:ss.S)#9556L], [[1437710400],[1437796800]]
[info]
[info] == Results ==
[info] !== Correct Answer - 2 == == Spark Answer - 2 ==
[info] ![1437721200] [1437710400]
[info] ![1437807600] [1437796800] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$27.apply$mcV$sp(DateFunctionsSuite.scala:471)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$27.apply(DateFunctionsSuite.scala:456)
[info] at org.apache.spark.sql.DateFunctionsSuite$$anonfun$27.apply(DateFunctionsSuite.scala:456)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.DateFunctionsSuite.org$scalatest$BeforeAndAfterAll$$super$run(DateFunctionsSuite.scala:28)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.DateFunctionsSuite.run(DateFunctionsSuite.scala:28)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
[info] - datediff (93 milliseconds)
[info] - from_utc_timestamp (40 milliseconds)
[info] - to_utc_timestamp (35 milliseconds)
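
The DateFunctionsSuite failures above (last_day, next_day, to_date, trunc, unix_timestamp, to_unix_timestamp) all look timezone-dependent rather than logic bugs. The optimized plans carry plausible internal values (e.g. epoch day 16647 for last_day, which is 2015-07-31), yet the rendered dates land one calendar day early, and the unix_timestamp answers differ from the expected ones by exactly 10800 seconds, i.e. 3 hours, the gap between US Pacific and US Eastern. Notably, from_utc_timestamp and to_utc_timestamp, which take an explicit zone, pass. A minimal sketch checking this arithmetic with plain java.time (hypothetical, not part of the test run; the two zone names are assumptions about where the expected answers were computed and where this run happened):

    import java.time.{Instant, LocalDate, ZoneId}

    object TzDeltaCheck {
      def main(args: Array[String]): Unit = {
        // Epoch day 16647 from the last_day optimized plan is 2015-07-31:
        println(LocalDate.ofEpochDay(16647))      // 2015-07-31

        // The unix_timestamp mismatch is exactly three hours:
        println(1437721200L - 1437710400L)        // 10800 (seconds)

        // 1437721200 is midnight 2015-07-24 in America/Los_Angeles (where the
        // expected answers appear to assume the run happens); 1437710400 is
        // midnight 2015-07-24 in America/New_York (assumed local zone here):
        println(Instant.ofEpochSecond(1437721200L).atZone(ZoneId.of("America/Los_Angeles")))
        println(Instant.ofEpochSecond(1437710400L).atZone(ZoneId.of("America/New_York")))
      }
    }

If that reading is right, pinning the JVM default zone before rerunning (e.g. -Duser.timezone=America/Los_Angeles) should make these tests pass.
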
[info] PlannerSuite:
[info] - count is partially aggregated (4 milliseconds)
[info] - count distinct is partially aggregated (2 milliseconds)
[info] - mixed aggregates are partially aggregated (2 milliseconds)
[info] - sizeInBytes estimation of limit operator for broadcast hash join optimization (22 milliseconds)
[info] - InMemoryRelation statistics propagation (58 milliseconds)
[info] - SPARK-11390 explain should print PushedFilters of PhysicalRDD (103 milliseconds)
[info] - efficient limit -> project -> sort (4 milliseconds)
[info] - PartitioningCollection (22 milliseconds)
[info] - collapse adjacent repartitions (2 milliseconds)
[info] - EnsureRequirements with incompatible child partitionings which satisfy distribution (4 milliseconds)
[info] - EnsureRequirements with child partitionings with different numbers of output partitions (1 millisecond)
[info] - EnsureRequirements with compatible child partitionings that do not satisfy distribution (2 milliseconds)
[info] - EnsureRequirements with compatible child partitionings that satisfy distribution (1 millisecond)
[info] - EnsureRequirements should not repartition if only ordering requirement is unsatisfied (2 milliseconds)
[info] - EnsureRequirements adds sort when there is no existing ordering (1 millisecond)
[info] - EnsureRequirements skips sort when required ordering is prefix of existing ordering (1 millisecond)
[info] - EnsureRequirements adds sort when required ordering isn't a prefix of existing ordering (1 millisecond)
[info] - EnsureRequirements eliminates Exchange if child has Exchange with same partitioning (2 milliseconds)
[info] - EnsureRequirements does not eliminate Exchange with different partitioning (2 milliseconds)
[info] DataFrameFunctionsSuite:
[info] - array with column name (12 milliseconds)
[info] - array with column expression (4 milliseconds)
[info] - array: throw exception if putting columns of different types into an array !!! IGNORED !!!
[info] - struct with column name (13 milliseconds)
[info] - struct with column expression (8 milliseconds)
[info] - struct with column expression to be automatically named (23 milliseconds)
[info] - struct with literal columns (22 milliseconds)
[info] - struct with all literal columns (22 milliseconds)
[info] - constant functions (35 milliseconds)
[info] - bitwiseNOT (35 milliseconds)
[info] - bin (42 milliseconds)
[info] - if function (24 milliseconds)
[info] - nvl function (24 milliseconds)
[info] - misc md5 function (38 milliseconds)
[info] - misc sha1 function (40 milliseconds)
[info] - misc sha2 function (41 milliseconds)
[info] - misc crc32 function (38 milliseconds)
[info] - string function find_in_set (20 milliseconds)
[info] - conditional function: least (74 milliseconds)
[info] - conditional function: greatest (84 milliseconds)
[info] - pmod (153 milliseconds)
[info] - sort_array function (310 milliseconds)
[info] - array size function (42 milliseconds)
[info] - map size function (34 milliseconds)
[info] - array contains function (120 milliseconds)
[info] UnionNodeSuite:
[info] - empty (7 milliseconds)
[info] - self (2 milliseconds)
[info] - basic (4 milliseconds)
[info] CreateTableAsSelectSuite:
[info] - CREATE TEMPORARY TABLE AS SELECT (171 milliseconds)
02:44:52.710 WARN org.apache.hadoop.fs.FileUtil: Failed to delete file or dir [/Users/sim/dev/spx/spark/target/tmp/spark-0cfac027-3ee4-41e4-b124-509617343176/child]: it still exists.
[info] - CREATE TEMPORARY TABLE AS SELECT based on the file without write permission (6 milliseconds)
[info] - create a table, drop it and create another one with the same name (405 milliseconds)
[info] - CREATE TEMPORARY TABLE AS SELECT with IF NOT EXISTS is not allowed (1 millisecond)
[info] - a CTAS statement with column definitions is not allowed (3 milliseconds)
[info] - it is not allowed to write to a table while querying it. (40 milliseconds)
[info] LongOffsetSuite:
[info] - comparison LongOffset(1) <=> LongOffset(2) (1 millisecond)
[info] CompositeOffsetSuite:
[info] - comparison CompositeOffset(List(Some(LongOffset(1)))) <=> CompositeOffset(List(Some(LongOffset(2)))) (2 milliseconds)
[info] - comparison CompositeOffset(List(None)) <=> CompositeOffset(List(Some(LongOffset(2)))) (0 milliseconds)
[info] - invalid comparison CompositeOffset(List()) <=> CompositeOffset(List(Some(LongOffset(2)))) (2 milliseconds)
[info] - comparison CompositeOffset(ArrayBuffer(Some(LongOffset(0)), Some(LongOffset(1)))) <=> CompositeOffset(ArrayBuffer(Some(LongOffset(1)), Some(LongOffset(2)))) (0 milliseconds)
[info] - comparison CompositeOffset(ArrayBuffer(Some(LongOffset(1)), Some(LongOffset(1)))) <=> CompositeOffset(ArrayBuffer(Some(LongOffset(1)), Some(LongOffset(2)))) (1 millisecond)
[info] - invalid comparison CompositeOffset(ArrayBuffer(Some(LongOffset(2)), Some(LongOffset(1)))) <=> CompositeOffset(ArrayBuffer(Some(LongOffset(1)), Some(LongOffset(2)))) (0 milliseconds)
[info] ParquetInteroperabilitySuite:
[info] - parquet files with different physical schemas but share the same logical schema (217 milliseconds)
[info] JsonParsingOptionsSuite:
[info] - allowComments off (11 milliseconds)
[info] - allowComments on (41 milliseconds)
[info] - allowSingleQuotes off (10 milliseconds)
[info] - allowSingleQuotes on (22 milliseconds)
[info] - allowUnquotedFieldNames off (7 milliseconds)
[info] - allowUnquotedFieldNames on (21 milliseconds)
[info] - allowNumericLeadingZeros off (7 milliseconds)
[info] - allowNumericLeadingZeros on (22 milliseconds)
[info] - allowNonNumericNumbers off !!! IGNORED !!!
[info] - allowNonNumericNumbers on !!! IGNORED !!!
[info] - allowBackslashEscapingAnyCharacter off (7 milliseconds)
[info] - allowBackslashEscapingAnyCharacter on (42 milliseconds)
[info] CSVParserSuite:
[info] - Hygiene (2 milliseconds)
[info] - Regular case (1 millisecond)
[info] - Empty iter (0 milliseconds)
[info] - Embedded new line (1 millisecond)
[info] - Buffer Regular case (6 milliseconds)
[info] - Buffer Empty iter (1 millisecond)
[info] - Buffer Embedded new line (8 milliseconds)
[info] SQLConfSuite:
[info] - propagate from spark conf (0 milliseconds)
[info] - programmatic ways of basic setting and getting (1 millisecond)
[info] - parse SQL set commands (3 milliseconds)
02:44:53.734 WARN org.apache.spark.sql.execution.SetCommand: Property mapred.reduce.tasks is deprecated, automatically converted to spark.sql.shuffle.partitions instead.
[info] - deprecated property (3 milliseconds)
[info] - invalid conf value (2 milliseconds)
[info] - Test SHUFFLE_TARGET_POSTSHUFFLE_INPUT_SIZE's method (2 milliseconds)
[info] CachedTableSuite:
[info] - withColumn doesn't invalidate cached dataframe (57 milliseconds)
[info] - cache temp table (14 milliseconds)
[info] - unpersist an uncached table will not raise exception (2 milliseconds)
[info] - cache table as select (33 milliseconds)
[info] - uncaching temp table (7 milliseconds)
[info] - too big for memory (1 second, 423 milliseconds)
[info] - calling .cache() should use in-memory columnar caching (3 milliseconds)
[info] - calling .unpersist() should drop in-memory columnar cache (26 milliseconds)
[info] - isCached (3 milliseconds)
02:44:55.361 WARN org.apache.spark.sql.execution.CacheManager: Asked to cache already cached data.
[info] - SPARK-1669: cacheTable should be idempotent (5 milliseconds)
[info] - read from cached table and uncache (60 milliseconds)
[info] - correct error on uncache of non-cached table (1 millisecond)
[info] - SELECT star from cached table (38 milliseconds)
[info] - Self-join cached (120 milliseconds)
[info] - 'CACHE TABLE' and 'UNCACHE TABLE' SQL statement (30 milliseconds)
[info] - CACHE TABLE tableName AS SELECT * FROM anotherTable (26 milliseconds)
[info] - CACHE TABLE tableName AS SELECT ... (38 milliseconds)
[info] - CACHE LAZY TABLE tableName (33 milliseconds)
[info] - InMemoryRelation statistics (24 milliseconds)
[info] - Drops temporary table (2 milliseconds)
[info] - Drops cached temporary table (5 milliseconds)
[info] - Clear all cache (26 milliseconds)
[info] - Clear accumulators when uncacheTable to prevent memory leaking (142 milliseconds)
[info] - SPARK-10327 Cache Table is not working while subquery has alias in its project list (33 milliseconds)
[info] - A cached table preserves the partitioning and ordering of its cached SparkPlan (1 second, 887 milliseconds)
[info] DataFrameTungstenSuite:
[info] - test simple types (26 milliseconds)
[info] - test struct type (21 milliseconds)
[info] - test nested struct type (19 milliseconds)
[info] SampleNodeSuite:
[info] - with replacement (7 milliseconds)
[info] - without replacement (1 millisecond)
[info] InnerJoinSuite:
[info] - inner join, one match per row using BroadcastHashJoin (build=left) (30 milliseconds)
[info] - inner join, one match per row using BroadcastHashJoin (build=right) (22 milliseconds)
[info] - inner join, one match per row using SortMergeJoin (32 milliseconds)
[info] - inner join, multiple matches using BroadcastHashJoin (build=left) (29 milliseconds)
[info] - inner join, multiple matches using BroadcastHashJoin (build=right) (17 milliseconds)
[info] - inner join, multiple matches using SortMergeJoin (39 milliseconds)
[info] - inner join, no matches using BroadcastHashJoin (build=left) (21 milliseconds)
[info] - inner join, no matches using BroadcastHashJoin (build=right) (18 milliseconds)
[info] - inner join, no matches using SortMergeJoin (27 milliseconds)
[info] - inner join, null safe using BroadcastHashJoin (build=left) (27 milliseconds)
[info] - inner join, null safe using BroadcastHashJoin (build=right) (17 milliseconds)
[info] - inner join, null safe using SortMergeJoin (33 milliseconds)
[info] DatasetAggregatorSuite:
[info] - typed aggregation: TypedAggregator (180 milliseconds)
[info] - typed aggregation: TypedAggregator, expr, expr (130 milliseconds)
[info] - typed aggregation: complex case (135 milliseconds)
[info] - typed aggregation: complex result type (125 milliseconds)
[info] - typed aggregation: in project list (134 milliseconds)
[info] - typed aggregation: class input (83 milliseconds)
[info] - typed aggregation: class input with reordering (251 milliseconds)
[info] - typed aggregation: complex input (342 milliseconds)
[info] ParquetAvroCompatibilitySuite:
[info] - required primitives (260 milliseconds)
[info] - optional primitives (133 milliseconds)
[info] - non-nullable arrays (129 milliseconds)
[info] - nullable arrays (parquet-avro 1.7.0 does not properly support this) !!! IGNORED !!!
[info] - SPARK-10136 array of primitive array (116 milliseconds)
[info] - map of primitive array (126 milliseconds)
[info] - various complex types (134 milliseconds)
[info] - SPARK-9407 Push down predicates involving Parquet ENUM columns (126 milliseconds)
[info] ParquetSchemaInferenceSuite:
[info] - sql => parquet: basic types (0 milliseconds)
[info] - sql <= parquet: basic types (1 millisecond)
[info] - sql => parquet: logical integral types (0 milliseconds)
[info] - sql <= parquet: logical integral types (1 millisecond)
[info] - sql => parquet: string (0 milliseconds)
[info] - sql <= parquet: string (0 milliseconds)
[info] - sql => parquet: binary enum as string (0 milliseconds)
[info] - sql <= parquet: binary enum as string (0 milliseconds)
[info] - sql => parquet: non-nullable array - non-standard (0 milliseconds)
[info] - sql <= parquet: non-nullable array - non-standard (0 milliseconds)
[info] - sql => parquet: non-nullable array - standard (0 milliseconds)
[info] - sql <= parquet: non-nullable array - standard (1 millisecond)
[info] - sql => parquet: nullable array - non-standard (0 milliseconds)
[info] - sql <= parquet: nullable array - non-standard (1 millisecond)
[info] - sql => parquet: nullable array - standard (0 milliseconds)
[info] - sql <= parquet: nullable array - standard (0 milliseconds)
[info] - sql => parquet: map - standard (1 millisecond)
[info] - sql <= parquet: map - standard (0 milliseconds)
[info] - sql => parquet: map - non-standard (1 millisecond)
[info] - sql <= parquet: map - non-standard (0 milliseconds)
[info] - sql => parquet: struct (1 millisecond)
[info] - sql <= parquet: struct (0 milliseconds)
[info] - sql => parquet: deeply nested type - non-standard (1 millisecond)
[info] - sql <= parquet: deeply nested type - non-standard (0 milliseconds)
[info] - sql => parquet: deeply nested type - standard (1 millisecond)
[info] - sql <= parquet: deeply nested type - standard (0 milliseconds)
[info] - sql => parquet: optional types (1 millisecond)
[info] - sql <= parquet: optional types (0 milliseconds)
[info] ParquetSchemaSuite:
[info] - DataType string parser compatibility (48 milliseconds)
[info] - merge with metastore schema (6 milliseconds)
[info] - merge missing nullable fields from Metastore schema (1 millisecond)
02:45:01.098 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 2.0 (TID 4)
org.apache.spark.SparkException: Failed merging schema of file file:/Users/sim/dev/spx/spark/target/tmp/spark-45ecc179-2dcc-46da-96f4-aad0087e42f7/p=2/part-r-00000-7143e27c-7617-44b6-992d-7491a9363bff.gz.parquet:
root
|-- id: integer (nullable = true)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29$$anonfun$apply$8.apply(ParquetRelation.scala:812)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29$$anonfun$apply$8.apply(ParquetRelation.scala:807)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29.apply(ParquetRelation.scala:807)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29.apply(ParquetRelation.scala:782)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:720)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:720)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Failed to merge incompatible data types LongType and IntegerType
at org.apache.spark.sql.types.StructType$.merge(StructType.scala:445)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1$$anonfun$apply$3.apply(StructType.scala:403)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1$$anonfun$apply$3.apply(StructType.scala:401)
at scala.Option.map(Option.scala:146)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1.apply(StructType.scala:401)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1.apply(StructType.scala:398)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.sql.types.StructType$.merge(StructType.scala:398)
at org.apache.spark.sql.types.StructType.merge(StructType.scala:316)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29$$anonfun$apply$8.apply(ParquetRelation.scala:810)
... 16 more
02:45:01.098 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 2.0 (TID 5)
org.apache.spark.SparkException: Failed merging schema of file file:/Users/sim/dev/spx/spark/target/tmp/spark-45ecc179-2dcc-46da-96f4-aad0087e42f7/p=2/_metadata:
root
|-- id: integer (nullable = true)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29$$anonfun$apply$8.apply(ParquetRelation.scala:812)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29$$anonfun$apply$8.apply(ParquetRelation.scala:807)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29.apply(ParquetRelation.scala:807)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29.apply(ParquetRelation.scala:782)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:720)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:720)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Failed to merge incompatible data types LongType and IntegerType
at org.apache.spark.sql.types.StructType$.merge(StructType.scala:445)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1$$anonfun$apply$3.apply(StructType.scala:403)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1$$anonfun$apply$3.apply(StructType.scala:401)
at scala.Option.map(Option.scala:146)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1.apply(StructType.scala:401)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1.apply(StructType.scala:398)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.sql.types.StructType$.merge(StructType.scala:398)
at org.apache.spark.sql.types.StructType.merge(StructType.scala:316)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29$$anonfun$apply$8.apply(ParquetRelation.scala:810)
... 16 more
02:45:01.106 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 2.0 (TID 4, localhost): org.apache.spark.SparkException: Failed merging schema of file file:/Users/sim/dev/spx/spark/target/tmp/spark-45ecc179-2dcc-46da-96f4-aad0087e42f7/p=2/part-r-00000-7143e27c-7617-44b6-992d-7491a9363bff.gz.parquet:
root
|-- id: integer (nullable = true)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29$$anonfun$apply$8.apply(ParquetRelation.scala:812)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29$$anonfun$apply$8.apply(ParquetRelation.scala:807)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29.apply(ParquetRelation.scala:807)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29.apply(ParquetRelation.scala:782)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:720)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:720)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Failed to merge incompatible data types LongType and IntegerType
at org.apache.spark.sql.types.StructType$.merge(StructType.scala:445)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1$$anonfun$apply$3.apply(StructType.scala:403)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1$$anonfun$apply$3.apply(StructType.scala:401)
at scala.Option.map(Option.scala:146)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1.apply(StructType.scala:401)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1.apply(StructType.scala:398)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.sql.types.StructType$.merge(StructType.scala:398)
at org.apache.spark.sql.types.StructType.merge(StructType.scala:316)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29$$anonfun$apply$8.apply(ParquetRelation.scala:810)
... 16 more
02:45:01.106 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 2.0 failed 1 times; aborting job
02:45:01.106 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 2.0 (TID 5, localhost): org.apache.spark.SparkException: Failed merging schema of file file:/Users/sim/dev/spx/spark/target/tmp/spark-45ecc179-2dcc-46da-96f4-aad0087e42f7/p=2/_metadata:
root
|-- id: integer (nullable = true)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29$$anonfun$apply$8.apply(ParquetRelation.scala:812)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29$$anonfun$apply$8.apply(ParquetRelation.scala:807)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29.apply(ParquetRelation.scala:807)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29.apply(ParquetRelation.scala:782)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:720)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:720)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Failed to merge incompatible data types LongType and IntegerType
at org.apache.spark.sql.types.StructType$.merge(StructType.scala:445)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1$$anonfun$apply$3.apply(StructType.scala:403)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1$$anonfun$apply$3.apply(StructType.scala:401)
at scala.Option.map(Option.scala:146)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1.apply(StructType.scala:401)
at org.apache.spark.sql.types.StructType$$anonfun$merge$1.apply(StructType.scala:398)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.sql.types.StructType$.merge(StructType.scala:398)
at org.apache.spark.sql.types.StructType.merge(StructType.scala:316)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29$$anonfun$apply$8.apply(ParquetRelation.scala:810)
... 16 more
[info] - schema merging failure error message (312 milliseconds)
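
The ERROR/WARN output above is expected: it belongs to the "schema merging failure error message" test, which writes Parquet parts whose id column disagrees physically (integer in one partition directory, long in the other) and asserts on the resulting SparkException. A standalone sketch of the same conflict (hypothetical names; assumes spark-sql on the classpath):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object SchemaMergeRepro {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setMaster("local[2]").setAppName("merge-repro"))
        val sqlContext = new SQLContext(sc)
        val dir = java.nio.file.Files.createTempDirectory("merge-repro").toString

        // Two partitions of one table with incompatible physical types for `id`:
        sqlContext.range(3).selectExpr("CAST(id AS INT) AS id").write.parquet(s"$dir/p=1")
        sqlContext.range(3).selectExpr("CAST(id AS LONG) AS id").write.parquet(s"$dir/p=2")

        // Schema merging cannot reconcile IntegerType with LongType; this
        // throws org.apache.spark.SparkException like the stack traces above:
        sqlContext.read.option("mergeSchema", "true").parquet(dir).printSchema()
        sc.stop()
      }
    }
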
[info] - sql <= parquet: Backwards-compatibility: LIST with nullable element type - 1 - standard (1 millisecond)
[info] - sql <= parquet: Backwards-compatibility: LIST with nullable element type - 2 (0 milliseconds)
[info] - sql <= parquet: Backwards-compatibility: LIST with non-nullable element type - 1 - standard (0 milliseconds)
[info] - sql <= parquet: Backwards-compatibility: LIST with non-nullable element type - 2 (0 milliseconds)
[info] - sql <= parquet: Backwards-compatibility: LIST with non-nullable element type - 3 (0 milliseconds)
[info] - sql <= parquet: Backwards-compatibility: LIST with non-nullable element type - 4 (1 millisecond)
[info] - sql <= parquet: Backwards-compatibility: LIST with non-nullable element type - 5 - parquet-avro style (0 milliseconds)
[info] - sql <= parquet: Backwards-compatibility: LIST with non-nullable element type - 6 - parquet-thrift style (1 millisecond)
[info] - sql <= parquet: Backwards-compatibility: LIST with non-nullable element type 7 - parquet-protobuf primitive lists (0 milliseconds)
[info] - sql <= parquet: Backwards-compatibility: LIST with non-nullable element type 8 - parquet-protobuf non-primitive lists (0 milliseconds)
[info] - sql => parquet: Backwards-compatibility: LIST with nullable element type - 1 - standard (0 milliseconds)
[info] - sql => parquet: Backwards-compatibility: LIST with nullable element type - 2 - prior to 1.4.x (0 milliseconds)
[info] - sql => parquet: Backwards-compatibility: LIST with non-nullable element type - 1 - standard (0 milliseconds)
[info] - sql => parquet: Backwards-compatibility: LIST with non-nullable element type - 2 - prior to 1.4.x (1 millisecond)
[info] - sql <= parquet: Backwards-compatibility: MAP with non-nullable value type - 1 - standard (0 milliseconds)
[info] - sql <= parquet: Backwards-compatibility: MAP with non-nullable value type - 2 (0 milliseconds)
[info] - sql <= parquet: Backwards-compatibility: MAP with non-nullable value type - 3 - prior to 1.4.x (1 millisecond)
[info] - sql <= parquet: Backwards-compatibility: MAP with nullable value type - 1 - standard (0 milliseconds)
[info] - sql <= parquet: Backwards-compatibility: MAP with nullable value type - 2 (0 milliseconds)
[info] - sql <= parquet: Backwards-compatibility: MAP with nullable value type - 3 - parquet-avro style (1 millisecond)
[info] - sql => parquet: Backwards-compatibility: MAP with non-nullable value type - 1 - standard (0 milliseconds)
[info] - sql => parquet: Backwards-compatibility: MAP with non-nullable value type - 2 - prior to 1.4.x (0 milliseconds)
[info] - sql => parquet: Backwards-compatibility: MAP with nullable value type - 1 - standard (0 milliseconds)
[info] - sql => parquet: Backwards-compatibility: MAP with nullable value type - 3 - prior to 1.4.x (0 milliseconds)
[info] - sql => parquet: DECIMAL(1, 0) - standard (1 millisecond)
[info] - sql <= parquet: DECIMAL(1, 0) - standard (1 millisecond)
[info] - sql => parquet: DECIMAL(8, 3) - standard (1 millisecond)
[info] - sql <= parquet: DECIMAL(8, 3) - standard (0 milliseconds)
[info] - sql => parquet: DECIMAL(9, 3) - standard (0 milliseconds)
[info] - sql <= parquet: DECIMAL(9, 3) - standard (0 milliseconds)
[info] - sql => parquet: DECIMAL(18, 3) - standard (0 milliseconds)
[info] - sql <= parquet: DECIMAL(18, 3) - standard (0 milliseconds)
[info] - sql => parquet: DECIMAL(19, 3) - standard (1 millisecond)
[info] - sql <= parquet: DECIMAL(19, 3) - standard (0 milliseconds)
[info] - sql => parquet: DECIMAL(1, 0) - prior to 1.4.x (0 milliseconds)
[info] - sql <= parquet: DECIMAL(1, 0) - prior to 1.4.x (1 millisecond)
[info] - sql => parquet: DECIMAL(8, 3) - prior to 1.4.x (0 milliseconds)
[info] - sql <= parquet: DECIMAL(8, 3) - prior to 1.4.x (0 milliseconds)
[info] - sql => parquet: DECIMAL(9, 3) - prior to 1.4.x (0 milliseconds)
[info] - sql <= parquet: DECIMAL(9, 3) - prior to 1.4.x (0 milliseconds)
[info] - sql => parquet: DECIMAL(18, 3) - prior to 1.4.x (0 milliseconds)
[info] - sql <= parquet: DECIMAL(18, 3) - prior to 1.4.x (1 millisecond)
[info] - Clipping - simple nested struct (0 milliseconds)
[info] - Clipping - parquet-protobuf style array (0 milliseconds)
[info] - Clipping - parquet-thrift style array (1 millisecond)
[info] - Clipping - parquet-avro style array (0 milliseconds)
[info] - Clipping - parquet-hive style array (1 millisecond)
[info] - Clipping - 2-level list of required struct (0 milliseconds)
[info] - Clipping - standard array (1 millisecond)
[info] - Clipping - empty requested schema (0 milliseconds)
[info] - Clipping - disjoint field sets (1 millisecond)
[info] - Clipping - parquet-avro style map (0 milliseconds)
[info] - Clipping - standard map (1 millisecond)
[info] - Clipping - standard map with complex key (0 milliseconds)
[info] SQLExecutionSuite:
[info] - concurrent query execution (SPARK-10548) (120 milliseconds)
[info] FilteredScanSuite:
[info] - SELECT * FROM oneToTenFiltered (44 milliseconds)
[info] - SELECT a, b FROM oneToTenFiltered (18 milliseconds)
[info] - SELECT b, a FROM oneToTenFiltered (15 milliseconds)
[info] - SELECT a FROM oneToTenFiltered (14 milliseconds)
[info] - SELECT b FROM oneToTenFiltered (14 milliseconds)
[info] - SELECT a * 2 FROM oneToTenFiltered (18 milliseconds)
[info] - SELECT A AS b FROM oneToTenFiltered (18 milliseconds)
[info] - SELECT x.b, y.a FROM oneToTenFiltered x JOIN oneToTenFiltered y ON x.a = y.b (480 milliseconds)
[info] - SELECT x.a, y.b FROM oneToTenFiltered x JOIN oneToTenFiltered y ON x.a = y.b (400 milliseconds)
[info] - SELECT a, b FROM oneToTenFiltered WHERE a = 1 (15 milliseconds)
[info] - SELECT a, b FROM oneToTenFiltered WHERE a IN (1,3,5) (18 milliseconds)
[info] - SELECT a, b FROM oneToTenFiltered WHERE A = 1 (14 milliseconds)
[info] - SELECT a, b FROM oneToTenFiltered WHERE b = 2 (27 milliseconds)
[info] - SELECT a, b FROM oneToTenFiltered WHERE a IS NULL (5 milliseconds)
[info] - SELECT a, b FROM oneToTenFiltered WHERE a IS NOT NULL (17 milliseconds)
[info] - SELECT a, b FROM oneToTenFiltered WHERE a < 5 AND a > 1 (17 milliseconds)
[info] - SELECT a, b FROM oneToTenFiltered WHERE a < 3 OR a > 8 (16 milliseconds)
[info] - SELECT a, b FROM oneToTenFiltered WHERE NOT (a < 6) (15 milliseconds)
[info] - SELECT a, b, c FROM oneToTenFiltered WHERE c like 'c%' (16 milliseconds)
[info] - SELECT a, b, c FROM oneToTenFiltered WHERE c like '%D' (16 milliseconds)
[info] - SELECT a, b, c FROM oneToTenFiltered WHERE c like '%eE%' (16 milliseconds)
[info] - PushDown Returns 1: SELECT * FROM oneToTenFiltered WHERE A = 1 (11 milliseconds)
[info] - PushDown Returns 1: SELECT a FROM oneToTenFiltered WHERE A = 1 (8 milliseconds)
[info] - PushDown Returns 1: SELECT b FROM oneToTenFiltered WHERE A = 1 (8 milliseconds)
[info] - PushDown Returns 1: SELECT a, b FROM oneToTenFiltered WHERE A = 1 (8 milliseconds)
[info] - PushDown Returns 1: SELECT * FROM oneToTenFiltered WHERE a = 1 (8 milliseconds)
[info] - PushDown Returns 1: SELECT * FROM oneToTenFiltered WHERE 1 = a (9 milliseconds)
[info] - PushDown Returns 9: SELECT * FROM oneToTenFiltered WHERE a > 1 (9 milliseconds)
[info] - PushDown Returns 9: SELECT * FROM oneToTenFiltered WHERE a >= 2 (9 milliseconds)
[info] - PushDown Returns 9: SELECT * FROM oneToTenFiltered WHERE 1 < a (8 milliseconds)
[info] - PushDown Returns 9: SELECT * FROM oneToTenFiltered WHERE 2 <= a (9 milliseconds)
[info] - PushDown Returns 0: SELECT * FROM oneToTenFiltered WHERE 1 > a (8 milliseconds)
[info] - PushDown Returns 2: SELECT * FROM oneToTenFiltered WHERE 2 >= a (17 milliseconds)
[info] - PushDown Returns 0: SELECT * FROM oneToTenFiltered WHERE a < 1 (10 milliseconds)
[info] - PushDown Returns 2: SELECT * FROM oneToTenFiltered WHERE a <= 2 (8 milliseconds)
[info] - PushDown Returns 8: SELECT * FROM oneToTenFiltered WHERE a > 1 AND a < 10 (9 milliseconds)
[info] - PushDown Returns 3: SELECT * FROM oneToTenFiltered WHERE a IN (1,3,5) (9 milliseconds)
[info] - PushDown Returns 0: SELECT * FROM oneToTenFiltered WHERE a = 20 (8 milliseconds)
[info] - PushDown Returns 10: SELECT * FROM oneToTenFiltered WHERE b = 1 (8 milliseconds)
[info] - PushDown Returns 3: SELECT * FROM oneToTenFiltered WHERE a < 5 AND a > 1 (9 milliseconds)
[info] - PushDown Returns 4: SELECT * FROM oneToTenFiltered WHERE a < 3 OR a > 8 (9 milliseconds)
[info] - PushDown Returns 5: SELECT * FROM oneToTenFiltered WHERE NOT (a < 6) (9 milliseconds)
[info] - PushDown Returns 1: SELECT a, b, c FROM oneToTenFiltered WHERE c like 'c%' (8 milliseconds)
[info] - PushDown Returns 0: SELECT a, b, c FROM oneToTenFiltered WHERE c like 'C%' (9 milliseconds)
[info] - PushDown Returns 1: SELECT a, b, c FROM oneToTenFiltered WHERE c like '%D' (9 milliseconds)
[info] - PushDown Returns 0: SELECT a, b, c FROM oneToTenFiltered WHERE c like '%d' (9 milliseconds)
[info] - PushDown Returns 1: SELECT a, b, c FROM oneToTenFiltered WHERE c like '%eE%' (9 milliseconds)
[info] - PushDown Returns 0: SELECT a, b, c FROM oneToTenFiltered WHERE c like '%Ee%' (9 milliseconds)
[info] - PushDown Returns 1: SELECT c FROM oneToTenFiltered WHERE c = 'aaaaaAAAAA' (8 milliseconds)
[info] - PushDown Returns 1: SELECT c FROM oneToTenFiltered WHERE c IN ('aaaaaAAAAA', 'foo') (10 milliseconds)
[info] - PushDown Returns 10: SELECT c FROM oneToTenFiltered WHERE A + b > 9 (12 milliseconds)
[info] - PushDown Returns 3: SELECT a FROM oneToTenFiltered WHERE a + b > 9 AND b < 16 AND c IN ('bbbbbBBBBB', 'cccccCCCCC', 'dddddDDDDD', 'foo') (10 milliseconds)
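
Annotation: the "PushDown Returns N" assertions above count how many rows the underlying relation actually produces, which exercises Spark's filter-pushdown contract for external data sources. A minimal sketch of that contract follows (the PrunedFilteredScan API is real for this era of Spark; the relation and the filter handling below are illustrative, not the suite's actual oneToTenFiltered source):

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, EqualTo, Filter, GreaterThan, PrunedFilteredScan}
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

// Illustrative relation: mixing in PrunedFilteredScan lets Spark push the
// WHERE clauses seen above (e.g. EqualTo("a", 1) for "WHERE a = 1") into
// buildScan. Honoring a filter here is an optimization only: Spark
// re-evaluates all filters on the returned rows, so correctness is kept
// even when some filters are ignored.
class OneToTenRelation(val sqlContext: SQLContext) extends BaseRelation with PrunedFilteredScan {
  override def schema: StructType = StructType(StructField("a", IntegerType) :: Nil)

  override def buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row] = {
    val kept = (1 to 10).filter { i =>
      filters.forall {
        case EqualTo("a", v: Int)     => i == v
        case GreaterThan("a", v: Int) => i > v
        case _                        => true // leave unhandled filters to Spark
      }
    }
    sqlContext.sparkContext.parallelize(kept.map(Row(_)))
  }
}
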
[info] UnsafeRowSerializerSuite:
[info] - toUnsafeRow() test helper method (1 millisecond)
[info] - basic row serialization (2 milliseconds)
[info] - close empty input stream (0 milliseconds)
02:45:03.008 WARN org.apache.spark.storage.MemoryStore: Max memory 58.6 KB is less than the initial memory threshold 1024.0 KB needed to store a block in memory. Please configure Spark with more memory.
[info] - SPARK-10466: external sorter spilling with unsafe row serializer (202 milliseconds)
[info] - SPARK-10403: unsafe row serializer with SortShuffleManager (41 milliseconds)
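
Annotation: the MemoryStore warning above (58.6 KB of usable memory) is expected here; spilling tests typically shrink Spark's memory so the external sorter is forced to spill. A hedged sketch of one way such a tiny store can come about (spark.testing.memory is an internal knob read by Spark's memory manager; the exact value is illustrative):

import org.apache.spark.{SparkConf, SparkContext}

// Illustrative only: cap usable memory far below normal so inserting even a
// few unsafe rows exceeds the store and triggers a spill to disk.
val conf = new SparkConf()
  .setMaster("local")
  .setAppName("unsafe-row-spill-sketch")
  .set("spark.testing.memory", (80 * 1024).toString) // ~80 KB
val sc = new SparkContext(conf)
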
[info] DataFrameImplicitsSuite:
[info] - RDD of tuples (36 milliseconds)
[info] - Seq of tuples (13 milliseconds)
[info] - RDD[Int] (17 milliseconds)
[info] - RDD[Long] (23 milliseconds)
[info] - RDD[String] (20 milliseconds)
[info] DDLSourceLoadSuite:
[info] - data sources with the same name (2 milliseconds)
[info] - load data source from format alias (0 milliseconds)
[info] - specify full classname with duplicate formats (1 millisecond)
[info] - should fail to load ORC without HiveContext (2 milliseconds)
[info] JsonSuite:
[info] - Type promotion (22 milliseconds)
[info] - Get compatible type (1 millisecond)
[info] - Complex field and type inferring with null in sampling (53 milliseconds)
[info] - Primitive field and type inferring (34 milliseconds)
[info] - Complex field and type inferring (316 milliseconds)
[info] - GetField operation on complex data type (66 milliseconds)
[info] - Type conflict in primitive field values (288 milliseconds)
[info] - Type conflict in primitive field values (Ignored) !!! IGNORED !!!
[info] - Type conflict in complex field values (39 milliseconds)
[info] - Type conflict in array elements (64 milliseconds)
[info] - Handling missing fields (6 milliseconds)
[info] - jsonFile should be based on JSONRelation (89 milliseconds)
[info] - Loading a JSON dataset from a text file (94 milliseconds)
[info] - Loading a JSON dataset primitivesAsString returns schema with primitive types as strings (100 milliseconds)
[info] - Loading a JSON dataset primitivesAsString returns complex fields as strings (278 milliseconds)
[info] - Loading a JSON dataset floatAsBigDecimal returns schema with float types as BigDecimal (32 milliseconds)
[info] - Loading a JSON dataset from a text file with SQL (81 milliseconds)
[info] - Applying schemas (96 milliseconds)
[info] - Applying schemas with MapType (102 milliseconds)
[info] - SPARK-2096 Correctly parse dot notations (63 milliseconds)
[info] - SPARK-3390 Complex arrays (89 milliseconds)
[info] - SPARK-3308 Read top level JSON arrays (28 milliseconds)
[info] - Corrupt records (82 milliseconds)
[info] - SPARK-4068: nulls in arrays (34 milliseconds)
[info] - SPARK-4228 DataFrame to JSON (430 milliseconds)
[info] - JSONRelation equality test (73 milliseconds)
[info] - SPARK-6245 JsonRDD.inferSchema on empty RDD (7 milliseconds)
[info] - SPARK-7565 MapType in JsonRDD (315 milliseconds)
[info] - SPARK-8093 Erase empty structs (5 milliseconds)
[info] - JSON with Partition (243 milliseconds)
[info] - backward compatibility *** FAILED *** (157 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] Relation[col0#13071,col1#13072,col2#13073,col3#13074,col4#13075,col5#13076,col6#13077,col7#13078L,col8#13079,col9#13080,col10#13081,col11#13082,col12#13083,col13#13084,col14#13085,col15#13086,col16#13087,col17#13088] JSONRelation
[info]
[info] == Analyzed Logical Plan ==
[info] col0: string, col1: binary, col2: null, col3: boolean, col4: tinyint, col5: smallint, col6: int, col7: bigint, col8: float, col9: double, col10: decimal(25,5), col11: decimal(6,5), col12: date, col13: timestamp, col14: array<int>, col15: map<string,bigint>, col16: struct<f1:float,f2:array<boolean>>, col17: mydensevector
[info] Relation[col0#13071,col1#13072,col2#13073,col3#13074,col4#13075,col5#13076,col6#13077,col7#13078L,col8#13079,col9#13080,col10#13081,col11#13082,col12#13083,col13#13084,col14#13085,col15#13086,col16#13087,col17#13088] JSONRelation
[info]
[info] == Optimized Logical Plan ==
[info] Relation[col0#13071,col1#13072,col2#13073,col3#13074,col4#13075,col5#13076,col6#13077,col7#13078L,col8#13079,col9#13080,col10#13081,col11#13082,col12#13083,col13#13084,col14#13085,col15#13086,col16#13087,col17#13088] JSONRelation
[info]
[info] == Physical Plan ==
[info] Scan JSONRelation[col0#13071,col1#13072,col2#13073,col3#13074,col4#13075,col5#13076,col6#13077,col7#13078L,col8#13079,col9#13080,col10#13081,col11#13082,col12#13083,col13#13084,col14#13085,col15#13086,col16#13087,col17#13088] InputPaths: file:/Users/sim/dev/spx/spark/target/tmp/spark-0e8f23ea-6ce6-4081-beba-c78f583b0459
[info] == Results ==
[info]
[info] == Results ==
[info] !== Correct Answer - 9 == == Spark Answer - 9 ==
[info] ![Spark 1.2.2,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2015-01-01,2015-01-01 23:50:59.123,List(2, 3, 4),Map(a string -> 2000),[4.75,List(false, true)],org.apache.spark.sql.MyDenseVector@72be3116] [Spark 1.2.2,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2014-12-31,2015-01-01 23:50:59.123,WrappedArray(2, 3, 4),Map(a string -> 2000),[4.75,WrappedArray(false, true)],org.apache.spark.sql.MyDenseVector@7fba6e41]
[info] ![Spark 1.3.1,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2015-01-01,2015-01-01 23:50:59.123,List(2, 3, 4),Map(a string -> 2000),[4.75,List(false, true)],org.apache.spark.sql.MyDenseVector@72be3116] [Spark 1.3.1,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2014-12-31,2015-01-01 23:50:59.123,WrappedArray(2, 3, 4),Map(a string -> 2000),[4.75,WrappedArray(false, true)],org.apache.spark.sql.MyDenseVector@5dacee15]
[info] ![Spark 1.3.1,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2015-01-01,2015-01-01 23:50:59.123,List(2, 3, 4),Map(a string -> 2000),[4.75,List(false, true)],org.apache.spark.sql.MyDenseVector@72be3116] [Spark 1.3.1,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2014-12-31,2015-01-01 23:50:59.123,WrappedArray(2, 3, 4),Map(a string -> 2000),[4.75,WrappedArray(false, true)],org.apache.spark.sql.MyDenseVector@7077c1e]
[info] ![Spark 1.4.1,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2015-01-01,2015-01-01 23:50:59.123,List(2, 3, 4),Map(a string -> 2000),[4.75,List(false, true)],org.apache.spark.sql.MyDenseVector@72be3116] [Spark 1.4.1,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2014-12-31,2015-01-01 23:50:59.123,WrappedArray(2, 3, 4),Map(a string -> 2000),[4.75,WrappedArray(false, true)],org.apache.spark.sql.MyDenseVector@10410d48]
[info] ![Spark 1.4.1,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2015-01-01,2015-01-01 23:50:59.123,List(2, 3, 4),Map(a string -> 2000),[4.75,List(false, true)],org.apache.spark.sql.MyDenseVector@72be3116] [Spark 1.4.1,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2014-12-31,2015-01-01 23:50:59.123,WrappedArray(2, 3, 4),Map(a string -> 2000),[4.75,WrappedArray(false, true)],org.apache.spark.sql.MyDenseVector@2c9a64d2]
[info] ![Spark 1.5.0,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2015-01-01,2015-01-01 23:50:59.123,List(2, 3, 4),Map(a string -> 2000),[4.75,List(false, true)],org.apache.spark.sql.MyDenseVector@72be3116] [Spark 1.5.0,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2014-12-31,2015-01-01 23:50:59.123,WrappedArray(2, 3, 4),Map(a string -> 2000),[4.75,WrappedArray(false, true)],org.apache.spark.sql.MyDenseVector@3a18452a]
[info] ![Spark 1.5.0,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2015-01-01,2015-01-01 23:50:59.123,List(2, 3, 4),Map(a string -> 2000),[4.75,List(false, true)],org.apache.spark.sql.MyDenseVector@72be3116] [Spark 1.5.0,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2014-12-31,2015-01-01 23:50:59.123,WrappedArray(2, 3, 4),Map(a string -> 2000),[4.75,WrappedArray(false, true)],org.apache.spark.sql.MyDenseVector@fecf66]
[info] ![Spark 2.0.0-SNAPSHOT,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2015-01-01,2015-01-01 23:50:59.123,List(2, 3, 4),Map(a string -> 2000),[4.75,List(false, true)],org.apache.spark.sql.MyDenseVector@72be3116] [Spark 2.0.0-SNAPSHOT,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2014-12-31,2015-01-01 23:50:59.123,WrappedArray(2, 3, 4),Map(a string -> 2000),[4.75,WrappedArray(false, true)],org.apache.spark.sql.MyDenseVector@222b346c]
[info] ![Spark 2.0.0-SNAPSHOT,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2015-01-01,2015-01-01 23:50:59.123,List(2, 3, 4),Map(a string -> 2000),[4.75,List(false, true)],org.apache.spark.sql.MyDenseVector@72be3116] [Spark 2.0.0-SNAPSHOT,WrappedArray(97, 32, 115, 116, 114, 105, 110, 103, 32, 105, 110, 32, 98, 105, 110, 97, 114, 121),null,true,1,2,3,9223372036854775807,0.25,0.75,1234.23456,1.23456,2014-12-31,2015-01-01 23:50:59.123,WrappedArray(2, 3, 4),Map(a string -> 2000),[4.75,WrappedArray(false, true)],org.apache.spark.sql.MyDenseVector@5d68680f] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.execution.datasources.json.JsonSuite$$anonfun$31$$anonfun$apply$mcV$sp$86.apply(JsonSuite.scala:1426)
[info] at org.apache.spark.sql.execution.datasources.json.JsonSuite$$anonfun$31$$anonfun$apply$mcV$sp$86.apply(JsonSuite.scala:1398)
[info] at org.apache.spark.sql.test.SQLTestUtils$class.withTempPath(SQLTestUtils.scala:125)
[info] at org.apache.spark.sql.execution.datasources.json.JsonSuite.withTempPath(JsonSuite.scala:44)
[info] at org.apache.spark.sql.execution.datasources.json.JsonSuite$$anonfun$31.apply$mcV$sp(JsonSuite.scala:1398)
[info] at org.apache.spark.sql.execution.datasources.json.JsonSuite$$anonfun$31.apply(JsonSuite.scala:1332)
[info] at org.apache.spark.sql.execution.datasources.json.JsonSuite$$anonfun$31.apply(JsonSuite.scala:1332)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.execution.datasources.json.JsonSuite.org$scalatest$BeforeAndAfterAll$$super$run(JsonSuite.scala:44)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.execution.datasources.json.JsonSuite.run(JsonSuite.scala:44)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
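
Annotation: the only substantive mismatch in the diff above is the DateType column: every expected 2015-01-01 comes back as 2014-12-31 (the differing MyDenseVector@... suffixes are just default toString hashes, and List vs WrappedArray is a rendering difference). A consistent one-day shift is characteristic of a timezone-dependent Date conversion; a minimal JDK-only sketch of the effect, assuming the US Pacific zone suggested by the 02:45 timestamps in this log:

import java.sql.Date
import java.util.TimeZone

object DateShiftSketch {
  def main(args: Array[String]): Unit = {
    // Catalyst stores DateType as days since the epoch; rendering the same
    // instant through a different default timezone shifts the calendar day.
    TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"))
    val utcMidnight2015 = java.time.LocalDate.of(2015, 1, 1)
      .atStartOfDay(java.time.ZoneOffset.UTC).toInstant.toEpochMilli
    println(new Date(utcMidnight2015)) // prints 2014-12-31 in Pacific time
  }
}
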
[info] - SPARK-11544 test pathfilter (162 milliseconds)
[info] - SPARK-12057 additional corrupt records do not throw exceptions (41 milliseconds)
[info] - SPARK-12872 Support to specify the option for compression codec (210 milliseconds)
[info] - Casting long as timestamp (22 milliseconds)
[info] CoGroupedIteratorSuite:
[info] - basic (8 milliseconds)
[info] - SPARK-11393: respect the fact that GroupedIterator.hasNext is not idempotent (1 millisecond)
[info] BooleanBitSetSuite:
[info] - BooleanBitSet: empty (8 milliseconds)
[info] - BooleanBitSet: less than 1 word (1 millisecond)
[info] - BooleanBitSet: exactly 1 word (0 milliseconds)
[info] - BooleanBitSet: multiple whole words (2 milliseconds)
[info] - BooleanBitSet: multiple words and 1 more bit (1 millisecond)
[info] IntersectNodeSuite:
[info] - basic (4 milliseconds)
[info] IntegralDeltaSuite:
[info] - IntDelta: empty column (1 millisecond)
[info] - IntDelta: simple case (3 milliseconds)
[info] - IntDelta: long random series (39 milliseconds)
[info] - LongDelta: empty column (0 milliseconds)
[info] - LongDelta: simple case (1 millisecond)
[info] - LongDelta: long random series (21 milliseconds)
[info] CSVSuite:
[info] - simple csv test (106 milliseconds)
[info] - simple csv test with type inference (80 milliseconds)
[info] - test with alternative delimiter and quote (64 milliseconds)
[info] - bad encoding name (12 milliseconds)
[info] - test different encoding (68 milliseconds)
[info] - test aliases sep and encoding for delimiter and charset (55 milliseconds)
[info] - DDL test with tab separated file (65 milliseconds)
[info] - DDL test parsing decimal type (50 milliseconds)
02:45:07.924 WARN org.apache.spark.sql.execution.datasources.csv.CSVRelation: Dropping malformed line: 2015,Chevy,Volt
[info] - test for DROPMALFORMED parsing mode (39 milliseconds)
02:45:07.961 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 43.0 (TID 59)
java.lang.RuntimeException: Malformed line in FAILFAST mode: 2015,Chevy,Volt
at org.apache.spark.sql.execution.datasources.csv.CSVRelation$$anonfun$parseCsv$3.apply(CSVRelation.scala:216)
at org.apache.spark.sql.execution.datasources.csv.CSVRelation$$anonfun$parseCsv$3.apply(CSVRelation.scala:211)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:396)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:308)
at scala.collection.AbstractIterator.to(Iterator.scala:1194)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:300)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1194)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:287)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1194)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:847)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:847)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1807)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1807)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
02:45:07.963 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 43.0 (TID 59, localhost): java.lang.RuntimeException: Malformed line in FAILFAST mode: 2015,Chevy,Volt
at org.apache.spark.sql.execution.datasources.csv.CSVRelation$$anonfun$parseCsv$3.apply(CSVRelation.scala:216)
at org.apache.spark.sql.execution.datasources.csv.CSVRelation$$anonfun$parseCsv$3.apply(CSVRelation.scala:211)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:396)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:308)
at scala.collection.AbstractIterator.to(Iterator.scala:1194)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:300)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1194)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:287)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1194)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:847)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:847)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1807)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1807)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
02:45:07.963 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 43.0 failed 1 times; aborting job
[info] - test for FAILFAST parsing mode (37 milliseconds)
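
Annotation: the DROPMALFORMED warning and the FAILFAST RuntimeException above are the two non-default CSV parse modes handling the same malformed line, and both tests pass. A sketch of how the mode is selected (option names as used by the built-in CSV source of this era; the path and header option are illustrative, and sqlContext is assumed in scope, e.g. in spark-shell):

// PERMISSIVE (the default) keeps malformed rows, DROPMALFORMED drops them
// with the warning seen above, and FAILFAST aborts the job with the
// RuntimeException seen above.
val cars = sqlContext.read
  .format("csv")
  .option("header", "true")
  .option("mode", "FAILFAST")
  .load("cars.csv")
cars.collect() // would throw on a line like: 2015,Chevy,Volt
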
[info] - test for tokens more than the fields in the schema (51 milliseconds)
[info] - test with null quote character (50 milliseconds)
[info] - test with empty file and known schema (24 milliseconds)
[info] - DDL test with empty file (26 milliseconds)
[info] - DDL test with schema (32 milliseconds)
[info] - save csv (206 milliseconds)
[info] - save csv with quote (136 milliseconds)
[info] - commented lines in CSV data (32 milliseconds)
[info] - inferring schema with commented lines in CSV data (44 milliseconds)
[info] - setting comment to null disables comment support (39 milliseconds)
[info] - nullable fields with user defined null value of "null" (34 milliseconds)
[info] - save csv with compression codec option (133 milliseconds)
[info] MemorySourceStressSuite:
[info] - memory stress test (1 second, 54 milliseconds)
[info] SortSuite:
[info] - basic sorting using ExternalSort (70 milliseconds)
[info] - sort followed by limit (57 milliseconds)
02:45:12.813 WARN org.apache.spark.scheduler.TaskSetManager: Stage 12 contains a task of very large size (2062 KB). The maximum recommended task size is 100 KB.
02:45:16.055 WARN org.apache.spark.scheduler.TaskSetManager: Stage 17 contains a task of very large size (2062 KB). The maximum recommended task size is 100 KB.
[info] - sorting does not crash for large inputs (7 seconds, 497 milliseconds)
[info] - sorting updates peak execution memory (513 milliseconds)
02:45:18.157 WARN org.apache.spark.scheduler.TaskSetManager: Stage 28 contains a task of very large size (637 KB). The maximum recommended task size is 100 KB.
02:45:18.340 WARN org.apache.spark.scheduler.TaskSetManager: Stage 29 contains a task of very large size (637 KB). The maximum recommended task size is 100 KB.
02:45:18.469 WARN org.apache.spark.scheduler.TaskSetManager: Stage 31 contains a task of very large size (637 KB). The maximum recommended task size is 100 KB.
02:45:18.510 WARN org.apache.spark.scheduler.TaskSetManager: Stage 32 contains a task of very large size (637 KB). The maximum recommended task size is 100 KB.
[info] - sorting on StringType with nullable=true, sortOrder=List('a ASC) (921 milliseconds)
02:45:19.052 WARN org.apache.spark.scheduler.TaskSetManager: Stage 34 contains a task of very large size (688 KB). The maximum recommended task size is 100 KB.
02:45:19.093 WARN org.apache.spark.scheduler.TaskSetManager: Stage 35 contains a task of very large size (688 KB). The maximum recommended task size is 100 KB.
02:45:19.219 WARN org.apache.spark.scheduler.TaskSetManager: Stage 37 contains a task of very large size (688 KB). The maximum recommended task size is 100 KB.
02:45:19.271 WARN org.apache.spark.scheduler.TaskSetManager: Stage 38 contains a task of very large size (688 KB). The maximum recommended task size is 100 KB.
[info] - sorting on StringType with nullable=true, sortOrder=List('a DESC) (798 milliseconds)
02:45:19.840 WARN org.apache.spark.scheduler.TaskSetManager: Stage 40 contains a task of very large size (747 KB). The maximum recommended task size is 100 KB.
02:45:19.876 WARN org.apache.spark.scheduler.TaskSetManager: Stage 41 contains a task of very large size (747 KB). The maximum recommended task size is 100 KB.
02:45:19.993 WARN org.apache.spark.scheduler.TaskSetManager: Stage 43 contains a task of very large size (747 KB). The maximum recommended task size is 100 KB.
02:45:20.032 WARN org.apache.spark.scheduler.TaskSetManager: Stage 44 contains a task of very large size (747 KB). The maximum recommended task size is 100 KB.
[info] - sorting on StringType with nullable=false, sortOrder=List('a ASC) (693 milliseconds)
02:45:20.533 WARN org.apache.spark.scheduler.TaskSetManager: Stage 46 contains a task of very large size (767 KB). The maximum recommended task size is 100 KB.
02:45:20.597 WARN org.apache.spark.scheduler.TaskSetManager: Stage 47 contains a task of very large size (767 KB). The maximum recommended task size is 100 KB.
02:45:20.707 WARN org.apache.spark.scheduler.TaskSetManager: Stage 49 contains a task of very large size (767 KB). The maximum recommended task size is 100 KB.
02:45:20.744 WARN org.apache.spark.scheduler.TaskSetManager: Stage 50 contains a task of very large size (767 KB). The maximum recommended task size is 100 KB.
[info] - sorting on StringType with nullable=false, sortOrder=List('a DESC) (362 milliseconds)
[info] - sorting on NullType with nullable=true, sortOrder=List('a ASC) (158 milliseconds)
[info] - sorting on NullType with nullable=true, sortOrder=List('a DESC) (247 milliseconds)
[info] - sorting on NullType with nullable=false, sortOrder=List('a ASC) (129 milliseconds)
[info] - sorting on NullType with nullable=false, sortOrder=List('a DESC) (98 milliseconds)
[info] - sorting on LongType with nullable=true, sortOrder=List('a ASC) (155 milliseconds)
[info] - sorting on LongType with nullable=true, sortOrder=List('a DESC) (108 milliseconds)
[info] - sorting on LongType with nullable=false, sortOrder=List('a ASC) (103 milliseconds)
[info] - sorting on LongType with nullable=false, sortOrder=List('a DESC) (103 milliseconds)
[info] - sorting on IntegerType with nullable=true, sortOrder=List('a ASC) (138 milliseconds)
[info] - sorting on IntegerType with nullable=true, sortOrder=List('a DESC) (413 milliseconds)
[info] - sorting on IntegerType with nullable=false, sortOrder=List('a ASC) (127 milliseconds)
[info] - sorting on IntegerType with nullable=false, sortOrder=List('a DESC) (160 milliseconds)
[info] - sorting on DecimalType(20,5) with nullable=true, sortOrder=List('a ASC) (275 milliseconds)
[info] - sorting on DecimalType(20,5) with nullable=true, sortOrder=List('a DESC) (225 milliseconds)
[info] - sorting on DecimalType(20,5) with nullable=false, sortOrder=List('a ASC) (192 milliseconds)
[info] - sorting on DecimalType(20,5) with nullable=false, sortOrder=List('a DESC) (282 milliseconds)
[info] - sorting on TimestampType with nullable=true, sortOrder=List('a ASC) (216 milliseconds)
[info] - sorting on TimestampType with nullable=true, sortOrder=List('a DESC) (304 milliseconds)
[info] - sorting on TimestampType with nullable=false, sortOrder=List('a ASC) (158 milliseconds)
[info] - sorting on TimestampType with nullable=false, sortOrder=List('a DESC) (178 milliseconds)
[info] - sorting on DoubleType with nullable=true, sortOrder=List('a ASC) (216 milliseconds)
[info] - sorting on DoubleType with nullable=true, sortOrder=List('a DESC) (240 milliseconds)
[info] - sorting on DoubleType with nullable=false, sortOrder=List('a ASC) (281 milliseconds)
[info] - sorting on DoubleType with nullable=false, sortOrder=List('a DESC) (151 milliseconds)
[info] - sorting on DateType with nullable=true, sortOrder=List('a ASC) (206 milliseconds)
[info] - sorting on DateType with nullable=true, sortOrder=List('a DESC) (246 milliseconds)
[info] - sorting on DateType with nullable=false, sortOrder=List('a ASC) (152 milliseconds)
[info] - sorting on DateType with nullable=false, sortOrder=List('a DESC) (816 milliseconds)
[info] - sorting on DecimalType(10,0) with nullable=true, sortOrder=List('a ASC) (154 milliseconds)
[info] - sorting on DecimalType(10,0) with nullable=true, sortOrder=List('a DESC) (196 milliseconds)
[info] - sorting on DecimalType(10,0) with nullable=false, sortOrder=List('a ASC) (161 milliseconds)
[info] - sorting on DecimalType(10,0) with nullable=false, sortOrder=List('a DESC) (152 milliseconds)
02:45:27.627 WARN org.apache.spark.scheduler.TaskSetManager: Stage 244 contains a task of very large size (249 KB). The maximum recommended task size is 100 KB.
02:45:27.662 WARN org.apache.spark.scheduler.TaskSetManager: Stage 245 contains a task of very large size (249 KB). The maximum recommended task size is 100 KB.
02:45:27.734 WARN org.apache.spark.scheduler.TaskSetManager: Stage 247 contains a task of very large size (249 KB). The maximum recommended task size is 100 KB.
02:45:27.760 WARN org.apache.spark.scheduler.TaskSetManager: Stage 248 contains a task of very large size (249 KB). The maximum recommended task size is 100 KB.
[info] - sorting on BinaryType with nullable=true, sortOrder=List('a ASC) (279 milliseconds)
02:45:27.901 WARN org.apache.spark.scheduler.TaskSetManager: Stage 250 contains a task of very large size (242 KB). The maximum recommended task size is 100 KB.
02:45:27.931 WARN org.apache.spark.scheduler.TaskSetManager: Stage 251 contains a task of very large size (242 KB). The maximum recommended task size is 100 KB.
02:45:27.992 WARN org.apache.spark.scheduler.TaskSetManager: Stage 253 contains a task of very large size (242 KB). The maximum recommended task size is 100 KB.
02:45:28.010 WARN org.apache.spark.scheduler.TaskSetManager: Stage 254 contains a task of very large size (242 KB). The maximum recommended task size is 100 KB.
[info] - sorting on BinaryType with nullable=true, sortOrder=List('a DESC) (207 milliseconds)
02:45:28.103 WARN org.apache.spark.scheduler.TaskSetManager: Stage 256 contains a task of very large size (277 KB). The maximum recommended task size is 100 KB.
02:45:28.121 WARN org.apache.spark.scheduler.TaskSetManager: Stage 257 contains a task of very large size (277 KB). The maximum recommended task size is 100 KB.
02:45:28.175 WARN org.apache.spark.scheduler.TaskSetManager: Stage 259 contains a task of very large size (277 KB). The maximum recommended task size is 100 KB.
02:45:28.208 WARN org.apache.spark.scheduler.TaskSetManager: Stage 260 contains a task of very large size (277 KB). The maximum recommended task size is 100 KB.
[info] - sorting on BinaryType with nullable=false, sortOrder=List('a ASC) (193 milliseconds)
02:45:28.294 WARN org.apache.spark.scheduler.TaskSetManager: Stage 262 contains a task of very large size (260 KB). The maximum recommended task size is 100 KB.
02:45:28.311 WARN org.apache.spark.scheduler.TaskSetManager: Stage 263 contains a task of very large size (260 KB). The maximum recommended task size is 100 KB.
02:45:28.358 WARN org.apache.spark.scheduler.TaskSetManager: Stage 265 contains a task of very large size (260 KB). The maximum recommended task size is 100 KB.
02:45:28.376 WARN org.apache.spark.scheduler.TaskSetManager: Stage 266 contains a task of very large size (260 KB). The maximum recommended task size is 100 KB.
[info] - sorting on BinaryType with nullable=false, sortOrder=List('a DESC) (161 milliseconds)
[info] - sorting on BooleanType with nullable=true, sortOrder=List('a ASC) (123 milliseconds)
[info] - sorting on BooleanType with nullable=true, sortOrder=List('a DESC) (102 milliseconds)
[info] - sorting on BooleanType with nullable=false, sortOrder=List('a ASC) (92 milliseconds)
[info] - sorting on BooleanType with nullable=false, sortOrder=List('a DESC) (82 milliseconds)
[info] - sorting on DecimalType(38,18) with nullable=true, sortOrder=List('a ASC) (127 milliseconds)
[info] - sorting on DecimalType(38,18) with nullable=true, sortOrder=List('a DESC) (121 milliseconds)
[info] - sorting on DecimalType(38,18) with nullable=false, sortOrder=List('a ASC) (121 milliseconds)
[info] - sorting on DecimalType(38,18) with nullable=false, sortOrder=List('a DESC) (125 milliseconds)
[info] - sorting on ByteType with nullable=true, sortOrder=List('a ASC) (105 milliseconds)
[info] - sorting on ByteType with nullable=true, sortOrder=List('a DESC) (95 milliseconds)
[info] - sorting on ByteType with nullable=false, sortOrder=List('a ASC) (84 milliseconds)
[info] - sorting on ByteType with nullable=false, sortOrder=List('a DESC) (90 milliseconds)
[info] - sorting on FloatType with nullable=true, sortOrder=List('a ASC) (128 milliseconds)
[info] - sorting on FloatType with nullable=true, sortOrder=List('a DESC) (116 milliseconds)
[info] - sorting on FloatType with nullable=false, sortOrder=List('a ASC) (90 milliseconds)
[info] - sorting on FloatType with nullable=false, sortOrder=List('a DESC) (102 milliseconds)
[info] - sorting on ShortType with nullable=true, sortOrder=List('a ASC) (112 milliseconds)
[info] - sorting on ShortType with nullable=true, sortOrder=List('a DESC) (94 milliseconds)
[info] - sorting on ShortType with nullable=false, sortOrder=List('a ASC) (86 milliseconds)
[info] - sorting on ShortType with nullable=false, sortOrder=List('a DESC) (85 milliseconds)
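
Annotation: the repeated TaskSetManager warnings through this suite come from test rows being generated on the driver and shipped inside task closures, so each serialized task far exceeds the ~100 KB guideline. In user code the standard remedy is to broadcast large read-only data once per executor instead of once per task; a minimal sketch (names are illustrative, sc assumed in scope):

// Without the broadcast, `lookup` would be serialized into every task
// closure, reproducing the "task of very large size" warning above.
val lookup: Map[Int, String] = (1 to 200000).map(i => i -> ("v" + i)).toMap
val bLookup = sc.broadcast(lookup)
val resolved = sc.parallelize(1 to 1000).map(i => bLookup.value.getOrElse(i, "?"))
resolved.count()
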
[info] SaveLoadSuite:
[info] - save with path and load (327 milliseconds)
[info] - save with string mode and path, and load (307 milliseconds)
[info] - save with path and datasource, and load (319 milliseconds)
[info] - save with data source and options, and load (272 milliseconds)
[info] - save and save again (887 milliseconds)
[info] ExchangeCoordinatorSuite:
[info] - test estimatePartitionStartIndices - 1 Exchange (6 milliseconds)
[info] - test estimatePartitionStartIndices - 2 Exchanges (1 millisecond)
[info] - test estimatePartitionStartIndices and enforce minimal number of reducers (1 millisecond)
[info] - determining the number of reducers: aggregate operator(minNumPostShufflePartitions: 3) (133 milliseconds)
[info] - determining the number of reducers: join operator(minNumPostShufflePartitions: 3) (159 milliseconds)
[info] - determining the number of reducers: complex query 1(minNumPostShufflePartitions: 3) (194 milliseconds)
[info] - determining the number of reducers: complex query 2(minNumPostShufflePartitions: 3) (376 milliseconds)
[info] - determining the number of reducers: aggregate operator (103 milliseconds)
[info] - determining the number of reducers: join operator (169 milliseconds)
[info] - determining the number of reducers: complex query 1 (164 milliseconds)
[info] - determining the number of reducers: complex query 2 (140 milliseconds)
[info] SQLConfEntrySuite:
[info] - intConf (1 millisecond)
[info] - longConf (1 millisecond)
[info] - booleanConf (1 millisecond)
[info] - doubleConf (2 milliseconds)
[info] - stringConf (0 milliseconds)
[info] - enumConf (1 millisecond)
[info] - stringSeqConf (2 milliseconds)
[info] DataFrameStatSuite:
[info] - sample with replacement (29 milliseconds)
[info] - sample without replacement (17 milliseconds)
[info] - randomSplit (699 milliseconds)
[info] - randomSplit on reordered partitions (171 milliseconds)
[info] - pearson correlation (36 milliseconds)
[info] - covariance (33 milliseconds)
[info] - crosstab (85 milliseconds)
[info] - special crosstab elements (., '', null, ``) (197 milliseconds)
[info] - Frequent Items (78 milliseconds)
[info] - Frequent Items 2 (19 milliseconds)
[info] - sampleBy (130 milliseconds)
[info] - countMinSketch (132 milliseconds)
[info] - Bloom filter (48 milliseconds)
[info] DataFramePivotSuite:
[info] - pivot courses with literals (95 milliseconds)
[info] - pivot year with literals (64 milliseconds)
[info] - pivot courses with literals and multiple aggregations (82 milliseconds)
[info] - pivot year with string values (cast) (65 milliseconds)
[info] - pivot year with int values (62 milliseconds)
[info] - pivot courses with no values (189 milliseconds)
[info] - pivot year with no values (167 milliseconds)
[info] - pivot max values enforced (51 milliseconds)
[info] - pivot with UnresolvedFunction (50 milliseconds)
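
Annotation: a minimal sketch of the DataFrame pivot API those tests exercise (the courseSales name, its columns, and the course values are assumed from the test names, not shown in this log):

import org.apache.spark.sql.functions.sum

// Pivoting turns distinct values of "course" into output columns; listing
// the values explicitly avoids an extra pass over the data and caps the
// column count, which the "pivot max values enforced" test above guards
// via the spark.sql.pivotMaxValues setting.
val pivoted = courseSales
  .groupBy("year")
  .pivot("course", Seq("dotNET", "Java"))
  .agg(sum("earnings"))
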
[info] BenchmarkWholeStageCodegen:
[info] - range/filter/sum !!! IGNORED !!!
[info] - stat functions !!! IGNORED !!!
[info] - aggregate with keys !!! IGNORED !!!
[info] - broadcast hash join !!! IGNORED !!!
[info] - hash and BytesToBytesMap !!! IGNORED !!!
[info] PartitionBatchPruningSuite:
[info] - SELECT key FROM pruningData WHERE key = 1 (38 milliseconds)
[info] - SELECT key FROM pruningData WHERE 1 = key (26 milliseconds)
[info] - SELECT key FROM pruningData WHERE key < 12 (22 milliseconds)
[info] - SELECT key FROM pruningData WHERE key <= 11 (23 milliseconds)
[info] - SELECT key FROM pruningData WHERE key > 88 (22 milliseconds)
[info] - SELECT key FROM pruningData WHERE key >= 89 (30 milliseconds)
[info] - SELECT key FROM pruningData WHERE 12 > key (22 milliseconds)
[info] - SELECT key FROM pruningData WHERE 11 >= key (31 milliseconds)
[info] - SELECT key FROM pruningData WHERE 88 < key (21 milliseconds)
[info] - SELECT key FROM pruningData WHERE 89 <= key (20 milliseconds)
[info] - SELECT key FROM pruningData WHERE value IS NULL (27 milliseconds)
[info] - SELECT key FROM pruningData WHERE value IS NOT NULL (26 milliseconds)
[info] - SELECT key FROM pruningData WHERE key > 8 AND key <= 21 (27 milliseconds)
[info] - SELECT key FROM pruningData WHERE key < 2 OR key > 99 (22 milliseconds)
[info] - SELECT key FROM pruningData WHERE key < 12 AND key IS NOT NULL (18 milliseconds)
[info] - SELECT key FROM pruningData WHERE key < 2 OR (key > 78 AND key < 92) (25 milliseconds)
[info] - SELECT key FROM pruningData WHERE NOT (key < 88) (39 milliseconds)
[info] - SELECT key FROM pruningData WHERE NOT (key IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30)) (36 milliseconds)
[info] - SELECT key FROM pruningData WHERE NOT (key IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30)) AND key > 88 (30 milliseconds)
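
Annotation: those queries run against Spark's in-memory columnar cache, where each batch of rows carries min/max statistics so predicates like key < 12 can skip whole batches rather than scan them. A sketch of the setup such a suite implies (the table name is taken from the queries above; the tiny batch size is illustrative, sqlContext assumed in scope):

// Small batches make pruning observable: with 10 rows per batch, a filter
// like "key = 1" should touch a single batch and skip the rest.
sqlContext.setConf("spark.sql.inMemoryColumnarStorage.batchSize", "10")
sqlContext.cacheTable("pruningData")
sqlContext.sql("SELECT key FROM pruningData WHERE key < 12").collect()
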
[info] StreamSuite:
[info] - map with recovery (65 milliseconds)
[info] - join (81 milliseconds)
[info] - union two streams (78 milliseconds)
[info] - sql queries (31 milliseconds)
[info] DataFrameAggregateSuite:
[info] - groupBy (447 milliseconds)
[info] - rollup (155 milliseconds)
[info] - cube (270 milliseconds)
[info] - rollup overlapping columns (203 milliseconds)
[info] - cube overlapping columns (179 milliseconds)
[info] - spark.sql.retainGroupColumns config (76 milliseconds)
[info] - agg without groups (40 milliseconds)
[info] - agg without groups and functions (25 milliseconds)
[info] - average (336 milliseconds)
[info] - null average (167 milliseconds)
[info] - zero average (87 milliseconds)
[info] - count (79 milliseconds)
[info] - null count (343 milliseconds)
[info] - multiple column distinct count (373 milliseconds)
[info] - zero count (58 milliseconds)
[info] - stddev (118 milliseconds)
[info] - zero stddev (85 milliseconds)
[info] - zero sum (35 milliseconds)
[info] - zero sum distinct (64 milliseconds)
[info] - moments (151 milliseconds)
[info] - zero moments (200 milliseconds)
[info] - null moments (155 milliseconds)
[info] MultiSQLContextsSuite:
02:45:41.469 WARN org.apache.spark.SparkContext: Multiple running SparkContexts detected in the same JVM!
org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
org.apache.spark.sql.MultiSQLContextsSuite$$anonfun$1.apply(MultiSQLContextsSuite.scala:82)
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
org.scalatest.Transformer.apply(Transformer.scala:22)
org.scalatest.Transformer.apply(Transformer.scala:20)
org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
scala.collection.immutable.List.foreach(List.scala:381)
org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:2141)
at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:2123)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:2123)
at org.apache.spark.SparkContext$.setActiveContext(SparkContext.scala:2209)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:2081)
at org.apache.spark.sql.MultiSQLContextsSuite.testCreatingNewSQLContext(MultiSQLContextsSuite.scala:63)
at org.apache.spark.sql.MultiSQLContextsSuite$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(MultiSQLContextsSuite.scala:92)
at org.apache.spark.sql.MultiSQLContextsSuite$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(MultiSQLContextsSuite.scala:82)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.sql.MultiSQLContextsSuite$$anonfun$1.apply$mcV$sp(MultiSQLContextsSuite.scala:82)
at org.apache.spark.sql.MultiSQLContextsSuite$$anonfun$1.apply(MultiSQLContextsSuite.scala:82)
at org.apache.spark.sql.MultiSQLContextsSuite$$anonfun$1.apply(MultiSQLContextsSuite.scala:82)
at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
at org.scalatest.Suite$class.run(Suite.scala:1424)
at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
at org.apache.spark.sql.MultiSQLContextsSuite.org$scalatest$BeforeAndAfterAll$$super$run(MultiSQLContextsSuite.scala:24)
at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
at org.apache.spark.sql.MultiSQLContextsSuite.run(MultiSQLContextsSuite.scala:24)
at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
at sbt.ForkMain$Run$2.call(ForkMain.java:296)
at sbt.ForkMain$Run$2.call(ForkMain.java:286)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
02:45:41.522 WARN org.apache.spark.SparkContext: Multiple running SparkContexts detected in the same JVM!
org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
org.apache.spark.sql.MultiSQLContextsSuite$$anonfun$1.apply(MultiSQLContextsSuite.scala:82)
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
org.scalatest.Transformer.apply(Transformer.scala:22)
org.scalatest.Transformer.apply(Transformer.scala:20)
org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
scala.collection.immutable.List.foreach(List.scala:381)
org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:2141)
at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:2123)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:2123)
at org.apache.spark.SparkContext$.setActiveContext(SparkContext.scala:2209)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:2081)
at org.apache.spark.sql.MultiSQLContextsSuite.testCreatingNewSQLContext(MultiSQLContextsSuite.scala:63)
at org.apache.spark.sql.MultiSQLContextsSuite$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(MultiSQLContextsSuite.scala:92)
at org.apache.spark.sql.MultiSQLContextsSuite$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(MultiSQLContextsSuite.scala:82)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.sql.MultiSQLContextsSuite$$anonfun$1.apply$mcV$sp(MultiSQLContextsSuite.scala:82)
at org.apache.spark.sql.MultiSQLContextsSuite$$anonfun$1.apply(MultiSQLContextsSuite.scala:82)
at org.apache.spark.sql.MultiSQLContextsSuite$$anonfun$1.apply(MultiSQLContextsSuite.scala:82)
at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
at org.scalatest.Suite$class.run(Suite.scala:1424)
at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
at org.apache.spark.sql.MultiSQLContextsSuite.org$scalatest$BeforeAndAfterAll$$super$run(MultiSQLContextsSuite.scala:24)
at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
at org.apache.spark.sql.MultiSQLContextsSuite.run(MultiSQLContextsSuite.scala:24)
at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
at sbt.ForkMain$Run$2.call(ForkMain.java:296)
at sbt.ForkMain$Run$2.call(ForkMain.java:286)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[info] - test the flag to disallow creating multiple root SQLContext (114 milliseconds)
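
Annotation: the two warnings above are the suite deliberately tripping the one-SparkContext-per-JVM guard, so the test still passes. The escape hatch is named in the exception message itself; a minimal sketch of using it (master and app name are illustrative):

import org.apache.spark.{SparkConf, SparkContext}

// Without this flag, constructing a second context throws the
// SparkException logged above (see SPARK-2243).
val conf = new SparkConf()
  .setMaster("local")
  .setAppName("second-context")
  .set("spark.driver.allowMultipleContexts", "true")
val sc2 = new SparkContext(conf)
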
[info] CSVTypeCastSuite:
[info] - Can parse decimal type values (1 millisecond)
[info] - Can parse escaped characters (0 milliseconds)
[info] - Does not accept delimiter larger than one character (1 millisecond)
[info] - Throws exception for unsupported escaped characters (1 millisecond)
[info] - Nullable types are handled (0 milliseconds)
[info] - String type should always return the same as the input (0 milliseconds)
[info] - Throws exception for empty string with non null type (1 millisecond)
[info] - Types are cast correctly (2 milliseconds)
[info] - Float and Double Types are cast correctly with Locale (5 milliseconds)
[info] UDFSuite:
[info] - built-in fixed arity expressions (2 milliseconds)
[info] - built-in vararg expressions (4 milliseconds)
[info] - built-in expressions with multiple constructors (3 milliseconds)
[info] - count (2 milliseconds)
[info] - count distinct (3 milliseconds)
[info] - SPARK-8003 spark_partition_id (15 milliseconds)
[info] - SPARK-8005 input_file_name (160 milliseconds)
[info] - error reporting for incorrect number of arguments (1 millisecond)
[info] - error reporting for undefined functions (2 milliseconds)
[info] - Simple UDF (16 milliseconds)
[info] - ZeroArgument UDF (16 milliseconds)
[info] - TwoArgument UDF (26 milliseconds)
[info] - UDF in a WHERE (32 milliseconds)
[info] - UDF in a HAVING (54 milliseconds)
[info] - UDF in a GROUP BY (54 milliseconds)
[info] - UDFs everywhere (65 milliseconds)
[info] - struct UDF (19 milliseconds)
[info] - udf that is transformed (17 milliseconds)
[info] - type coercion for udf inputs (17 milliseconds)
[info] - udf in different types (343 milliseconds)
[info] - SPARK-11716 UDFRegistration does not include the input data type in returned UDF (96 milliseconds)
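
Annotation: a minimal sketch of the registration path those UDF tests go through (the function body and table name are illustrative; sqlContext assumed in scope):

// Registering a Scala closure makes it callable from SQL text; the same
// name also resolves from the DataFrame API via callUDF.
sqlContext.udf.register("plusOne", (x: Int) => x + 1)
sqlContext.sql("SELECT plusOne(value) FROM testData WHERE plusOne(value) > 2")
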
[info] ParquetEncodingSuite:
[info] - All Types Dictionary (186 milliseconds)
[info] - All Types Null (165 milliseconds)
[info] HashedRelationSuite:
[info] - GeneralHashedRelation (2 milliseconds)
[info] - UniqueKeyHashedRelation (1 millisecond)
[info] - UnsafeHashedRelation (8 milliseconds)
[info] - test serialization empty hash map (0 milliseconds)
[info] UnsafeRowSuite:
[info] - UnsafeRow Java serialization (0 milliseconds)
[info] - UnsafeRow Kryo serialization (9 milliseconds)
[info] - bitset width calculation (0 milliseconds)
[info] - writeToStream (5 milliseconds)
[info] - calling getDouble() and getFloat() on null columns (4 milliseconds)
[info] - calling get(ordinal, datatype) on null columns (4 milliseconds)
[info] - createFromByteArray and copyFrom (2 milliseconds)
[info] - calling hashCode on unsafe array returned by getArray(ordinal) (4 milliseconds)
[info] DataFrameSuite:
[info] - analysis error should be eagerly reported (11 milliseconds)
[info] - dataframe toString (0 milliseconds)
[info] - rename nested groupby (67 milliseconds)
[info] - invalid plan toString, debug mode (4 milliseconds)
[info] - access complex data (79 milliseconds)
[info] - table scan (15 milliseconds)
[info] - union all (112 milliseconds)
[info] - empty data frame (15 milliseconds)
[info] - head and take (27 milliseconds)
[info] - dataframe alias (2 milliseconds)
[info] - simple explode (33 milliseconds)
[info] - explode (1 second, 373 milliseconds)
[info] - SPARK-8930: explode should fail with a meaningful message if it takes a star (16 milliseconds)
[info] - explode alias and star (49 milliseconds)
[info] - selectExpr (47 milliseconds)
[info] - selectExpr with alias (630 milliseconds)
[info] - selectExpr with udtf (39 milliseconds)
[info] - filterExpr (124 milliseconds)
[info] - filterExpr using where (26 milliseconds)
[info] - repartition (40 milliseconds)
[info] - coalesce (37 milliseconds)
[info] - convert $"attribute name" into unresolved attribute (19 milliseconds)
[info] - convert Scala Symbol 'attrname into unresolved attribute (15 milliseconds)
[info] - select * (19 milliseconds)
[info] - simple select (18 milliseconds)
[info] - select with functions (144 milliseconds)
[info] - global sorting (439 milliseconds)
[info] - limit (96 milliseconds)
[info] - except (97 milliseconds)
[info] - intersect (428 milliseconds)
[info] - intersect - nullability (265 milliseconds)
[info] - udf (54 milliseconds)
[info] - callUDF in SQLContext (29 milliseconds)
[info] - withColumn (34 milliseconds)
[info] - replace column using withColumn (25 milliseconds)
[info] - drop column using drop (27 milliseconds)
[info] - drop columns using drop (21 milliseconds)
[info] - drop unknown column (no-op) (16 milliseconds)
[info] - drop column using drop with column reference (24 milliseconds)
[info] - drop unknown column (no-op) with column reference (22 milliseconds)
[info] - drop unknown column with same name with column reference (20 milliseconds)
[info] - drop column after join with duplicate columns using column reference (180 milliseconds)
[info] - withColumnRenamed (31 milliseconds)
[info] - describe (218 milliseconds)
[info] - apply on query results (SPARK-5462) (24 milliseconds)
[info] - inputFiles (117 milliseconds)
[info] - show !!! IGNORED !!!
[info] - showString: truncate = [true, false] (25 milliseconds)
[info] - showString(negative) (5 milliseconds)
[info] - showString(0) (6 milliseconds)
[info] - showString: array (14 milliseconds)
[info] - showString: binary (5 milliseconds)
[info] - showString: minimum column width (2 milliseconds)
[info] - SPARK-7319 showString (6 milliseconds)
[info] - SPARK-7327 show with empty dataFrame (13 milliseconds)
[info] - createDataFrame(RDD[Row], StructType) should convert UDTs (SPARK-6672) (12 milliseconds)
[info] - SPARK-6899: type should match when using codegen (36 milliseconds)
[info] - SPARK-7133: Implement struct, array, and map field accessor (110 milliseconds)
[info] - SPARK-7551: support backticks for DataFrame attribute resolution (78 milliseconds)
[info] - SPARK-7324 dropDuplicates (389 milliseconds)
[info] - SPARK-7150 range api (892 milliseconds)
[info] - SPARK-8621: support empty string column name (2 milliseconds)
[info] - SPARK-8797: sort by float column containing NaN should not crash (93 milliseconds)
[info] - SPARK-8797: sort by double column containing NaN should not crash (83 milliseconds)
[info] - NaN is greater than all other non-NaN numeric values (49 milliseconds)
[info] - SPARK-8072: Better Exception for Duplicate Columns (12 milliseconds)
[info] - SPARK-6941: Better error message for inserting into RDD-based Table (230 milliseconds)
[info] - SPARK-8608: call `show` on local DataFrame with random columns should return same value (22 milliseconds)
[info] - SPARK-8609: local DataFrame with random columns should return same value after sort (216 milliseconds)
[info] - SPARK-9083: sort with non-deterministic expressions (85 milliseconds)
[info] - Sorting columns are not in Filter and Project (139 milliseconds)
[info] - SPARK-9323: DataFrame.orderBy should support nested column name (58 milliseconds)
[info] - SPARK-9950: correctly analyze grouping/aggregating on struct fields (90 milliseconds)
[info] - SPARK-10093: Avoid transformations on executors (40 milliseconds)
[info] - SPARK-10185: Read multiple Hadoop Filesystem paths and paths with a comma in it (213 milliseconds)
[info] - SPARK-10034: Sort on Aggregate with aggregation expression named 'aggOrdering' (110 milliseconds)
[info] - SPARK-10316: respect non-deterministic expressions in PhysicalOperation (63 milliseconds)
[info] - SPARK-10539: Project should not be pushed down through Intersect or Except (420 milliseconds)
[info] - SPARK-10740: handle nondeterministic expressions correctly for set operations (293 milliseconds)
[info] - SPARK-10743: keep the name of expression if possible when do cast (4 milliseconds)
[info] - SPARK-11301: fix case sensitivity for filter on partitioned columns (147 milliseconds)
[info] - distributeBy and localSort (220 milliseconds)
[info] - fix case sensitivity of partition by (158 milliseconds)
[info] - SPARK-11633: LogicalRDD throws TreeNode Exception: Failed to Copy Node (1 second, 6 milliseconds)
[info] - SPARK-10656: completely support special chars (38 milliseconds)
[info] - SPARK-11725: correctly handle null inputs for ScalaUDF (106 milliseconds)
[info] - SPARK-12398 truncated toString (23 milliseconds)
[info] - SPARK-12512: support `.` in column name for withColumn() (51 milliseconds)
[info] - SPARK-12841: cast in filter (225 milliseconds)
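
Note: several DataFrameSuite cases above (SPARK-7551, SPARK-12512) concern column names containing dots. A small sketch of the backtick quoting those tests verify, under the assumption that backticks resolve literal dotted names in this build as the JIRA titles describe.

    // Assumes sqlContext and implicits (spark-shell)
    import sqlContext.implicits._
    import org.apache.spark.sql.functions.col

    val df = Seq((1, 2)).toDF("a.b", "c")

    // Backticks keep "a.b" from being parsed as struct field access (SPARK-7551)
    df.select("`a.b`").show()

    // withColumn on a dotted name replaces the column instead of failing (SPARK-12512)
    df.withColumn("a.b", col("`a.b`") + 1).show()
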
[info] SerializationSuite:
[info] - [SPARK-5235] SQLContext should be serializable (5 milliseconds)
[info] ParquetFilterSuite:
[info] - filter pushdown - boolean (252 milliseconds)
[info] - filter pushdown - integer (455 milliseconds)
[info] - filter pushdown - long (394 milliseconds)
[info] - filter pushdown - float (407 milliseconds)
[info] - filter pushdown - double (395 milliseconds)
[info] - filter pushdown - string !!! IGNORED !!!
[info] - filter pushdown - binary !!! IGNORED !!!
[info] - SPARK-6554: don't push down predicates which reference partition columns (138 milliseconds)
[info] - SPARK-10829: Filter combine partition key and attribute doesn't work in DataSource scan (136 milliseconds)
[info] - SPARK-12231: test the filter and empty project in partitioned DataSource scan (141 milliseconds)
[info] - SPARK-12231: test the new projection in partitioned DataSource scan (162 milliseconds)
[info] - SPARK-11103: Filter applied on merged Parquet schema with new column fails (933 milliseconds)
[info] - SPARK-11661 Still pushdown filters returned by unhandledFilters (183 milliseconds)
[info] - SPARK-12218: 'Not' is included in Parquet filter pushdown (276 milliseconds)
[info] - SPARK-12218 Converting conjunctions into Parquet filter predicates (1 millisecond)
[info] - SPARK-11164: test the parquet filter in (445 milliseconds)
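
Note: ParquetFilterSuite checks that comparison predicates are pushed into the Parquet reader when spark.sql.parquet.filterPushdown is on (a conf key present in this era). A sketch of observing pushdown through explain(); the output path is illustrative.

    // Assumes sqlContext and implicits (spark-shell)
    import sqlContext.implicits._

    sqlContext.setConf("spark.sql.parquet.filterPushdown", "true")

    val path = "/tmp/pushdown-demo.parquet" // illustrative path
    (1 to 100).map(i => (i, s"row$i")).toDF("id", "s")
      .write.mode("overwrite").parquet(path)

    // "id > 90" is eligible for pushdown into the Parquet scan;
    // explain() should show the predicate attached to the scan itself.
    sqlContext.read.parquet(path).filter($"id" > 90).explain()
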
[info] SQLMetricsSuite:
[info] - LongSQLMetric should not box Long (6 milliseconds)
[info] - Normal accumulator should do boxing (4 milliseconds)
[info] - Project metrics (32 milliseconds)
[info] - Filter metrics (17 milliseconds)
[info] - WholeStageCodegen metrics (33 milliseconds)
[info] - TungstenAggregate metrics (117 milliseconds)
[info] - SortMergeJoin metrics (55 milliseconds)
[info] - SortMergeOuterJoin metrics (88 milliseconds)
[info] - BroadcastHashJoin metrics (49 milliseconds)
[info] - BroadcastHashOuterJoin metrics (87 milliseconds)
[info] - BroadcastNestedLoopJoin metrics (60 milliseconds)
[info] - BroadcastLeftSemiJoinHash metrics (42 milliseconds)
[info] - LeftSemiJoinHash metrics (43 milliseconds)
[info] - LeftSemiJoinBNL metrics (38 milliseconds)
[info] - CartesianProduct metrics (40 milliseconds)
[info] - save metrics (55 milliseconds)
[info] - metrics can be loaded by history server (66 milliseconds)
[info] DebuggingSuite:
[info] - DataFrame.debug() (39 milliseconds)
[info] BroadcastJoinSuite:
[info] - unsafe broadcast hash join updates peak execution memory (8 seconds, 97 milliseconds)
[info] - unsafe broadcast hash outer join updates peak execution memory (154 milliseconds)
[info] - unsafe broadcast left semi join updates peak execution memory (117 milliseconds)
02:46:08.401 ERROR org.apache.spark.deploy.worker.Worker: Connection to master failed! Waiting for master to reconnect...
02:46:08.402 ERROR org.apache.spark.deploy.worker.Worker: Connection to master failed! Waiting for master to reconnect...
02:46:08.405 ERROR org.apache.spark.deploy.worker.Worker: Connection to master failed! Waiting for master to reconnect...
02:46:08.405 ERROR org.apache.spark.deploy.worker.Worker: Connection to master failed! Waiting for master to reconnect...
02:46:08.407 WARN org.apache.spark.deploy.worker.Worker: Failed to connect to master localhost:57385
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@14d88818 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@f22ba5a[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:326)
at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533)
at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:237)
at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:508)
at org.apache.spark.rpc.RpcEndpointRef.ask(RpcEndpointRef.scala:62)
at org.apache.spark.rpc.netty.NettyRpcEnv.asyncSetupEndpointRefByURI(NettyRpcEnv.scala:137)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:88)
at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:96)
at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1$$anon$1.run(Worker.scala:215)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
02:46:08.407 WARN org.apache.spark.deploy.worker.Worker: Failed to connect to master localhost:57385
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@4c439837 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e083bd1[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
02:46:08.510 WARN org.apache.spark.rpc.netty.NettyRpcEnv: RpcEnv already stopped.
02:46:08.512 WARN org.apache.spark.rpc.netty.NettyRpcEnv: RpcEnv already stopped.
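
Note: BroadcastJoinSuite above verifies peak execution memory accounting for the broadcast join operators (the worker/master reconnect errors after it are just cluster teardown noise from the suite's local-cluster mode). A sketch of forcing a broadcast hash join with the broadcast() hint from org.apache.spark.sql.functions, which exists in this era.

    // Assumes sqlContext and implicits (spark-shell)
    import sqlContext.implicits._
    import org.apache.spark.sql.functions.broadcast

    val large = sqlContext.range(0, 100000).toDF("id")
    val small = Seq((0L, "zero"), (1L, "one")).toDF("id", "name")

    // broadcast() marks the small side so the planner picks a broadcast hash join
    val joined = large.join(broadcast(small), "id")
    joined.explain() // plan should contain a broadcast hash join operator
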
[info] NestedLoopJoinNodeSuite:
[info] - BuildLeft / LeftOuter: empty (31 milliseconds)
[info] - BuildLeft / LeftOuter: no matches (27 milliseconds)
[info] - BuildLeft / LeftOuter: partial matches (12 milliseconds)
[info] - BuildLeft / LeftOuter: full matches (8 milliseconds)
[info] - BuildLeft / RightOuter: empty (3 milliseconds)
[info] - BuildLeft / RightOuter: no matches (13 milliseconds)
[info] - BuildLeft / RightOuter: partial matches (8 milliseconds)
[info] - BuildLeft / RightOuter: full matches (6 milliseconds)
[info] - BuildLeft / FullOuter: empty (4 milliseconds)
[info] - BuildLeft / FullOuter: no matches (15 milliseconds)
[info] - BuildLeft / FullOuter: partial matches (8 milliseconds)
[info] - BuildLeft / FullOuter: full matches (7 milliseconds)
[info] - BuildRight / LeftOuter: empty (2 milliseconds)
[info] - BuildRight / LeftOuter: no matches (14 milliseconds)
[info] - BuildRight / LeftOuter: partial matches (6 milliseconds)
[info] - BuildRight / LeftOuter: full matches (6 milliseconds)
[info] - BuildRight / RightOuter: empty (4 milliseconds)
[info] - BuildRight / RightOuter: no matches (7 milliseconds)
[info] - BuildRight / RightOuter: partial matches (4 milliseconds)
[info] - BuildRight / RightOuter: full matches (5 milliseconds)
[info] - BuildRight / FullOuter: empty (2 milliseconds)
[info] - BuildRight / FullOuter: no matches (9 milliseconds)
[info] - BuildRight / FullOuter: partial matches (8 milliseconds)
[info] - BuildRight / FullOuter: full matches (8 milliseconds)
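
Note: NestedLoopJoinNodeSuite walks BuildLeft/BuildRight across the outer join types. At the query level the corresponding situation is a join with no equi-join keys, which falls back to a nested-loop style join. A small sketch of such a non-equi left outer join.

    // Assumes sqlContext and implicits (spark-shell)
    import sqlContext.implicits._

    val l = Seq((1, "a"), (5, "b")).toDF("lk", "lv")
    val r = Seq((2, "x"), (9, "y")).toDF("rk", "rv")

    // A range predicate has no equi-join keys, so execution uses a
    // nested-loop style join; unmatched left rows get nulls on the right.
    l.join(r, $"lk" < $"rk", "left_outer").show()
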
[info] JDBCWriteSuite:
[info] - Basic CREATE (163 milliseconds)
[info] - CREATE with overwrite (91 milliseconds)
[info] - CREATE then INSERT to append (48 milliseconds)
[info] - CREATE then INSERT to truncate (49 milliseconds)
02:46:09.203 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 23.0 (TID 31)
org.h2.jdbc.JdbcSQLException: Column "SEQ" not found; SQL statement:
INSERT INTO TEST.INCOMPATIBLETEST (name,id,seq) VALUES (?,?,?) [42122-183]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.table.Table.getColumn(Table.java:654)
at org.h2.command.Parser.parseColumn(Parser.java:856)
at org.h2.command.Parser.parseColumnList(Parser.java:840)
at org.h2.command.Parser.parseInsert(Parser.java:1040)
at org.h2.command.Parser.parsePrepared(Parser.java:401)
at org.h2.command.Parser.parse(Parser.java:305)
at org.h2.command.Parser.parse(Parser.java:277)
at org.h2.command.Parser.prepareCommand(Parser.java:242)
at org.h2.engine.Session.prepareLocal(Session.java:446)
at org.h2.engine.Session.prepareCommand(Session.java:388)
at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1189)
at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:72)
at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:277)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.insertStatement(JdbcUtils.scala:103)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:172)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:277)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:276)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$27.apply(RDD.scala:837)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$27.apply(RDD.scala:837)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1807)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1807)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[info] - Incompatible INSERT to append (31 milliseconds)
02:46:09.203 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 23.0 (TID 32)
org.h2.jdbc.JdbcSQLException: Column "SEQ" not found; SQL statement:
INSERT INTO TEST.INCOMPATIBLETEST (name,id,seq) VALUES (?,?,?) [42122-183]
02:46:09.206 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 23.0 (TID 32, localhost): org.h2.jdbc.JdbcSQLException: Column "SEQ" not found; SQL statement:
INSERT INTO TEST.INCOMPATIBLETEST (name,id,seq) VALUES (?,?,?) [42122-183]
02:46:09.206 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 23.0 failed 1 times; aborting job
[info] - INSERT to JDBC Datasource (59 milliseconds)
[info] - INSERT to JDBC Datasource with overwrite (92 milliseconds)
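
Note: JDBCWriteSuite above covers CREATE, append, and overwrite through DataFrameWriter.jdbc; the H2 traces before it show the expected failure when an appended frame names a column ("SEQ") that the existing table lacks. A sketch with save modes; the H2 URL and table names are illustrative.

    // Assumes sqlContext and implicits; H2 driver on the classpath (illustrative URL)
    import java.util.Properties
    import org.apache.spark.sql.SaveMode
    import sqlContext.implicits._

    val url = "jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1" // hypothetical target
    val props = new Properties()

    val df = Seq((1, "apple"), (2, "pear")).toDF("id", "name")
    df.write.mode(SaveMode.Overwrite).jdbc(url, "TEST.FRUIT", props) // CREATE with overwrite
    df.write.mode(SaveMode.Append).jdbc(url, "TEST.FRUIT", props)    // INSERT to append

    // Appending a frame whose columns don't match the existing table fails at
    // INSERT time, as in the "Incompatible INSERT to append" trace above.
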
[info] InsertSuite:
[info] - Simple INSERT OVERWRITE a JSONRelation (136 milliseconds)
[info] - PreInsert casting and renaming (280 milliseconds)
[info] - SELECT clause generating a different number of columns is not allowed. (2 milliseconds)
[info] - INSERT OVERWRITE a JSONRelation multiple times (4 seconds, 529 milliseconds)
[info] - INSERT INTO JSONRelation for now (191 milliseconds)
[info] - it is not allowed to write to a table while querying it. (2 milliseconds)
[info] - Caching (217 milliseconds)
[info] - it's not allowed to insert into a relation that is not an InsertableRelation (27 milliseconds)
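
Note: InsertSuite runs INSERT OVERWRITE/INTO against a JSON-backed relation, and its last case shows the error raised for sources that are not InsertableRelation. A hedged sketch of the era's temporary-table-over-a-path pattern that these tests use; the path and table names are illustrative.

    // Assumes sqlContext; path and table names are illustrative
    sqlContext.sql(
      """CREATE TEMPORARY TABLE jsonTable (a int, b string)
        |USING json
        |OPTIONS (path '/tmp/insert-demo.json')""".stripMargin)

    sqlContext.range(0, 10).registerTempTable("src")

    // Rewrites the relation's contents; INSERT INTO would append instead
    sqlContext.sql("INSERT OVERWRITE TABLE jsonTable SELECT id, CAST(id AS STRING) FROM src")
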
[info] UserDefinedTypeSuite:
[info] - register user type: MyDenseVector for MyLabeledPoint (31 milliseconds)
[info] - UDTs and UDFs (19 milliseconds)
[info] - Standard mode - UDTs with Parquet (141 milliseconds)
[info] - Legacy mode - UDTs with Parquet (141 milliseconds)
[info] - Standard mode - Repartition UDTs with Parquet (116 milliseconds)
[info] - Legacy mode - Repartition UDTs with Parquet (121 milliseconds)
[info] - Local UDTs (99 milliseconds)
[info] - UDTs with JSON (16 milliseconds)
[info] - SPARK-10472 UserDefinedType.typeName (0 milliseconds)
[info] - Catalyst type converter null handling for UDTs (0 milliseconds)
[info] RunLengthEncodingSuite:
[info] - RunLengthEncoding with BOOLEAN: empty column (6 milliseconds)
[info] - RunLengthEncoding with BOOLEAN: simple case (3 milliseconds)
[info] - RunLengthEncoding with BOOLEAN: run length == 1 (0 milliseconds)
[info] - RunLengthEncoding with BOOLEAN: single long run (4 milliseconds)
[info] - RunLengthEncoding with BYTE: empty column (0 milliseconds)
[info] - RunLengthEncoding with BYTE: simple case (1 millisecond)
[info] - RunLengthEncoding with BYTE: run length == 1 (0 milliseconds)
[info] - RunLengthEncoding with BYTE: single long run (5 milliseconds)
[info] - RunLengthEncoding with SHORT: empty column (0 milliseconds)
[info] - RunLengthEncoding with SHORT: simple case (0 milliseconds)
[info] - RunLengthEncoding with SHORT: run length == 1 (0 milliseconds)
[info] - RunLengthEncoding with SHORT: single long run (5 milliseconds)
[info] - RunLengthEncoding with INT: empty column (0 milliseconds)
[info] - RunLengthEncoding with INT: simple case (0 milliseconds)
[info] - RunLengthEncoding with INT: run length == 1 (0 milliseconds)
[info] - RunLengthEncoding with INT: single long run (2 milliseconds)
[info] - RunLengthEncoding with LONG: empty column (1 millisecond)
[info] - RunLengthEncoding with LONG: simple case (0 milliseconds)
[info] - RunLengthEncoding with LONG: run length == 1 (1 millisecond)
[info] - RunLengthEncoding with LONG: single long run (2 milliseconds)
[info] - RunLengthEncoding with STRING: empty column (0 milliseconds)
[info] - RunLengthEncoding with STRING: simple case (1 millisecond)
[info] - RunLengthEncoding with STRING: run length == 1 (1 millisecond)
[info] - RunLengthEncoding with STRING: single long run (1 millisecond)
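
Note: the RunLengthEncoding cases above cover each native column type plus the run-length == 1 and single-long-run edge cases. Independent of the columnar internals, the underlying idea is just (value, count) pairs; a tiny generic illustration, not Spark's actual encoder.

    // Generic run-length encoding sketch, not the in-memory columnar encoder
    def rle[T](xs: Seq[T]): Seq[(T, Int)] =
      if (xs.isEmpty) Seq.empty
      else {
        val (run, rest) = xs.span(_ == xs.head) // longest prefix equal to the head
        (xs.head, run.length) +: rle(rest)
      }

    rle(Seq(1, 1, 1, 2, 3, 3)) // Seq((1,3), (2,1), (3,2))
    rle(Seq.empty[Int])        // empty column -> no runs
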
[info] ParquetThriftCompatibilitySuite:
[info] - Read Parquet file generated by parquet-thrift (120 milliseconds)
[info] - SPARK-10136 list of primitive list (104 milliseconds)
[info] WholeStageCodegenSuite:
[info] - range/filter should be combined (30 milliseconds)
[info] - Aggregate should be included in WholeStageCodegen (26 milliseconds)
[info] - Aggregate with grouping keys should be included in WholeStageCodegen (71 milliseconds)
[info] - BroadcastHashJoin should be included in WholeStageCodegen (26 milliseconds)
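
Note: the WholeStageCodegenSuite entries check which operators collapse into a single generated function. In this build the fused stage should be visible in explain() output; a quick sketch.

    // Assumes sqlContext (spark-shell); range/filter should fuse into one stage
    val df = sqlContext.range(0, 1000).filter("id > 500")
    df.explain() // look for a WholeStageCodegen node wrapping Range and Filter

    df.groupBy().count().explain() // aggregation also participates, per the suite
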
[info] ParquetProtobufCompatibilitySuite:
[info] - unannotated array of primitive type (78 milliseconds)
[info] - unannotated array of struct (267 milliseconds)
[info] - struct with unannotated array (77 milliseconds)
[info] - unannotated array of struct with unannotated array (103 milliseconds)
[info] - unannotated array of string (77 milliseconds)
[info] ParquetQuerySuite:
[info] - simple select queries (132 milliseconds)
[info] - appending (194 milliseconds)
[info] - overwriting (154 milliseconds)
[info] - self-join (132 milliseconds)
[info] - nested data - struct with array field (113 milliseconds)
[info] - nested data - array of struct (110 milliseconds)
[info] - SPARK-1913 regression: columns only referenced by pushed down filters should remain (95 milliseconds)
[info] - SPARK-5309 strings stored using dictionary compression in parquet (233 milliseconds)
[info] - SPARK-6917 DecimalType should work with non-native types (112 milliseconds)
[info] - Enabling/disabling merging partfiles when merging parquet schema (210 milliseconds)
[info] - Enabling/disabling schema merging (218 milliseconds)
[info] - SPARK-8990 DataFrameReader.parquet() should respect user specified options (123 milliseconds)
[info] - SPARK-9119 Decimal should be correctly written into parquet (89 milliseconds)
[info] - SPARK-10005 Schema merging for nested struct (199 milliseconds)
[info] - SPARK-10301 requested schema clipping - same schema (82 milliseconds)
[info] - SPARK-11997 parquet with null partition values (138 milliseconds)
[info] - SPARK-10301 requested schema clipping - schemas with disjoint sets of fields !!! IGNORED !!!
[info] - SPARK-10301 requested schema clipping - requested schema contains physical schema (166 milliseconds)
[info] - SPARK-10301 requested schema clipping - physical schema contains requested schema (149 milliseconds)
[info] - SPARK-10301 requested schema clipping - schemas overlap but don't contain each other (82 milliseconds)
[info] - SPARK-10301 requested schema clipping - deeply nested struct (100 milliseconds)
[info] - SPARK-10301 requested schema clipping - out of order (123 milliseconds)
[info] - SPARK-10301 requested schema clipping - schema merging (171 milliseconds)
[info] - Standard mode - SPARK-10301 requested schema clipping - UDT (113 milliseconds)
[info] - Legacy mode - SPARK-10301 requested schema clipping - UDT (74 milliseconds)
[info] - expand UDT in StructType (1 millisecond)
[info] - expand UDT in ArrayType (0 milliseconds)
[info] - expand UDT in MapType (0 milliseconds)
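
Note: ParquetQuerySuite above covers schema merging across part-files (SPARK-10005 and the enable/disable cases). A sketch that writes two compatible schemas under one directory and reads them back merged; the path is illustrative and mergeSchema is the reader option this era accepts.

    // Assumes sqlContext and implicits (spark-shell); path is illustrative
    import sqlContext.implicits._

    val base = "/tmp/merge-demo"
    Seq((1, "a")).toDF("id", "s").write.parquet(base + "/part=1")
    Seq((2, 3.0)).toDF("id", "d").write.parquet(base + "/part=2")

    // With mergeSchema the reader unions the file schemas: id, s, d, part
    val merged = sqlContext.read.option("mergeSchema", "true").parquet(base)
    merged.printSchema()
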
[info] ResolvedDataSourceSuite:
[info] - jdbc (3 milliseconds)
[info] - json (1 millisecond)
[info] - parquet (1 millisecond)
[info] - error message for unknown data sources (3 milliseconds)
[info] JsonFunctionsSuite:
[info] - function get_json_object (52 milliseconds)
[info] - function get_json_object - null (38 milliseconds)
[info] - json_tuple select (90 milliseconds)
[info] - json_tuple filter and group (68 milliseconds)
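
Note: JsonFunctionsSuite exercises get_json_object and json_tuple, both available as native SQL functions in this build. A minimal sketch; the column defaults (c0, c1) emitted by json_tuple are this era's generator naming.

    // Assumes sqlContext and implicits (spark-shell)
    import sqlContext.implicits._

    val df = Seq("""{"a": 1, "b": "two"}""").toDF("js")
    df.registerTempTable("j")

    sqlContext.sql("SELECT get_json_object(js, '$.a') AS a FROM j").show() // path extract
    sqlContext.sql("SELECT json_tuple(js, 'a', 'b') FROM j").show()        // multi-field extract
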
[info] ParquetIOSuite:
[info] - basic data types (without binary) (124 milliseconds)
[info] - raw binary (90 milliseconds)
[info] - SPARK-11694 Parquet logical types are not being tested properly (34 milliseconds)
[info] - string (166 milliseconds)
[info] - Standard mode - fixed-length decimals (1 second, 18 milliseconds)
[info] - Legacy mode - fixed-length decimals (933 milliseconds)
[info] - date type (135 milliseconds)
[info] - Standard mode - map (104 milliseconds)
[info] - Legacy mode - map (82 milliseconds)
[info] - Standard mode - array (91 milliseconds)
[info] - Legacy mode - array (79 milliseconds)
[info] - Standard mode - array and double (114 milliseconds)
[info] - Legacy mode - array and double (83 milliseconds)
[info] - Standard mode - struct (89 milliseconds)
[info] - Legacy mode - struct (83 milliseconds)
[info] - Standard mode - nested struct with array of array as field (121 milliseconds)
[info] - Legacy mode - nested struct with array of array as field (84 milliseconds)
[info] - Standard mode - nested map with struct as value type (95 milliseconds)
[info] - Legacy mode - nested map with struct as value type (83 milliseconds)
[info] - nulls (144 milliseconds)
[info] - nones (113 milliseconds)
02:46:24.291 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 130.0 (TID 219)
org.apache.spark.sql.AnalysisException: Parquet type not supported: INT32 (UINT_32);
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.typeNotSupported$1(CatalystSchemaConverter.scala:112)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convertPrimitiveField(CatalystSchemaConverter.scala:150)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convertField(CatalystSchemaConverter.scala:100)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter$$anonfun$2.apply(CatalystSchemaConverter.scala:82)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter$$anonfun$2.apply(CatalystSchemaConverter.scala:76)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.org$apache$spark$sql$execution$datasources$parquet$CatalystSchemaConverter$$convert(CatalystSchemaConverter.scala:76)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convert(CatalystSchemaConverter.scala:73)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$readSchemaFromFooter$2.apply(ParquetRelation.scala:849)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$readSchemaFromFooter$2.apply(ParquetRelation.scala:849)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$.readSchemaFromFooter(ParquetRelation.scala:849)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29.apply(ParquetRelation.scala:806)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$29.apply(ParquetRelation.scala:782)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:720)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:720)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
02:46:24.292 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 130.0 (TID 219, localhost): org.apache.spark.sql.AnalysisException: Parquet type not supported: INT32 (UINT_32);
02:46:24.292 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 130.0 failed 1 times; aborting job
[info] - SPARK-10113 Support for unsigned Parquet logical types (32 milliseconds)
[info] - SPARK-11692 Support for Parquet logical types, JSON and BSON (embedded types) (33 milliseconds)
[info] - compression codec (201 milliseconds)
[info] - read raw Parquet file (157 milliseconds)
[info] - write metadata (13 milliseconds)
[info] - save - overwrite (164 milliseconds)
[info] - save - ignore (141 milliseconds)
[info] - save - throw (45 milliseconds)
[info] - save - append (193 milliseconds)
[info] - SPARK-6315 regression test (38 milliseconds)
02:46:25.321 ERROR org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Aborting task.
java.lang.ArithmeticException: / by zero
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$3.apply$mcII$sp(ParquetIOSuite.scala:440)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$3.apply(ParquetIOSuite.scala:440)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$3.apply(ParquetIOSuite.scala:440)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:34)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegen$$anonfun$5$$anon$1.hasNext(WholeStageCodegen.scala:267)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:256)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
02:46:25.322 ERROR org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Task attempt attempt_201602072346_0155_m_000000_0 aborted.
02:46:25.322 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 155.0 (TID 269)
org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:266)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArithmeticException: / by zero
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$3.apply$mcII$sp(ParquetIOSuite.scala:440)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$3.apply(ParquetIOSuite.scala:440)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$3.apply(ParquetIOSuite.scala:440)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:34)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegen$$anonfun$5$$anon$1.hasNext(WholeStageCodegen.scala:267)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:256)
... 8 more
02:46:25.324 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 155.0 (TID 269, localhost): org.apache.spark.SparkException: Task failed while writing rows.
02:46:25.324 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 155.0 failed 1 times; aborting job
02:46:25.325 ERROR org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation: Aborting job.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 155.0 failed 1 times, most recent failure: Lost task 0.0 in stage 155.0 (TID 269, localhost): org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:266)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArithmeticException: / by zero
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$3.apply$mcII$sp(ParquetIOSuite.scala:440)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$3.apply(ParquetIOSuite.scala:440)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$3.apply(ParquetIOSuite.scala:440)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:34)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegen$$anonfun$5$$anon$1.hasNext(WholeStageCodegen.scala:267)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:256)
... 8 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1452)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1440)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1439)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1439)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1661)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1620)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1609)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:623)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1781)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1794)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1814)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:53)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:108)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:106)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:106)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:290)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:188)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:457)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$26$$anonfun$apply$2.apply$mcV$sp(ParquetIOSuite.scala:443)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$26$$anonfun$apply$2.apply(ParquetIOSuite.scala:443)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$26$$anonfun$apply$2.apply(ParquetIOSuite.scala:443)
at org.scalatest.Assertions$class.intercept(Assertions.scala:997)
at org.scalatest.FunSuite.intercept(FunSuite.scala:1555)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$26.apply(ParquetIOSuite.scala:442)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$26.apply(ParquetIOSuite.scala:441)
at org.apache.spark.sql.test.SQLTestUtils$class.withTempPath(SQLTestUtils.scala:125)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.withTempPath(ParquetIOSuite.scala:69)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27.apply$mcV$sp(ParquetIOSuite.scala:441)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27.apply(ParquetIOSuite.scala:432)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27.apply(ParquetIOSuite.scala:432)
at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
at org.scalatest.Suite$class.run(Suite.scala:1424)
at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.org$scalatest$BeforeAndAfterAll$$super$run(ParquetIOSuite.scala:69)
at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.run(ParquetIOSuite.scala:69)
at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
at sbt.ForkMain$Run$2.call(ForkMain.java:296)
at sbt.ForkMain$Run$2.call(ForkMain.java:286)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:266)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
... 3 more
Caused by: java.lang.ArithmeticException: / by zero
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$3.apply$mcII$sp(ParquetIOSuite.scala:440)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$3.apply(ParquetIOSuite.scala:440)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$27$$anonfun$apply$mcV$sp$3.apply(ParquetIOSuite.scala:440)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:34)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegen$$anonfun$5$$anon$1.hasNext(WholeStageCodegen.scala:267)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:256)
... 8 more
02:46:25.328 ERROR org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Job job_201602072346_0000 aborted.
[info] - SPARK-6352 DirectParquetOutputCommitter (50 milliseconds)
02:46:25.356 ERROR org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Aborting task.
java.lang.ArithmeticException: / by zero
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply$mcII$sp(ParquetIOSuite.scala:464)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply(ParquetIOSuite.scala:464)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply(ParquetIOSuite.scala:464)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:34)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegen$$anonfun$5$$anon$1.hasNext(WholeStageCodegen.scala:267)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:256)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
02:46:25.357 ERROR org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Task attempt attempt_201602072346_0156_m_000000_0 aborted.
02:46:25.357 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 156.0 (TID 270)
org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:266)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArithmeticException: / by zero
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply$mcII$sp(ParquetIOSuite.scala:464)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply(ParquetIOSuite.scala:464)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply(ParquetIOSuite.scala:464)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:34)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegen$$anonfun$5$$anon$1.hasNext(WholeStageCodegen.scala:267)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:256)
... 8 more
02:46:25.358 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 156.0 (TID 270, localhost): org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:266)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArithmeticException: / by zero
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply$mcII$sp(ParquetIOSuite.scala:464)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply(ParquetIOSuite.scala:464)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply(ParquetIOSuite.scala:464)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:34)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegen$$anonfun$5$$anon$1.hasNext(WholeStageCodegen.scala:267)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:256)
... 8 more
02:46:25.358 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 156.0 failed 1 times; aborting job
02:46:25.358 ERROR org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation: Aborting job.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 156.0 failed 1 times, most recent failure: Lost task 0.0 in stage 156.0 (TID 270, localhost): org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:266)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArithmeticException: / by zero
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply$mcII$sp(ParquetIOSuite.scala:464)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply(ParquetIOSuite.scala:464)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply(ParquetIOSuite.scala:464)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:34)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegen$$anonfun$5$$anon$1.hasNext(WholeStageCodegen.scala:267)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:256)
... 8 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1452)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1440)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1439)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1439)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1661)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1620)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1609)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:623)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1781)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1794)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1814)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:53)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:108)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:106)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:106)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:290)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:188)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:457)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$28$$anonfun$apply$3.apply$mcV$sp(ParquetIOSuite.scala:467)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$28$$anonfun$apply$3.apply(ParquetIOSuite.scala:467)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$28$$anonfun$apply$3.apply(ParquetIOSuite.scala:467)
at org.scalatest.Assertions$class.intercept(Assertions.scala:997)
at org.scalatest.FunSuite.intercept(FunSuite.scala:1555)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$28.apply(ParquetIOSuite.scala:466)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$28.apply(ParquetIOSuite.scala:465)
at org.apache.spark.sql.test.SQLTestUtils$class.withTempPath(SQLTestUtils.scala:125)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.withTempPath(ParquetIOSuite.scala:69)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28.apply$mcV$sp(ParquetIOSuite.scala:465)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28.apply(ParquetIOSuite.scala:456)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28.apply(ParquetIOSuite.scala:456)
at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
at org.scalatest.Suite$class.run(Suite.scala:1424)
at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.org$scalatest$BeforeAndAfterAll$$super$run(ParquetIOSuite.scala:69)
at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.run(ParquetIOSuite.scala:69)
at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
at sbt.ForkMain$Run$2.call(ForkMain.java:296)
at sbt.ForkMain$Run$2.call(ForkMain.java:286)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:266)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
... 3 more
Caused by: java.lang.ArithmeticException: / by zero
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply$mcII$sp(ParquetIOSuite.scala:464)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply(ParquetIOSuite.scala:464)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$28$$anonfun$apply$mcV$sp$4.apply(ParquetIOSuite.scala:464)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:34)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegen$$anonfun$5$$anon$1.hasNext(WholeStageCodegen.scala:267)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:256)
... 8 more
02:46:25.358 ERROR org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Job job_201602072346_0000 aborted.
[info] - SPARK-9849 DirectParquetOutputCommitter qualified name should be backward compatible (30 milliseconds)
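The repeated "/ by zero" traces above come from tests that deliberately fail every task mid-write so the job-abort path is exercised. A minimal sketch of that pattern, assuming a ScalaTest suite with a SQLContext in scope (the path and column name are illustrative):

  import org.apache.spark.SparkException
  import org.scalatest.Assertions._
  import sqlContext.implicits._

  val e = intercept[SparkException] {
    // Every element divides by zero, so each task throws ArithmeticException
    // and the write aborts with "Task failed while writing rows."
    val df = sqlContext.sparkContext.parallelize(1 to 10).map(i => i / 0).toDF("x")
    df.write.parquet("/tmp/parquet-abort-demo")
  }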
02:46:25.391 ERROR org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation: Aborting job.
java.lang.RuntimeException: Intentional exception for testing purposes
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.parquet.JobCommitFailureParquetOutputCommitter.commitJob(ParquetIOSuite.scala:722)
at org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitJob(WriterContainer.scala:224)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:152)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:53)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:108)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:106)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:106)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:290)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:188)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:457)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$29$$anonfun$apply$mcV$sp$30$$anonfun$30.apply$mcV$sp(ParquetIOSuite.scala:494)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$29$$anonfun$apply$mcV$sp$30$$anonfun$30.apply(ParquetIOSuite.scala:494)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$29$$anonfun$apply$mcV$sp$30$$anonfun$30.apply(ParquetIOSuite.scala:494)
at org.scalatest.Assertions$class.intercept(Assertions.scala:997)
at org.scalatest.FunSuite.intercept(FunSuite.scala:1555)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$29$$anonfun$apply$mcV$sp$30.apply(ParquetIOSuite.scala:493)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$29$$anonfun$apply$mcV$sp$30.apply(ParquetIOSuite.scala:482)
at org.apache.spark.sql.test.SQLTestUtils$class.withTempPath(SQLTestUtils.scala:125)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.withTempPath(ParquetIOSuite.scala:69)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$29.apply$mcV$sp(ParquetIOSuite.scala:482)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$29.apply(ParquetIOSuite.scala:482)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$29.apply(ParquetIOSuite.scala:482)
at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
at org.scalatest.Suite$class.run(Suite.scala:1424)
at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.org$scalatest$BeforeAndAfterAll$$super$run(ParquetIOSuite.scala:69)
at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.run(ParquetIOSuite.scala:69)
at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
at sbt.ForkMain$Run$2.call(ForkMain.java:296)
at sbt.ForkMain$Run$2.call(ForkMain.java:286)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
02:46:25.392 ERROR org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Job job_201602072346_0000 aborted.
[info] - SPARK-8121: spark.sql.parquet.output.committer.class shouldn't be overridden (34 milliseconds)
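The "Intentional exception for testing purposes" frames above are thrown from a custom committer's commitJob. A hedged sketch of such a committer, wired in through the spark.sql.parquet.output.committer.class setting that the SPARK-8121 test guards (the class name here is illustrative):

  import org.apache.hadoop.fs.Path
  import org.apache.hadoop.mapreduce.{JobContext, TaskAttemptContext}
  import org.apache.parquet.hadoop.ParquetOutputCommitter

  class FailingJobCommitter(outputPath: Path, context: TaskAttemptContext)
    extends ParquetOutputCommitter(outputPath, context) {
    // Failing job commit surfaces as "Aborting job." in InsertIntoHadoopFsRelation.
    override def commitJob(jobContext: JobContext): Unit =
      sys.error("Intentional exception for testing purposes")
  }

  sqlContext.setConf(
    "spark.sql.parquet.output.committer.class",
    classOf[FailingJobCommitter].getName)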
[info] - SPARK-6330 regression test (97 milliseconds)
02:46:25.523 ERROR org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Aborting task.
java.lang.RuntimeException: Failed to commit task
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.commitTask$1(WriterContainer.scala:281)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Intentional exception for testing purposes
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.parquet.TaskCommitFailureParquetOutputCommitter.commitTask(ParquetIOSuite.scala:730)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:52)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:90)
at org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitTask(WriterContainer.scala:213)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.commitTask$1(WriterContainer.scala:276)
... 9 more
02:46:25.523 ERROR org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Task attempt attempt_201602072346_0158_m_000000_0 aborted.
02:46:25.524 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 158.0 (TID 273)
org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:266)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Failed to commit task
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.commitTask$1(WriterContainer.scala:281)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
... 8 more
Caused by: java.lang.RuntimeException: Intentional exception for testing purposes
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.parquet.TaskCommitFailureParquetOutputCommitter.commitTask(ParquetIOSuite.scala:730)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:52)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:90)
at org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitTask(WriterContainer.scala:213)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.commitTask$1(WriterContainer.scala:276)
... 9 more
02:46:25.525 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 158.0 (TID 273, localhost): org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:266)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Failed to commit task
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.commitTask$1(WriterContainer.scala:281)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
... 8 more
Caused by: java.lang.RuntimeException: Intentional exception for testing purposes
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.parquet.TaskCommitFailureParquetOutputCommitter.commitTask(ParquetIOSuite.scala:730)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:52)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:90)
at org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitTask(WriterContainer.scala:213)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.commitTask$1(WriterContainer.scala:276)
... 9 more
02:46:25.525 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 158.0 failed 1 times; aborting job
02:46:25.525 ERROR org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation: Aborting job.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 158.0 failed 1 times, most recent failure: Lost task 0.0 in stage 158.0 (TID 273, localhost): org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:266)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Failed to commit task
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.commitTask$1(WriterContainer.scala:281)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
... 8 more
Caused by: java.lang.RuntimeException: Intentional exception for testing purposes
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.parquet.TaskCommitFailureParquetOutputCommitter.commitTask(ParquetIOSuite.scala:730)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:52)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:90)
at org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitTask(WriterContainer.scala:213)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.commitTask$1(WriterContainer.scala:276)
... 9 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1452)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1440)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1439)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1439)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1661)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1620)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1609)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:623)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1781)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1794)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1814)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:53)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:108)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:106)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:106)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:290)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:188)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:457)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32$$anonfun$apply$mcV$sp$32$$anonfun$33.apply$mcV$sp(ParquetIOSuite.scala:532)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32$$anonfun$apply$mcV$sp$32$$anonfun$33.apply(ParquetIOSuite.scala:532)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32$$anonfun$apply$mcV$sp$32$$anonfun$33.apply(ParquetIOSuite.scala:532)
at org.scalatest.Assertions$class.intercept(Assertions.scala:997)
at org.scalatest.FunSuite.intercept(FunSuite.scala:1555)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32$$anonfun$apply$mcV$sp$32.apply(ParquetIOSuite.scala:531)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32$$anonfun$apply$mcV$sp$32.apply(ParquetIOSuite.scala:530)
at org.apache.spark.sql.test.SQLTestUtils$class.withTempPath(SQLTestUtils.scala:125)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.withTempPath(ParquetIOSuite.scala:69)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32.apply$mcV$sp(ParquetIOSuite.scala:530)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32.apply(ParquetIOSuite.scala:517)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32.apply(ParquetIOSuite.scala:517)
at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
at org.scalatest.Suite$class.run(Suite.scala:1424)
at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.org$scalatest$BeforeAndAfterAll$$super$run(ParquetIOSuite.scala:69)
at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.run(ParquetIOSuite.scala:69)
at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
at sbt.ForkMain$Run$2.call(ForkMain.java:296)
at sbt.ForkMain$Run$2.call(ForkMain.java:286)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:266)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
... 3 more
Caused by: java.lang.RuntimeException: Failed to commit task
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.commitTask$1(WriterContainer.scala:281)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
... 8 more
Caused by: java.lang.RuntimeException: Intentional exception for testing purposes
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.parquet.TaskCommitFailureParquetOutputCommitter.commitTask(ParquetIOSuite.scala:730)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:52)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:90)
at org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitTask(WriterContainer.scala:213)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.commitTask$1(WriterContainer.scala:276)
... 9 more
02:46:25.526 ERROR org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Job job_201602072346_0000 aborted.
02:46:25.621 ERROR org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer: Aborting task.
java.lang.RuntimeException: Intentional exception for testing purposes
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.parquet.TaskCommitFailureParquetOutputCommitter.commitTask(ParquetIOSuite.scala:730)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:52)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:90)
at org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitTask(WriterContainer.scala:213)
at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:442)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
02:46:25.622 ERROR org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer: Task attempt attempt_201602072346_0159_m_000000_0 aborted.
02:46:25.622 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 159.0 (TID 274)
org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:447)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Intentional exception for testing purposes
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.parquet.TaskCommitFailureParquetOutputCommitter.commitTask(ParquetIOSuite.scala:730)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:52)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:90)
at org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitTask(WriterContainer.scala:213)
at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:442)
... 8 more
02:46:25.623 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 159.0 (TID 274, localhost): org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:447)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Intentional exception for testing purposes
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.parquet.TaskCommitFailureParquetOutputCommitter.commitTask(ParquetIOSuite.scala:730)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:52)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:90)
at org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitTask(WriterContainer.scala:213)
at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:442)
... 8 more
02:46:25.624 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 159.0 failed 1 times; aborting job
02:46:25.624 ERROR org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation: Aborting job.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 159.0 failed 1 times, most recent failure: Lost task 0.0 in stage 159.0 (TID 274, localhost): org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:447)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Intentional exception for testing purposes
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.parquet.TaskCommitFailureParquetOutputCommitter.commitTask(ParquetIOSuite.scala:730)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:52)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:90)
at org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitTask(WriterContainer.scala:213)
at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:442)
... 8 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1452)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1440)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1439)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1439)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1661)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1620)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1609)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:623)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1781)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1794)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1814)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:53)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:109)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:108)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:106)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:106)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:290)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:188)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:457)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32$$anonfun$apply$mcV$sp$33$$anonfun$34.apply$mcV$sp(ParquetIOSuite.scala:540)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32$$anonfun$apply$mcV$sp$33$$anonfun$34.apply(ParquetIOSuite.scala:538)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32$$anonfun$apply$mcV$sp$33$$anonfun$34.apply(ParquetIOSuite.scala:538)
at org.scalatest.Assertions$class.intercept(Assertions.scala:997)
at org.scalatest.FunSuite.intercept(FunSuite.scala:1555)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32$$anonfun$apply$mcV$sp$33.apply(ParquetIOSuite.scala:538)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32$$anonfun$apply$mcV$sp$33.apply(ParquetIOSuite.scala:537)
at org.apache.spark.sql.test.SQLTestUtils$class.withTempPath(SQLTestUtils.scala:125)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.withTempPath(ParquetIOSuite.scala:69)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32.apply$mcV$sp(ParquetIOSuite.scala:537)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32.apply(ParquetIOSuite.scala:517)
[info] - SPARK-7837 Do not close output writer twice when commitTask() fails (136 milliseconds)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite$$anonfun$32.apply(ParquetIOSuite.scala:517)
at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
at org.scalatest.Suite$class.run(Suite.scala:1424)
at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.org$scalatest$BeforeAndAfterAll$$super$run(ParquetIOSuite.scala:69)
at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
at org.apache.spark.sql.execution.datasources.parquet.ParquetIOSuite.run(ParquetIOSuite.scala:69)
at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
at sbt.ForkMain$Run$2.call(ForkMain.java:296)
at sbt.ForkMain$Run$2.call(ForkMain.java:286)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:447)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:151)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
at org.apache.spark.scheduler.Task.run(Task.scala:81)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
... 3 more
Caused by: java.lang.RuntimeException: Intentional exception for testing purposes
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.parquet.TaskCommitFailureParquetOutputCommitter.commitTask(ParquetIOSuite.scala:730)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:52)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:90)
at org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitTask(WriterContainer.scala:213)
at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:442)
... 8 more
02:46:25.625 ERROR org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer: Job job_201602072346_0000 aborted.
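The TaskCommitFailureParquetOutputCommitter frames above are the task-side variant (SPARK-7837): the committer fails in commitTask instead, which drives both DefaultWriterContainer and DynamicPartitionWriterContainer through their abort paths. A sketch along the same lines as the commitJob example earlier (class name illustrative):

  import org.apache.hadoop.fs.Path
  import org.apache.hadoop.mapreduce.TaskAttemptContext
  import org.apache.parquet.hadoop.ParquetOutputCommitter

  class FailingTaskCommitter(outputPath: Path, context: TaskAttemptContext)
    extends ParquetOutputCommitter(outputPath, context) {
    // Each task's commit fails, so the task aborts and the stage is failed.
    override def commitTask(context: TaskAttemptContext): Unit =
      sys.error("Intentional exception for testing purposes")
  }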
[info] - SPARK-11044 Parquet writer version fixed as version1 (113 milliseconds)
[info] - null and non-null strings (169 milliseconds)
[info] - read dictionary encoded decimals written as INT32 (107 milliseconds)
[info] - read dictionary encoded decimals written as INT64 (95 milliseconds)
[info] - read dictionary encoded decimals written as FIXED_LEN_BYTE_ARRAY (137 milliseconds)
[info] - SPARK-12589 copy() on rows returned from reader works for strings (163 milliseconds)
[info] - UnsafeRowParquetRecordReader - direct path read (87 milliseconds)
[info] LocalNodeTest:
[info] ExchangeSuite:
[info] - shuffling UnsafeRows in exchange (34 milliseconds)
[info] UnsafeKVExternalSorterSuite:
[info] - kv sorting key schema [] and value schema [] (43 milliseconds)
[info] - kv sorting key schema [int] and value schema [] (17 milliseconds)
[info] - kv sorting key schema [] and value schema [int] (8 milliseconds)
[info] - kv sorting key schema [int] and value schema [float,float,double,string,float] (5 milliseconds)
[info] - kv sorting key schema [double,string,string,int,float,string,string] and value schema [double,int,string,int,double,string,double] (255 milliseconds)
[info] - kv sorting key schema [int,string,float,int,int,string] and value schema [double,float,float,string,string,double,float,float,float,float] (830 milliseconds)
[info] - kv sorting key schema [string,double,int,int,string,string] and value schema [int,int,string,float,float,double,double] (245 milliseconds)
[info] - kv sorting key schema [int,float,float,int,float,float,int,int,float,int] and value schema [double,float,float,double] (32 milliseconds)
[info] - kv sorting key schema [double,int,string,double,float,float] and value schema [float,string] (125 milliseconds)
[info] - kv sorting with records that exceed page size (48 milliseconds)
[info] DataFrameCallbackSuite:
[info] - execute callback functions when a DataFrame action finished successfully (38 milliseconds)
[info] - execute callback functions when a DataFrame action failed (28 milliseconds)
[info] - get numRows metrics by callback (198 milliseconds)
[info] - get size metrics by callback !!! IGNORED !!!
[info] DataFrameReaderWriterSuite:
[info] - resolve default source (9 milliseconds)
[info] - resolve full class (1 millisecond)
[info] - options (2 milliseconds)
[info] - partitioning (8 milliseconds)
[info] - stream paths (2 milliseconds)
[info] - test different data types for options (2 milliseconds)
[info] TakeOrderedAndProjectNodeSuite:
[info] - asc (6 milliseconds)
[info] - desc (0 milliseconds)
[info] InferSchemaSuite:
[info] - String fields types are inferred correctly from null types (0 milliseconds)
[info] - String fields types are inferred correctly from other types (1 millisecond)
[info] - Timestamp field types are inferred correctly from other types (0 milliseconds)
[info] - Type arrays are merged to highest common type (0 milliseconds)
[info] - Null fields are handled properly when a nullValue is specified (0 milliseconds)
[info] ColumnTypeSuite:
[info] - defaultSize (1 millisecond)
[info] - actualSize (4 milliseconds)
[info] - BOOLEAN append/extract (3 milliseconds)
[info] - BYTE append/extract (0 milliseconds)
[info] - SHORT append/extract (1 millisecond)
[info] - INT append/extract (0 milliseconds)
[info] - LONG append/extract (0 milliseconds)
[info] - FLOAT append/extract (0 milliseconds)
[info] - DOUBLE append/extract (0 milliseconds)
[info] - COMPACT_DECIMAL append/extract (1 millisecond)
[info] - STRING append/extract (0 milliseconds)
[info] - NULL append/extract (0 milliseconds)
[info] - BINARY append/extract (0 milliseconds)
[info] - LARGE_DECIMAL append/extract (1 millisecond)
[info] - STRUCT append/extract (0 milliseconds)
[info] - ARRAY append/extract (1 millisecond)
[info] - MAP append/extract (0 milliseconds)
[info] - column type for decimal types with different precision (5 milliseconds)
[info] DatasetSuite:
[info] - toDS (29 milliseconds)
[info] - toDS with RDD (26 milliseconds)
[info] - SPARK-12404: Datatype Helper Serializablity (37 milliseconds)
[info] - collect, first, and take should use encoders for serialization (62 milliseconds)
[info] - coalesce, repartition (96 milliseconds)
[info] - as tuple (19 milliseconds)
[info] - as case class / collect (25 milliseconds)
[info] - as case class - reordered fields by name (11 milliseconds)
[info] - as case class - take (23 milliseconds)
[info] - map (27 milliseconds)
[info] - map with type change (52 milliseconds)
[info] - map and group by with class data (255 milliseconds)
[info] - select (26 milliseconds)
[info] - select 2 (25 milliseconds)
[info] - select 2, primitive and tuple (38 milliseconds)
[info] - select 2, primitive and class (42 milliseconds)
[info] - select 2, primitive and class, fields reordered (19 milliseconds)
[info] - filter (21 milliseconds)
[info] - foreach (11 milliseconds)
[info] - foreachPartition (14 milliseconds)
[info] - reduce (20 milliseconds)
[info] - joinWith, flat schema (56 milliseconds)
[info] - joinWith, expression condition, outer join (428 milliseconds)
[info] - joinWith tuple with primitive, expression (84 milliseconds)
[info] - joinWith class with primitive, toDF (52 milliseconds)
[info] - multi-level joinWith (113 milliseconds)
[info] - groupBy function, keys (106 milliseconds)
[info] - groupBy function, map (141 milliseconds)
[info] - groupBy function, flatMap (76 milliseconds)
[info] - groupBy function, reduce (75 milliseconds)
[info] - groupBy single field class, count (107 milliseconds)
[info] - groupBy columns, map (82 milliseconds)
[info] - groupBy columns, count (98 milliseconds)
[info] - groupBy columns asKey, map (66 milliseconds)
[info] - groupBy columns asKey tuple, map (109 milliseconds)
[info] - groupBy columns asKey class, map (105 milliseconds)
[info] - typed aggregation: expr (95 milliseconds)
[info] - typed aggregation: expr, expr (105 milliseconds)
[info] - typed aggregation: expr, expr, expr (113 milliseconds)
[info] - typed aggregation: expr, expr, expr, expr (115 milliseconds)
[info] - cogroup (536 milliseconds)
[info] - cogroup with complex data (158 milliseconds)
[info] - sample with replacement (23 milliseconds)
[info] - sample without replacement (19 milliseconds)
[info] - SPARK-11436: we should rebind right encoder when join 2 datasets (68 milliseconds)
[info] - self join (73 milliseconds)
[info] - toString (5 milliseconds)
[info] - showString: Kryo encoder (38 milliseconds)
[info] - Kryo encoder (78 milliseconds)
[info] - Kryo encoder self join (35 milliseconds)
[info] - Java encoder (33 milliseconds)
[info] - Java encoder self join (25 milliseconds)
[info] - SPARK-11894: Incorrect results are returned when using null (112 milliseconds)
[info] - change encoder with compatible schema (20 milliseconds)
[info] - verify mismatching field names fail with a good error (16 milliseconds)
[info] - runtime nullability check (56 milliseconds)
[info] - SPARK-12478: top level null field (50 milliseconds)
[info] - support inner class in Dataset (45 milliseconds)
[info] - grouping key and grouped value has field with same name (95 milliseconds)
[info] - cogroup's left and right side has field with same name (151 milliseconds)
[info] - give nice error message when the real number of fields doesn't match encoder schema (13 milliseconds)
[info] DatasetPrimitiveSuite:
[info] - toDS (13 milliseconds)
[info] - as case class / collect (22 milliseconds)
[info] - map (26 milliseconds)
[info] - filter (18 milliseconds)
[info] - foreach (7 milliseconds)
[info] - foreachPartition (7 milliseconds)
[info] - reduce (7 milliseconds)
[info] - groupBy function, keys (120 milliseconds)
[info] - groupBy function, map (76 milliseconds)
[info] - groupBy function, flatMap (83 milliseconds)
[info] - Arrays and Lists (735 milliseconds)
[info] DataFrameWindowSuite:
[info] - reuse window partitionBy (221 milliseconds)
[info] - reuse window orderBy (88 milliseconds)
[info] - lead (64 milliseconds)
[info] - lag (100 milliseconds)
[info] - lead with default value (63 milliseconds)
[info] - lag with default value (60 milliseconds)
[info] - rank functions in unspecific window (287 milliseconds)
[info] - aggregation and rows between (72 milliseconds)
[info] - aggregation and range between (80 milliseconds)
[info] - aggregation and rows between with unbounded (101 milliseconds)
[info] - aggregation and range between with unbounded (149 milliseconds)
[info] - reverse sliding range frame (1 second, 370 milliseconds)
[info] - reverse unbounded range frame (142 milliseconds)
[info] - statistical functions (302 milliseconds)
[info] - window function with aggregates (307 milliseconds)
[info] - window function with udaf (228 milliseconds)
[info] - null inputs (129 milliseconds)
[info] - last/first with ignoreNulls (267 milliseconds)
[info] - SPARK-12989 ExtractWindowExpressions treats alias as regular attribute (267 milliseconds)
[info] TextSuite:
[info] - reading text file (40 milliseconds)
[info] - SQLContext.read.text() API (28 milliseconds)
[info] - SPARK-12562 verify write.text() can handle column name beyond `value` (116 milliseconds)
[info] - error handling for invalid schema (4 milliseconds)
[info] LimitNodeSuite:
[info] - empty (3 milliseconds)
[info] - basic (1 millisecond)
[info] SQLUtilsSuite:
[info] - dfToCols should collect and transpose a data frame (13 milliseconds)
[info] DictionaryEncodingSuite:
[info] - DictionaryEncoding with INT: empty (5 milliseconds)
[info] - DictionaryEncoding with INT: simple case (2 milliseconds)
[info] - DictionaryEncoding with INT: dictionary overflow (63 milliseconds)
[info] - DictionaryEncoding with LONG: empty (0 milliseconds)
[info] - DictionaryEncoding with LONG: simple case (1 millisecond)
[info] - DictionaryEncoding with LONG: dictionary overflow (95 milliseconds)
[info] - DictionaryEncoding with STRING: empty (1 millisecond)
[info] - DictionaryEncoding with STRING: simple case (0 milliseconds)
[info] - DictionaryEncoding with STRING: dictionary overflow (142 milliseconds)
[info] LocalNodeSuite:
[info] - basic open, next, fetch, close (2 milliseconds)
[info] - asIterator (2 milliseconds)
[info] - collect (2 milliseconds)
[info] ProjectNodeSuite:
[info] - empty (11 milliseconds)
[info] - basic (2 milliseconds)
[info] ScalaReflectionRelationSuite:
[info] - query case class RDD *** FAILED *** (30 milliseconds)
[info] [a,1,1,1.0,1.0,1,1,true,1.000000000000000000,1969-12-31,1969-12-31 16:00:12.345,WrappedArray(1, 2, 3)] did not equal [a,1,1,1.0,1.0,1,1,true,1,1970-01-01,1969-12-31 16:00:12.345,List(1, 2, 3)] (ScalaReflectionRelationSuite.scala:83)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:466)
[info] at org.apache.spark.sql.ScalaReflectionRelationSuite$$anonfun$1.apply$mcV$sp(ScalaReflectionRelationSuite.scala:83)
[info] at org.apache.spark.sql.ScalaReflectionRelationSuite$$anonfun$1.apply(ScalaReflectionRelationSuite.scala:78)
[info] at org.apache.spark.sql.ScalaReflectionRelationSuite$$anonfun$1.apply(ScalaReflectionRelationSuite.scala:78)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.ScalaReflectionRelationSuite.org$scalatest$BeforeAndAfterAll$$super$run(ScalaReflectionRelationSuite.scala:75)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.ScalaReflectionRelationSuite.run(ScalaReflectionRelationSuite.scala:75)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
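
Note: in the diff at the top of this failure the rows diverge on the date (1969-12-31 vs 1970-01-01), the decimal rendering (1.000000000000000000 vs 1), and WrappedArray vs List. The timestamps render as 16:00, i.e. midnight UTC in US Pacific time, which suggests the one-day date shift is a machine-timezone effect rather than a logic bug: java.sql.Date.toString formats the underlying instant in the JVM's default timezone. A minimal, self-contained sketch of that effect (JDK only; the timezone is pinned explicitly here for reproducibility):

    import java.sql.Date
    import java.util.TimeZone

    object DateShiftDemo {
      def main(args: Array[String]): Unit = {
        // Pretend the JVM runs in US Pacific time (UTC-8), as the
        // "16:00:12.345" timestamp in the diff above suggests.
        TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"))
        // The epoch instant is 1970-01-01 00:00 UTC, but Date.toString
        // renders it in the default timezone, so it prints a day earlier.
        println(new Date(0L)) // prints 1969-12-31
      }
    }
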
[info] - query case class RDD with nulls (15 milliseconds)
[info] - query case class RDD with Nones (13 milliseconds)
[info] - query binary data (10 milliseconds)
[info] - query complex data (28 milliseconds)
[info] ListTablesSuite:
[info] - get all tables (84 milliseconds)
[info] - getting all Tables with a database name has no impact on returned table names (102 milliseconds)
[info] - query the returned DataFrame of tables (207 milliseconds)
[info] GroupedIteratorSuite:
[info] - basic (49 milliseconds)
[info] - group by 2 columns (18 milliseconds)
[info] - do nothing to the value iterator (2 milliseconds)
[info] SQLContextSuite:
[info] - getOrCreate instantiates SQLContext (3 milliseconds)
[info] - getOrCreate return the original SQLContext (0 milliseconds)
== Physical Plan ==
WholeStageCodegen
: +- Project [UDF(1,2) AS _c0#24619]
: +- INPUT
+- Scan OneRowRelation[]
[info] - Sessions of SQLContext (15 milliseconds)
[info] - Catalyst optimization passes are modifiable at runtime (2 milliseconds)
[info] InMemoryColumnarQuerySuite:
[info] - simple columnar query (83 milliseconds)
[info] - default size avoids broadcast (8 milliseconds)
[info] - projection (41 milliseconds)
[info] - SPARK-1436 regression: in-memory columns must be able to be accessed multiple times (95 milliseconds)
[info] - SPARK-1678 regression: compression must not lose repeated values (50 milliseconds)
[info] - with null values (66 milliseconds)
[info] - SPARK-2729 regression: timestamp data type (51 milliseconds)
[info] - SPARK-3320 regression: batched column buffer building should work with empty partitions (100 milliseconds)
[info] - SPARK-4182 Caching complex types (59 milliseconds)
[info] - decimal type (54 milliseconds)
[info] - test different data types (3 seconds, 214 milliseconds)
[info] - SPARK-10422: String column in InMemoryColumnarCache needs to override clone method (60 milliseconds)
[info] - SPARK-10859: Predicates pushed to InMemoryColumnarTableScan are not evaluated correctly (47 milliseconds)
[info] PrunedScanSuite:
[info] - SELECT * FROM oneToTenPruned (26 milliseconds)
[info] - SELECT a, b FROM oneToTenPruned (11 milliseconds)
[info] - SELECT b, a FROM oneToTenPruned (10 milliseconds)
[info] - SELECT a FROM oneToTenPruned (11 milliseconds)
[info] - SELECT a, a FROM oneToTenPruned (15 milliseconds)
[info] - SELECT b FROM oneToTenPruned (9 milliseconds)
[info] - SELECT a * 2 FROM oneToTenPruned (21 milliseconds)
[info] - SELECT A AS b FROM oneToTenPruned (11 milliseconds)
[info] - SELECT x.b, y.a FROM oneToTenPruned x JOIN oneToTenPruned y ON x.a = y.b (291 milliseconds)
[info] - SELECT x.a, y.b FROM oneToTenPruned x JOIN oneToTenPruned y ON x.a = y.b (221 milliseconds)
[info] - Columns output a,b: SELECT * FROM oneToTenPruned (11 milliseconds)
[info] - Columns output a,b: SELECT a, b FROM oneToTenPruned (7 milliseconds)
[info] - Columns output b,a: SELECT b, a FROM oneToTenPruned (7 milliseconds)
[info] - Columns output b: SELECT b, b FROM oneToTenPruned (7 milliseconds)
[info] - Columns output a: SELECT a FROM oneToTenPruned (7 milliseconds)
[info] - Columns output b: SELECT b FROM oneToTenPruned (7 milliseconds)
[info] DataFrameNaFunctionsSuite:
[info] - drop (140 milliseconds)
[info] - drop with how (107 milliseconds)
[info] - drop with threshold (49 milliseconds)
[info] - fill (94 milliseconds)
[info] - fill with map (91 milliseconds)
[info] - replace (13 milliseconds)
[info] TableScanSuite:
[info] - SELECT * FROM oneToTen (15 milliseconds)
[info] - SELECT i FROM oneToTen (11 milliseconds)
[info] - SELECT i FROM oneToTen WHERE i < 5 (14 milliseconds)
[info] - SELECT i * 2 FROM oneToTen (10 milliseconds)
[info] - SELECT a.i, b.i FROM oneToTen a JOIN oneToTen b ON a.i = b.i + 1 (340 milliseconds)
[info] - Schema and all fields *** FAILED *** (62 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [unresolvedalias('string$%Field,None),unresolvedalias(cast('binaryField as string),None),unresolvedalias('booleanField,None),unresolvedalias('byteField,None),unresolvedalias('shortField,None),unresolvedalias('int_Field,None),unresolvedalias('longField_:,<>=+/~^,None),unresolvedalias('floatField,None),unresolvedalias('doubleField,None),unresolvedalias('decimalField1,None),unresolvedalias('decimalField2,None),unresolvedalias('dateField,None),unresolvedalias('timestampField,None),unresolvedalias('varcharField,None),unresolvedalias('charField,None),unresolvedalias('arrayFieldSimple,None),unresolvedalias('arrayFieldComplex,None),unresolvedalias('mapFieldSimple,None),unresolvedalias('mapFieldComplex,None),unresolvedalias('structFieldSimple,None),unresolvedalias('structFieldComplex,None)]
[info] +- 'UnresolvedRelation `tableWithSchema`, None
[info]
[info] == Analyzed Logical Plan ==
[info] string$%Field: string, binaryField: string, booleanField: boolean, byteField: tinyint, shortField: smallint, int_Field: int, longField_:,<>=+/~^: bigint, floatField: float, doubleField: double, decimalField1: decimal(10,0), decimalField2: decimal(9,2), dateField: date, timestampField: timestamp, varcharField: string, charField: string, arrayFieldSimple: array<int>, arrayFieldComplex: array<map<string,struct<key:bigint>>>, mapFieldSimple: map<int,string>, mapFieldComplex: map<map<string,float>,struct<key:bigint>>, structFieldSimple: struct<key:int,Value:string>, structFieldComplex: struct<key:array<string>,Value:struct<value_(2):array<date>>>
[info] Project [string$%Field#25626,cast(binaryField#25627 as string) AS binaryField#25651,booleanField#25628,byteField#25629,shortField#25630,int_Field#25631,longField_:,<>=+/~^#25632L,floatField#25633,doubleField#25634,decimalField1#25635,decimalField2#25636,dateField#25637,timestampField#25638,varcharField#25639,charField#25640,arrayFieldSimple#25641,arrayFieldComplex#25642,mapFieldSimple#25643,mapFieldComplex#25644,structFieldSimple#25645,structFieldComplex#25646]
[info] +- Subquery tablewithschema
[info] +- Relation[string$%Field#25626,binaryField#25627,booleanField#25628,ByteField#25629,shortField#25630,int_Field#25631,longField_:,<>=+/~^#25632L,floatField#25633,doubleField#25634,decimalField1#25635,decimalField2#25636,dateField#25637,timestampField#25638,varcharField#25639,charField#25640,arrayFieldSimple#25641,arrayFieldComplex#25642,mapFieldSimple#25643,mapFieldComplex#25644,structFieldSimple#25645,structFieldComplex#25646] AllDataTypesScan(1,10,StructType(StructField(string$%Field,StringType,true), StructField(binaryField,BinaryType,true), StructField(booleanField,BooleanType,true), StructField(ByteField,ByteType,true), StructField(shortField,ShortType,true), StructField(int_Field,IntegerType,true), StructField(longField_:,<>=+/~^,LongType,true), StructField(floatField,FloatType,true), StructField(doubleField,DoubleType,true), StructField(decimalField1,DecimalType(10,0),true), StructField(decimalField2,DecimalType(9,2),true), StructField(dateField,DateType,true), StructField(timestampField,TimestampType,true), StructField(varcharField,StringType,true), StructField(charField,StringType,true), StructField(arrayFieldSimple,ArrayType(IntegerType,true),true), StructField(arrayFieldComplex,ArrayType(MapType(StringType,StructType(StructField(key,LongType,true)),true),true),true), StructField(mapFieldSimple,MapType(IntegerType,StringType,true),true), StructField(mapFieldComplex,MapType(MapType(StringType,FloatType,true),StructType(StructField(key,LongType,true)),true),true), StructField(structFieldSimple,StructType(StructField(key,IntegerType,true), StructField(Value,StringType,true)),true), StructField(structFieldComplex,StructType(StructField(key,ArrayType(StringType,true),true), StructField(Value,StructType(StructField(value_(2),ArrayType(DateType,true),true)),true)),true)))
[info]
[info] == Optimized Logical Plan ==
[info] Project [string$%Field#25626,cast(binaryField#25627 as string) AS binaryField#25651,booleanField#25628,byteField#25629,shortField#25630,int_Field#25631,longField_:,<>=+/~^#25632L,floatField#25633,doubleField#25634,decimalField1#25635,decimalField2#25636,dateField#25637,timestampField#25638,varcharField#25639,charField#25640,arrayFieldSimple#25641,arrayFieldComplex#25642,mapFieldSimple#25643,mapFieldComplex#25644,structFieldSimple#25645,structFieldComplex#25646]
[info] +- Relation[string$%Field#25626,binaryField#25627,booleanField#25628,ByteField#25629,shortField#25630,int_Field#25631,longField_:,<>=+/~^#25632L,floatField#25633,doubleField#25634,decimalField1#25635,decimalField2#25636,dateField#25637,timestampField#25638,varcharField#25639,charField#25640,arrayFieldSimple#25641,arrayFieldComplex#25642,mapFieldSimple#25643,mapFieldComplex#25644,structFieldSimple#25645,structFieldComplex#25646] AllDataTypesScan(1,10,StructType(StructField(string$%Field,StringType,true), StructField(binaryField,BinaryType,true), StructField(booleanField,BooleanType,true), StructField(ByteField,ByteType,true), StructField(shortField,ShortType,true), StructField(int_Field,IntegerType,true), StructField(longField_:,<>=+/~^,LongType,true), StructField(floatField,FloatType,true), StructField(doubleField,DoubleType,true), StructField(decimalField1,DecimalType(10,0),true), StructField(decimalField2,DecimalType(9,2),true), StructField(dateField,DateType,true), StructField(timestampField,TimestampType,true), StructField(varcharField,StringType,true), StructField(charField,StringType,true), StructField(arrayFieldSimple,ArrayType(IntegerType,true),true), StructField(arrayFieldComplex,ArrayType(MapType(StringType,StructType(StructField(key,LongType,true)),true),true),true), StructField(mapFieldSimple,MapType(IntegerType,StringType,true),true), StructField(mapFieldComplex,MapType(MapType(StringType,FloatType,true),StructType(StructField(key,LongType,true)),true),true), StructField(structFieldSimple,StructType(StructField(key,IntegerType,true), StructField(Value,StringType,true)),true), StructField(structFieldComplex,StructType(StructField(key,ArrayType(StringType,true),true), StructField(Value,StructType(StructField(value_(2),ArrayType(DateType,true),true)),true)),true)))
[info]
[info] == Physical Plan ==
[info] WholeStageCodegen
[info] : +- Project [string$%Field#25626,cast(binaryField#25627 as string) AS binaryField#25651,booleanField#25628,byteField#25629,shortField#25630,int_Field#25631,longField_:,<>=+/~^#25632L,floatField#25633,doubleField#25634,decimalField1#25635,decimalField2#25636,dateField#25637,timestampField#25638,varcharField#25639,charField#25640,arrayFieldSimple#25641,arrayFieldComplex#25642,mapFieldSimple#25643,mapFieldComplex#25644,structFieldSimple#25645,structFieldComplex#25646]
[info] : +- INPUT
[info] +- Scan AllDataTypesScan(1,10,StructType(StructField(string$%Field,StringType,true), StructField(binaryField,BinaryType,true), StructField(booleanField,BooleanType,true), StructField(ByteField,ByteType,true), StructField(shortField,ShortType,true), StructField(int_Field,IntegerType,true), StructField(longField_:,<>=+/~^,LongType,true), StructField(floatField,FloatType,true), StructField(doubleField,DoubleType,true), StructField(decimalField1,DecimalType(10,0),true), StructField(decimalField2,DecimalType(9,2),true), StructField(dateField,DateType,true), StructField(timestampField,TimestampType,true), StructField(varcharField,StringType,true), StructField(charField,StringType,true), StructField(arrayFieldSimple,ArrayType(IntegerType,true),true), StructField(arrayFieldComplex,ArrayType(MapType(StringType,StructType(StructField(key,LongType,true)),true),true),true), StructField(mapFieldSimple,MapType(IntegerType,StringType,true),true), StructField(mapFieldComplex,MapType(MapType(StringType,FloatType,true),StructType(StructField(key,LongType,true)),true),true), StructField(structFieldSimple,StructType(StructField(key,IntegerType,true), StructField(Value,StringType,true)),true), StructField(structFieldComplex,StructType(StructField(key,ArrayType(StringType,true),true), StructField(Value,StructType(StructField(value_(2),ArrayType(DateType,true),true)),true)),true)))[string$%Field#25626,binaryField#25627,booleanField#25628,ByteField#25629,shortField#25630,int_Field#25631,longField_:,<>=+/~^#25632L,floatField#25633,doubleField#25634,decimalField1#25635,decimalField2#25636,dateField#25637,timestampField#25638,varcharField#25639,charField#25640,arrayFieldSimple#25641,arrayFieldComplex#25642,mapFieldSimple#25643,mapFieldComplex#25644,structFieldSimple#25645,structFieldComplex#25646]
[info] == Results ==
[info]
[info] == Results ==
[info] !== Correct Answer - 10 == == Spark Answer - 10 ==
[info] ![str_1,str_1,false,1,1,1,1,1.0,1.0,1,1,1970-01-01,1969-12-31 16:00:20.001,varchar_1,char_1,List(1, 2),List(Map(str_1 -> [1])),Map(1 -> 1),Map(Map(str_1 -> 1.0) -> [1]),[1,1],[List(str_1, str_2),[List(1970-01-02)]]] [str_1,str_1,false,1,1,1,1,1.0,1.0,1,1.00,1969-12-31,1969-12-31 16:00:20.001,varchar_1,char_1,WrappedArray(1, 2),WrappedArray(Map(str_1 -> [1])),Map(1 -> 1),Map(Map(str_1 -> 1.0) -> [1]),[1,1],[WrappedArray(str_1, str_2),[WrappedArray(1970-01-01)]]]
[info] ![str_10,str_10,true,10,10,10,10,10.0,10.0,10,10,1970-01-01,1969-12-31 16:00:20.01,varchar_10,char_10,List(10, 11),List(Map(str_10 -> [10])),Map(10 -> 10),Map(Map(str_10 -> 10.0) -> [10]),[10,10],[List(str_10, str_11),[List(1970-01-11)]]] [str_10,str_10,true,10,10,10,10,10.0,10.0,10,10.00,1969-12-31,1969-12-31 16:00:20.01,varchar_10,char_10,WrappedArray(10, 11),WrappedArray(Map(str_10 -> [10])),Map(10 -> 10),Map(Map(str_10 -> 10.0) -> [10]),[10,10],[WrappedArray(str_10, str_11),[WrappedArray(1970-01-10)]]]
[info] ![str_2,str_2,true,2,2,2,2,2.0,2.0,2,2,1970-01-01,1969-12-31 16:00:20.002,varchar_2,char_2,List(2, 3),List(Map(str_2 -> [2])),Map(2 -> 2),Map(Map(str_2 -> 2.0) -> [2]),[2,2],[List(str_2, str_3),[List(1970-01-03)]]] [str_2,str_2,true,2,2,2,2,2.0,2.0,2,2.00,1969-12-31,1969-12-31 16:00:20.002,varchar_2,char_2,WrappedArray(2, 3),WrappedArray(Map(str_2 -> [2])),Map(2 -> 2),Map(Map(str_2 -> 2.0) -> [2]),[2,2],[WrappedArray(str_2, str_3),[WrappedArray(1970-01-02)]]]
[info] ![str_3,str_3,false,3,3,3,3,3.0,3.0,3,3,1970-01-01,1969-12-31 16:00:20.003,varchar_3,char_3,List(3, 4),List(Map(str_3 -> [3])),Map(3 -> 3),Map(Map(str_3 -> 3.0) -> [3]),[3,3],[List(str_3, str_4),[List(1970-01-04)]]] [str_3,str_3,false,3,3,3,3,3.0,3.0,3,3.00,1969-12-31,1969-12-31 16:00:20.003,varchar_3,char_3,WrappedArray(3, 4),WrappedArray(Map(str_3 -> [3])),Map(3 -> 3),Map(Map(str_3 -> 3.0) -> [3]),[3,3],[WrappedArray(str_3, str_4),[WrappedArray(1970-01-03)]]]
[info] ![str_4,str_4,true,4,4,4,4,4.0,4.0,4,4,1970-01-01,1969-12-31 16:00:20.004,varchar_4,char_4,List(4, 5),List(Map(str_4 -> [4])),Map(4 -> 4),Map(Map(str_4 -> 4.0) -> [4]),[4,4],[List(str_4, str_5),[List(1970-01-05)]]] [str_4,str_4,true,4,4,4,4,4.0,4.0,4,4.00,1969-12-31,1969-12-31 16:00:20.004,varchar_4,char_4,WrappedArray(4, 5),WrappedArray(Map(str_4 -> [4])),Map(4 -> 4),Map(Map(str_4 -> 4.0) -> [4]),[4,4],[WrappedArray(str_4, str_5),[WrappedArray(1970-01-04)]]]
[info] ![str_5,str_5,false,5,5,5,5,5.0,5.0,5,5,1970-01-01,1969-12-31 16:00:20.005,varchar_5,char_5,List(5, 6),List(Map(str_5 -> [5])),Map(5 -> 5),Map(Map(str_5 -> 5.0) -> [5]),[5,5],[List(str_5, str_6),[List(1970-01-06)]]] [str_5,str_5,false,5,5,5,5,5.0,5.0,5,5.00,1969-12-31,1969-12-31 16:00:20.005,varchar_5,char_5,WrappedArray(5, 6),WrappedArray(Map(str_5 -> [5])),Map(5 -> 5),Map(Map(str_5 -> 5.0) -> [5]),[5,5],[WrappedArray(str_5, str_6),[WrappedArray(1970-01-05)]]]
[info] ![str_6,str_6,true,6,6,6,6,6.0,6.0,6,6,1970-01-01,1969-12-31 16:00:20.006,varchar_6,char_6,List(6, 7),List(Map(str_6 -> [6])),Map(6 -> 6),Map(Map(str_6 -> 6.0) -> [6]),[6,6],[List(str_6, str_7),[List(1970-01-07)]]] [str_6,str_6,true,6,6,6,6,6.0,6.0,6,6.00,1969-12-31,1969-12-31 16:00:20.006,varchar_6,char_6,WrappedArray(6, 7),WrappedArray(Map(str_6 -> [6])),Map(6 -> 6),Map(Map(str_6 -> 6.0) -> [6]),[6,6],[WrappedArray(str_6, str_7),[WrappedArray(1970-01-06)]]]
[info] ![str_7,str_7,false,7,7,7,7,7.0,7.0,7,7,1970-01-01,1969-12-31 16:00:20.007,varchar_7,char_7,List(7, 8),List(Map(str_7 -> [7])),Map(7 -> 7),Map(Map(str_7 -> 7.0) -> [7]),[7,7],[List(str_7, str_8),[List(1970-01-08)]]] [str_7,str_7,false,7,7,7,7,7.0,7.0,7,7.00,1969-12-31,1969-12-31 16:00:20.007,varchar_7,char_7,WrappedArray(7, 8),WrappedArray(Map(str_7 -> [7])),Map(7 -> 7),Map(Map(str_7 -> 7.0) -> [7]),[7,7],[WrappedArray(str_7, str_8),[WrappedArray(1970-01-07)]]]
[info] ![str_8,str_8,true,8,8,8,8,8.0,8.0,8,8,1970-01-01,1969-12-31 16:00:20.008,varchar_8,char_8,List(8, 9),List(Map(str_8 -> [8])),Map(8 -> 8),Map(Map(str_8 -> 8.0) -> [8]),[8,8],[List(str_8, str_9),[List(1970-01-09)]]] [str_8,str_8,true,8,8,8,8,8.0,8.0,8,8.00,1969-12-31,1969-12-31 16:00:20.008,varchar_8,char_8,WrappedArray(8, 9),WrappedArray(Map(str_8 -> [8])),Map(8 -> 8),Map(Map(str_8 -> 8.0) -> [8]),[8,8],[WrappedArray(str_8, str_9),[WrappedArray(1970-01-08)]]]
[info] ![str_9,str_9,false,9,9,9,9,9.0,9.0,9,9,1970-01-01,1969-12-31 16:00:20.009,varchar_9,char_9,List(9, 10),List(Map(str_9 -> [9])),Map(9 -> 9),Map(Map(str_9 -> 9.0) -> [9]),[9,9],[List(str_9, str_10),[List(1970-01-10)]]] [str_9,str_9,false,9,9,9,9,9.0,9.0,9,9.00,1969-12-31,1969-12-31 16:00:20.009,varchar_9,char_9,WrappedArray(9, 10),WrappedArray(Map(str_9 -> [9])),Map(9 -> 9),Map(Map(str_9 -> 9.0) -> [9]),[9,9],[WrappedArray(str_9, str_10),[WrappedArray(1970-01-09)]]] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.sources.TableScanSuite$$anonfun$1.apply$mcV$sp(TableScanSuite.scala:238)
[info] at org.apache.spark.sql.sources.TableScanSuite$$anonfun$1.apply(TableScanSuite.scala:197)
[info] at org.apache.spark.sql.sources.TableScanSuite$$anonfun$1.apply(TableScanSuite.scala:197)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.sources.TableScanSuite.org$scalatest$BeforeAndAfterAll$$super$run(TableScanSuite.scala:100)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.sources.TableScanSuite.run(TableScanSuite.scala:100)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
[info] - SELECT count(*) FROM tableWithSchema (49 milliseconds)
[info] - SELECT `string$%Field` FROM tableWithSchema (27 milliseconds)
[info] - SELECT int_Field FROM tableWithSchema WHERE int_Field < 5 (29 milliseconds)
[info] - SELECT `longField_:,<>=+/~^` * 2 FROM tableWithSchema (28 milliseconds)
[info] - SELECT structFieldSimple.key, arrayFieldSimple[1] FROM tableWithSchema a where int_Field=1 (29 milliseconds)
[info] - SELECT structFieldComplex.Value.`value_(2)` FROM tableWithSchema *** FAILED *** (23 milliseconds)
[info] Results do not match for query:
[info] == Parsed Logical Plan ==
[info] 'Project [unresolvedalias('structFieldComplex.Value.value_(2),None)]
[info] +- 'UnresolvedRelation `tableWithSchema`, None
[info]
[info] == Analyzed Logical Plan ==
[info] value_(2): array<date>
[info] Project [structFieldComplex#25646.Value.value_(2) AS value_(2)#25667]
[info] +- Subquery tablewithschema
[info] +- Relation[string$%Field#25626,binaryField#25627,booleanField#25628,ByteField#25629,shortField#25630,int_Field#25631,longField_:,<>=+/~^#25632L,floatField#25633,doubleField#25634,decimalField1#25635,decimalField2#25636,dateField#25637,timestampField#25638,varcharField#25639,charField#25640,arrayFieldSimple#25641,arrayFieldComplex#25642,mapFieldSimple#25643,mapFieldComplex#25644,structFieldSimple#25645,structFieldComplex#25646] AllDataTypesScan(1,10,StructType(StructField(string$%Field,StringType,true), StructField(binaryField,BinaryType,true), StructField(booleanField,BooleanType,true), StructField(ByteField,ByteType,true), StructField(shortField,ShortType,true), StructField(int_Field,IntegerType,true), StructField(longField_:,<>=+/~^,LongType,true), StructField(floatField,FloatType,true), StructField(doubleField,DoubleType,true), StructField(decimalField1,DecimalType(10,0),true), StructField(decimalField2,DecimalType(9,2),true), StructField(dateField,DateType,true), StructField(timestampField,TimestampType,true), StructField(varcharField,StringType,true), StructField(charField,StringType,true), StructField(arrayFieldSimple,ArrayType(IntegerType,true),true), StructField(arrayFieldComplex,ArrayType(MapType(StringType,StructType(StructField(key,LongType,true)),true),true),true), StructField(mapFieldSimple,MapType(IntegerType,StringType,true),true), StructField(mapFieldComplex,MapType(MapType(StringType,FloatType,true),StructType(StructField(key,LongType,true)),true),true), StructField(structFieldSimple,StructType(StructField(key,IntegerType,true), StructField(Value,StringType,true)),true), StructField(structFieldComplex,StructType(StructField(key,ArrayType(StringType,true),true), StructField(Value,StructType(StructField(value_(2),ArrayType(DateType,true),true)),true)),true)))
[info]
[info] == Optimized Logical Plan ==
[info] Project [structFieldComplex#25646.Value.value_(2) AS value_(2)#25667]
[info] +- Relation[string$%Field#25626,binaryField#25627,booleanField#25628,ByteField#25629,shortField#25630,int_Field#25631,longField_:,<>=+/~^#25632L,floatField#25633,doubleField#25634,decimalField1#25635,decimalField2#25636,dateField#25637,timestampField#25638,varcharField#25639,charField#25640,arrayFieldSimple#25641,arrayFieldComplex#25642,mapFieldSimple#25643,mapFieldComplex#25644,structFieldSimple#25645,structFieldComplex#25646] AllDataTypesScan(1,10,StructType(StructField(string$%Field,StringType,true), StructField(binaryField,BinaryType,true), StructField(booleanField,BooleanType,true), StructField(ByteField,ByteType,true), StructField(shortField,ShortType,true), StructField(int_Field,IntegerType,true), StructField(longField_:,<>=+/~^,LongType,true), StructField(floatField,FloatType,true), StructField(doubleField,DoubleType,true), StructField(decimalField1,DecimalType(10,0),true), StructField(decimalField2,DecimalType(9,2),true), StructField(dateField,DateType,true), StructField(timestampField,TimestampType,true), StructField(varcharField,StringType,true), StructField(charField,StringType,true), StructField(arrayFieldSimple,ArrayType(IntegerType,true),true), StructField(arrayFieldComplex,ArrayType(MapType(StringType,StructType(StructField(key,LongType,true)),true),true),true), StructField(mapFieldSimple,MapType(IntegerType,StringType,true),true), StructField(mapFieldComplex,MapType(MapType(StringType,FloatType,true),StructType(StructField(key,LongType,true)),true),true), StructField(structFieldSimple,StructType(StructField(key,IntegerType,true), StructField(Value,StringType,true)),true), StructField(structFieldComplex,StructType(StructField(key,ArrayType(StringType,true),true), StructField(Value,StructType(StructField(value_(2),ArrayType(DateType,true),true)),true)),true)))
[info]
[info] == Physical Plan ==
[info] WholeStageCodegen
[info] : +- Project [structFieldComplex#25646.Value.value_(2) AS value_(2)#25667]
[info] : +- INPUT
[info] +- Scan AllDataTypesScan(1,10,StructType(StructField(string$%Field,StringType,true), StructField(binaryField,BinaryType,true), StructField(booleanField,BooleanType,true), StructField(ByteField,ByteType,true), StructField(shortField,ShortType,true), StructField(int_Field,IntegerType,true), StructField(longField_:,<>=+/~^,LongType,true), StructField(floatField,FloatType,true), StructField(doubleField,DoubleType,true), StructField(decimalField1,DecimalType(10,0),true), StructField(decimalField2,DecimalType(9,2),true), StructField(dateField,DateType,true), StructField(timestampField,TimestampType,true), StructField(varcharField,StringType,true), StructField(charField,StringType,true), StructField(arrayFieldSimple,ArrayType(IntegerType,true),true), StructField(arrayFieldComplex,ArrayType(MapType(StringType,StructType(StructField(key,LongType,true)),true),true),true), StructField(mapFieldSimple,MapType(IntegerType,StringType,true),true), StructField(mapFieldComplex,MapType(MapType(StringType,FloatType,true),StructType(StructField(key,LongType,true)),true),true), StructField(structFieldSimple,StructType(StructField(key,IntegerType,true), StructField(Value,StringType,true)),true), StructField(structFieldComplex,StructType(StructField(key,ArrayType(StringType,true),true), StructField(Value,StructType(StructField(value_(2),ArrayType(DateType,true),true)),true)),true)))[string$%Field#25626,binaryField#25627,booleanField#25628,ByteField#25629,shortField#25630,int_Field#25631,longField_:,<>=+/~^#25632L,floatField#25633,doubleField#25634,decimalField1#25635,decimalField2#25636,dateField#25637,timestampField#25638,varcharField#25639,charField#25640,arrayFieldSimple#25641,arrayFieldComplex#25642,mapFieldSimple#25643,mapFieldComplex#25644,structFieldSimple#25645,structFieldComplex#25646]
[info] == Results ==
[info]
[info] == Results ==
[info] !== Correct Answer - 10 == == Spark Answer - 10 ==
[info] ![List(1970-01-02)] [WrappedArray(1970-01-01)]
[info] ![List(1970-01-03)] [WrappedArray(1970-01-02)]
[info] ![List(1970-01-04)] [WrappedArray(1970-01-03)]
[info] ![List(1970-01-05)] [WrappedArray(1970-01-04)]
[info] ![List(1970-01-06)] [WrappedArray(1970-01-05)]
[info] ![List(1970-01-07)] [WrappedArray(1970-01-06)]
[info] ![List(1970-01-08)] [WrappedArray(1970-01-07)]
[info] ![List(1970-01-09)] [WrappedArray(1970-01-08)]
[info] ![List(1970-01-10)] [WrappedArray(1970-01-09)]
[info] ![List(1970-01-11)] [WrappedArray(1970-01-10)] (QueryTest.scala:143)
[info] org.scalatest.exceptions.TestFailedException:
[info] at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
[info] at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info] at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
[info] at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
[info] at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:143)
[info] at org.apache.spark.sql.sources.DataSourceTest$$anonfun$sqlTest$1.apply$mcV$sp(DataSourceTest.scala:33)
[info] at org.apache.spark.sql.sources.DataSourceTest$$anonfun$sqlTest$1.apply(DataSourceTest.scala:33)
[info] at org.apache.spark.sql.sources.DataSourceTest$$anonfun$sqlTest$1.apply(DataSourceTest.scala:33)
[info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
[info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
[info] at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
[info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
[info] at scala.collection.immutable.List.foreach(List.scala:381)
[info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
[info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
[info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
[info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
[info] at org.scalatest.Suite$class.run(Suite.scala:1424)
[info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
[info] at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
[info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
[info] at org.apache.spark.sql.sources.TableScanSuite.org$scalatest$BeforeAndAfterAll$$super$run(TableScanSuite.scala:100)
[info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
[info] at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
[info] at org.apache.spark.sql.sources.TableScanSuite.run(TableScanSuite.scala:100)
[info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info] at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
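
Note: both TableScanSuite diffs above show the same pattern as the ScalaReflectionRelationSuite failure earlier: every flagged row differs in date values shifted back by one day (the List-vs-WrappedArray and decimal-scale differences are rendering artifacts of the comparison output). This is consistent with the default-timezone effect sketched after that failure.
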
[info] - Caching (99 milliseconds)
[info] - defaultSource (17 milliseconds)
[info] - exceptions (4 milliseconds)
[info] - SPARK-5196 schema field with comment (4 milliseconds)
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaApplySchemaSuite.applySchema started
[info] Test test.org.apache.spark.sql.JavaApplySchemaSuite.dataFrameRDDOperations started
[info] Test test.org.apache.spark.sql.JavaApplySchemaSuite.applySchemaToJSON started
[info] Test run finished: 0 failed, 0 ignored, 3 total, 3.159s
[info] Test run started
[info] Test test.org.apache.spark.sql.sources.JavaSaveLoadSuite.saveAndLoadWithSchema started
[info] Test test.org.apache.spark.sql.sources.JavaSaveLoadSuite.saveAndLoad started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 8.819s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaRowSuite.constructSimpleRow started
[info] Test test.org.apache.spark.sql.JavaRowSuite.constructComplexRow started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.002s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testRuntimeNullabilityCheck started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testJoin started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testTake started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testForeach started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testJavaEncoder started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testPrimitiveEncoder started
[error] Test test.org.apache.spark.sql.JavaDatasetSuite.testPrimitiveEncoder failed: java.lang.AssertionError: expected:<[(1.7976931348623157E308,0.922337203685477589,1970-01-01,2016-02-07 23:47:00.404,3.4028235E38)]> but was:<[(1.7976931348623157E308,0.922337203685477589,1969-12-31,2016-02-07 23:47:00.404,3.4028235E38)]>, took 0.174 sec
[error] at test.org.apache.spark.sql.JavaDatasetSuite.testPrimitiveEncoder(JavaDatasetSuite.java:406)
[error] ...
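
Note: the same one-day shift again, expected 1970-01-01 vs actual 1969-12-31, this time surfacing through the Java tuple encoder. That it reproduces across Scala and Java suites points at the environment (a UTC-8 default timezone) rather than at any one encoder; see the DateShiftDemo sketch after the ScalaReflectionRelationSuite failure.
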
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testCommonOperation started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testTypedAggregation started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testGroupBy started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testSetOperation started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testKryoEncoder started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testJavaBeanEncoder2 started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testCollect started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testGroupByColumn started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testKryoEncoderErrorMessageForPrivateClass started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testJavaBeanEncoder started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testTupleEncoder started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testNestedTupleEncoder started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testReduce started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testSelect started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testJavaEncoderErrorMessageForPrivateClass started
[info] Test run finished: 1 failed, 0 ignored, 21 total, 4.636s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCollectAndTake started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testVarargMethods started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCreateStructTypeFromList started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testSampleBy started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCrosstab started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCreateDataFromFromList started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testFrequentItems started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testExecution started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testTextLoad started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.pivot started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testGenericLoad started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCountMinSketch started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCreateDataFrameFromJavaBeans started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCorrelation started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testBloomFilter started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCovariance started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCreateDataFrameFromLocalJavaBeans started
[info] Test run finished: 0 failed, 0 ignored, 17 total, 3.215s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaUDFSuite.udf1Test started
[info] Test test.org.apache.spark.sql.JavaUDFSuite.udf2Test started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.141s
[info] ScalaTest
[info] Run completed in 3 minutes, 37 seconds.
[info] Total number of tests run: 1586
[info] Suites: completed 116, aborted 0
[info] Tests: succeeded 1562, failed 24, canceled 0, ignored 15, pending 0
[info] *** 24 TESTS FAILED ***
[error] Failed: Total 1633, Failed 25, Errors 0, Passed 1608, Ignored 15
[error] Failed tests:
[error] org.apache.spark.sql.SQLQuerySuite
[error] org.apache.spark.sql.DateFunctionsSuite
[error] org.apache.spark.sql.jdbc.JDBCSuite
[error] org.apache.spark.sql.sources.TableScanSuite
[error] test.org.apache.spark.sql.JavaDatasetSuite
[error] org.apache.spark.sql.ScalaReflectionRelationSuite
[error] org.apache.spark.sql.execution.datasources.json.JsonSuite
[error] org.apache.spark.sql.execution.datasources.parquet.ParquetPartitionDiscoverySuite
[error] (sql/test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 310 s, completed Feb 8, 2016 2:47:11 AM
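
Note: the two summaries reconcile. ScalaTest ran 1586 tests with 24 failures; the JUnit runs above contributed 47 more (3 + 2 + 2 + 21 + 17 + 2) with one failure (JavaDatasetSuite.testPrimitiveEncoder), giving sbt's combined totals of 1633 tests and 25 failures.
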
➜ spark git:(master) ✗