SparkR unit test
Created: May 15, 2016 08:53
Loading required package: methods

Attaching package: ‘SparkR’

The following object is masked from ‘package:testthat’:

    describe

The following objects are masked from ‘package:stats’:

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from ‘package:base’:

    as.data.frame, colnames, colnames<-, drop, intersect, rank, rbind,
    sample, subset, summary, transform
functions on binary files: ....
binary functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
...........
broadcast variables: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
..
functions in client.R: .....
test functions in sparkR.R: .....Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
........Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
..........Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.
include an external JAR in SparkContext: ..
include R packages: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
MLlib functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.........................SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
.May 15, 2016 8:47:57 AM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 15, 2016 8:47:57 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 15, 2016 8:47:57 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 15, 2016 8:47:57 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 15, 2016 8:47:57 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 15, 2016 8:47:57 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 15, 2016 8:47:57 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 15, 2016 8:47:57 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 65,622
May 15, 2016 8:47:58 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 70B for [label] BINARY: 1 values, 21B raw, 23B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE]
May 15, 2016 8:47:58 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 87B for [terms, list, element, list, element] BINARY: 2 values, 42B raw, 43B comp, 1 pages, encodings: [PLAIN, RLE]
May 15, 2016 8:47:58 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 30B for [hasIntercept] BOOLEAN: 1 values, 1B raw, 3B comp, 1 pages, encodings: [PLAIN, BIT_PACKED]
May 15, 2016 8:47:58 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 15, 2016 8:47:58 AM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 15, 2016 8:47:58 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 15, 2016 8:47:58 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 15, 2016 8:47:58 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 15, 2016 8:47:58 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 15, 2016 8:47:58 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 15, 2016 8:47:58 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 15, 2016 8:47:58 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 49
May 15, 2016 8:47:58 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 90B for [labels, list, element] BINARY: 3 values, 50B raw, 50B comp, 1 pages, encodings: [PLAIN, RLE]
May 15, 2016 8:47:58 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 92
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 61B for [vectorCol] BINARY: 1 values, 18B raw, 20B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE]
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 126B for [prefixesToRewrite, key_value, key] BINARY: 2 values, 61B raw, 61B comp, 1 pages, encodings: [PLAIN, RLE]
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 58B for [prefixesToRewrite, key_value, value] BINARY: 2 values, 15B raw, 17B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 12B raw, 1B comp}
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 54
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 122B for [columnsToPrune, list, element] BINARY: 2 values, 59B raw, 59B comp, 1 pages, encodings: [PLAIN, RLE]
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 56
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 51B for [intercept] DOUBLE: 1 values, 8B raw, 10B comp, 1 pages, encodings: [PLAIN, BIT_PACKED]
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 45B for [coefficients, type] INT32: 1 values, 10B raw, 12B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE]
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 30B for [coefficients, size] INT32: 1 values, 7B raw, 9B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE]
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 36B for [coefficients, indices, list, element] INT32: 1 values, 13B raw, 15B comp, 1 pages, encodings: [PLAIN, RLE]
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 79B for [coefficients, values, list, element] DOUBLE: 3 values, 37B raw, 38B comp, 1 pages, encodings: [PLAIN, RLE]
May 15, 2016 8:47:59 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 15, 2016 8:48:00 AM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 15, 2016 8:48:00 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 15, 2016 8:48:00 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 15, 2016 8:48:00 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 15, 2016 8:48:00 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 15, 2016 8:48:00 AM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 15, 2016 8:48:00 AM INFO: org.apache.parquet.had.........................................................................
parallelize() and collect(): Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.............................
basic RDD functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
............................................................................................................................................................................................................................................................................................................................................................................1.............................................................
SerDe functionality: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
...................
partitionBy, groupByKey, reduceByKey etc.: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
....................
SparkSQL functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
tests RDD function take(): Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
................
the textFile() function: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.............
functions in utils.R: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.................................
Failed -------------------------------------------------------------------------
1. Error: pipeRDD() on RDDs (@test_rdd.R#427) ----------------------------------
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 811.0 failed 1 times, most recent failure: Lost task 0.0 in stage 811.0 (TID 1913, localhost): org.apache.spark.SparkException: R computation failed with
[1] 2
[1] 3
[1] 1
[1] 1
[1] 3
[1] 2
[1] 2
[1] 2
[1] 2
[1] 2
[1] 2
[1] 2
ignoring SIGPIPE signal
Calls: source ... <Anonymous> -> lapply -> lapply -> FUN -> writeRaw -> writeBin
Execution halted
ignoring SIGPIPE signal
Calls: source ... <Anonymous> -> lapply -> lapply -> FUN -> writeRaw -> writeBin
Execution halted
cannot open the connection
Calls: source ... computeFunc -> FUN -> system2 -> writeLines -> file
In addition: Warning message:
In file(con, "w") :
  cannot open file '/tmp/Rtmpjftqho/file422bf18e60c': No such file or directory
Execution halted
cannot open the connection
Calls: source ... computeFunc -> FUN -> system2 -> writeLines -> file
In addition: Warning message:
In file(con, "w") :
  cannot open file '/tmp/Rtmpjftqho/file422af18e60c': No such file or directory
Execution halted
    at org.apache.spark.api.r.RRunner.compute(RRunner.scala:107)
    at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:49)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:318)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:282)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:85)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1437)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1437)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1659)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1618)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1607)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1863)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1876)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1889)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1903)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:883)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:357)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:882)
    at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:349)
    at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:45)
    at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:141)
    at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:86)
    at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:38)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: R computation failed with
[1] 2
[1] 3
[1] 1
[1] 1
[1] 3
[1] 2
[1] 2
[1] 2
[1] 2
[1] 2
[1] 2
[1] 2
ignoring SIGPIPE signal
Calls: source ... <Anonymous> -> lapply -> lapply -> FUN -> writeRaw -> writeBin
Execution halted
ignoring SIGPIPE signal
Calls: source ... <Anonymous> -> lapply -> lapply -> FUN -> writeRaw -> writeBin
Execution halted
cannot open the connection
Calls: source ... computeFunc -> FUN -> system2 -> writeLines -> file
In addition: Warning message:
In file(con, "w") :
  cannot open file '/tmp/Rtmpjftqho/file422bf18e60c': No such file or directory
Execution halted
cannot open the connection
Calls: source ... computeFunc -> FUN -> system2 -> writeLines -> file
In addition: Warning message:
In file(con, "w") :
  cannot open file '/tmp/Rtmpjftqho/file422af18e60c': No such file or directory
Execution halted
    at org.apache.spark.api.r.RRunner.compute(RRunner.scala:107)
    at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:49)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:318)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:282)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:85)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more
1: collect(pipeRDD(rdd, "more")) at /root/spark/R/lib/SparkR/tests/testthat/test_rdd.R:427
2: collect(pipeRDD(rdd, "more"))
3: .local(x, ...)
4: callJMethod(getJRDD(x), "collect")
5: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
6: stop(readString(conn))
DONE ===========================================================================
Error: Test failures
Execution halted
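For context: the failing assertion at test_rdd.R#427 exercises SparkR's internal pipeRDD(), which forks an external shell command per partition and streams the partition's serialized elements through its stdin. A minimal sketch of the failing call, assuming a local SparkR session on this Spark checkout; the RDD fixture below is illustrative, and the RDD API is package-internal, hence the ::: access:

library(SparkR)

sc <- sparkR.init(master = "local[2]")     # entry point of this era; sparkR.stop() tears it down
rdd <- SparkR:::parallelize(sc, 1:10, 2L)  # illustrative stand-in for the test's fixture

# pipeRDD() streams each element through the child process ("more" here, per
# the traceback); collect() runs the job that dies above with
# "ignoring SIGPIPE signal" / cannot open file '/tmp/...'.
collect(SparkR:::pipeRDD(rdd, "more"))

sparkR.stop()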
Rebuilt and reproduced the issue:
In addition: Warning message:
In file(con, "w") :
cannot open file '/var/folders/s_/83b0sgvj2kl2kwq4stvft_pm0000gn/T//RtmpJXzTT8/file172edf4eb6f': No such file or directory
Execution halted
cannot open the connection
Calls: source ... computeFunc -> FUN -> system2 -> writeLines -> file
In addition: Warning message:
In file(con, "w") :
cannot open file '/var/folders/s_/83b0sgvj2kl2kwq4stvft_pm0000gn/T//RtmpJXzTT8/file172fdf4eb6f': No such file or directory
Execution halted
    at org.apache.spark.api.r.RRunner.compute(RRunner.scala:107)
    at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:49)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:318)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:282)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:85)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.conc
1: collect(pipeRDD(rdd, "more")) at /Users/mwang/spark_ws_0904/R/lib/SparkR/tests/testthat/test_rdd.R:427
2: collect(pipeRDD(rdd, "more"))
3: .local(x, ...)
4: callJMethod(getJRDD(x), "collect")
5: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
6: stop(readString(conn))
argument "subset" is missing, with no default
1: subset(df, select = "name", drop = F) at /Users/mwang/spark_ws_0904/R/lib/SparkR/tests/testthat/test_sparkSQL.R:922
2: subset(df, select = "name", drop = F)
3: .local(x, ...)
4: x[subset, select, drop = drop]
DONE ===========================================================================
Error: Test failures
Execution halted
Had test failures; see logs.
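The second failure above is in SparkR's subset() method: per the traceback, its body forwards to x[subset, select, drop = drop], so when the subset argument is omitted the missing argument surfaces inside `[` as 'argument "subset" is missing, with no default'. A hypothetical repro, assuming a session as in the earlier sketch and a toy data frame (sqlContext from sparkRSQL.init() of this era):

sc <- sparkR.init(master = "local[2]")
sqlContext <- sparkRSQL.init(sc)
df <- createDataFrame(sqlContext, data.frame(name = c("a", "b"), age = c(30L, 25L)))
subset(df, select = "name", drop = F)   # errors: `subset` is passed through `[` while missing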