Created May 20, 2016 04:22
[SPARK][R] test output (stdout) on Windows 7 32bit
Loading required package: methods

Attaching package: 'SparkR'

The following object is masked from 'package:testthat':

    describe

The following objects are masked from 'package:stats':

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from 'package:base':

    as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
    rank, rbind, sample, startsWith, subset, summary, transform
binary functions: ...........
functions on binary files: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
....
broadcast variables: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
..
functions in client.R: .....
test functions in sparkR.R: .1234Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
........Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
....567...Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.
include an external JAR in SparkContext: The filename, directory name, or volume label syntax is incorrect.
W89
include R packages: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
MLlib functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
..........................May 19, 2016 8:57:01 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 19, 2016 8:57:01 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 19, 2016 8:57:01 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 19, 2016 8:57:01 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 19, 2016 8:57:01 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 19, 2016 8:57:01 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 19, 2016 8:57:01 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 19, 2016 8:57:01 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 65,622
May 19, 2016 8:57:01 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 70B for [label] BINARY: 1 values, 21B raw, 23B comp, 1 pages, encodings: [RLE, PLAIN, BIT_PACKED]
May 19, 2016 8:57:01 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 87B for [terms, list, element, list, element] BINARY: 2 values, 42B raw, 43B comp, 1 pages, encodings: [RLE, PLAIN]
May 19, 2016 8:57:01 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 30B for [hasIntercept] BOOLEAN: 1 values, 1B raw, 3B comp, 1 pages, encodings: [PLAIN, BIT_PACKED]
May 19, 2016 8:57:01 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 19, 2016 8:57:02 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 19, 2016 8:57:02 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 19, 2016 8:57:02 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 19, 2016 8:57:02 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 19, 2016 8:57:02 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 19, 2016 8:57:02 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 19, 2016 8:57:02 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 19, 2016 8:57:02 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 49
May 19, 2016 8:57:02 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 90B for [labels, list, element] BINARY: 3 values, 50B raw, 50B comp, 1 pages, encodings: [RLE, PLAIN]
May 19, 2016 8:57:02 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 92
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 61B for [vectorCol] BINARY: 1 values, 18B raw, 20B comp, 1 pages, encodings: [RLE, PLAIN, BIT_PACKED]
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 126B for [prefixesToRewrite, key_value, key] BINARY: 2 values, 61B raw, 61B comp, 1 pages, encodings: [RLE, PLAIN]
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 58B for [prefixesToRewrite, key_value, value] BINARY: 2 values, 15B raw, 17B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 12B raw, 1B comp}
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 19, 2016 8:57:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 54
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 122B for [columnsToPrune, list, element] BINARY: 2 values, 59B raw, 59B comp, 1 pages, encodings: [RLE, PLAIN]
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 56
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 51B for [intercept] DOUBLE: 1 values, 8B raw, 10B comp, 1 pages, encodings: [PLAIN, BIT_PACKED]
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 45B for [coefficients, type] INT32: 1 values, 10B raw, 12B comp, 1 pages, encodings: [RLE, PLAIN, BIT_PACKED]
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 30B for [coefficients, size] INT32: 1 values, 7B raw, 9B comp, 1 pages, encodings: [RLE, PLAIN, BIT_PACKED]
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 36B for [coefficients, indices, list, element] INT32: 1 values, 13B raw, 15B comp, 1 pages, encodings: [RLE, PLAIN]
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 79B for [coefficients, values, list, element] DOUBLE: 3 values, 37B raw, 38B comp, 1 pages, encodings: [RLE, PLAIN]
May 19, 2016 8:57:04 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 19, 2016 8:57:05 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 19, 2016 8:57:05 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 19, 2016 8:57:05 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 19, 2016 8:57:05 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 19, 2016 8:57:05 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 19, 2016 8:57:05 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 19, 2016 8:57:05 PM INFO: org.apache.parquet.had.........................................................................
parallelize() and collect(): Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.............................
basic RDD functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
............................................................................................................................................................................................................................................................................................................................................................................................................................................
SerDe functionality: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
...................
partitionBy, groupByKey, reduceByKey etc.: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
....................
SparkSQL functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.......................................................S..................................................................................................................................................................................................................................................a........S..................................................................................................................................................................................................................................................................................................................................................................S
tests RDD function take(): Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
................
the textFile() function: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.............
functions in utils.R: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.................................
Skipped ------------------------------------------------------------------------
1. create DataFrame from RDD (@test_sparkSQL.R#166) - Hive is not build with SparkSQL, skipped
2. test HiveContext (@test_sparkSQL.R#957) - Hive is not build with SparkSQL, skipped
3. Window functions on a DataFrame (@test_sparkSQL.R#2142) - Hive is not build with SparkSQL, skipped
Warnings -----------------------------------------------------------------------
1. sparkJars tag in SparkContext (@test_includeJAR.R#32) - running command '"C:\Users\IEUser\workspace\spark\bin\../bin/spark-submit" --jars "C:\Users\IEUser\workspace\spark\bin\../R/lib/SparkR/test_support/sparktestjar_2.10-1.0.jar" C:\Users\IEUser\workspace\spark\bin\../R/lib/SparkR/tests/testthat/jarTest.R' had status 1
Failed -------------------------------------------------------------------------
1. Failure: Check masked functions (@test_context.R#30) ------------------------
length(maskedBySparkR) not equal to length(namesOfMasked).
1/1 mismatches
[1] 22 - 20 == 2
2. Failure: Check masked functions (@test_context.R#31) ------------------------
sort(maskedBySparkR) not equal to sort(namesOfMasked).
Lengths differ: 22 vs 20
3. Failure: Check masked functions (@test_context.R#40) ------------------------
length(maskedCompletely) not equal to length(namesOfMaskedCompletely).
1/1 mismatches
[1] 5 - 3 == 2
4. Failure: Check masked functions (@test_context.R#41) ------------------------
sort(maskedCompletely) not equal to sort(namesOfMaskedCompletely).
Lengths differ: 5 vs 3
5. Failure: sparkJars sparkPackages as comma-separated strings (@test_context.R#128)
`jars` not equal to c("a", "b").
2/2 mismatches
x[1]: "C:\\Users\\IEUser\\workspace\\spark\\R\\lib\\SparkR\\tests\\testthat\\a"
y[1]: "a"
x[2]: "C:\\Users\\IEUser\\workspace\\spark\\R\\lib\\SparkR\\tests\\testthat\\b"
y[2]: "b"
6. Failure: sparkJars sparkPackages as comma-separated strings (@test_context.R#131)
`jars` not equal to c("abc", "def").
2/2 mismatches
x[1]: "C:\\Users\\IEUser\\workspace\\spark\\R\\lib\\SparkR\\tests\\testthat\\abc
x[1]: "
y[1]: "abc"
x[2]: "C:\\Users\\IEUser\\workspace\\spark\\R\\lib\\SparkR\\tests\\testthat\\def
x[2]: "
y[2]: "def"
7. Failure: sparkJars sparkPackages as comma-separated strings (@test_context.R#134)
`jars` not equal to c("abc", "def", "xyz", "a", "b").
5/5 mismatches
x[1]: "C:\\Users\\IEUser\\workspace\\spark\\R\\lib\\SparkR\\tests\\testthat\\abc
x[1]: "
y[1]: "abc"
x[2]: "C:\\Users\\IEUser\\workspace\\spark\\R\\lib\\SparkR\\tests\\testthat\\def
x[2]: "
y[2]: "def"
x[3]: "C:\\Users\\IEUser\\workspace\\spark\\R\\lib\\SparkR\\tests\\testthat\\xyz
x[3]: "
y[3]: "xyz"
x[4]: "C:\\Users\\IEUser\\workspace\\spark\\R\\lib\\SparkR\\tests\\testthat\\a"
y[4]: "a"
x[5]: "C:\\Users\\IEUser\\workspace\\spark\\R\\lib\\SparkR\\tests\\testthat\\b"
y[5]: "b"
8. Failure: sparkJars tag in SparkContext (@test_includeJAR.R#34) --------------
`helloTest` not equal to "Hello, Dave".
1/1 mismatches
x[1]: NA
y[1]: "Hello, Dave"
9. Failure: sparkJars tag in SparkContext (@test_includeJAR.R#36) --------------
`basicFunction` not equal to "4".
1/1 mismatches
x[1]: NA
y[1]: "4"
10. Error: subsetting (@test_sparkSQL.R#922) -----------------------------------
argument "subset" is missing, with no default
1: subset(df, select = "name", drop = F) at C:/Users/IEUser/workspace/spark/R/lib/SparkR/tests/testthat/test_sparkSQL.R:922
2: subset(df, select = "name", drop = F)
3: .local(x, ...)
4: x[subset, select, drop = drop]
DONE ===========================================================================
Error: Test failures
Execution halted