[SPARK][R] test output (stdout) on Windows 7 32bit (fixed_2)
Loading required package: methods

Attaching package: 'SparkR'

The following object is masked from 'package:testthat':

    describe

The following objects are masked from 'package:stats':

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from 'package:base':

    as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
    rank, rbind, sample, startsWith, subset, summary, transform
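The banner above already foreshadows failures 1 through 4 at the bottom of this run: SparkR masks 22 names here (1 from testthat, 8 from stats, 13 from base), while the hardcoded expectation in test_context.R has 20. The two extras are most likely endsWith and startsWith, which base R only gained in 3.3.0. A minimal sketch of how to inspect the masking from a fresh session, using base R's conflicts() rather than the exact check in test_context.R:

    library(SparkR)

    # conflicts() is base R; detail = TRUE groups clashing names by
    # search-path entry, so this lists the SparkR exports that shadow
    # objects attached elsewhere (testthat, stats, base, ...).
    maskedBySparkR <- conflicts(detail = TRUE)[["package:SparkR"]]
    print(sort(maskedBySparkR))
    length(maskedBySparkR)  # 22 in a session like this one; the test expects 20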
binary functions: ...........
functions on binary files: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
....
broadcast variables: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
..
functions in client.R: .....
test functions in sparkR.R: .1234Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
........Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
..........Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.
include an external JAR in SparkContext: ..
include R packages: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
MLlib functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
..........................May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 65,622
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 70B for [label] BINARY: 1 values, 21B raw, 23B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 87B for [terms, list, element, list, element] BINARY: 2 values, 42B raw, 43B comp, 1 pages, encodings: [PLAIN, RLE]
May 25, 2016 7:42:26 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 30B for [hasIntercept] BOOLEAN: 1 values, 1B raw, 3B comp, 1 pages, encodings: [BIT_PACKED, PLAIN]
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 49
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 90B for [labels, list, element] BINARY: 3 values, 50B raw, 50B comp, 1 pages, encodings: [PLAIN, RLE]
May 25, 2016 7:42:27 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 92
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 61B for [vectorCol] BINARY: 1 values, 18B raw, 20B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 126B for [prefixesToRewrite, key_value, key] BINARY: 2 values, 61B raw, 61B comp, 1 pages, encodings: [PLAIN, RLE]
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 58B for [prefixesToRewrite, key_value, value] BINARY: 2 values, 15B raw, 17B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 12B raw, 1B comp}
May 25, 2016 7:42:28 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 54
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 122B for [columnsToPrune, list, element] BINARY: 2 values, 59B raw, 59B comp, 1 pages, encodings: [PLAIN, RLE]
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 56
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 51B for [intercept] DOUBLE: 1 values, 8B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, PLAIN]
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 45B for [coefficients, type] INT32: 1 values, 10B raw, 12B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 30B for [coefficients, size] INT32: 1 values, 7B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 36B for [coefficients, indices, list, element] INT32: 1 values, 13B raw, 15B comp, 1 pages, encodings: [PLAIN, RLE]
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 79B for [coefficients, values, list, element] DOUBLE: 3 values, 37B raw, 38B comp, 1 pages, encodings: [PLAIN, RLE]
May 25, 2016 7:42:29 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
May 25, 2016 7:42:30 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: SNAPPY
May 25, 2016 7:42:30 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
May 25, 2016 7:42:30 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
May 25, 2016 7:42:30 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
May 25, 2016 7:42:30 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on
May 25, 2016 7:42:30 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off
May 25, 2016 7:42:30 PM INFO: org.apache.parquet.had.........................................................................
parallelize() and collect(): Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.............................
basic RDD functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
............................................................................................................................................................................................................................................................................................................................................................................................................................................
SerDe functionality: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
...................
partitionBy, groupByKey, reduceByKey etc.: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
....................
SparkSQL functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.......................................................S..................................................................................................................................................................................................................................................5........S..................................................................................................................................................................................................................................................................................................................................................................S
tests RDD function take(): Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
................
the textFile() function: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.............
functions in utils.R: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.................................
test the support SparkR on Windows: .
Skipped ------------------------------------------------------------------------
1. create DataFrame from RDD (@test_sparkSQL.R#166) - Hive is not build with SparkSQL, skipped
2. test HiveContext (@test_sparkSQL.R#957) - Hive is not build with SparkSQL, skipped
3. Window functions on a DataFrame (@test_sparkSQL.R#2142) - Hive is not build with SparkSQL, skipped
Failed -------------------------------------------------------------------------
1. Failure: Check masked functions (@test_context.R#30) ------------------------
length(maskedBySparkR) not equal to length(namesOfMasked).
1/1 mismatches
[1] 22 - 20 == 2
2. Failure: Check masked functions (@test_context.R#31) ------------------------
sort(maskedBySparkR) not equal to sort(namesOfMasked).
Lengths differ: 22 vs 20
3. Failure: Check masked functions (@test_context.R#40) ------------------------
length(maskedCompletely) not equal to length(namesOfMaskedCompletely).
1/1 mismatches
[1] 5 - 3 == 2
4. Failure: Check masked functions (@test_context.R#41) ------------------------
sort(maskedCompletely) not equal to sort(namesOfMaskedCompletely).
Lengths differ: 5 vs 3
5. Error: subsetting (@test_sparkSQL.R#922) ------------------------------------
argument "subset" is missing, with no default
1: subset(df, select = "name", drop = F) at C:/Users/IEUser/workspace/spark/R/lib/SparkR/tests/testthat/test_sparkSQL.R:922
2: subset(df, select = "name", drop = F)
3: .local(x, ...)
4: x[subset, select, drop = drop]
DONE ===========================================================================
Error: Test failures
Execution halted
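For failure 5, the traceback pins the error to SparkR's subset method forwarding its subset argument into x[subset, select, drop = drop] when the caller supplied only select and drop. A minimal sketch of that failure mode with plain functions, not SparkR's actual S4 method (the names below are illustrative):

    df <- data.frame(name = c("a", "b"), age = c(1, 2))

    # Base R tolerates a forwarded missing index: `[.data.frame` tests missing(i).
    subset_cols <- function(x, subset, select, drop = FALSE) {
      x[subset, select, drop = drop]
    }
    subset_cols(df, select = "name")  # works: all rows, the name column

    # Anything that forces the missing promise first, for example inspecting
    # it or dispatching on it, raises exactly the error recorded above.
    force_then_index <- function(x, subset, select, drop = FALSE) {
      class(subset)  # forces evaluation of the missing argument
      x[subset, select, drop = drop]
    }
    force_then_index(df, select = "name")
    # Error: argument "subset" is missing, with no default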