============================= test session starts ==============================
platform darwin -- Python 3.5.2, pytest-3.2.3, py-1.7.0, pluggy-0.4.0 -- /usr/local/bin/python3.5
cachedir: ../.cache
rootdir: /private/var/tmp/_bazel_pavloskliar/4ebdf5e551906a3d6bd81a9029c30c48/execroot/__main__/bazel-out/darwin-fastbuild/bin/tool_castor/castor_integration_tests.runfiles/__main__, inifile: tox.ini
collecting ... INFO:dd.datadogpy:No agent or invalid configuration file found
collected 63 items
tests/integration/test_activities_merger.py::TestActivitiesMergerWorkflow::test_abort_with_corrupted_delta INFO:root:Starting the new global session for <class 'castor.TestCastorSession'>
Warning: Ignoring non-spark config property: es.nodes.wan.only=true
https://nexus.tubularlabs.net/repository/libs-release-local/ added as a remote repository with the name: repo-1
Ivy Default Cache set to: /Users/pavloskliar/.ivy2/cache
The jars for the packages stored in: /Users/pavloskliar/.ivy2/jars
:: loading settings :: url = jar:file:/private/var/tmp/_bazel_pavloskliar/4ebdf5e551906a3d6bd81a9029c30c48/external/pypi__pyspark_2_2_0/pyspark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
org.apache.spark#spark-streaming-kafka-0-8_2.11 added as a dependency
org.apache.spark#spark-sql-kafka-0-10_2.11 added as a dependency
org.elasticsearch#elasticsearch-spark-20_2.11 added as a dependency
com.tubularlabs#confluent-spark-avro_2.11 added as a dependency
datastax#spark-cassandra-connector added as a dependency
mysql#mysql-connector-java added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
confs: [default]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.ivy.util.url.IvyAuthenticator (file:/private/var/tmp/_bazel_pavloskliar/4ebdf5e551906a3d6bd81a9029c30c48/external/pypi__pyspark_2_2_0/pyspark/jars/ivy-2.4.0.jar) to field java.net.Authenticator.theAuthenticator
WARNING: Please consider reporting this to the maintainers of org.apache.ivy.util.url.IvyAuthenticator
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
found org.apache.spark#spark-streaming-kafka-0-8_2.11;2.1.0 in central
found org.apache.kafka#kafka_2.11;0.8.2.1 in central
found org.scala-lang.modules#scala-xml_2.11;1.0.2 in central
found com.yammer.metrics#metrics-core;2.2.0 in central
found org.slf4j#slf4j-api;1.7.16 in central
found org.scala-lang.modules#scala-parser-combinators_2.11;1.0.2 in central
found com.101tec#zkclient;0.3 in central
found log4j#log4j;1.2.17 in central
found org.apache.kafka#kafka-clients;0.8.2.1 in central
found net.jpountz.lz4#lz4;1.3.0 in central
found org.xerial.snappy#snappy-java;1.1.2.6 in central
found org.apache.spark#spark-tags_2.11;2.1.0 in central
found org.scalatest#scalatest_2.11;2.2.6 in central
found org.scala-lang#scala-reflect;2.11.8 in central
found org.spark-project.spark#unused;1.0.0 in central
found org.apache.spark#spark-sql-kafka-0-10_2.11;2.1.1 in central
found org.apache.kafka#kafka-clients;0.10.0.1 in central
found org.apache.spark#spark-tags_2.11;2.1.1 in central
found org.elasticsearch#elasticsearch-spark-20_2.11;6.0.0 in central
found com.tubularlabs#confluent-spark-avro_2.11;1.2.1 in repo-1
found datastax#spark-cassandra-connector;2.0.0-M2-s_2.11 in spark-packages
found commons-beanutils#commons-beanutils;1.8.0 in central
found org.joda#joda-convert;1.2 in central
found joda-time#joda-time;2.3 in central
found io.netty#netty-all;4.0.33.Final in central
found com.twitter#jsr166e;1.1.0 in central
found mysql#mysql-connector-java;5.1.39 in central
:: resolution report :: resolve 6997ms :: artifacts dl 11ms
:: modules in use:
com.101tec#zkclient;0.3 from central in [default]
com.tubularlabs#confluent-spark-avro_2.11;1.2.1 from repo-1 in [default]
com.twitter#jsr166e;1.1.0 from central in [default]
com.yammer.metrics#metrics-core;2.2.0 from central in [default]
commons-beanutils#commons-beanutils;1.8.0 from central in [default]
datastax#spark-cassandra-connector;2.0.0-M2-s_2.11 from spark-packages in [default]
io.netty#netty-all;4.0.33.Final from central in [default]
joda-time#joda-time;2.3 from central in [default]
log4j#log4j;1.2.17 from central in [default]
mysql#mysql-connector-java;5.1.39 from central in [default]
net.jpountz.lz4#lz4;1.3.0 from central in [default]
org.apache.kafka#kafka-clients;0.10.0.1 from central in [default]
org.apache.kafka#kafka_2.11;0.8.2.1 from central in [default]
org.apache.spark#spark-sql-kafka-0-10_2.11;2.1.1 from central in [default]
org.apache.spark#spark-streaming-kafka-0-8_2.11;2.1.0 from central in [default]
org.apache.spark#spark-tags_2.11;2.1.1 from central in [default]
org.elasticsearch#elasticsearch-spark-20_2.11;6.0.0 from central in [default]
org.joda#joda-convert;1.2 from central in [default]
org.scala-lang#scala-reflect;2.11.8 from central in [default]
org.scala-lang.modules#scala-parser-combinators_2.11;1.0.2 from central in [default]
org.scala-lang.modules#scala-xml_2.11;1.0.2 from central in [default]
org.slf4j#slf4j-api;1.7.16 from central in [default]
org.spark-project.spark#unused;1.0.0 from central in [default]
org.xerial.snappy#snappy-java;1.1.2.6 from central in [default]
:: evicted modules:
org.apache.kafka#kafka-clients;0.8.2.1 by [org.apache.kafka#kafka-clients;0.10.0.1] in [default]
org.apache.spark#spark-tags_2.11;2.1.0 by [org.apache.spark#spark-tags_2.11;2.1.1] in [default]
org.scalatest#scalatest_2.11;2.2.6 transitively in [default]
---------------------------------------------------------------------
|                  |            modules            ||   artifacts   |
|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
|      default     |   27  |   4   |   4   |   3   ||   24  |   0   |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent
confs: [default]
0 artifacts copied, 24 already retrieved (0kB/8ms)
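The resolution report above corresponds to the connector packages the test session loads at startup. A minimal sketch of a session configured this way, for reproducing the run outside Bazel, follows; the package coordinates are taken verbatim from the report, while the builder calls are standard PySpark 2.2 API and the exact setup castor uses is an assumption:

    # Hypothetical reconstruction of the session configuration implied by the
    # resolution report above; only the package coordinates come from the log.
    from pyspark.sql import SparkSession

    packages = ','.join([
        'org.apache.spark:spark-streaming-kafka-0-8_2.11:2.1.0',
        'org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.1',
        'org.elasticsearch:elasticsearch-spark-20_2.11:6.0.0',
        'com.tubularlabs:confluent-spark-avro_2.11:1.2.1',
        'datastax:spark-cassandra-connector:2.0.0-M2-s_2.11',
        'mysql:mysql-connector-java:5.1.39',
    ])

    spark = (
        SparkSession.builder
        .config('spark.jars.packages', packages)  # triggers the Ivy resolution above
        .config('es.nodes.wan.only', 'true')      # likely source of the "non-spark config" warning
        .enableHiveSupport()                      # Hive support is implied by the errors below
        .getOrCreate()
    )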
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
19/01/09 13:07:04 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
INFO:swissarmy.testutils.dockerfixture:Setting up the following docker fixtures for testing: ['elasticsearch.docker', 'mysql.docker', 'kafka.docker', 'zookeeper.docker'] with project: castor_5ddeo
INFO:swissarmy.testutils.dockerfixture:
INFO:swissarmy.testutils.dockerfixture:Docker compose is starting containers for fixtures
INFO:swissarmy.testutils.dockerfixture:Using health checking (compose v2.1) to verify health
INFO:swissarmy.testutils.dockerfixture:Calling `docker-compose -p castor_5ddeo -f /private/var/tmp/_bazel_pavloskliar/4ebdf5e551906a3d6bd81a9029c30c48/execroot/__main__/bazel-out/darwin-fastbuild/bin/tool_castor/castor_integration_tests.runfiles/__main__/tool_castor/tests/integration/../../docker-compose.yml ps| grep 'elasticsearch.docker' | grep '(healthy)'` to verify health
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container for test fixture (elasticsearch.docker) is healthy.
INFO:swissarmy.testutils.dockerfixture:Calling `docker-compose -p castor_5ddeo -f /private/var/tmp/_bazel_pavloskliar/4ebdf5e551906a3d6bd81a9029c30c48/execroot/__main__/bazel-out/darwin-fastbuild/bin/tool_castor/castor_integration_tests.runfiles/__main__/tool_castor/tests/integration/../../docker-compose.yml ps| grep 'mysql.docker' | grep '(healthy)'` to verify health
INFO:swissarmy.testutils.dockerfixture:Container for test fixture (mysql.docker) is healthy.
INFO:swissarmy.testutils.dockerfixture:Calling `docker-compose -p castor_5ddeo -f /private/var/tmp/_bazel_pavloskliar/4ebdf5e551906a3d6bd81a9029c30c48/execroot/__main__/bazel-out/darwin-fastbuild/bin/tool_castor/castor_integration_tests.runfiles/__main__/tool_castor/tests/integration/../../docker-compose.yml ps| grep 'kafka.docker' | grep '(healthy)'` to verify health
INFO:swissarmy.testutils.dockerfixture:Container for test fixture (kafka.docker) is healthy.
INFO:swissarmy.testutils.dockerfixture:Calling `docker-compose -p castor_5ddeo -f /private/var/tmp/_bazel_pavloskliar/4ebdf5e551906a3d6bd81a9029c30c48/execroot/__main__/bazel-out/darwin-fastbuild/bin/tool_castor/castor_integration_tests.runfiles/__main__/tool_castor/tests/integration/../../docker-compose.yml ps| grep 'zookeeper.docker' | grep '(healthy)'` to verify health
INFO:swissarmy.testutils.dockerfixture:Container for test fixture (zookeeper.docker) is healthy.
INFO:swissarmy.testutils.dockerfixture:Successfully set up fixtures, starting tests.
FAILED
tests/integration/test_activities_merger.py::TestActivitiesMergerWorkflow::test_workflow FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_base INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_consecutive_runs FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_data_replace FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_future_process_to FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_incorrect_csv_schema FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_init_from_passed FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_no_table FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_recovery_checkpoint_prop_passed FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_specified_checkpoint FAILED
tests/integration/test_cassandra_loader.py::TestCassandraLoader::test_init_from_args PASSED
tests/integration/test_databus_merger.py::TestDatabusMergerWorkflow::test_workflow INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_databus_merger.py::TestDatabusMergerWorkflow::test_workflow_no_merging FAILED
tests/integration/test_databus_merger.py::TestDatabusMergerWorkflow::test_workflow_with_array_unpack FAILED
tests/integration/test_databus_merger.py::TestDatabusMergerWorkflow::test_workflow_with_corrupted_delta FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusMergerWorkflow::test_base INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusMergerWorkflow::test_checkpoints_for_different_topics FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusMergerWorkflow::test_empty_delta FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusMergerWorkflow::test_empty_delta_after_filter FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusMergerWorkflow::test_recovery_checkpoint FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusMergerWorkflow::test_recovery_checkpoint_and_step FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusMergerWorkflow::test_recovery_checkpoint_disabled FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusS3DeltaSource::test_include_last INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusS3DeltaSource::test_include_last_and_overlap FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusS3DeltaSource::test_overlap FAILED
tests/integration/test_elastic_loader.py::TestElasticLoader::test_simple INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
INFO:castor.loader:Start exporting from elastic://elasticsearch.docker:60200/castor_test/test?es.read.field.as.array.include=topics
FAILED
tests/integration/test_elastic_merger.py::TestElasticMerge::test_elastic_merge_workflow INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_calculate_unprocessed_parts INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_get_delta_df FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_get_multiple_topics_offsets FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_get_table_offsets FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_get_topic_offsets FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_kafka_merge_init_from FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_kafka_merge_workflow FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_merge_to_non_default_db FAILED
tests/integration/test_loader_partitioned.py::TestLoaderPartitioned::test_load_to_non_default_database INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_loader_partitioned.py::TestLoaderPartitioned::test_partitioning FAILED
tests/integration/test_mysql_loader.py::TestMysqlLoader::test_loader INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
INFO:castor.loader:Start exporting from mysql://mysql.docker:60306/castor_test/loader?user=root&password=
FAILED
tests/integration/test_quantum_merger.py::TestQuantumMerger::test_kafka_merge_workflow INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): stage.sr.tubularlabs.net
FAILED
tests/integration/test_quantum_merger.py::TestQuantumMerger::test_kafka_merge_workflow ERROR
tests/integration/test_schema_evolution.py::TestAvroSchemaEvolution::test_add_field INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): stage.sr.tubularlabs.net
FAILED
tests/integration/test_schema_evolution.py::TestAvroSchemaEvolution::test_add_field_modify_schema INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): stage.sr.tubularlabs.net
FAILED
tests/integration/test_schema_evolution.py::TestAvroSchemaEvolution::test_delete_field INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): stage.sr.tubularlabs.net
FAILED
tests/integration/test_schema_evolution.py::TestAvroSchemaEvolution::test_remove_field_modify_schema INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): stage.sr.tubularlabs.net
FAILED
tests/integration/test_schema_evolution.py::TestAvroSchemaEvolution::test_rename_field INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): stage.sr.tubularlabs.net
FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_account_level_values_per_segment INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_account_level_values_per_segment ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_double_ticks_chunked FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_double_ticks_chunked ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_double_ticks_whole FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_double_ticks_whole ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_single_ticks_chunked FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_single_ticks_chunked ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_single_ticks_whole FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_single_ticks_whole ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_bucketing_gids FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_bucketing_gids ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_old_streams_ignored FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_old_streams_ignored ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_primary_fields FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_primary_fields ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_replace_merge_works FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_replace_merge_works ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_same_fetch_time_and_viewers_on_game_switch FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_same_fetch_time_and_viewers_on_game_switch ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_same_fetch_time_and_viewers_on_game_switch_none_game FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_same_fetch_time_and_viewers_on_game_switch_none_game ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_same_fetch_time_same_game FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_same_fetch_time_same_game ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_seconds_watched_for_unequal_time_ranges_between_measurements FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_seconds_watched_for_unequal_time_ranges_between_measurements ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_segment_and_stream_duration FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_segment_and_stream_duration ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_stream_is_segmented_properly FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_stream_is_segmented_properly ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_stream_is_segmented_properly_when_game_is_null FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_stream_is_segmented_properly_when_game_is_null ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_stream_publish_date_fields FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_stream_publish_date_fields ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_viewers_fields FAILED
Stopping castor_5ddeo_kafka.docker_1 ...
Stopping castor_5ddeo_zookeeper.docker_1 ...
Stopping castor_5ddeo_mysql.docker_1 ...
Stopping castor_5ddeo_elasticsearch.docker_1 ...
Stopping castor_5ddeo_elasticsearch.docker_1 ... done
Stopping castor_5ddeo_mysql.docker_1 ... done
Stopping castor_5ddeo_kafka.docker_1 ... done
Stopping castor_5ddeo_zookeeper.docker_1 ... done
Removing castor_5ddeo_kafka.docker_1 ...
Removing castor_5ddeo_zookeeper.docker_1 ...
Removing castor_5ddeo_mysql.docker_1 ...
Removing castor_5ddeo_elasticsearch.docker_1 ...
Removing castor_5ddeo_elasticsearch.docker_1 ... done
Removing castor_5ddeo_zookeeper.docker_1 ... done
Removing castor_5ddeo_mysql.docker_1 ... done
Removing castor_5ddeo_kafka.docker_1 ... done
Removing network castor_5ddeo_default
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_viewers_fields ERROR
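The fixture lifecycle visible in this run (start containers, poll `docker-compose ps` for a '(healthy)' marker, run the tests, then stop and remove everything) can be summarized with a small sketch. The real swissarmy.testutils.dockerfixture code is not part of this log, so the helper below is a hypothetical reconstruction of the polling it logs:

    # Hypothetical sketch of the health-check loop logged above; the command and
    # the '(healthy)' marker are copied from the log, everything else is assumed.
    import subprocess
    import time

    def wait_until_healthy(project, compose_file, service, retries=60, delay=2):
        """Poll `docker-compose ps` until `service` reports '(healthy)'."""
        cmd = ['docker-compose', '-p', project, '-f', compose_file, 'ps']
        for _ in range(retries):
            output = subprocess.check_output(cmd).decode()
            if any(service in line and '(healthy)' in line
                   for line in output.splitlines()):
                return
            time.sleep(delay)  # "Container isn't healthy yet, retrying..."
        raise RuntimeError('{} never became healthy'.format(service))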
==================================== ERRORS ====================================
_______ ERROR at teardown of TestQuantumMerger.test_kafka_merge_workflow _______
a = ('xro856', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro856'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 17 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 34 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 39 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_quantum_merger.TestQuantumMerger testMethod=test_kafka_merge_workflow>
    def tearDown(self):
>       self.spark.sql('DROP TABLE IF EXISTS {}'.format(self.table_name))
tests/integration/test_quantum_merger.py:33:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro856', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
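Every teardown error that follows repeats this exact chain: `SparkSession.sql()` fails to instantiate `HiveSessionStateBuilder` because the isolated Hive client classpath (the jar list in the 'Caused by' above) contains only the resolved connector jars and no hive/hadoop classes, hence `ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState`. Two details in the trace suggest remedies: the message itself points at `spark.sql.hive.metastore.jars`, and the `java.base/` stack frames show the JVM is Java 9+, which Spark 2.2 does not support (Java 8 only). A hedged sketch of both mitigations follows; the config keys are real Spark options, while the JDK path and the decision to combine the two are assumptions:

    # Hypothetical mitigation; none of the values below appear in the log itself.
    import os

    # Spark 2.2 runs only on Java 8; the 'java.base/' stack frames above come
    # from a newer JVM. Point PySpark at a Java 8 home before it launches the JVM.
    os.environ['JAVA_HOME'] = '/path/to/jdk1.8.0'  # placeholder path

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        # Per the hint in the trace: make sure the metastore classpath actually
        # contains the Hive/Hadoop jars ('builtin' uses the ones Spark ships with).
        .config('spark.sql.hive.metastore.jars', 'builtin')
        .enableHiveSupport()
        .getOrCreate()
    )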
__ ERROR at teardown of TestTwitchStreamMerger.test_account_level_values_per_segment __
a = ('xro1015', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1015'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 17 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 34 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 39 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_account_level_values_per_segment>
    def tearDown(self):
>       self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
    '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1015', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
__ ERROR at teardown of TestTwitchStreamMerger.test_ascii_trapezoid_double_ticks_chunked __
a = ('xro1069', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1069'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 17 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 34 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 39 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_ascii_trapezoid_double_ticks_chunked>
    def tearDown(self):
>       self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
    '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1069', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
__ ERROR at teardown of TestTwitchStreamMerger.test_ascii_trapezoid_double_ticks_whole __
a = ('xro1123', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1123'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 17 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 34 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 39 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_ascii_trapezoid_double_ticks_whole>
 def tearDown(self):
> self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
 '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1123', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
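Every teardown in this run dies the same way: SparkSession.sql() lazily builds a HiveSessionStateBuilder, which needs a Hive client, and the isolated client loader cannot find org.apache.hadoop.hive.ql.session.SessionState (from hive-exec) on the classpath it lists above -- none of the Ivy-resolved jars is a Hive jar. The trace's own hint is spark.sql.hive.metastore.jars. A minimal sketch of one way to satisfy it, assuming the config can be set before the first session is created; the 'builtin'/1.2.1 pairing is an assumption, not taken from this project:

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName('castor-integration-tests')
        # 'builtin' uses the Hive 1.2.1 classes bundled with Spark's
        # spark-hive module; the metastore version must match it.
        # (Assumed fix, not taken from this log.)
        .config('spark.sql.hive.metastore.version', '1.2.1')
        .config('spark.sql.hive.metastore.jars', 'builtin')
        .enableHiveSupport()
        .getOrCreate()
    )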
ERROR at teardown of TestTwitchStreamMerger.test_ascii_trapezoid_single_ticks_chunked
a = ('xro1177', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1177'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 17 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 34 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 39 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_ascii_trapezoid_single_ticks_chunked>
 def tearDown(self):
> self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
 '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1177', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
ERROR at teardown of TestTwitchStreamMerger.test_ascii_trapezoid_single_ticks_whole
a = ('xro1231', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
[... same Py4JJavaError / HiveSessionStateBuilder traceback as above, elided ...]
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_ascii_trapezoid_single_ticks_whole>
 def tearDown(self):
>  self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
_______ ERROR at teardown of TestTwitchStreamMerger.test_bucketing_gids ________
a = ('xro1285', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
[... same Py4JJavaError / HiveSessionStateBuilder traceback as above, elided ...]
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_bucketing_gids>
 def tearDown(self):
>  self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
_____ ERROR at teardown of TestTwitchStreamMerger.test_old_streams_ignored _____
a = ('xro1338', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
[... same Py4JJavaError / HiveSessionStateBuilder traceback as above, elided ...]
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_old_streams_ignored>
 def tearDown(self):
>  self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
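What turns each test into a teardown ERROR is that tearDown() unconditionally issues SQL through the already-broken session, so the one classpath problem is reported once per test. A hedged sketch of a more forgiving teardown (hypothetical -- the real tests/integration/test_twitch_streams_merger.py is not shown in this excerpt; self.spark and self.TABLE are its assumed fixtures):

    import unittest
    from pyspark.sql.utils import IllegalArgumentException

    class TestTwitchStreamMerger(unittest.TestCase):  # hypothetical shape
        def tearDown(self):
            try:
                self.spark.catalog_ext.drop_table(self.TABLE)
            except IllegalArgumentException:
                # The Hive session state never came up, so there is nothing
                # to drop; swallowing the error keeps teardown noise from
                # burying the classpath failure reported above.
                pass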
_______ ERROR at teardown of TestTwitchStreamMerger.test_primary_fields ________
a = ('xro1391', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
[... same Py4JJavaError / HiveSessionStateBuilder traceback as above; the excerpt is truncated partway through this block ...]
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 17 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 34 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 39 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_primary_fields>
 def tearDown(self):
> self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
 '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1391', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
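Note: the root cause in the Java stack above is the missing org/apache/hadoop/hive/ql/session/SessionState class, and the error itself points at spark.sql.hive.metastore.jars. A minimal sketch of the fix the message suggests, applied when building the session; the values are illustrative, not taken from this project's config:

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .enableHiveSupport()
        # 'builtin' uses the Hive 1.2.1 jars bundled with the Spark 2.2
        # distribution; alternatively pass a classpath containing your own
        # hive and hadoop jars, as the error message asks.
        .config('spark.sql.hive.metastore.jars', 'builtin')
        .getOrCreate()
    )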
_____ ERROR at teardown of TestTwitchStreamMerger.test_replace_merge_works _____
a = ('xro1444', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1444'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 17 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 34 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 39 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_replace_merge_works>
 def tearDown(self):
> self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
 '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1444', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
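Note: because the Hive session state never initializes, every tearDown that calls drop_table re-raises the same error, which is why each test above gets its own teardown report. A hypothetical defensive tearDown (not the project's actual code) that keeps one root cause from fanning out across the whole run:

    import logging

    def tearDown(self):
        try:
            self.spark.catalog_ext.drop_table(self.TABLE)
        except Exception:
            # The session is already broken; the real failure is reported
            # by the test body, so just log and move on.
            logging.exception('drop_table failed during teardown')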
ERROR at teardown of TestTwitchStreamMerger.test_same_fetch_time_and_viewers_on_game_switch
a = ('xro1497', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1497'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 17 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 34 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 39 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_same_fetch_time_and_viewers_on_game_switch>
 def tearDown(self):
> self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
 '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1497', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
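Note: the sparkly frame at catalog.py:105 shows drop_table() formatting a drop statement and handing it to SparkSession.sql. A rough sketch of the equivalent raw call, assuming the statement is a plain DROP TABLE (the exact text sparkly builds is not visible in this log):

    drop_statement = 'DROP TABLE IF EXISTS'    # assumed wording
    table_name = 'some_database.some_table'    # hypothetical
    spark.sql('{} {}'.format(drop_statement, table_name))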
ERROR at teardown of TestTwitchStreamMerger.test_same_fetch_time_and_viewers_on_game_switch_none_game
a = ('xro1550', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1550'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 16 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 33 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 38 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 39 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_same_fetch_time_and_viewers_on_game_switch_none_game>
 def tearDown(self):
> self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
 '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1550', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
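Note: deco() above pulls the message and stack out of the py4j error with java_exception.toString() and getStackTrace(). The same calls work on any caught Py4JJavaError; a sketch, assuming `err` is one caught from a raw gateway call (pyspark's own wrappers usually convert it first, as the listings above show):

    # `err` is assumed to be a caught py4j.protocol.Py4JJavaError.
    s = err.java_exception.toString()
    stack = '\n\t at '.join(
        frame.toString() for frame in err.java_exception.getStackTrace()
    )
    print(s)
    print(stack)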
__ ERROR at teardown of TestTwitchStreamMerger.test_same_fetch_time_same_game __
a = ('xro1602', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1602'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 16 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 33 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 38 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 39 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_same_fetch_time_same_game>
 def tearDown(self):
> self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
 '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1602', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
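Note: the classpath dump in the error above lists only the ivy-resolved connector jars, with nothing from Hive or Hadoop. A quick sketch for checking what the session was actually launched with, using public SparkConf accessors on an assumed live `spark` session:

    conf = spark.sparkContext.getConf()
    for key in ('spark.jars.packages', 'spark.sql.hive.metastore.jars'):
        print(key, '=', conf.get(key, '<unset>'))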
ERROR at teardown of TestTwitchStreamMerger.test_seconds_watched_for_unequal_time_ranges_between_measurements
a = ('xro1654', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1654'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 16 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 33 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 38 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 39 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_seconds_watched_for_unequal_time_ranges_between_measurements>
 def tearDown(self):
> self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
 '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1654', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
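Every teardown ERROR in this run fails the same way: the JVM cannot load org.apache.hadoop.hive.ql.session.SessionState, because none of the ivy-resolved jars listed in the ClassNotFoundException above contain Hive, while the session was built with the Hive catalog enabled. The first spark.sql(...) call in tearDown therefore dies while instantiating HiveSessionStateBuilder. A minimal sketch of one way out, assuming the tests do not actually need a Hive metastore (the app and table names below are placeholders; a sparkly session would pass the same key through its session options):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName('castor-integration-tests')  # placeholder name
        # Use Spark's in-memory catalog so HiveSessionStateBuilder is
        # never instantiated and no Hive classes are required.
        .config('spark.sql.catalogImplementation', 'in-memory')
        .getOrCreate()
    )

    # DROP TABLE now resolves against the in-memory catalog.
    spark.sql('DROP TABLE IF EXISTS some_db.some_table')  # placeholder table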
_ ERROR at teardown of TestTwitchStreamMerger.test_segment_and_stream_duration _
a = ('xro1706', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1706'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 16 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 33 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 38 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 39 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_segment_and_stream_duration>
 def tearDown(self):
> self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
 '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1706', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
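If the tests do need Hive, the other route is the one the exception message itself suggests: put Hive and Hadoop jars matching your Hive version on the metastore classpath. A sketch assuming Spark 2.2's standard config keys; the version number and jar path are placeholders that must match the Hive build actually installed:

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .enableHiveSupport()
        # Both keys are documented Spark SQL settings; the values are
        # placeholders for the locally installed Hive distribution.
        .config('spark.sql.hive.metastore.version', '1.2.1')
        .config('spark.sql.hive.metastore.jars', '/opt/hive/lib/*')
        .getOrCreate()
    )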
ERROR at teardown of TestTwitchStreamMerger.test_stream_is_segmented_properly _
a = ('xro1758', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1758'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 16 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 33 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 38 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 39 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_stream_is_segmented_properly>
 def tearDown(self):
> self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
 '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1758', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
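A quick way to confirm the diagnosis on a live session, sketched here assuming an already-built session named spark. The check goes through the raw py4j gateway rather than spark.sql, so it works even when the SQL session state itself cannot be built:

    # Is the class from the tracebacks above visible to the driver JVM?
    try:
        spark.sparkContext._jvm.java.lang.Class.forName(
            'org.apache.hadoop.hive.ql.session.SessionState')
        print('Hive classes are on the driver classpath')
    except Exception:
        print('Hive classes are missing, matching the ERRORs above')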
ERROR at teardown of TestTwitchStreamMerger.test_stream_is_segmented_properly_when_game_is_null
a = ('xro1810', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1810'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 16 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 33 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 38 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 39 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_stream_is_segmented_properly_when_game_is_null>
 def tearDown(self):
> self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
 '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1810', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
_ ERROR at teardown of TestTwitchStreamMerger.test_stream_publish_date_fields __
a = ('xro1862', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1862'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 16 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 33 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 38 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 39 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_stream_publish_date_fields>
 def tearDown(self):
> self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
 '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1862', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
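Independent of the classpath fix, every one of these reports comes from the same two lines of tearDown, so a single broken session fans out into one ERROR per test. A tolerant teardown keeps the report readable; this is a minimal sketch, assuming the sparkly catalog_ext.drop_table API shown in the tracebacks:

    import logging

    def tearDown(self):
        try:
            self.spark.catalog_ext.drop_table(self.TABLE)
        except Exception:
            # Log and continue: a teardown failure should not bury the
            # real failure that broke the session in the first place.
            logging.getLogger(__name__).warning(
                'Could not drop %s during teardown', self.TABLE, exc_info=True)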
_______ ERROR at teardown of TestTwitchStreamMerger.test_viewers_fields ________
a = ('xro1914', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1914'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 16 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 33 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 38 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 39 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_viewers_fields>
 def tearDown(self):
> self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
 '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1914', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
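Every Hive-related failure in this run bottoms out in the same bootstrap error shown above: the isolated Hive client cannot load org.apache.hadoop.hive.ql.session.SessionState from the classpath it enumerates (the Ivy-resolved connector jars, none of which is a Hive jar), so HiveSessionStateBuilder never comes up. A minimal sketch of one possible workaround, assuming the harness builds its own SparkSession (the builder options below are illustrative, not the suite's actual configuration):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName('castor-integration-tests')  # illustrative name
        # Ask Spark to resolve a matching Hive metastore client from Maven
        # instead of relying on Hive jars already being on the classpath.
        .config('spark.sql.hive.metastore.version', '1.2.1')
        .config('spark.sql.hive.metastore.jars', 'maven')
        # Or, if the tests can live without a Hive metastore entirely:
        # .config('spark.sql.catalogImplementation', 'in-memory')
        .enableHiveSupport()
        .getOrCreate()
    )

Either route avoids depending on the jar list printed in the log, which contains Kafka, Cassandra and Elasticsearch connectors but no spark-hive artifacts.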
=================================== FAILURES ===================================
_________ TestActivitiesMergerWorkflow.test_abort_with_corrupted_delta _________
self = <tests.integration.test_activities_merger.TestActivitiesMergerWorkflow testMethod=test_abort_with_corrupted_delta>
 def setUp(self):
 super(TestActivitiesMergerWorkflow, self).setUp()
 
 install_databus_reader_and_writer()
 self.table_name = 'test_activities'
 self.topic = 'test.castor.activities.r1'
 self.schemas_root = absolute_path(__file__, 'resources', 'databus')
 
 # publish schema to schema-registry
 sr_remote = RemoteSchemaRegistryClient(self.sr_host)
 idl = IDLUtils(sr_remote)
> sr_local = LocalSchemaRegistryClient(self.schemas_root, idl)
tests/integration/test_activities_merger.py:41:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../pkg_avroplane/avroplane/fs.py:69: in __init__
 self._cache = self.discover_schemas(base_path, idl)
../pkg_avroplane/avroplane/fs.py:137: in discover_schemas
 schema = idl.to_json(schema_path)
../pkg_avroplane/avroplane/fs.py:545: in to_json
 topic, key_schema, value_schema = self.idl_to_topic_schemas(file_path)
../pkg_avroplane/avroplane/fs.py:500: in idl_to_topic_schemas
 self._locate_avrotools(),
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'avroplane.fs.IDLUtils'>
 @classmethod
 def _locate_avrotools(cls):
 """Tried different places to find avroplane-tool.jar."""
 paths = [
 '/opt/tubular/lib/avro-tools.jar',
 os.path.join(os.path.expanduser("~"), '.avroplane', 'avro-tools.jar'),
 ]
 for path in paths:
 if os.path.exists(path):
 return path
 
 raise NotImplementedError('Cannot find avro-tools.jar archive locally '
 'and failed to download. '
> 'Expected location is one of: {}'.format(paths))
E NotImplementedError: Cannot find avro-tools.jar archive locally and failed to download. Expected location is one of: ['/opt/tubular/lib/avro-tools.jar', '/Users/pavloskliar/.avroplane/avro-tools.jar']
../pkg_avroplane/avroplane/fs.py:631: NotImplementedError
__________________ TestActivitiesMergerWorkflow.test_workflow __________________
self = <tests.integration.test_activities_merger.TestActivitiesMergerWorkflow testMethod=test_workflow>
 def setUp(self):
 super(TestActivitiesMergerWorkflow, self).setUp()
 
 install_databus_reader_and_writer()
 self.table_name = 'test_activities'
 self.topic = 'test.castor.activities.r1'
 self.schemas_root = absolute_path(__file__, 'resources', 'databus')
 
 # publish schema to schema-registry
 sr_remote = RemoteSchemaRegistryClient(self.sr_host)
 idl = IDLUtils(sr_remote)
> sr_local = LocalSchemaRegistryClient(self.schemas_root, idl)
tests/integration/test_activities_merger.py:41:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../pkg_avroplane/avroplane/fs.py:69: in __init__
 self._cache = self.discover_schemas(base_path, idl)
../pkg_avroplane/avroplane/fs.py:137: in discover_schemas
 schema = idl.to_json(schema_path)
../pkg_avroplane/avroplane/fs.py:545: in to_json
 topic, key_schema, value_schema = self.idl_to_topic_schemas(file_path)
../pkg_avroplane/avroplane/fs.py:500: in idl_to_topic_schemas
 self._locate_avrotools(),
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'avroplane.fs.IDLUtils'>
 @classmethod
 def _locate_avrotools(cls):
 """Tried different places to find avroplane-tool.jar."""
 paths = [
 '/opt/tubular/lib/avro-tools.jar',
 os.path.join(os.path.expanduser("~"), '.avroplane', 'avro-tools.jar'),
 ]
 for path in paths:
 if os.path.exists(path):
 return path
 
 raise NotImplementedError('Cannot find avro-tools.jar archive locally '
 'and failed to download. '
> 'Expected location is one of: {}'.format(paths))
E NotImplementedError: Cannot find avro-tools.jar archive locally and failed to download. Expected location is one of: ['/opt/tubular/lib/avro-tools.jar', '/Users/pavloskliar/.avroplane/avro-tools.jar']
../pkg_avroplane/avroplane/fs.py:631: NotImplementedError
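Both setUp failures above die in avroplane's _locate_avrotools(), which only probes the two fixed paths listed in the exception message. A minimal sketch of pre-seeding the per-user path before running the suite, assuming network access to Maven Central (the avro-tools version is an illustrative pick, not one pinned by this repository):

    import os
    import urllib.request

    # Hypothetical source artifact; any avro-tools build that can compile
    # the suite's IDL schemas would do.
    jar_url = ('https://repo1.maven.org/maven2/org/apache/avro/'
               'avro-tools/1.8.2/avro-tools-1.8.2.jar')
    target = os.path.join(os.path.expanduser('~'), '.avroplane', 'avro-tools.jar')
    os.makedirs(os.path.dirname(target), exist_ok=True)
    urllib.request.urlretrieve(jar_url, target)

With the jar in place, _locate_avrotools() returns early and the NotImplementedError path above is never reached.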
____________________ TestBarleyMashMergerWorkflow.test_base ____________________
a = ('xro41', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro41'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_barley_mash_merger.TestBarleyMashMergerWorkflow testMethod=test_base>
 def setUp(self):
 super().setUp()
 
 output_table_schema = (
 'date:string,'
 'domain:string,'
 'device_id:string,'
 'url:string,'
 'click_type:string,'
 'device_platform:string,'
 'country_code:string,'
 'gender:string,'
 'age:string,'
 'timestamp:timestamp,'
 'time_spent:long'
 )
 
 warehouse_path = tempfile.mkdtemp()
> empty_df = self.spark.createDataFrame([], output_table_schema)
tests/integration/test_barley_mash_merger.py:42:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:539: in createDataFrame
 jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro41', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
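One detail worth noting in these traces: the java.base/ prefix on frames such as java.base/java.lang.Thread.run(Thread.java:834) means the py4j gateway JVM is Java 9 or newer, while Spark 2.2 is only supported on Java 8, which is exactly where its builtin Hive client bootstrap breaks. A minimal sketch of pinning the gateway to a Java 8 JDK before the session is created, assuming one is installed at the (illustrative) path below; pyspark starts the JVM through spark-submit, which honors JAVA_HOME:

    import os

    # Illustrative macOS location; point this at whatever Java 8 JDK is
    # actually installed, before any SparkSession is built.
    os.environ['JAVA_HOME'] = (
        '/Library/Java/JavaVirtualMachines/jdk1.8.0_202.jdk/Contents/Home'
    )

    from pyspark.sql import SparkSession
    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

The remaining TestBarleyMashMergerWorkflow failures below repeat this root cause verbatim, differing only in the py4j object ids (xro75, xro109, xro143, xro177).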
______________ TestBarleyMashMergerWorkflow.test_consecutive_runs ______________
a = ('xro75', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro75'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_barley_mash_merger.TestBarleyMashMergerWorkflow testMethod=test_consecutive_runs>
 def setUp(self):
 super().setUp()
 
 output_table_schema = (
 'date:string,'
 'domain:string,'
 'device_id:string,'
 'url:string,'
 'click_type:string,'
 'device_platform:string,'
 'country_code:string,'
 'gender:string,'
 'age:string,'
 'timestamp:timestamp,'
 'time_spent:long'
 )
 
 warehouse_path = tempfile.mkdtemp()
> empty_df = self.spark.createDataFrame([], output_table_schema)
tests/integration/test_barley_mash_merger.py:42:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:539: in createDataFrame
 jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro75', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
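For readers tracing the Python side: the deco() wrapper quoted above is pyspark's translation layer, so the Java-side java.lang.IllegalArgumentException reaches test code as pyspark.sql.utils.IllegalArgumentException. A minimal sketch of guarding cleanup against it (the table name is illustrative):

    from pyspark.sql.utils import IllegalArgumentException

    try:
        spark.sql('DROP TABLE IF EXISTS some_test_table')
    except IllegalArgumentException as exc:
        # A broken Hive session during teardown should not mask the
        # failure raised by the test body itself.
        print('skipping table drop, Hive session unavailable: {}'.format(exc))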
________________ TestBarleyMashMergerWorkflow.test_data_replace ________________
a = ('xro109', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro109'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_barley_mash_merger.TestBarleyMashMergerWorkflow testMethod=test_data_replace>
 def setUp(self):
 super().setUp()
 
 output_table_schema = (
 'date:string,'
 'domain:string,'
 'device_id:string,'
 'url:string,'
 'click_type:string,'
 'device_platform:string,'
 'country_code:string,'
 'gender:string,'
 'age:string,'
 'timestamp:timestamp,'
 'time_spent:long'
 )
 
 warehouse_path = tempfile.mkdtemp()
> empty_df = self.spark.createDataFrame([], output_table_schema)
tests/integration/test_barley_mash_merger.py:42:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:539: in createDataFrame
 jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro109', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
_____________ TestBarleyMashMergerWorkflow.test_future_process_to ______________
a = ('xro143', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro143'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_barley_mash_merger.TestBarleyMashMergerWorkflow testMethod=test_future_process_to>
 def setUp(self):
 super().setUp()
 
 output_table_schema = (
 'date:string,'
 'domain:string,'
 'device_id:string,'
 'url:string,'
 'click_type:string,'
 'device_platform:string,'
 'country_code:string,'
 'gender:string,'
 'age:string,'
 'timestamp:timestamp,'
 'time_spent:long'
 )
 
 warehouse_path = tempfile.mkdtemp()
> empty_df = self.spark.createDataFrame([], output_table_schema)
tests/integration/test_barley_mash_merger.py:42:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:539: in createDataFrame
 jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro143', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
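All six TestBarleyMashMergerWorkflow failures in this run reduce to this same setUp error: Spark cannot instantiate HiveSessionStateBuilder because org.apache.hadoop.hive.ql.session.SessionState is missing from the Ivy-resolved classpath printed in the traceback (it lists Kafka, Elasticsearch, Cassandra and MySQL jars, but no hive-exec or hadoop jar). A minimal sketch of one possible fix, assuming the test harness is free to build its own SparkSession; the app name and the 1.2.1 metastore version below are illustrative assumptions, not values taken from this log:

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName('castor-tests')  # hypothetical app name
        .enableHiveSupport()
        # Have Spark resolve a matching Hive client from Maven instead of
        # expecting hive/hadoop jars to already be on the driver classpath.
        .config('spark.sql.hive.metastore.version', '1.2.1')
        .config('spark.sql.hive.metastore.jars', 'maven')
        .getOrCreate()
    )

If the tests do not actually need a Hive metastore, dropping enableHiveSupport() (equivalently, setting spark.sql.catalogImplementation to in-memory) avoids building the Hive client altogether.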
____________ TestBarleyMashMergerWorkflow.test_incorrect_csv_schema ____________
(failure identical to TestBarleyMashMergerWorkflow.test_future_process_to above: setUp raises Py4JJavaError from self.spark.createDataFrame at tests/integration/test_barley_mash_merger.py:42)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
______________ TestBarleyMashMergerWorkflow.test_init_from_passed ______________
(failure identical to TestBarleyMashMergerWorkflow.test_future_process_to above: setUp raises Py4JJavaError from self.spark.createDataFrame at tests/integration/test_barley_mash_merger.py:42)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
__________________ TestBarleyMashMergerWorkflow.test_no_table __________________
(failure identical to TestBarleyMashMergerWorkflow.test_future_process_to above: setUp raises Py4JJavaError from self.spark.createDataFrame at tests/integration/test_barley_mash_merger.py:42)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
______ TestBarleyMashMergerWorkflow.test_recovery_checkpoint_prop_passed _______
(failure identical to TestBarleyMashMergerWorkflow.test_future_process_to above: setUp raises Py4JJavaError from self.spark.createDataFrame at tests/integration/test_barley_mash_merger.py:42)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
____________ TestBarleyMashMergerWorkflow.test_specified_checkpoint ____________
a = ('xro313', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro313'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_barley_mash_merger.TestBarleyMashMergerWorkflow testMethod=test_specified_checkpoint>
 def setUp(self):
 super().setUp()
 
 output_table_schema = (
 'date:string,'
 'domain:string,'
 'device_id:string,'
 'url:string,'
 'click_type:string,'
 'device_platform:string,'
 'country_code:string,'
 'gender:string,'
 'age:string,'
 'timestamp:timestamp,'
 'time_spent:long'
 )
 
 warehouse_path = tempfile.mkdtemp()
> empty_df = self.spark.createDataFrame([], output_table_schema)
tests/integration/test_barley_mash_merger.py:42:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:539: in createDataFrame
 jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro313', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
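
Every Hive-related failure in this run bottoms out in the same root cause: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState, i.e. the Hive client classes are missing from the classpath Spark assembles for its metastore client (the classpath dump above lists Kafka, Elasticsearch and Cassandra jars, but no Hive). The java.base/ frames and Thread.java:834 also show the suite is running on a JDK 9+ runtime, which Spark 2.2 does not support (it targets Java 8). A minimal sketch of pinning the metastore jars explicitly; spark.sql.hive.metastore.jars is named in the error text itself, and the /opt/hive/lib path and Hive version here are illustrative assumptions:

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName('castor-integration-tests')
    # Point the isolated Hive client at a concrete Hive build instead of
    # whatever jars Spark happens to find; version and path are assumptions
    # and must match each other.
    .config('spark.sql.hive.metastore.version', '1.2.1')
    .config('spark.sql.hive.metastore.jars', '/opt/hive/lib/*')
    .enableHiveSupport()
    .getOrCreate()
)

Running the suite under Java 8 instead would let Spark 2.2 fall back to its built-in Hive 1.2.1 client with no extra configuration.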
___________________ TestDatabusMergerWorkflow.test_workflow ____________________
self = <tests.integration.test_databus_merger.TestDatabusMergerWorkflow testMethod=test_workflow>
 def setUp(self):
 super(TestDatabusMergerWorkflow, self).setUp()
 
 install_databus_reader_and_writer()
 self.table_name = 'test_castor_databus'
 self.topic = 'test.castor.databus.r2'
 self.topic_no_merge = 'test.castor.databus.no.merge.r2'
 self.topic_arrays = 'test.castor.databus.arrays.r2'
 self.schemas_root = absolute_path(__file__, 'resources', 'databus')
 
 # publish schema to schema-registry
 sr_remote = RemoteSchemaRegistryClient(self.sr_host)
 idl = IDLUtils(sr_remote)
> sr_local = LocalSchemaRegistryClient(self.schemas_root, idl)
tests/integration/test_databus_merger.py:41:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../pkg_avroplane/avroplane/fs.py:69: in __init__
 self._cache = self.discover_schemas(base_path, idl)
../pkg_avroplane/avroplane/fs.py:137: in discover_schemas
 schema = idl.to_json(schema_path)
../pkg_avroplane/avroplane/fs.py:545: in to_json
 topic, key_schema, value_schema = self.idl_to_topic_schemas(file_path)
../pkg_avroplane/avroplane/fs.py:500: in idl_to_topic_schemas
 self._locate_avrotools(),
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'avroplane.fs.IDLUtils'>
 @classmethod
 def _locate_avrotools(cls):
 """Tried different places to find avroplane-tool.jar."""
 paths = [
 '/opt/tubular/lib/avro-tools.jar',
 os.path.join(os.path.expanduser("~"), '.avroplane', 'avro-tools.jar'),
 ]
 for path in paths:
 if os.path.exists(path):
 return path
 
 raise NotImplementedError('Cannot find avro-tools.jar archive locally '
 'and failed to download. '
> 'Expected location is one of: {}'.format(paths))
E NotImplementedError: Cannot find avro-tools.jar archive locally and failed to download. Expected location is one of: ['/opt/tubular/lib/avro-tools.jar', '/Users/pavloskliar/.avroplane/avro-tools.jar']
../pkg_avroplane/avroplane/fs.py:631: NotImplementedError
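
The NotImplementedError above is raised before any Spark code runs: _locate_avrotools() probes exactly two fixed paths and, despite the message, never actually downloads anything. A minimal bootstrap sketch that places the jar at the second probed location; the avro-tools version is an assumption, pick whichever matches your Avro IDL files:

import os
import urllib.request

# Fetch avro-tools from Maven Central into ~/.avroplane/avro-tools.jar,
# the second path probed by avroplane.fs.IDLUtils._locate_avrotools().
url = ('https://repo1.maven.org/maven2/org/apache/avro/'
       'avro-tools/1.8.2/avro-tools-1.8.2.jar')
dest_dir = os.path.join(os.path.expanduser('~'), '.avroplane')
os.makedirs(dest_dir, exist_ok=True)
urllib.request.urlretrieve(url, os.path.join(dest_dir, 'avro-tools.jar'))

The three remaining TestDatabusMergerWorkflow setUp failures below are this same lookup failing once per test method.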
______________ TestDatabusMergerWorkflow.test_workflow_no_merging ______________
self = <tests.integration.test_databus_merger.TestDatabusMergerWorkflow testMethod=test_workflow_no_merging>
 def setUp(self):
 super(TestDatabusMergerWorkflow, self).setUp()
 
 install_databus_reader_and_writer()
 self.table_name = 'test_castor_databus'
 self.topic = 'test.castor.databus.r2'
 self.topic_no_merge = 'test.castor.databus.no.merge.r2'
 self.topic_arrays = 'test.castor.databus.arrays.r2'
 self.schemas_root = absolute_path(__file__, 'resources', 'databus')
 
 # publish schema to schema-registry
 sr_remote = RemoteSchemaRegistryClient(self.sr_host)
 idl = IDLUtils(sr_remote)
> sr_local = LocalSchemaRegistryClient(self.schemas_root, idl)
tests/integration/test_databus_merger.py:41:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../pkg_avroplane/avroplane/fs.py:69: in __init__
 self._cache = self.discover_schemas(base_path, idl)
../pkg_avroplane/avroplane/fs.py:137: in discover_schemas
 schema = idl.to_json(schema_path)
../pkg_avroplane/avroplane/fs.py:545: in to_json
 topic, key_schema, value_schema = self.idl_to_topic_schemas(file_path)
../pkg_avroplane/avroplane/fs.py:500: in idl_to_topic_schemas
 self._locate_avrotools(),
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'avroplane.fs.IDLUtils'>
 @classmethod
 def _locate_avrotools(cls):
 """Tried different places to find avroplane-tool.jar."""
 paths = [
 '/opt/tubular/lib/avro-tools.jar',
 os.path.join(os.path.expanduser("~"), '.avroplane', 'avro-tools.jar'),
 ]
 for path in paths:
 if os.path.exists(path):
 return path
 
 raise NotImplementedError('Cannot find avro-tools.jar archive locally '
 'and failed to download. '
> 'Expected location is one of: {}'.format(paths))
E NotImplementedError: Cannot find avro-tools.jar archive locally and failed to download. Expected location is one of: ['/opt/tubular/lib/avro-tools.jar', '/Users/pavloskliar/.avroplane/avro-tools.jar']
../pkg_avroplane/avroplane/fs.py:631: NotImplementedError
__________ TestDatabusMergerWorkflow.test_workflow_with_array_unpack ___________
self = <tests.integration.test_databus_merger.TestDatabusMergerWorkflow testMethod=test_workflow_with_array_unpack>
 def setUp(self):
 super(TestDatabusMergerWorkflow, self).setUp()
 
 install_databus_reader_and_writer()
 self.table_name = 'test_castor_databus'
 self.topic = 'test.castor.databus.r2'
 self.topic_no_merge = 'test.castor.databus.no.merge.r2'
 self.topic_arrays = 'test.castor.databus.arrays.r2'
 self.schemas_root = absolute_path(__file__, 'resources', 'databus')
 
 # publish schema to schema-registry
 sr_remote = RemoteSchemaRegistryClient(self.sr_host)
 idl = IDLUtils(sr_remote)
> sr_local = LocalSchemaRegistryClient(self.schemas_root, idl)
tests/integration/test_databus_merger.py:41:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../pkg_avroplane/avroplane/fs.py:69: in __init__
 self._cache = self.discover_schemas(base_path, idl)
../pkg_avroplane/avroplane/fs.py:137: in discover_schemas
 schema = idl.to_json(schema_path)
../pkg_avroplane/avroplane/fs.py:545: in to_json
 topic, key_schema, value_schema = self.idl_to_topic_schemas(file_path)
../pkg_avroplane/avroplane/fs.py:500: in idl_to_topic_schemas
 self._locate_avrotools(),
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'avroplane.fs.IDLUtils'>
 @classmethod
 def _locate_avrotools(cls):
 """Tried different places to find avroplane-tool.jar."""
 paths = [
 '/opt/tubular/lib/avro-tools.jar',
 os.path.join(os.path.expanduser("~"), '.avroplane', 'avro-tools.jar'),
 ]
 for path in paths:
 if os.path.exists(path):
 return path
 
 raise NotImplementedError('Cannot find avro-tools.jar archive locally '
 'and failed to download. '
> 'Expected location is one of: {}'.format(paths))
E NotImplementedError: Cannot find avro-tools.jar archive locally and failed to download. Expected location is one of: ['/opt/tubular/lib/avro-tools.jar', '/Users/pavloskliar/.avroplane/avro-tools.jar']
../pkg_avroplane/avroplane/fs.py:631: NotImplementedError
_________ TestDatabusMergerWorkflow.test_workflow_with_corrupted_delta _________
self = <tests.integration.test_databus_merger.TestDatabusMergerWorkflow testMethod=test_workflow_with_corrupted_delta>
 def setUp(self):
 super(TestDatabusMergerWorkflow, self).setUp()
 
 install_databus_reader_and_writer()
 self.table_name = 'test_castor_databus'
 self.topic = 'test.castor.databus.r2'
 self.topic_no_merge = 'test.castor.databus.no.merge.r2'
 self.topic_arrays = 'test.castor.databus.arrays.r2'
 self.schemas_root = absolute_path(__file__, 'resources', 'databus')
 
 # publish schema to schema-registry
 sr_remote = RemoteSchemaRegistryClient(self.sr_host)
 idl = IDLUtils(sr_remote)
> sr_local = LocalSchemaRegistryClient(self.schemas_root, idl)
tests/integration/test_databus_merger.py:41:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../pkg_avroplane/avroplane/fs.py:69: in __init__
 self._cache = self.discover_schemas(base_path, idl)
../pkg_avroplane/avroplane/fs.py:137: in discover_schemas
 schema = idl.to_json(schema_path)
../pkg_avroplane/avroplane/fs.py:545: in to_json
 topic, key_schema, value_schema = self.idl_to_topic_schemas(file_path)
../pkg_avroplane/avroplane/fs.py:500: in idl_to_topic_schemas
 self._locate_avrotools(),
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'avroplane.fs.IDLUtils'>
 @classmethod
 def _locate_avrotools(cls):
 """Tried different places to find avroplane-tool.jar."""
 paths = [
 '/opt/tubular/lib/avro-tools.jar',
 os.path.join(os.path.expanduser("~"), '.avroplane', 'avro-tools.jar'),
 ]
 for path in paths:
 if os.path.exists(path):
 return path
 
 raise NotImplementedError('Cannot find avro-tools.jar archive locally '
 'and failed to download. '
> 'Expected location is one of: {}'.format(paths))
E NotImplementedError: Cannot find avro-tools.jar archive locally and failed to download. Expected location is one of: ['/opt/tubular/lib/avro-tools.jar', '/Users/pavloskliar/.avroplane/avro-tools.jar']
../pkg_avroplane/avroplane/fs.py:631: NotImplementedError
_____________________ TestDatabusMergerWorkflow.test_base ______________________
a = ('xro335', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro335'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o27', name = 'read'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, and string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)
E at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)
E at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_databus_s3_merger.TestDatabusMergerWorkflow testMethod=test_base>
 def setUp(self):
 super(TestDatabusMergerWorkflow, self).setUp()
> test_df = self.spark.read.parquet(self.input_data_path)
tests/integration/test_databus_s3_merger.py:23:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:580: in read
 return DataFrameReader(self._wrapped)
../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py:70: in __init__
 self._jreader = spark._ssql_ctx.read()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro335', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
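
Each of these test_databus_s3_merger failures is reported twice because pyspark's deco wrapper in pyspark/sql/utils.py, shown above, catches the raw Py4JJavaError and re-raises it as a typed pyspark exception: the Java trace comes first, the translated pyspark.sql.utils.IllegalArgumentException second. A hedged sketch of asserting on the translated type, assuming the broken metastore classpath from this run (the test name and path are illustrative):

import pytest
from pyspark.sql import SparkSession
from pyspark.sql.utils import IllegalArgumentException

def test_session_state_error_is_translated():
    # With the Hive client jars missing, merely building a DataFrameReader
    # triggers session-state instantiation and fails; pyspark surfaces it
    # as IllegalArgumentException rather than a bare Py4JJavaError.
    spark = SparkSession.builder.getOrCreate()
    with pytest.raises(IllegalArgumentException):
        spark.read.parquet('/tmp/anything')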
_______ TestDatabusMergerWorkflow.test_checkpoints_for_different_topics ________
a = ('xro357', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro357'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o27', name = 'read'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, and string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)
E at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)
E at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_databus_s3_merger.TestDatabusMergerWorkflow testMethod=test_checkpoints_for_different_topics>
 def setUp(self):
 super(TestDatabusMergerWorkflow, self).setUp()
> test_df = self.spark.read.parquet(self.input_data_path)
tests/integration/test_databus_s3_merger.py:23:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:580: in read
 return DataFrameReader(self._wrapped)
../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py:70: in __init__
 self._jreader = spark._ssql_ctx.read()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro357', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
__________________ TestDatabusMergerWorkflow.test_empty_delta __________________
a = ('xro379', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro379'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o27', name = 'read'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, and string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)
E at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)
E at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_databus_s3_merger.TestDatabusMergerWorkflow testMethod=test_empty_delta>
 def setUp(self):
 super(TestDatabusMergerWorkflow, self).setUp()
> test_df = self.spark.read.parquet(self.input_data_path)
tests/integration/test_databus_s3_merger.py:23:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:580: in read
 return DataFrameReader(self._wrapped)
../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py:70: in __init__
 self._jreader = spark._ssql_ctx.read()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro379', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
___________ TestDatabusMergerWorkflow.test_empty_delta_after_filter ____________
a = ('xro401', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro401'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o27', name = 'read'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, and string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)
E at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)
E at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_databus_s3_merger.TestDatabusMergerWorkflow testMethod=test_empty_delta_after_filter>
 def setUp(self):
 super(TestDatabusMergerWorkflow, self).setUp()
> test_df = self.spark.read.parquet(self.input_data_path)
tests/integration/test_databus_s3_merger.py:23:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:580: in read
 return DataFrameReader(self._wrapped)
../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py:70: in __init__
 self._jreader = spark._ssql_ctx.read()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro401', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
______________ TestDatabusMergerWorkflow.test_recovery_checkpoint ______________
a = ('xro423', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro423'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o27', name = 'read'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)
E at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)
E at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_databus_s3_merger.TestDatabusMergerWorkflow testMethod=test_recovery_checkpoint>
 def setUp(self):
 super(TestDatabusMergerWorkflow, self).setUp()
> test_df = self.spark.read.parquet(self.input_data_path)
tests/integration/test_databus_s3_merger.py:23:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:580: in read
 return DataFrameReader(self._wrapped)
../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py:70: in __init__
 self._jreader = spark._ssql_ctx.read()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro423', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
_________ TestDatabusMergerWorkflow.test_recovery_checkpoint_and_step __________
a = ('xro445', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro445'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o27', name = 'read'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)
E at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)
E at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_databus_s3_merger.TestDatabusMergerWorkflow testMethod=test_recovery_checkpoint_and_step>
 def setUp(self):
 super(TestDatabusMergerWorkflow, self).setUp()
> test_df = self.spark.read.parquet(self.input_data_path)
tests/integration/test_databus_s3_merger.py:23:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:580: in read
 return DataFrameReader(self._wrapped)
../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py:70: in __init__
 self._jreader = spark._ssql_ctx.read()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro445', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
_________ TestDatabusMergerWorkflow.test_recovery_checkpoint_disabled __________
a = ('xro467', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro467'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o27', name = 'read'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)
E at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)
E at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_databus_s3_merger.TestDatabusMergerWorkflow testMethod=test_recovery_checkpoint_disabled>
 def setUp(self):
 super(TestDatabusMergerWorkflow, self).setUp()
> test_df = self.spark.read.parquet(self.input_data_path)
tests/integration/test_databus_s3_merger.py:23:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:580: in read
 return DataFrameReader(self._wrapped)
../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py:70: in __init__
 self._jreader = spark._ssql_ctx.read()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro467', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
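Note: all four TestDatabusMergerWorkflow tests above abort in the shared setUp (tests/integration/test_databus_s3_merger.py:23), so none of the merger logic is ever exercised; fixing the Hive classpath once fixes the whole class. A hypothetical fail-fast probe, assuming self.spark is a live SparkSession, that would surface the problem once with a clearer message before the tests run:

    def assert_hive_available(spark):
        # Any metastore round-trip forces the Hive client to initialize.
        try:
            spark.sql("SHOW DATABASES").collect()
        except Exception as exc:
            raise RuntimeError(
                "Hive metastore client is unusable; "
                "check spark.sql.hive.metastore.jars"
            ) from exc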
__________________ TestDatabusS3DeltaSource.test_include_last __________________
a = ('xro489', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro489'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 17 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 34 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 39 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_databus_s3_merger.TestDatabusS3DeltaSource testMethod=test_include_last>
 def setUp(self):
 super().setUp()
 
> self.spark.sql('CREATE DATABASE IF NOT EXISTS test')
tests/integration/test_databus_s3_merger.py:530:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro489', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
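Note: TestDatabusS3DeltaSource trips over the same missing class, here while running CREATE DATABASE IF NOT EXISTS test in setUp. If these tests do not actually need a persistent metastore, one alternative (sketched below) is to build the test session with the in-memory catalog so the Hive client is never instantiated; spark.sql.catalogImplementation is a static config and must be set before the first SparkSession exists, and the master and app name here are illustrative:

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master("local[2]")
        .appName("castor-integration-tests")
        .config("spark.sql.catalogImplementation", "in-memory")
        .getOrCreate()
    )
    # The database now lives in the session-scoped in-memory catalog.
    spark.sql("CREATE DATABASE IF NOT EXISTS test")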
____________ TestDatabusS3DeltaSource.test_include_last_and_overlap ____________
a = ('xro509', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro509'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 17 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 34 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 39 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_databus_s3_merger.TestDatabusS3DeltaSource testMethod=test_include_last_and_overlap>
 def setUp(self):
 super().setUp()
 
> self.spark.sql('CREATE DATABASE IF NOT EXISTS test')
tests/integration/test_databus_s3_merger.py:530:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro509', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
____________________ TestDatabusS3DeltaSource.test_overlap _____________________
a = ('xro529', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro529'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 17 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 34 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 39 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_databus_s3_merger.TestDatabusS3DeltaSource testMethod=test_overlap>
 def setUp(self):
 super().setUp()
 
> self.spark.sql('CREATE DATABASE IF NOT EXISTS test')
tests/integration/test_databus_s3_merger.py:530:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro529', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
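
The deco() wrapper reproduced in each traceback is pyspark's py4j translation layer: it matches the Java exception's class-name prefix and re-raises a Python-side equivalent, which is why these tests surface pyspark.sql.utils.IllegalArgumentException rather than the raw Py4JJavaError. A small sketch of asserting on the translated type, assuming pyspark 2.x's CapturedException attributes (.desc for the message, .stackTrace for the joined Java frames):

    # Sketch: catch the translated exception instead of py4j's raw error.
    from pyspark.sql.utils import IllegalArgumentException

    try:
        spark.sql('CREATE DATABASE IF NOT EXISTS test')
    except IllegalArgumentException as e:
        assert 'HiveSessionStateBuilder' in e.desc   # message minus the Java class prefix
        print(e.stackTrace.splitlines()[0])          # top Java frame
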
________________________ TestElasticLoader.test_simple _________________________
a = ('xro549', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro549'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o27', name = 'read'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)
E at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)
E at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_elastic_loader.TestElasticLoader testMethod=test_simple>
 def test_simple(self):
> self.loader.run()
tests/integration/test_elastic_loader.py:38:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
castor/loader.py:87: in run
 df = self._spark.read_ext.by_url(self._input_path)
../../pypi__sparkly_2_5_1/sparkly/reader.py:108: in by_url
 return resolver(parsed_url, parsed_qs)
../../pypi__sparkly_2_5_1/sparkly/reader.py:340: in _resolve_elastic
 **kwargs
../../pypi__sparkly_2_5_1/sparkly/reader.py:180: in elastic
 return self._basic_read(reader_options, options, parallelism)
../../pypi__sparkly_2_5_1/sparkly/reader.py:291: in _basic_read
 df = self._spark.read.load(**reader_options)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:580: in read
 return DataFrameReader(self._wrapped)
../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py:70: in __init__
 self._jreader = spark._ssql_ctx.read()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro549', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
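
Before test_simple ever reaches the broken session state, castor/loader.py:87 resolves its input through sparkly's URL-based reader. A rough usage sketch under the sparkly 2.5 API; the host, index, and type below are invented placeholders, and the connector coordinate is the one Ivy resolved at the top of this log:

    # Sketch of the read path from the traceback (sparkly 2.5, pyspark 2.2).
    from sparkly import SparklySession

    class TestSession(SparklySession):
        # elasticsearch-spark connector, as resolved earlier in this session
        packages = ['org.elasticsearch:elasticsearch-spark-20_2.11:6.0.0']

    spark = TestSession()
    # placeholder host/index/type; the query string mirrors these tests
    df = spark.read_ext.by_url(
        'elastic://localhost:9200/castor/docs?q=*:*&scroll_field=import_date'
    )
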
_________________ TestElasticMerge.test_elastic_merge_workflow _________________
a = ('xro571', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro571'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'currentDatabase'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o28.currentDatabase.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E at org.apache.spark.sql.internal.CatalogImpl.currentDatabase(CatalogImpl.scala:57)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 18 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 35 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 40 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_elastic_merger.TestElasticMerge testMethod=test_elastic_merge_workflow>
 def test_elastic_merge_workflow(self):
 self.base_s3 = '/tmp/tubular-tests/castor/delta/{}/'.format(uuid.uuid4().hex)
 
 query_template = 'elastic://{host}:{port}/{es_index}/{es_type}'.format(
 host=self.ELASTIC_HOST,
 port=self.ELASTIC_PORT,
 es_index=self.ELASTIC_INDEX,
 es_type=self.ELASTIC_TYPE,
 )
 query_template += '?q={query}&scroll_field=import_date'
 
 merger = Merger(
 spark=self.spark,
 delta_source=ElasticDeltaSource,
 delta_splitter=(
 'by_expression|'
 'nvl(date_format(publish_date, "y"), "-undefined")'
 ),
 merge_type='replace',
 input_path=query_template,
 output_delta_path=os.path.join(self.base_s3, 'delta=1'),
 output_schema={
 'doc_id': "_metadata['_id']",
 'views': 'views',
 '_metadata': '_metadata',
 'import_date': 'import_date',
 'publish_date': 'publish_date',
 'publish_month': "nvl(date_format(publish_date, 'y-MM'), '-undefined')",
 },
 output_unique_by=['doc_id'],
 output_resolve_by=None,
 output_partition_by=['publish_month'],
 output_table='castor_elastic',
 warehouse_path='/tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex),
 )
 
 with mock.patch.object(
 merger,
 'input_path',
 merger.input_path + '&max_docs=3',
 ), mock.patch.object(
 merger.delta_source,
 'MAX_DOCS_DIFF',
 1,
 ):
> merger.run()
tests/integration/test_elastic_merger.py:80:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
castor/merger.py:209: in run
 recovery_step=self.recovery_step,
castor/delta_source/elastic.py:44: in get_unprocessed_parts
 if spark.catalog_ext.has_table(output_table):
../../pypi__sparkly_2_5_1/sparkly/catalog.py:123: in has_table
 for table in self._spark.catalog.listTables(db_name):
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:81: in listTables
 dbName = self.currentDatabase()
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:50: in currentDatabase
 return self._jcatalog.currentDatabase()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro571', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
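
Worth noting for readers of the Merger config above: delta_splitter and the output_schema values are plain Spark SQL expressions, so the publish_month bucketing can be checked in isolation with expr(). The sample rows below are invented:

    # Sketch: the partitioning expression from output_schema, evaluated with
    # expr(); a NULL publish_date falls into the '-undefined' bucket.
    from pyspark.sql import functions as F

    df = spark.createDataFrame(
        [('2017-03-01',), (None,)], ['publish_date'],
    ).withColumn('publish_date', F.col('publish_date').cast('date'))

    df.select(
        F.expr("nvl(date_format(publish_date, 'y-MM'), '-undefined')")
        .alias('publish_month'),
    ).show()
    # expected rows: 2017-03 and -undefined
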
_______________ TestKafkaMerger.test_calculate_unprocessed_parts _______________
a = ('xro592', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro592'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'listDatabases'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o28.listDatabases.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E at org.apache.spark.sql.internal.CatalogImpl.listDatabases(CatalogImpl.scala:72)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 18 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 35 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 40 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_kafka_merger.TestKafkaMerger testMethod=test_calculate_unprocessed_parts>
 def setUp(self):
 super(TestKafkaMerger, self).setUp()
 self.base_s3 = 'file:///tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex)
 self.table_name = 'test_castor'
 self.topic = 'topic_{}'.format(uuid.uuid4().hex)
 
> if self.spark.catalog_ext.has_database('merge_db'):
tests/integration/test_kafka_merger.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:141: in has_database
 for db in self._spark.catalog.listDatabases():
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:62: in listDatabases
 iter = self._jcatalog.listDatabases().toLocalIterator()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro592', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
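
Both merger suites die inside the same sparkly catalog helpers (has_table in the elastic workflow, has_database in this setUp); as the frames show, each just iterates spark.catalog, so any call needs a working Hive metastore client. A minimal sketch of the helpers as the tracebacks show them used, with the database and table names taken from this setUp:

    # Sketch of the sparkly 2.5 catalog_ext calls seen above; both delegate
    # to spark.catalog.list*() and hence to the Hive external catalog.
    if spark.catalog_ext.has_database('merge_db'):
        spark.sql('DROP DATABASE merge_db CASCADE')
    if spark.catalog_ext.has_table('test_castor'):
        spark.sql('DROP TABLE test_castor')
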
______________________ TestKafkaMerger.test_get_delta_df _______________________
a = ('xro613', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro613'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'listDatabases'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o28.listDatabases.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E at org.apache.spark.sql.internal.CatalogImpl.listDatabases(CatalogImpl.scala:72)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 18 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 35 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 40 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_kafka_merger.TestKafkaMerger testMethod=test_get_delta_df>
 def setUp(self):
 super(TestKafkaMerger, self).setUp()
 self.base_s3 = 'file:///tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex)
 self.table_name = 'test_castor'
 self.topic = 'topic_{}'.format(uuid.uuid4().hex)
 
> if self.spark.catalog_ext.has_database('merge_db'):
tests/integration/test_kafka_merger.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:141: in has_database
 for db in self._spark.catalog.listDatabases():
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:62: in listDatabases
 iter = self._jcatalog.listDatabases().toLocalIterator()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro613', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
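
[editor's note] Every TestKafkaMerger failure below shares this root cause: the isolated Hive client classpath printed above (the ~/.ivy2 jars pulled in via --packages) contains no Hive or Hadoop jars, so org.apache.hadoop.hive.ql.session.SessionState cannot be resolved and HiveSessionStateBuilder never instantiates. A minimal sketch of the fix the error message itself suggests, assuming a plain PySpark 2.2 session (the builder call below is illustrative, not taken from the test suite):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .enableHiveSupport()
        # 'maven' tells Spark to resolve a matching Hive client itself
        # instead of reusing the --packages classpath listed in the
        # traceback above; an explicit classpath to hive/hadoop jars
        # is also accepted here.
        .config('spark.sql.hive.metastore.jars', 'maven')
        # 1.2.1 is the default metastore version for Spark 2.2.
        .config('spark.sql.hive.metastore.version', '1.2.1')
        .getOrCreate()
    )
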
_______________ TestKafkaMerger.test_get_multiple_topics_offsets _______________
a = ('xro634', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro634'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'listDatabases'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o28.listDatabases.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E at org.apache.spark.sql.internal.CatalogImpl.listDatabases(CatalogImpl.scala:72)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 18 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 35 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 40 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_kafka_merger.TestKafkaMerger testMethod=test_get_multiple_topics_offsets>
 def setUp(self):
 super(TestKafkaMerger, self).setUp()
 self.base_s3 = 'file:///tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex)
 self.table_name = 'test_castor'
 self.topic = 'topic_{}'.format(uuid.uuid4().hex)
 
> if self.spark.catalog_ext.has_database('merge_db'):
tests/integration/test_kafka_merger.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:141: in has_database
 for db in self._spark.catalog.listDatabases():
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:62: in listDatabases
 iter = self._jcatalog.listDatabases().toLocalIterator()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro634', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
____________________ TestKafkaMerger.test_get_table_offsets ____________________
a = ('xro655', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro655'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'listDatabases'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o28.listDatabases.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E at org.apache.spark.sql.internal.CatalogImpl.listDatabases(CatalogImpl.scala:72)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 18 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 35 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 40 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_kafka_merger.TestKafkaMerger testMethod=test_get_table_offsets>
 def setUp(self):
 super(TestKafkaMerger, self).setUp()
 self.base_s3 = 'file:///tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex)
 self.table_name = 'test_castor'
 self.topic = 'topic_{}'.format(uuid.uuid4().hex)
 
> if self.spark.catalog_ext.has_database('merge_db'):
tests/integration/test_kafka_merger.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:141: in has_database
 for db in self._spark.catalog.listDatabases():
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:62: in listDatabases
 iter = self._jcatalog.listDatabases().toLocalIterator()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro655', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
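
[editor's note] Note also the java.base/jdk.internal.reflect frames throughout these traces: the suite is running on a JDK 9+ runtime, while Spark 2.2 supports only Java 8, which is a second plausible reason the Hive client classes fail to load. A hedged sketch of pinning the gateway JVM before anything touches pyspark (the JDK path is illustrative):

    import os

    # py4j launches the JVM found via JAVA_HOME, so this must run before
    # the first SparkSession is created anywhere in the process.
    os.environ['JAVA_HOME'] = (
        '/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home'
    )

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()
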
____________________ TestKafkaMerger.test_get_topic_offsets ____________________
a = ('xro676', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro676'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'listDatabases'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o28.listDatabases.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E at org.apache.spark.sql.internal.CatalogImpl.listDatabases(CatalogImpl.scala:72)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 18 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 35 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 40 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_kafka_merger.TestKafkaMerger testMethod=test_get_topic_offsets>
 def setUp(self):
 super(TestKafkaMerger, self).setUp()
 self.base_s3 = 'file:///tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex)
 self.table_name = 'test_castor'
 self.topic = 'topic_{}'.format(uuid.uuid4().hex)
 
> if self.spark.catalog_ext.has_database('merge_db'):
tests/integration/test_kafka_merger.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:141: in has_database
 for db in self._spark.catalog.listDatabases():
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:62: in listDatabases
 iter = self._jcatalog.listDatabases().toLocalIterator()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro676', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
__________________ TestKafkaMerger.test_kafka_merge_init_from __________________
a = ('xro697', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro697'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'listDatabases'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o28.listDatabases.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E at org.apache.spark.sql.internal.CatalogImpl.listDatabases(CatalogImpl.scala:72)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 18 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 35 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 40 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_kafka_merger.TestKafkaMerger testMethod=test_kafka_merge_init_from>
 def setUp(self):
 super(TestKafkaMerger, self).setUp()
 self.base_s3 = 'file:///tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex)
 self.table_name = 'test_castor'
 self.topic = 'topic_{}'.format(uuid.uuid4().hex)
 
> if self.spark.catalog_ext.has_database('merge_db'):
tests/integration/test_kafka_merger.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:141: in has_database
 for db in self._spark.catalog.listDatabases():
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:62: in listDatabases
 iter = self._jcatalog.listDatabases().toLocalIterator()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro697', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
__________________ TestKafkaMerger.test_kafka_merge_workflow ___________________
a = ('xro718', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro718'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'listDatabases'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o28.listDatabases.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E at org.apache.spark.sql.internal.CatalogImpl.listDatabases(CatalogImpl.scala:72)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 18 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 35 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 40 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_kafka_merger.TestKafkaMerger testMethod=test_kafka_merge_workflow>
 def setUp(self):
 super(TestKafkaMerger, self).setUp()
 self.base_s3 = 'file:///tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex)
 self.table_name = 'test_castor'
 self.topic = 'topic_{}'.format(uuid.uuid4().hex)
 
> if self.spark.catalog_ext.has_database('merge_db'):
tests/integration/test_kafka_merger.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:141: in has_database
 for db in self._spark.catalog.listDatabases():
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:62: in listDatabases
 iter = self._jcatalog.listDatabases().toLocalIterator()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro718', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
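
Every failure in this session shares one root cause: when Spark instantiates HiveSessionStateBuilder, the isolated Hive client is handed only the ivy-resolved --packages jars listed in the "classpath:" dump above, so org.apache.hadoop.hive.ql.session.SessionState (shipped in hive-exec, part of Spark's builtin Hive jars) can never be loaded. The java.base/ prefixes and the Thread.java:834 frame show the gateway JVM is JDK 9+ (Java 11, in fact), while Spark 2.2 supports only Java 8; on newer JDKs the application classloader is no longer a URLClassLoader, so IsolatedClientLoader cannot walk it for Spark's own jars and falls back to the bare ivy classpath. The most direct fix is to run the suite under a Java 8 JDK. A minimal sketch, assuming a Java 8 install at the path below (the path is hypothetical; on macOS `/usr/libexec/java_home -v 1.8` prints the real one):

    import os

    # Must run before the first SparkSession is created: pyspark launches the
    # JVM through spark-submit, and spark-class picks the JVM from JAVA_HOME.
    os.environ['JAVA_HOME'] = '/Library/Java/JavaVirtualMachines/jdk1.8.0_202.jdk/Contents/Home'

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master('local[1]')
        .enableHiveSupport()
        .getOrCreate()
    )
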
_________________ TestKafkaMerger.test_merge_to_non_default_db _________________
a = ('xro739', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro739'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'listDatabases'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o28.listDatabases.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E at org.apache.spark.sql.internal.CatalogImpl.listDatabases(CatalogImpl.scala:72)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 18 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 35 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 40 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_kafka_merger.TestKafkaMerger testMethod=test_merge_to_non_default_db>
 def setUp(self):
 super(TestKafkaMerger, self).setUp()
 self.base_s3 = 'file:///tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex)
 self.table_name = 'test_castor'
 self.topic = 'topic_{}'.format(uuid.uuid4().hex)
 
> if self.spark.catalog_ext.has_database('merge_db'):
tests/integration/test_kafka_merger.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:141: in has_database
 for db in self._spark.catalog.listDatabases():
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:62: in listDatabases
 iter = self._jcatalog.listDatabases().toLocalIterator()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro739', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
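
The hint printed with every trace ("Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars") names Spark's own escape hatch: skip the builtin jar discovery and hand the Hive client an explicit classpath. A minimal sketch, assuming Hive 1.2.1 and Hadoop installs unpacked under /opt (both paths and the version are hypothetical and must match the jars actually present):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master('local[1]')
        # Must match the Hive version of the jars listed below.
        .config('spark.sql.hive.metastore.version', '1.2.1')
        # Colon-separated classpath; Spark expands a bare '*' entry to the
        # jars in that directory. It must cover Hive and Hadoop classes.
        .config('spark.sql.hive.metastore.jars',
                '/opt/hive/lib/*:/opt/hadoop/share/hadoop/common/*')
        .enableHiveSupport()
        .getOrCreate()
    )

Note this only relocates the problem if the JVM is still JDK 11: Spark 2.2 itself is not Java 11 ready, so pinning JAVA_HOME to a Java 8 JDK (sketched earlier) remains the primary fix.
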
___________ TestLoaderPartitioned.test_load_to_non_default_database ____________
a = ('xro760', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro760'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 17 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 34 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 39 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_loader_partitioned.TestLoaderPartitioned testMethod=test_load_to_non_default_database>
 def setUp(self):
 super(TestLoaderPartitioned, self).setUp()
 self.csv_file = os.path.join(os.path.dirname(os.path.realpath(__file__)),
 'resources',
 'csv_setup.csv')
 self.csv_file_2 = os.path.join(os.path.dirname(os.path.realpath(__file__)),
 'resources',
 'csv_setup_2.csv')
 
 self.loader = Loader(
 spark=self.spark,
 input_path='csv://localhost{}?header=True'.format(self.csv_file),
 output_table='csv_data',
 output_partition_by=['country'],
 warehouse_path='/tmp/castor/{}/'.format(uuid.uuid4().hex),
 )
 
 self.loader_2 = Loader(
 spark=self.spark,
 input_path='csv://localhost{}?header=True'.format(self.csv_file_2),
 output_schema=[('name', 'name'),
 ('country', 'country'),
 ('age', 'age'),
 ('platform', 'platform'),
 ('`1-2_3`', '`1-2_3`'),
 ('`a__--b`', 'substr(`1-2_3`, 0, 3)')],
 output_table='csv_data',
 output_partition_by=['platform', 'country'],
 # Unfortunately, with such a small dataset we can't check that we actually get 2 files,
 # because the rows will most likely end up in a single file.
 # At least we can check that the code isn't broken.
 output_partitions=2,
 warehouse_path='/tmp/castor/{}/'.format(uuid.uuid4().hex),
 )
> self.spark.sql("DROP TABLE IF EXISTS csv_data")
tests/integration/test_loader_partitioned.py:47:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro760', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
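
Both TestLoaderPartitioned failures die in setUp on a plain DROP TABLE IF EXISTS, which needs no Hive at all. To confirm that the metastore client (and not the test logic) is at fault, the same statements can be run against Spark's in-memory catalog, which never touches HiveExternalCatalog. A minimal sketch for a fresh process (getOrCreate would otherwise reuse the existing Hive-enabled session in this JVM):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master('local[1]')
        # Already the default without enableHiveSupport(); set it
        # explicitly to make the intent visible.
        .config('spark.sql.catalogImplementation', 'in-memory')
        .getOrCreate()
    )

    print([db.name for db in spark.catalog.listDatabases()])  # ['default']
    spark.sql('DROP TABLE IF EXISTS csv_data')  # succeeds without a metastore

Since every trace goes through HiveExternalCatalog, the session under test clearly has Hive support enabled, so this is a diagnostic rather than a drop-in replacement for TestCastorSession.
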
___________________ TestLoaderPartitioned.test_partitioning ____________________
a = ('xro780', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro780'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 17 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 34 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 39 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_loader_partitioned.TestLoaderPartitioned testMethod=test_partitioning>
 def setUp(self):
 super(TestLoaderPartitioned, self).setUp()
 self.csv_file = os.path.join(os.path.dirname(os.path.realpath(__file__)),
 'resources',
 'csv_setup.csv')
 self.csv_file_2 = os.path.join(os.path.dirname(os.path.realpath(__file__)),
 'resources',
 'csv_setup_2.csv')
 
 self.loader = Loader(
 spark=self.spark,
 input_path='csv://localhost{}?header=True'.format(self.csv_file),
 output_table='csv_data',
 output_partition_by=['country'],
 warehouse_path='/tmp/castor/{}/'.format(uuid.uuid4().hex),
 )
 
 self.loader_2 = Loader(
 spark=self.spark,
 input_path='csv://localhost{}?header=True'.format(self.csv_file_2),
 output_schema=[('name', 'name'),
 ('country', 'country'),
 ('age', 'age'),
 ('platform', 'platform'),
 ('`1-2_3`', '`1-2_3`'),
 ('`a__--b`', 'substr(`1-2_3`, 0, 3)')],
 output_table='csv_data',
 output_partition_by=['platform', 'country'],
 # Unfortunately, with such a small dataset we can't check that we actually get 2 files,
 # because the rows will most likely end up in a single file.
 # At least we can check that the code isn't broken.
 output_partitions=2,
 warehouse_path='/tmp/castor/{}/'.format(uuid.uuid4().hex),
 )
> self.spark.sql("DROP TABLE IF EXISTS csv_data")
tests/integration/test_loader_partitioned.py:47:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro780', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
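
Under bazel/tox, the JVM that pyspark launches is not necessarily the one `java -version` reports in the shell. The running gateway can be asked directly; _jvm is a private pyspark attribute, so treat this as a one-off probe rather than suite code:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master('local[1]').getOrCreate()
    jvm = spark.sparkContext._jvm  # private API, acceptable for debugging
    print(jvm.java.lang.System.getProperty('java.version'))  # expect 1.8.0_*
    print(jvm.java.lang.System.getProperty('java.home'))
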
_________________________ TestMysqlLoader.test_loader __________________________
a = ('xro800', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro800'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o27', name = 'read'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)
E at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)
E at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_mysql_loader.TestMysqlLoader testMethod=test_loader>
 def test_loader(self):
> self.loader.run()
tests/integration/test_mysql_loader.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
castor/loader.py:87: in run
 df = self._spark.read_ext.by_url(self._input_path)
../../pypi__sparkly_2_5_1/sparkly/reader.py:108: in by_url
 return resolver(parsed_url, parsed_qs)
../../pypi__sparkly_2_5_1/sparkly/reader.py:350: in _resolve_mysql
 options=parsed_qs,
../../pypi__sparkly_2_5_1/sparkly/reader.py:214: in mysql
 return self._basic_read(reader_options, options, parallelism)
../../pypi__sparkly_2_5_1/sparkly/reader.py:291: in _basic_read
 df = self._spark.read.load(**reader_options)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:580: in read
 return DataFrameReader(self._wrapped)
../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py:70: in __init__
 self._jreader = spark._ssql_ctx.read()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro800', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
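
Every failing test reprints the identical trace because each setUp probes the session catalog independently. A shared guard that touches the metastore once and skips the class would reduce a run like this to a single actionable message. A minimal sketch (the class and the session factory are hypothetical, not castor's actual API):

    import unittest

    class SparkTestCase(unittest.TestCase):
        """Hypothetical shared base for the integration tests."""

        @classmethod
        def setUpClass(cls):
            cls.spark = build_spark_session()  # assumed session factory
            try:
                cls.spark.catalog.listDatabases()  # cheap metastore probe
            except Exception as exc:
                raise unittest.SkipTest(
                    'Hive client unusable (check JAVA_HOME and '
                    'spark.sql.hive.metastore.jars): {!r}'.format(exc))
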
_________________ TestQuantumMerger.test_kafka_merge_workflow __________________
a = ('xro834', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro834'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representation of integers are converted to Python
 integer, string representation of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes from
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes from
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_quantum_merger.TestQuantumMerger testMethod=test_kafka_merge_workflow>
 def test_kafka_merge_workflow(self):
 self.castor = Merger(
 spark=self.spark,
 delta_source=DatabusDeltaSource,
 delta_splitter='no_split',
 merge_type='quantum',
 input_path='databus://{}/{}?sr={}&port={}'.format(
 self.kafka_host, self.topic, self.sr_address, self.kafka_port),
 input_schema={},
 output_table=self.table_name,
 output_files_per_partition=2,
 warehouse_path='/tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex),
 )
 
 insights = [
 {
 'account_gid': 'fba_9258148868',
 'end_date': '2017-11-16',
 'metric': 'page_video_views_organic',
 'period': 'day',
 'report_name': 'facebook_page_views_3s_v1',
 'value_mult': None,
 'value_single': 2000.0,
 'video_gid': None,
 'fetch_time': 1520000000,
 },
 # Next two messages test delta dedup for day-period data (see the dedup sketch after this traceback)
 {
 'account_gid': 'fba_9258148868',
 'end_date': '2017-11-17',
 'metric': 'page_video_views_organic',
 'period': 'day',
 'report_name': 'facebook_page_views_3s_v1',
 'value_mult': None,
 'value_single': 1000.0,
 'video_gid': None,
 'fetch_time': 1520000000,
 },
 {
 'account_gid': 'fba_9258148868',
 'end_date': '2017-11-17',
 'metric': 'page_video_views_organic',
 'period': 'day',
 'report_name': 'facebook_page_views_3s_v1',
 'value_mult': None,
 'value_single': 1000.0,
 'video_gid': None,
 'fetch_time': 1520000010,
 },
 # Next two messages test delta dedup for multi-dim data
 {
 'account_gid': 'fba_39355118462',
 'end_date': '2018-02-12',
 'metric': 'page_fans_locale',
 'period': 'lifetime',
 'report_name': 'facebook_page_fans_demo_v1',
 'value_mult': {
 'ar_AR': 1000.0,
 'bg_BG': 100.0,
 'cs_CZ': 10.0,
 },
 'value_single': None,
 'video_gid': None,
 'fetch_time': 1520000000,
 },
 {
 'account_gid': 'fba_39355118462',
 'end_date': '2018-02-12',
 'metric': 'page_fans_locale',
 'period': 'lifetime',
 'report_name': 'facebook_page_fans_demo_v1',
 'value_mult': {
 'ar_AR': 1010.0,
 'bg_BG': 100.0,
 'cs_CZ': 10.0,
 },
 'value_single': None,
 'video_gid': None,
 'fetch_time': 1520000100,
 },
 # Next two messages test delta dedup for continuous data, as well as `date` field
 # creation.
 {
 'account_gid': 'fba_9258148868',
 'end_date': None,
 'metric': 'post_video_views_10s_organic',
 'period': 'lifetime',
 'report_name': 'facebook_post_video_views_10s_v1',
 'value_mult': None,
 'value_single': 2000.0,
 'video_gid': 'fbv_12345678',
 'fetch_time': 1520000200,
 },
 {
 'account_gid': 'fba_9258148868',
 'end_date': None,
 'metric': 'post_video_views_10s_organic',
 'period': 'lifetime',
 'report_name': 'facebook_post_video_views_10s_v1',
 'value_mult': None,
 'value_single': 3000.0,
 'video_gid': 'fbv_12345678',
 'fetch_time': 1520000500,
 },
 
 # Will test dedup for merged data and new data from delta for day-period data
 {
 'account_gid': 'fba_9258148868',
 'end_date': '2017-11-17',
 'metric': 'page_video_views_organic',
 'period': 'day',
 'report_name': 'facebook_page_views_3s_v1',
 'value_mult': None,
 'value_single': 1000.0,
 'video_gid': None,
 'fetch_time': 1520001000,
 },
 # Will test dedup for merged data and new data from delta for multi-dim data
 {
 'account_gid': 'fba_39355118462',
 'end_date': '2018-02-12',
 'metric': 'page_fans_locale',
 'period': 'lifetime',
 'report_name': 'facebook_page_fans_demo_v1',
 'value_mult': {
 'ar_AR': 1100.0,
 'bg_BG': 100.0,
 'cs_CZ': 10.0,
 },
 'value_single': None,
 'video_gid': None,
 'fetch_time': 1520001000,
 },
 # Test dedup for merged data and new data from delta for continuous data.
 {
 'account_gid': 'fba_9258148868',
 'end_date': None,
 'metric': 'post_video_views_10s_organic',
 'period': 'lifetime',
 'report_name': 'facebook_post_video_views_10s_v1',
 'value_mult': None,
 'value_single': 4000.0,
 'video_gid': 'fbv_12345678',
 'fetch_time': 1520001300,
 },
 
 ]
> self._publish_data(insights[:7])
tests/integration/test_quantum_merger.py:236:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/integration/test_quantum_merger.py:80: in _publish_data
 self.databus_schema,
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:539: in createDataFrame
 jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro834', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
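
The fixture comments in the test body above spell out the dedup rule under test: among delta records that share a logical key, only the record with the greatest fetch_time should survive. A minimal sketch of that rule, assuming the key is (account_gid, end_date, metric, period, report_name, video_gid) as the fixtures suggest; this is a reading of the test data, not the actual Merger implementation:

    def dedup_latest(records):
        # Keep one record per logical key: the one with the greatest fetch_time.
        key_fields = ('account_gid', 'end_date', 'metric', 'period',
                      'report_name', 'video_gid')
        latest = {}
        for rec in records:
            key = tuple(rec[field] for field in key_fields)
            if key not in latest or rec['fetch_time'] > latest[key]['fetch_time']:
                latest[key] = rec
        return list(latest.values())

Applied to the seven fixtures published above, this keeps the fetch_time=1520000010 day-period record and the fetch_time=1520000100 multi-dim record, matching the intent in the comments; the continuous-data pair also exercises `date` field creation, which this sketch does not model.
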
____________________ TestAvroSchemaEvolution.test_add_field ____________________
a = ('xro876', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro876'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'currentDatabase'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o28.currentDatabase.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E at org.apache.spark.sql.internal.CatalogImpl.currentDatabase(CatalogImpl.scala:57)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 18 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 35 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 40 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_schema_evolution.TestAvroSchemaEvolution testMethod=test_add_field>
 def setUp(self):
 self._register_avro_schemas()
 
 self.output_table = 'test_castor_avro'
 
> if self.spark.catalog_ext.has_table(self.output_table):
tests/integration/test_schema_evolution.py:118:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:123: in has_table
 for table in self._spark.catalog.listTables(db_name):
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:81: in listTables
 dbName = self.currentDatabase()
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:50: in currentDatabase
 return self._jcatalog.currentDatabase()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro876', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
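
Every failure in this run bottoms out in the same root cause: org.apache.hadoop.hive.ql.session.SessionState is not on the isolated Hive client classpath (the Ivy-resolved jars listed above contain no hive-exec), so HiveSessionStateBuilder cannot be instantiated. Following the hint in the trace, one way out is to let Spark fetch Hive/Hadoop jars matching the metastore version via spark.sql.hive.metastore.jars. A minimal sketch, assuming a plain local session is acceptable for the tests (values are illustrative, not the project's actual test configuration):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .enableHiveSupport()
        # Spark 2.2 defaults to Hive metastore 1.2.1; 'maven' makes Spark
        # download matching hive/hadoop jars instead of relying on the
        # (incomplete) classpath above. A colon-separated list of local
        # jar paths is also accepted.
        .config('spark.sql.hive.metastore.version', '1.2.1')
        .config('spark.sql.hive.metastore.jars', 'maven')
        .getOrCreate()
    )

The alternative is to keep the default spark.sql.hive.metastore.jars=builtin and make sure the Spark distribution used by the test runner was built with Hive support.
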
_____________ TestAvroSchemaEvolution.test_add_field_modify_schema _____________
a = ('xro897', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro897'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'currentDatabase'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o28.currentDatabase.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E at org.apache.spark.sql.internal.CatalogImpl.currentDatabase(CatalogImpl.scala:57)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 18 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 35 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 40 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_schema_evolution.TestAvroSchemaEvolution testMethod=test_add_field_modify_schema>
 def setUp(self):
 self._register_avro_schemas()
 
 self.output_table = 'test_castor_avro'
 
> if self.spark.catalog_ext.has_table(self.output_table):
tests/integration/test_schema_evolution.py:118:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:123: in has_table
 for table in self._spark.catalog.listTables(db_name):
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:81: in listTables
 dbName = self.currentDatabase()
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:50: in currentDatabase
 return self._jcatalog.currentDatabase()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro897', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
__________________ TestAvroSchemaEvolution.test_delete_field ___________________
a = ('xro918', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro918'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'currentDatabase'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o28.currentDatabase.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E at org.apache.spark.sql.internal.CatalogImpl.currentDatabase(CatalogImpl.scala:57)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 18 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 35 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 40 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_schema_evolution.TestAvroSchemaEvolution testMethod=test_delete_field>
 def setUp(self):
 self._register_avro_schemas()
 
 self.output_table = 'test_castor_avro'
 
> if self.spark.catalog_ext.has_table(self.output_table):
tests/integration/test_schema_evolution.py:118:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:123: in has_table
 for table in self._spark.catalog.listTables(db_name):
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:81: in listTables
 dbName = self.currentDatabase()
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:50: in currentDatabase
 return self._jcatalog.currentDatabase()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro918', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
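
For the TestAvroSchemaEvolution cases the exception fires in setUp, before any assertion runs: sparkly's has_table iterates spark.catalog.listTables(), and the first catalog access forces the session state, and with it the Hive client, to be instantiated. A simplified sketch of what the helper at sparkly/catalog.py:123 is doing (the real implementation also resolves the database name):

    def has_table(spark, table_name):
        # Listing tables touches the external catalog, which is what
        # triggers the HiveSessionStateBuilder instantiation seen above.
        return any(t.name == table_name for t in spark.catalog.listTables())

So every test in this class fails with the same IllegalArgumentException regardless of what the test itself would have exercised.
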
___________ TestAvroSchemaEvolution.test_remove_field_modify_schema ____________
a = ('xro939', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro939'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'currentDatabase'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o28.currentDatabase.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E at org.apache.spark.sql.internal.CatalogImpl.currentDatabase(CatalogImpl.scala:57)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 18 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 35 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 40 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_schema_evolution.TestAvroSchemaEvolution testMethod=test_remove_field_modify_schema>
 def setUp(self):
 self._register_avro_schemas()
 
 self.output_table = 'test_castor_avro'
 
> if self.spark.catalog_ext.has_table(self.output_table):
tests/integration/test_schema_evolution.py:118:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:123: in has_table
 for table in self._spark.catalog.listTables(db_name):
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:81: in listTables
 dbName = self.currentDatabase()
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:50: in currentDatabase
 return self._jcatalog.currentDatabase()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro939', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
__________________ TestAvroSchemaEvolution.test_rename_field ___________________
a = ('xro960', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro960'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'currentDatabase'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o28.currentDatabase.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E at org.apache.spark.sql.internal.CatalogImpl.currentDatabase(CatalogImpl.scala:57)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 18 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 35 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 40 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_schema_evolution.TestAvroSchemaEvolution testMethod=test_rename_field>
 def setUp(self):
 self._register_avro_schemas()
 
 self.output_table = 'test_castor_avro'
 
> if self.spark.catalog_ext.has_table(self.output_table):
tests/integration/test_schema_evolution.py:118:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:123: in has_table
 for table in self._spark.catalog.listTables(db_name):
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:81: in listTables
 dbName = self.currentDatabase()
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:50: in currentDatabase
 return self._jcatalog.currentDatabase()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro960', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
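Both schema-evolution tests die in setUp before any Avro logic runs: sparkly's catalog_ext.has_table iterates spark.catalog.listTables, and listTables resolves the current database first, which is the call that forces session-state construction. A rough plain-PySpark equivalent of that path, inferred from the frames above rather than taken from sparkly's code, assuming a live session named spark:

    # Sketch of the has_table path shown in the traceback above; not
    # sparkly's actual implementation. 'spark' is an existing SparkSession.
    def has_table(spark, table_name, db_name=None):
        # listTables() calls currentDatabase() internally -- the exact frame
        # where the HiveSessionStateBuilder instantiation fails in this run.
        db_name = db_name or spark.catalog.currentDatabase()
        return any(table.name == table_name
                   for table in spark.catalog.listTables(db_name))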
_________ TestTwitchStreamMerger.test_account_level_values_per_segment _________
a = ('xro993', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro993'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_account_level_values_per_segment>
 def test_account_level_values_per_segment(self):
 """Tests if account gid, language, viewers are assigned properly."""
 delta = with_fields(
 self.MESSAGE,
 {
 'game.name': ['Fifa', 'Fifa', 'WoW', 'WoW', 'Fortnite', 'Fortnite'],
 'account.gid': ['tta_johny', 'tta_johny',
 'tta_john', 'tta_john',
 'tta_johny', 'tta_johny'],
 'language': ['br', 'br', 'en', 'it', 'it', 'it'],
 'fetch_time': [
 datetime(2017, 1, 1) + _ for _ in [
 timedelta(hours=1),
 timedelta(hours=2),
 timedelta(hours=3),
 timedelta(hours=4),
 timedelta(hours=5),
 timedelta(hours=6),
 ]
 ],
 },
 )
 
> self._get_merger(delta[:-2]).run()
tests/integration/test_twitch_streams_merger.py:536:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/integration/test_twitch_streams_merger.py:1286: in _get_merger
 self._create_delta_df(delta),
tests/integration/test_twitch_streams_merger.py:1344: in _create_delta_df
 parse_schema(schema),
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:539: in createDataFrame
 jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro993', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
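The Twitch merger tests never reach the merge itself; they fall over one step earlier while materializing the input delta, because createDataFrame with an explicit schema crosses into the JVM through applySchemaToPythonRDD and needs the same session state. A toy sketch of that construction; the two-field schema and rows are illustrative stand-ins, since the real tests build their schema with parse_schema:

    # Illustrative sketch of the _create_delta_df step. Schema and rows are
    # toy stand-ins, not the tests' real parse_schema output. 'spark' is an
    # existing SparkSession (see the builder sketch earlier).
    from datetime import datetime

    from pyspark.sql.types import (
        StringType, StructField, StructType, TimestampType,
    )

    schema = StructType([
        StructField('language', StringType()),
        StructField('fetch_time', TimestampType()),
    ])
    rows = [('br', datetime(2017, 1, 1, 1)), ('en', datetime(2017, 1, 1, 3))]

    # This call is what reaches SparkSession.applySchemaToPythonRDD over py4j.
    delta_df = spark.createDataFrame(rows, schema)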
_______ TestTwitchStreamMerger.test_ascii_trapezoid_double_ticks_chunked _______
a = ('xro1047', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1047'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_ascii_trapezoid_double_ticks_chunked>
 def test_ascii_trapezoid_double_ticks_chunked(self):
 """Process trapezoid in chunks."""
 delta = with_fields(
 self.MESSAGE,
 self.TRAPEZOID_CONF,
 )[::2]
 
> self._get_merger(delta[:3]).run()
tests/integration/test_twitch_streams_merger.py:878:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/integration/test_twitch_streams_merger.py:1286: in _get_merger
 self._create_delta_df(delta),
tests/integration/test_twitch_streams_merger.py:1344: in _create_delta_df
 parse_schema(schema),
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:539: in createDataFrame
 jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
 answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1047', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
 return f(*a, **kw)
 except py4j.protocol.Py4JJavaError as e:
 s = e.java_exception.toString()
 stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
 e.java_exception.getStackTrace()))
 if s.startswith('org.apache.spark.sql.AnalysisException: '):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
 raise ParseException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
 raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
 if s.startswith('java.lang.IllegalArgumentException: '):
> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
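The merger tests fan a single template message out into a delta with a with_fields helper, overriding dotted paths such as 'game.name' and 'account.gid' with per-record value lists, then slice the result (delta[:-2], delta[::2], delta[:3]) to select subsets. The helper's implementation is not shown in this log; the sketch below is inferred purely from those call sites and assumes the template is a nested dict:

    # Inferred sketch of with_fields, based only on how the tests call it;
    # the project's real helper may differ.
    import copy

    def with_fields(template, fields):
        # All value lists are assumed to have the same length: one entry per
        # generated message.
        count = len(next(iter(fields.values())))
        messages = []
        for i in range(count):
            message = copy.deepcopy(template)
            for path, values in fields.items():
                # Walk dotted paths like 'game.name' down the nested dict.
                *parents, leaf = path.split('.')
                target = message
                for key in parents:
                    target = target[key]
                target[leaf] = values[i]
            messages.append(message)
        return messages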
________ TestTwitchStreamMerger.test_ascii_trapezoid_double_ticks_whole ________
a = ('xro1101', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
 def deco(*a, **kw):
 try:
> return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1101'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'
 def get_return_value(answer, gateway_client, target_id=None, name=None):
 """Converts an answer received from the Java gateway into a Python object.
 
 For example, string representations of integers are converted to Python
 integers, string representations of objects are converted to JavaObject
 instances, etc.
 
 :param answer: the string returned by the Java gateway
 :param gateway_client: the gateway client used to communicate with the Java
 Gateway. Only necessary if the answer is a reference (e.g., object,
 list, map)
 :param target_id: the name of the object from which the answer comes
 (e.g., *object1* in `object1.hello()`). Optional.
 :param name: the name of the member from which the answer comes
 (e.g., *hello* in `object1.hello()`). Optional.
 """
 if is_error(answer)[0]:
 if len(answer) > 1:
 type = answer[1]
 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
 if answer[1] == REFERENCE_TYPE:
 raise Py4JJavaError(
 "An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E at scala.Option.getOrElse(Option.scala:121)
E at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:280)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
E Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E ... 19 more
E Caused by: java.lang.reflect.InvocationTargetException
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E ... 36 more
E Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E ... 41 more
E Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_ascii_trapezoid_double_ticks_whole>
 def test_ascii_trapezoid_double_ticks_whole(self):
 """Process trapezoid in as a whole."""
 delta = with_fields(
 self.MESSAG