Gist: ryandgoldenberg/d02337fb9588afdd451fde24323eb5ea (last active April 23, 2024 16:42)
Flink Hive S3 Connection
From the jobmanager container, ran:

```shell
flink run --python /tmp/hive-dwh-3/scripts/run11.py
```

Exception:

```
org.apache.flink.connectors.hive.FlinkHiveException: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "s3"
	at org.apache.flink.connectors.hive.HiveParallelismInference.infer(HiveParallelismInference.java:98)
```

S3 connectivity works for the filesystem connector, and the plugin jar is located at
`plugins/flink-s3-fs-hadoop/flink-s3-fs-hadoop-1.18.1.jar`.

Logs:

- flink--client-3080ffd34e9b.log
- flink--standalonesession-0-3080ffd34e9b.log
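The exception class (`org.apache.hadoop.fs.UnsupportedFileSystemException`) suggests the Hive source is resolving the path through Hadoop's own `FileSystem` registry rather than Flink's plugin-loaded S3 filesystem, which would explain why the plugin jar does not help here. A toy model of that scheme lookup (not Hadoop code; the scheme table below is hypothetical):

```python
# Toy model of Hadoop-style FileSystem resolution: the URI scheme is looked
# up in a registry of configured implementations. If only "s3a" has an
# implementation, a path with scheme "s3" fails with exactly this shape of
# error. This is an illustrative stand-in, not Hadoop's actual code.
from urllib.parse import urlparse

# Hypothetical registry, e.g. what fs.s3a.impl-style configuration produces.
FS_IMPLS = {
    "s3a": "org.apache.hadoop.fs.s3a.S3AFileSystem",
    "file": "org.apache.hadoop.fs.LocalFileSystem",
}

def get_filesystem(uri: str) -> str:
    scheme = urlparse(uri).scheme or "file"
    try:
        return FS_IMPLS[scheme]
    except KeyError:
        raise RuntimeError(f'No FileSystem for scheme "{scheme}"')

print(get_filesystem("s3a://bucket/warehouse/t1"))
try:
    get_filesystem("s3://bucket/warehouse/t1")
except RuntimeError as e:
    print(e)
```

Under this reading, the fix would live at the Hadoop configuration level (mapping the `s3` scheme to an implementation, or using `s3a://` paths) rather than in Flink's plugins directory; that is an assumption to verify, not a confirmed resolution.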
---

Previous issue: `'connector' = 'hive'` cannot be discovered.
Solution: the HiveCatalog must be the current catalog for `'connector' = 'hive'` to be discoverable.
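The ordering that solution implies can be sketched in Flink SQL (catalog name and hive-conf path are hypothetical; the point is that `USE CATALOG` must come before the `CREATE TABLE` that uses the hive connector):

```sql
-- Register a HiveCatalog and make it the current catalog first;
-- only then does 'connector' = 'hive' resolve.
CREATE CATALOG hive_catalog WITH (
  'type' = 'hive',
  'hive-conf-dir' = '/opt/hive-conf'  -- hypothetical path
);
USE CATALOG hive_catalog;

CREATE TABLE t1 (id INT) WITH ('connector' = 'hive');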
From the jobmanager container, ran:

```shell
flink run --python /tmp/hive-dwh-3/scripts/run10.py
```

This failed with:

```
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a connector using option: 'connector'='hive'
Caused by: org.apache.flink.table.api.ValidationException: Could not find any factory for identifier 'hive' that implements 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.
```

However, the client logs show that the hive connector is on the classpath:

`/opt/flink/lib/flink-sql-connector-hive-2.3.9_2.12-1.18.1.jar`

And it looks like it has such a factory:

```shell
jar tvf /opt/flink/lib/flink-sql-connector-hive-2.3.9_2.12-1.18.1.jar | grep HiveDynamicTableFactory
  6969 Tue Dec 19 22:35:10 UTC 2023 org/apache/flink/connectors/hive/HiveDynamicTableFactory.class
```
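Note that Flink discovers table factories through Java's `ServiceLoader`, so the presence of the `.class` entry alone is not conclusive: the jar must also list the factory in its `META-INF/services` file. A self-contained sketch of that check, built against an in-memory stand-in jar rather than the real one (the SPI file name is my understanding of Flink's unified `Factory` SPI; verify against the actual jar):

```python
# Factory discovery reads a services file from each jar on the classpath.
# We build a tiny stand-in jar in memory to demonstrate the check; inspecting
# the real connector jar would use the same logic with a file path.
import io
import zipfile

SPI_ENTRY = "META-INF/services/org.apache.flink.table.factories.Factory"

def factories_in_jar(jar_bytes: bytes) -> list:
    """Return factory class names declared in the jar's SPI file, if any."""
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        if SPI_ENTRY not in jar.namelist():
            return []
        lines = jar.read(SPI_ENTRY).decode().splitlines()
        return [line.strip() for line in lines if line.strip()]

# Stand-in jar declaring the hive factory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr(SPI_ENTRY, "org.apache.flink.connectors.hive.HiveDynamicTableFactory\n")
print(factories_in_jar(buf.getvalue()))
```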
Docker Compose file:

```yaml
version: '2.1'
services:
  jobmanager:
    container_name: jobmanager
    image: hive-dwh-3
    command: jobmanager
    ports:
      - "8081:8081"
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
        fs.s3a.aws.credentials.provider: com.amazonaws.auth.profile.ProfileCredentialsProvider
    networks:
      - rapid
    volumes:
      - type: bind
        source: ~/.aws
        target: /opt/flink/.aws
      - type: bind
        source: .
        target: /tmp/hive-dwh-3
  taskmanager:
    container_name: taskmanager
    image: hive-dwh-3
    command: taskmanager
    scale: 1
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
        taskmanager.numberOfTaskSlots: 2
        fs.s3a.aws.credentials.provider: com.amazonaws.auth.profile.ProfileCredentialsProvider
    depends_on:
      - jobmanager
    networks:
      - rapid
    volumes:
      - type: bind
        source: ~/.aws
        target: /opt/flink/.aws
      - type: bind
        source: .
        target: /tmp/hive-dwh-3
  client:
    container_name: client
    image: hive-dwh-3
    command: sql-client.sh
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
        rest.address: jobmanager
        fs.s3a.aws.credentials.provider: com.amazonaws.auth.profile.ProfileCredentialsProvider
    depends_on:
      - jobmanager
      - taskmanager
    networks:
      - rapid
    volumes:
      - type: bind
        source: ~/.aws
        target: /opt/flink/.aws
      - type: bind
        source: .
        target: /tmp/hive-dwh-3
networks:
  rapid:
    name: rapid
```
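For reference, the `FLINK_PROPERTIES` block is consumed by the official Flink image's entrypoint, which appends the lines to `conf/flink-conf.yaml` before starting the process. A simplified stand-in for that merge (the real logic lives in the image's `docker-entrypoint.sh`; this is only a sketch):

```shell
# Simplified stand-in: each line of FLINK_PROPERTIES becomes a
# key: value entry appended to the effective flink-conf.yaml.
FLINK_PROPERTIES='jobmanager.rpc.address: jobmanager
fs.s3a.aws.credentials.provider: com.amazonaws.auth.profile.ProfileCredentialsProvider'
conf="$(mktemp)"
printf '%s\n' "$FLINK_PROPERTIES" >> "$conf"
grep 'jobmanager.rpc.address' "$conf"
```

This also explains the `~/.aws` bind mount target: `/opt/flink` is the flink user's home directory, so `ProfileCredentialsProvider` can find the profile under `/opt/flink/.aws`.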
Dockerfile:

```dockerfile
FROM flink:1.18.1

# JDK is required to install pyflink
RUN apt-get update -y && apt-get install -y openjdk-11-jdk
ENV JAVA_HOME=/usr/lib/jvm/java-11-openjdk-arm64

# Flink 1.18.1 requires Python 3.8
# Install commands taken from here
# https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/resource-providers/standalone/docker/#using-flink-python-on-docker
RUN apt-get update -y && \
    apt-get install -y build-essential libssl-dev zlib1g-dev libbz2-dev libffi-dev liblzma-dev jq
RUN wget https://www.python.org/ftp/python/3.8.3/Python-3.8.3.tgz && \
    tar -xvf Python-3.8.3.tgz && \
    cd Python-3.8.3 && \
    ./configure --without-tests --enable-shared && \
    make -j6 && \
    make install && \
    ldconfig /usr/local/lib && \
    cd .. && rm -f Python-3.8.3.tgz && rm -rf Python-3.8.3 && \
    ln -s /usr/local/bin/python3 /usr/local/bin/python && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Install PyFlink
RUN python -m pip install --upgrade pip
RUN python -m pip install apache-flink==1.18.1

# Install Hadoop for Hive integration
# Set HADOOP_CLASSPATH=`hadoop classpath`
# https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/table/hive/overview/#dependencies
COPY build/hadoop /opt/hadoop
ENV PATH="${PATH}:/opt/hadoop/bin"
ENV HADOOP_CLASSPATH='/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*'

# S3 integration
# https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/deployment/filesystems/s3/#hadooppresto-s3-file-systems-plugins
RUN mkdir -p /opt/flink/plugins/flink-s3-fs-hadoop
RUN ln -fs /opt/flink/opt/flink-s3-fs-hadoop-*.jar /opt/flink/plugins/flink-s3-fs-hadoop/

# Bugfix for SQL client bug in 1.18
# https://issues.apache.org/jira/browse/FLINK-33358
COPY sql-client.sh /opt/flink/bin/

# Connector JARs
COPY lib/flink-sql-connector-hive-2.3.9_2.12-1.18.1.jar /opt/flink/lib/
COPY lib/flink-sql-parquet-1.19.0.jar /opt/flink/lib/

# SQL client configuration
COPY conf /tmp/conf
RUN cat /tmp/conf/flink-conf.yaml >> /opt/flink/conf/flink-conf.yaml
```
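The `HADOOP_CLASSPATH` above mixes plain directories with `/*` wildcard entries, which the JVM expands to every jar in that directory at launch. A stand-in for that expansion over a temporary directory (the JVM also matches `.JAR`; this sketch matches lowercase `.jar` only):

```python
# Expand a Java-style classpath: keep plain entries as-is, expand
# trailing /* entries to the jars in that directory (sorted here for
# deterministic output; the JVM's own order is unspecified).
import glob
import os
import tempfile

def expand_classpath(cp: str) -> list:
    out = []
    for entry in cp.split(":"):
        if entry.endswith("/*"):
            out.extend(sorted(glob.glob(entry[:-1] + "*.jar")))
        elif entry:
            out.append(entry)
    return out

with tempfile.TemporaryDirectory() as d:
    for name in ("a.jar", "b.jar", "readme.txt"):
        open(os.path.join(d, name), "w").close()
    # The plain entry survives; the wildcard picks up only the jars.
    print(expand_classpath(f"{d}/etc:{d}/*"))
```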
Client log:

2024-04-22 23:07:15,744 INFO org.apache.flink.client.cli.CliFrontend [] - --------------------------------------------------------------------------------
2024-04-22 23:07:15,746 INFO org.apache.flink.client.cli.CliFrontend [] - Starting Command Line Client (Version: 1.18.1, Scala: 2.12, Rev:a8c8b1c, Date:2023-12-19T22:17:36+01:00)
2024-04-22 23:07:15,746 INFO org.apache.flink.client.cli.CliFrontend [] - OS current user: root
2024-04-22 23:07:15,851 INFO org.apache.flink.client.cli.CliFrontend [] - Current Hadoop/Kerberos user: root
2024-04-22 23:07:15,851 INFO org.apache.flink.client.cli.CliFrontend [] - JVM: OpenJDK 64-Bit Server VM - Ubuntu - 11/11.0.22+7-post-Ubuntu-0ubuntu222.04.1
2024-04-22 23:07:15,851 INFO org.apache.flink.client.cli.CliFrontend [] - Arch: aarch64
2024-04-22 23:07:15,851 INFO org.apache.flink.client.cli.CliFrontend [] - Maximum heap size: 1964 MiBytes
2024-04-22 23:07:15,851 INFO org.apache.flink.client.cli.CliFrontend [] - JAVA_HOME: /usr/lib/jvm/java-11-openjdk-arm64
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - Hadoop version: 3.4.0
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - JVM Options:
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - -XX:+IgnoreUnrecognizedVMOptions
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=java.base/sun.net.util=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.lang=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.net=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.io=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.nio=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/sun.nio.ch=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.text=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.time=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.util=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.util.concurrent=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - -Dlog.file=/opt/flink/log/flink--client-2f6c8ea78d77.log
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - -Dlog4j.configuration=file:/opt/flink/conf/log4j-cli.properties
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - -Dlog4j.configurationFile=file:/opt/flink/conf/log4j-cli.properties
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - -Dlogback.configurationFile=file:/opt/flink/conf/logback.xml
2024-04-22 23:07:15,853 INFO org.apache.flink.client.cli.CliFrontend [] - Program Arguments:
2024-04-22 23:07:15,854 INFO org.apache.flink.client.cli.CliFrontend [] - run
2024-04-22 23:07:15,854 INFO org.apache.flink.client.cli.CliFrontend [] - --python
2024-04-22 23:07:15,854 INFO org.apache.flink.client.cli.CliFrontend [] - /tmp/hive-dwh-3/scripts/run10.py
2024-04-22 23:07:15,854 INFO org.apache.flink.client.cli.CliFrontend [] - Classpath: /opt/flink/lib/flink-cep-1.18.1.jar:/opt/flink/lib/flink-connector-files-1.18.1.jar:/opt/flink/lib/flink-csv-1.18.1.jar:/opt/flink/lib/flink-json-1.18.1.jar:/opt/flink/lib/flink-scala_2.12-1.18.1.jar:/opt/flink/lib/flink-sql-connector-hive-2.3.9_2.12-1.18.1.jar:/opt/flink/lib/flink-sql-connector-kafka-3.0.2-1.18.jar:/opt/flink/lib/flink-sql-parquet-1.19.0.jar:/opt/flink/lib/flink-table-api-java-uber-1.18.1.jar:/opt/flink/lib/flink-table-planner-loader-1.18.1.jar:/opt/flink/lib/flink-table-runtime-1.18.1.jar:/opt/flink/lib/log4j-1.2-api-2.17.1.jar:/opt/flink/lib/log4j-api-2.17.1.jar:/opt/flink/lib/log4j-core-2.17.1.jar:/opt/flink/lib/log4j-slf4j-impl-2.17.1.jar:/opt/flink/lib/flink-dist-1.18.1.jar:/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/kerb-client-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/opt/hadoop/share/hadoop/common/lib/jersey-servlet-1.19.4.jar:/opt/hadoop/share/hadoop/common/lib/commons-cli-1.5.0.jar:/opt/hadoop/share/hadoop/common/lib/httpclient-4.5.13.jar:/opt/hadoop/share/hadoop/common/lib/woodstox-core-5.4.0.jar:/opt/hadoop/share/hadoop/common/lib/jackson-databind-2.12.7.1.jar:/opt/hadoop/share/hadoop/common/lib/jetty-xml-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/metrics-core-3.2.4.jar:/opt/hadoop/share/hadoop/common/lib/commons-compress-1.24.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/common/lib/zookeeper-jute-3.8.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-stomp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-xml-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-sctp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerb-identity-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-handler-ssl-ocsp-4.1.100.Fina
l.jar:/opt/hadoop/share/hadoop/common/lib/jetty-util-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/jersey-json-1.20.jar:/opt/hadoop/share/hadoop/common/lib/jettison-1.5.4.jar:/opt/hadoop/share/hadoop/common/lib/kerb-crypto-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/kerb-server-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jetty-util-ajax-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-all-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/opt/hadoop/share/hadoop/common/lib/commons-codec-1.15.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerb-util-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/opt/hadoop/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop/share/hadoop/common/lib/dnsjava-3.4.0.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/common/lib/jline-3.9.0.jar:/opt/hadoop/share/hadoop/common/lib/kerb-core-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-api-1.7.36.jar:/opt/hadoop/share/hadoop/common/lib/jetty-security-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/common/lib/jetty-http-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/token-provider-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-handler-proxy-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-shaded-guava-1.2.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-socks-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common
/lib/netty-handler-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-http-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-buffer-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/avro-1.9.2.jar:/opt/hadoop/share/hadoop/common/lib/jetty-server-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-haproxy-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jetty-io-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/curator-client-5.2.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-configuration2-2.8.0.jar:/opt/hadoop/share/hadoop/common/lib/gson-2.9.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-smtp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/commons-net-3.9.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-text-1.10.0.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar:/opt/hadoop/share/hadoop/common/lib/re2j-1.1.jar:/opt/hadoop/share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/opt/hadoop/share/hadoop/common/lib/kerby-pkix-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/kerb-common-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jetty-servlet-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/opt/hadoop/share/hadoop/common/lib/kerb-admin-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jetty-webapp-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/kerby-xdr-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/hadoop/share/hadoop/common/lib/jersey-server-1.19.4.jar:/opt/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-udt-4.1.100.Final.jar:/opt/hadoop/shar
e/hadoop/common/lib/snappy-java-1.1.10.4.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-memcache-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-auth-3.4.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-io-2.14.0.jar:/opt/hadoop/share/hadoop/common/lib/httpcore-4.4.13.jar:/opt/hadoop/share/hadoop/common/lib/audience-annotations-0.12.0.jar:/opt/hadoop/share/hadoop/common/lib/jackson-annotations-2.12.7.jar:/opt/hadoop/share/hadoop/common/lib/reload4j-1.2.22.jar:/opt/hadoop/share/hadoop/common/lib/checker-qual-2.5.2.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/stax2-api-4.2.1.jar:/opt/hadoop/share/hadoop/common/lib/zookeeper-3.8.3.jar:/opt/hadoop/share/hadoop/common/lib/jul-to-slf4j-1.7.36.jar:/opt/hadoop/share/hadoop/common/lib/jsch-0.1.55.jar:/opt/hadoop/share/hadoop/common/lib/jakarta.activation-api-1.2.1.jar:/opt/hadoop/share/hadoop/common/lib/curator-recipes-5.2.0.jar:/opt/hadoop/share/hadoop/common/lib/failureaccess-1.0.jar:/opt/hadoop/share/hadoop/common/lib/kerby-util-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/guava-27.0-jre.jar:/opt/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-9.31.jar:/opt/hadoop/share/hadoop/common/lib/curator-framework-5.2.0.jar:/opt/hadoop/share/hadoop/common/lib/j2objc-annotations-1.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-math3-3.6.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-beanutils-1.9.4.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-mqtt-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jackson-core-2.12.7.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerb-simplekdc-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-annotations-3.
4.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-daemon-1.0.13.jar:/opt/hadoop/share/hadoop/common/lib/bcprov-jdk15on-1.70.jar:/opt/hadoop/share/hadoop/common/lib/commons-lang3-3.12.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-logging-1.2.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-http2-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jersey-core-1.19.4.jar:/opt/hadoop/share/hadoop/common/lib/kerby-config-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-rxtx-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jsr305-3.0.2.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-redis-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerby-asn1-2.0.3.jar:/opt/hadoop/share/hadoop/common/hadoop-nfs-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-kms-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-registry-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/kerb-client-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-servlet-1.19.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-cli-1.5.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/httpclient-4.5.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/woodstox-core-5.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-databind-2.12.7.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-xml-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/metrics-core-3.2.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-compress-1.24.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/zookeeper-jute-3.8.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-stomp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/ne
tty-codec-xml-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-sctp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-identity-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-util-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-json-1.20.jar:/opt/hadoop/share/hadoop/hdfs/lib/jettison-1.5.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-crypto-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-server-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-util-ajax-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-all-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-codec-1.15.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/HikariCP-4.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-util-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/dnsjava-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jline-3.9.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-core-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-security-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-http-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/token-provider-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-handler-proxy-4.1.100.Final.jar:/opt/hadoop/share
/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-shaded-guava-1.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-socks-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-handler-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-http-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-buffer-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/avro-1.9.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-server-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-haproxy-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-io-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/curator-client-5.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-configuration2-2.8.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/gson-2.9.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-smtp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-net-3.9.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-text-1.10.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/re2j-1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-pkix-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-common-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-servlet-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-admin-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-webapp-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-xdr-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/hadoop/share/
hadoop/hdfs/lib/jersey-server-1.19.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-udt-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/snappy-java-1.1.10.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-memcache-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-auth-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-io-2.14.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/httpcore-4.4.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/audience-annotations-0.12.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-annotations-2.12.7.jar:/opt/hadoop/share/hadoop/hdfs/lib/reload4j-1.2.22.jar:/opt/hadoop/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/stax2-api-4.2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/zookeeper-3.8.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsch-0.1.55.jar:/opt/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop/share/hadoop/hdfs/lib/jakarta.activation-api-1.2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/curator-recipes-5.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/failureaccess-1.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-util-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/nimbus-jose-jwt-9.31.jar:/opt/hadoop/share/hadoop/hdfs/lib/curator-framework-5.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-math3-3.6.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-beanutils-1.9.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-mqtt-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-core-2.12.7.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-simplekdc-2
.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-annotations-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-lang3-3.12.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-logging-1.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-http2-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-core-1.19.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-config-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-rxtx-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-redis-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-asn1-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoo
p-mapreduce-client-app-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-tests.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.4.0.jar:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-guice-1.19.4.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.12.7.jar:/opt/hadoop/share/hadoop/yarn/lib/jna-5.2.0.jar:/opt/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/opt/hadoop/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/hadoop/share/hadoop/yarn/lib/asm-commons-9.6.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-common-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-client-1.19.4.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-servlet-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/hadoop/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-client-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/javax.websocket-api-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-api-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/codemodel-2.6.jar:/opt/hadoop/share/hadoop/yarn/lib/bcutil-jdk15on-1.70.jar:/opt/hadoop/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-server-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/asm-tree-9.6.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-annotations-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.12.7.jar:/opt/hadoop/share/hadoop/yarn/lib/objenesis-2.6.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-servlet-4.2.3.jar:/opt/hadoop/share/hadoop/yarn/lib/jsonschema2pojo-core-1.0.2.jar:/opt/hadoop/share/hadoop/yarn/lib/javax-websocket-client-impl-9.4.53.v20
231009.jar:/opt/hadoop/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/hadoop/share/hadoop/yarn/lib/snakeyaml-2.0.jar:/opt/hadoop/share/hadoop/yarn/lib/javax.websocket-client-api-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-plus-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-client-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-4.2.3.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-jndi-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.2.jar:/opt/hadoop/share/hadoop/yarn/lib/bcpkix-jdk15on-1.70.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-base-2.12.7.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-client-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-services-core-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-services-api-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-router-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-api-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-globalpolicygenerator-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-l
auncher-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.4.0.jar:: | |
2024-04-22 23:07:15,854 INFO org.apache.flink.client.cli.CliFrontend [] - -------------------------------------------------------------------------------- | |
2024-04-22 23:07:15,856 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: blob.server.port, 6124 | |
2024-04-22 23:07:15,856 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.memory.process.size, 1728m | |
2024-04-22 23:07:15,856 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.bind-host, 0.0.0.0 | |
2024-04-22 23:07:15,856 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.execution.failover-strategy, region | |
2024-04-22 23:07:15,856 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.rpc.address, jobmanager | |
2024-04-22 23:07:15,856 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: sql-client.verbose, true | |
2024-04-22 23:07:15,856 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.memory.process.size, 1600m | |
2024-04-22 23:07:15,856 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.rpc.port, 6123 | |
2024-04-22 23:07:15,856 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: query.server.port, 6125 | |
2024-04-22 23:07:15,856 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: rest.bind-address, 0.0.0.0 | |
2024-04-22 23:07:15,856 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.bind-host, 0.0.0.0 | |
2024-04-22 23:07:15,856 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: parallelism.default, 1 | |
2024-04-22 23:07:15,856 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.numberOfTaskSlots, 1 | |
2024-04-22 23:07:15,857 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: rest.address, 0.0.0.0 | |
2024-04-22 23:07:15,857 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: fs.s3a.aws.credentials.provider, com.amazonaws.auth.profile.ProfileCredentialsProvider | |
2024-04-22 23:07:15,857 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: sql-client.execution.result-mode, TABLEAU | |
2024-04-22 23:07:15,857 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: env.java.opts.all, --add-exports=java.base/sun.net.util=ALL-UNNAMED --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.text=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED | |
2024-04-22 23:07:15,892 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: external-resource-gpu | |
2024-04-22 23:07:15,894 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-statsd | |
2024-04-22 23:07:15,894 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-graphite | |
2024-04-22 23:07:15,894 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-influx | |
2024-04-22 23:07:15,894 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-jmx | |
2024-04-22 23:07:15,894 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-datadog | |
2024-04-22 23:07:15,894 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-slf4j | |
2024-04-22 23:07:15,895 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-prometheus | |
2024-04-22 23:07:15,895 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: flink-s3-fs-hadoop | |
2024-04-22 23:07:15,920 WARN org.apache.flink.runtime.util.HadoopUtils [] - Could not find Hadoop configuration via any of the supported methods (Flink configuration, environment variables). | |
2024-04-22 23:07:15,931 INFO org.apache.flink.runtime.security.modules.HadoopModule [] - Hadoop user set to root (auth:SIMPLE) | |
2024-04-22 23:07:15,931 INFO org.apache.flink.runtime.security.modules.HadoopModule [] - Kerberos security is disabled. | |
2024-04-22 23:07:15,937 INFO org.apache.flink.runtime.security.modules.JaasModule [] - Jaas file will be created as /tmp/jaas-4983383388300278719.conf. | |
2024-04-22 23:07:15,940 INFO org.apache.flink.client.cli.CliFrontend [] - Running 'run' command. | |
2024-04-22 23:07:15,952 INFO org.apache.flink.client.cli.CliFrontend [] - Building program from JAR file | |
2024-04-22 23:07:15,955 INFO org.apache.flink.client.ClientUtils [] - Starting program (detached: false) | |
2024-04-22 23:07:16,001 INFO org.apache.flink.client.python.PythonEnvUtils [] - Starting Python process with environment variables: {FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.18.1/flink-1.18.1-bin-scala_2.12.tgz, PATH=/opt/flink/bin:/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/hadoop/bin, FLINK_PROPERTIES= | |
jobmanager.rpc.address: jobmanager | |
fs.s3a.aws.credentials.provider: com.amazonaws.auth.profile.ProfileCredentialsProvider | |
, JAVA_HOME=/usr/lib/jvm/java-11-openjdk-arm64, CHECK_GPG=true, FLINK_PLUGINS_DIR=/opt/flink/plugins, TERM=xterm, GPG_KEY=96AE0E32CBE6E0753CE6DF6CB078D1D3253A8D82, LANG=en_US.UTF-8, HADOOP_CLASSPATH=/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*, FLINK_CONF_DIR=/opt/flink/conf, FLINK_ASC_URL=https://downloads.apache.org/flink/flink-1.18.1/flink-1.18.1-bin-scala_2.12.tgz.asc, JAVA_VERSION=jdk-11.0.22+7, PWD=/opt/flink, LANGUAGE=en_US:en, PYTHONPATH=/tmp/pyflink/ab85868e-798f-43d0-9849-05a5a6588ae2/dbf0eedf-f817-47be-8751-0633d1d5c55a:/opt/flink/opt/python/py4j-0.10.9.7-src.zip:/opt/flink/opt/python/pyflink.zip:/opt/flink/opt/python/cloudpickle-2.2.0-src.zip, FLINK_OPT_DIR=/opt/flink/opt, FLINK_HOME=/opt/flink, MAX_LOG_FILE_NUMBER=10, FLINK_LIB_DIR=/opt/flink/lib, PYFLINK_GATEWAY_PORT=32903, GOSU_VERSION=1.11, HOSTNAME=2f6c8ea78d77, LC_ALL=en_US.UTF-8, 
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:, SHLVL=1, HOME=/root, FLINK_BIN_DIR=/opt/flink/bin}, command: python -u /tmp/hive-dwh-3/scripts/run10.py | |
2024-04-22 23:07:16,002 INFO org.apache.flink.client.python.PythonDriver [] - --------------------------- Python Process Started -------------------------- | |
2024-04-22 23:07:16,847 INFO org.apache.hadoop.hive.conf.HiveConf [] - Found configuration file null | |
2024-04-22 23:07:16,957 INFO org.apache.flink.table.catalog.hive.HiveCatalog [] - Setting hive conf dir as /tmp/hive-dwh-3/hive/conf-dwh-prod | |
2024-04-22 23:07:16,998 INFO org.apache.flink.table.catalog.hive.HiveCatalog [] - Created HiveCatalog 'dwh' | |
2024-04-22 23:07:17,007 INFO hive.metastore [] - Trying to connect to metastore with URI thrift://<path> | |
2024-04-22 23:07:17,019 INFO hive.metastore [] - Opened a connection to metastore, current connections: 1 | |
2024-04-22 23:07:17,283 INFO hive.metastore [] - Connected to metastore. | |
2024-04-22 23:07:17,340 INFO org.apache.flink.table.catalog.hive.HiveCatalog [] - Connected to Hive metastore | |
2024-04-22 23:07:17,428 INFO org.apache.flink.table.catalog.CatalogManager [] - Set the current default catalog as [dwh] and the current default database as [default]. | |
2024-04-22 23:07:17,497 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlCreateCatalog does not contain a setter for field catalogName | |
2024-04-22 23:07:17,497 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlCreateCatalog cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-22 23:07:17,498 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlCreateView does not contain a setter for field viewName | |
2024-04-22 23:07:17,498 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlCreateView cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-22 23:07:17,498 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAlterViewRename does not contain a getter for field newViewIdentifier | |
2024-04-22 23:07:17,498 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAlterViewRename does not contain a setter for field newViewIdentifier | |
2024-04-22 23:07:17,498 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlAlterViewRename cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-22 23:07:17,498 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAlterViewProperties does not contain a setter for field propertyList | |
2024-04-22 23:07:17,499 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlAlterViewProperties cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-22 23:07:17,499 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAlterViewAs does not contain a setter for field newQuery | |
2024-04-22 23:07:17,499 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlAlterViewAs cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-22 23:07:17,499 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAddPartitions does not contain a setter for field ifPartitionNotExists | |
2024-04-22 23:07:17,499 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlAddPartitions cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-22 23:07:17,499 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlDropPartitions does not contain a setter for field ifExists | |
2024-04-22 23:07:17,500 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlDropPartitions cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-22 23:07:17,500 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowPartitions does not contain a getter for field tableIdentifier | |
2024-04-22 23:07:17,500 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowPartitions does not contain a setter for field tableIdentifier | |
2024-04-22 23:07:17,500 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.dql.SqlShowPartitions cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-22 23:07:17,500 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dml.SqlTruncateTable does not contain a getter for field tableNameIdentifier | |
2024-04-22 23:07:17,500 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dml.SqlTruncateTable does not contain a setter for field tableNameIdentifier | |
2024-04-22 23:07:17,500 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.dml.SqlTruncateTable cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-22 23:07:17,500 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowFunctions does not contain a setter for field requireUser | |
2024-04-22 23:07:17,501 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.dql.SqlShowFunctions cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-22 23:07:17,501 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowProcedures does not contain a getter for field databaseName | |
2024-04-22 23:07:17,501 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowProcedures does not contain a setter for field databaseName | |
2024-04-22 23:07:17,501 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.dql.SqlShowProcedures cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-22 23:07:17,501 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlReplaceTableAs does not contain a setter for field tableName | |
2024-04-22 23:07:17,501 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlReplaceTableAs cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-22 23:07:17,907 INFO org.apache.flink.table.catalog.CatalogManager [] - Set the current default catalog as [default_catalog] and the current default database as [default_database]. | |
2024-04-22 23:07:18,002 INFO org.apache.flink.client.python.PythonDriver [] - Creating dwh catalog | |
Creating table iris_shadow | |
Creating table iris_out | |
Switching back to default | |
Running insert | |
Traceback (most recent call last): | |
File "/tmp/hive-dwh-3/scripts/run10.py", line 43, in <module> | |
t_env.execute_sql("INSERT INTO iris_out SELECT * FROM iris_shadow LIMIT 50") | |
File "/opt/flink/opt/python/pyflink.zip/pyflink/table/table_environment.py", line 837, in execute_sql | |
File "/opt/flink/opt/python/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1322, in __call__ | |
File "/opt/flink/opt/python/pyflink.zip/pyflink/util/exceptions.py", line 146, in deco | |
File "/opt/flink/opt/python/py4j-0.10.9.7-src.zip/py4j/protocol.py", line 326, in get_return_value | |
py4j.protocol.Py4JJavaError: An error occurred while calling o8.executeSql. | |
: org.apache.flink.table.api.ValidationException: Unable to create a source for reading table 'default_catalog.default_database.iris_shadow'. | |
Table options are: | |
'Average File Size'='2059' | |
'COLUMN_STATS_ACCURATE'='true' | |
'DO_NOT_UPDATE_STATS'='true' | |
'EXTERNAL'='TRUE' | |
'File Size SD'='0' | |
'Maximum File Size'='2059' | |
'Minimum File Size'='2059' | |
'SF_Analyzer_Batch'='1695851869942' | |
'SF_Analyzer_Update'='Updated' | |
'STATS_GENERATED'='true' | |
'bumblebee_column_extensions'='{"sepal_width":{"metadata":{"pandas_dtype":"float64"},"creation_time":"2023-09-27T21:57:49.299769","modification_time":"2023-09-27T21:57:49.299769"},"petal_width":{"metadata":{"pandas_dtype":"float64"},"creation_time":"2023-09-27T21:57:49.299769","modification_time":"2023-09-27T21:57:49.299769"},"sepal_length":{"metadata":{"pandas_dtype":"float64"},"creation_time":"2023-09-27T21:57:49.299769","modification_time":"2023-09-27T21:57:49.299769"},"petal_length":{"metadata":{"pandas_dtype":"float64"},"creation_time":"2023-09-27T21:57:49.299769","modification_time":"2023-09-27T21:57:49.299769"},"target":{"metadata":{"pandas_dtype":"int64"},"creation_time":"2023-09-27T21:57:49.299769","modification_time":"2023-09-27T21:57:49.299769"}}' | |
'bumblebee_object_extensions'='{"metadata":{"durable":{"pd":{"permanence":"permanent"}}},"creation_time":"2023-09-27T21:57:49.299769","modification_time":"2023-09-27T21:57:49.299769","stored_as":"parquet"}' | |
'bumblebee_protocol_version'='2' | |
'connector'='hive' | |
'numFiles'='1' | |
'sf_table_type'='regular' | |
'table_type'='HIVE' | |
'totalSize'='2059' | |
'write_timestamp'='2023-09-27T21:57:49.00942' | |
at org.apache.flink.table.factories.FactoryUtil.createDynamicTableSource(FactoryUtil.java:219) | |
at org.apache.flink.table.factories.FactoryUtil.createDynamicTableSource(FactoryUtil.java:244) | |
at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.createDynamicTableSource(CatalogSourceTable.java:175) | |
at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:115) | |
at org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:4002) | |
at org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2872) | |
at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2432) | |
at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2346) | |
at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2291) | |
at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:728) | |
at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:714) | |
at org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3848) | |
at org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:618) | |
at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:229) | |
at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:205) | |
at org.apache.flink.table.planner.operations.SqlNodeConvertContext.toRelRoot(SqlNodeConvertContext.java:69) | |
at org.apache.flink.table.planner.operations.converters.SqlQueryConverter.convertSqlNode(SqlQueryConverter.java:48) | |
at org.apache.flink.table.planner.operations.converters.SqlNodeConverters.convertSqlNode(SqlNodeConverters.java:73) | |
at org.apache.flink.table.planner.operations.SqlNodeToOperationConversion.convertValidatedSqlNode(SqlNodeToOperationConversion.java:272) | |
at org.apache.flink.table.planner.operations.SqlNodeToOperationConversion.convertValidatedSqlNodeOrFail(SqlNodeToOperationConversion.java:390) | |
at org.apache.flink.table.planner.operations.SqlNodeToOperationConversion.convertSqlInsert(SqlNodeToOperationConversion.java:745) | |
at org.apache.flink.table.planner.operations.SqlNodeToOperationConversion.convertValidatedSqlNode(SqlNodeToOperationConversion.java:353) | |
at org.apache.flink.table.planner.operations.SqlNodeToOperationConversion.convert(SqlNodeToOperationConversion.java:262) | |
at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106) | |
at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:728) | |
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) | |
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.base/java.lang.reflect.Method.invoke(Method.java:566) | |
at org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) | |
at org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374) | |
at org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282) | |
at org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) | |
at org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79) | |
at org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238) | |
at java.base/java.lang.Thread.run(Thread.java:829) | |
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a connector using option: 'connector'='hive' | |
at org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:798) | |
at org.apache.flink.table.factories.FactoryUtil.discoverTableFactory(FactoryUtil.java:772) | |
at org.apache.flink.table.factories.FactoryUtil.createDynamicTableSource(FactoryUtil.java:215) | |
... 35 more | |
Caused by: org.apache.flink.table.api.ValidationException: Could not find any factory for identifier 'hive' that implements 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath. | |
Available factory identifiers are: | |
blackhole | |
datagen | |
filesystem | |
kafka | |
python-input-format | |
upsert-kafka | |
at org.apache.flink.table.factories.FactoryUtil.discoverFactory(FactoryUtil.java:608) | |
at org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:794) | |
... 37 more | |
2024-04-22 23:07:18,002 INFO org.apache.flink.client.python.PythonDriver [] - --------------------------- Python Process Exited --------------------------- | |
2024-04-22 23:07:18,003 ERROR org.apache.flink.client.python.PythonDriver [] - Run python process failed | |
java.lang.RuntimeException: Python process exits with code: 1 | |
at org.apache.flink.client.python.PythonDriver.main(PythonDriver.java:130) ~[flink-python-1.18.1.jar:1.18.1] | |
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?] | |
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?] | |
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?] | |
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?] | |
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:105) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:851) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:245) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1095) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1189) ~[flink-dist-1.18.1.jar:1.18.1] | |
at java.security.AccessController.doPrivileged(Native Method) ~[?:?] | |
at javax.security.auth.Subject.doAs(Subject.java:423) [?:?] | |
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953) [hadoop-common-3.4.0.jar:?] | |
at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) [flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1189) [flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1157) [flink-dist-1.18.1.jar:1.18.1] | |
2024-04-22 23:07:18,007 ERROR org.apache.flink.client.cli.CliFrontend [] - Fatal error while running command line interface. | |
org.apache.flink.client.program.ProgramAbortException: java.lang.RuntimeException: Python process exits with code: 1 | |
at org.apache.flink.client.python.PythonDriver.main(PythonDriver.java:140) ~[?:?] | |
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?] | |
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?] | |
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?] | |
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?] | |
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:105) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:851) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:245) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1095) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1189) ~[flink-dist-1.18.1.jar:1.18.1] | |
at java.security.AccessController.doPrivileged(Native Method) ~[?:?] | |
at javax.security.auth.Subject.doAs(Subject.java:423) ~[?:?] | |
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953) ~[hadoop-common-3.4.0.jar:?] | |
at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1189) [flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1157) [flink-dist-1.18.1.jar:1.18.1] | |
Caused by: java.lang.RuntimeException: Python process exits with code: 1 | |
at org.apache.flink.client.python.PythonDriver.main(PythonDriver.java:130) ~[?:?] | |
... 17 more |
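The traceback above is the `'connector'='hive'` discovery failure described at the top of this gist: the driver output shows the script creates the `dwh` HiveCatalog, then switches back to `default_catalog` before running the insert, so `iris_shadow` is resolved through generic connector discovery (whose available identifiers, listed above, do not include `hive`). Per the note above, the fix is to keep the HiveCatalog as the current catalog when reading the Hive tables. A minimal sketch in Flink SQL, assuming the catalog name, `hive-conf-dir` path, and table names shown in this log:

```sql
-- Sketch of the fix, using names taken from the log above.
-- Tables with 'connector'='hive' are only discoverable while the
-- HiveCatalog is the *current* catalog, so do not switch back to
-- default_catalog before querying them.
CREATE CATALOG dwh WITH (
  'type' = 'hive',
  'hive-conf-dir' = '/tmp/hive-dwh-3/hive/conf-dwh-prod'
);
USE CATALOG dwh;
INSERT INTO iris_out SELECT * FROM iris_shadow LIMIT 50;
```

The same statements can be issued from PyFlink via `t_env.execute_sql(...)`, as `run10.py` does; the key change is dropping the "Switching back to default" step before the insert.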
2024-04-23 16:27:31,310 INFO org.apache.flink.client.cli.CliFrontend [] - -------------------------------------------------------------------------------- | |
2024-04-23 16:27:31,311 INFO org.apache.flink.client.cli.CliFrontend [] - Starting Command Line Client (Version: 1.18.1, Scala: 2.12, Rev:a8c8b1c, Date:2023-12-19T22:17:36+01:00) | |
2024-04-23 16:27:31,311 INFO org.apache.flink.client.cli.CliFrontend [] - OS current user: root | |
2024-04-23 16:27:31,414 INFO org.apache.flink.client.cli.CliFrontend [] - Current Hadoop/Kerberos user: root | |
2024-04-23 16:27:31,414 INFO org.apache.flink.client.cli.CliFrontend [] - JVM: OpenJDK 64-Bit Server VM - Ubuntu - 11/11.0.22+7-post-Ubuntu-0ubuntu222.04.1 | |
2024-04-23 16:27:31,414 INFO org.apache.flink.client.cli.CliFrontend [] - Arch: aarch64 | |
2024-04-23 16:27:31,414 INFO org.apache.flink.client.cli.CliFrontend [] - Maximum heap size: 1964 MiBytes | |
2024-04-23 16:27:31,414 INFO org.apache.flink.client.cli.CliFrontend [] - JAVA_HOME: /usr/lib/jvm/java-11-openjdk-arm64 | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - Hadoop version: 3.4.0 | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - JVM Options: | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - -XX:+IgnoreUnrecognizedVMOptions | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=java.base/sun.net.util=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.lang=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.net=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.io=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.nio=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/sun.nio.ch=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.lang.reflect=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.text=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.time=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.util=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.util.concurrent=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - -Dlog.file=/opt/flink/log/flink--client-3080ffd34e9b.log | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - -Dlog4j.configuration=file:/opt/flink/conf/log4j-cli.properties | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - -Dlog4j.configurationFile=file:/opt/flink/conf/log4j-cli.properties | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - -Dlogback.configurationFile=file:/opt/flink/conf/logback.xml | |
2024-04-23 16:27:31,416 INFO org.apache.flink.client.cli.CliFrontend [] - Program Arguments: | |
2024-04-23 16:27:31,417 INFO org.apache.flink.client.cli.CliFrontend [] - run | |
2024-04-23 16:27:31,417 INFO org.apache.flink.client.cli.CliFrontend [] - --python | |
2024-04-23 16:27:31,417 INFO org.apache.flink.client.cli.CliFrontend [] - /tmp/hive-dwh-3/scripts/run11.py | |
2024-04-23 16:27:31,417 INFO org.apache.flink.client.cli.CliFrontend [] - Classpath: /opt/flink/lib/flink-cep-1.18.1.jar:/opt/flink/lib/flink-connector-files-1.18.1.jar:/opt/flink/lib/flink-csv-1.18.1.jar:/opt/flink/lib/flink-json-1.18.1.jar:/opt/flink/lib/flink-scala_2.12-1.18.1.jar:/opt/flink/lib/flink-sql-connector-hive-2.3.9_2.12-1.18.1.jar:/opt/flink/lib/flink-sql-connector-kafka-3.0.2-1.18.jar:/opt/flink/lib/flink-sql-parquet-1.19.0.jar:/opt/flink/lib/flink-table-api-java-uber-1.18.1.jar:/opt/flink/lib/flink-table-planner-loader-1.18.1.jar:/opt/flink/lib/flink-table-runtime-1.18.1.jar:/opt/flink/lib/log4j-1.2-api-2.17.1.jar:/opt/flink/lib/log4j-api-2.17.1.jar:/opt/flink/lib/log4j-core-2.17.1.jar:/opt/flink/lib/log4j-slf4j-impl-2.17.1.jar:/opt/flink/lib/flink-dist-1.18.1.jar:/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/kerb-client-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/opt/hadoop/share/hadoop/common/lib/jersey-servlet-1.19.4.jar:/opt/hadoop/share/hadoop/common/lib/commons-cli-1.5.0.jar:/opt/hadoop/share/hadoop/common/lib/httpclient-4.5.13.jar:/opt/hadoop/share/hadoop/common/lib/woodstox-core-5.4.0.jar:/opt/hadoop/share/hadoop/common/lib/jackson-databind-2.12.7.1.jar:/opt/hadoop/share/hadoop/common/lib/jetty-xml-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/metrics-core-3.2.4.jar:/opt/hadoop/share/hadoop/common/lib/commons-compress-1.24.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/common/lib/zookeeper-jute-3.8.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-stomp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-xml-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-sctp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerb-identity-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-handler-ssl-ocsp-4.1.100.Fina
l.jar:/opt/hadoop/share/hadoop/common/lib/jetty-util-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/jersey-json-1.20.jar:/opt/hadoop/share/hadoop/common/lib/jettison-1.5.4.jar:/opt/hadoop/share/hadoop/common/lib/kerb-crypto-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/kerb-server-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jetty-util-ajax-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-all-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/opt/hadoop/share/hadoop/common/lib/commons-codec-1.15.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerb-util-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/opt/hadoop/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop/share/hadoop/common/lib/dnsjava-3.4.0.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/common/lib/jline-3.9.0.jar:/opt/hadoop/share/hadoop/common/lib/kerb-core-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-api-1.7.36.jar:/opt/hadoop/share/hadoop/common/lib/jetty-security-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/common/lib/jetty-http-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/token-provider-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-handler-proxy-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-shaded-guava-1.2.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-socks-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common
/lib/netty-handler-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-http-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-buffer-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/avro-1.9.2.jar:/opt/hadoop/share/hadoop/common/lib/jetty-server-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-haproxy-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jetty-io-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/curator-client-5.2.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-configuration2-2.8.0.jar:/opt/hadoop/share/hadoop/common/lib/gson-2.9.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-smtp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/commons-net-3.9.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-text-1.10.0.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar:/opt/hadoop/share/hadoop/common/lib/re2j-1.1.jar:/opt/hadoop/share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/opt/hadoop/share/hadoop/common/lib/kerby-pkix-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/kerb-common-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jetty-servlet-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/opt/hadoop/share/hadoop/common/lib/kerb-admin-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jetty-webapp-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/kerby-xdr-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/hadoop/share/hadoop/common/lib/jersey-server-1.19.4.jar:/opt/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-udt-4.1.100.Final.jar:/opt/hadoop/shar
e/hadoop/common/lib/snappy-java-1.1.10.4.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-memcache-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-auth-3.4.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-io-2.14.0.jar:/opt/hadoop/share/hadoop/common/lib/httpcore-4.4.13.jar:/opt/hadoop/share/hadoop/common/lib/audience-annotations-0.12.0.jar:/opt/hadoop/share/hadoop/common/lib/jackson-annotations-2.12.7.jar:/opt/hadoop/share/hadoop/common/lib/reload4j-1.2.22.jar:/opt/hadoop/share/hadoop/common/lib/checker-qual-2.5.2.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/stax2-api-4.2.1.jar:/opt/hadoop/share/hadoop/common/lib/zookeeper-3.8.3.jar:/opt/hadoop/share/hadoop/common/lib/jul-to-slf4j-1.7.36.jar:/opt/hadoop/share/hadoop/common/lib/jsch-0.1.55.jar:/opt/hadoop/share/hadoop/common/lib/jakarta.activation-api-1.2.1.jar:/opt/hadoop/share/hadoop/common/lib/curator-recipes-5.2.0.jar:/opt/hadoop/share/hadoop/common/lib/failureaccess-1.0.jar:/opt/hadoop/share/hadoop/common/lib/kerby-util-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/guava-27.0-jre.jar:/opt/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-9.31.jar:/opt/hadoop/share/hadoop/common/lib/curator-framework-5.2.0.jar:/opt/hadoop/share/hadoop/common/lib/j2objc-annotations-1.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-math3-3.6.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-beanutils-1.9.4.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-mqtt-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jackson-core-2.12.7.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerb-simplekdc-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-annotations-3.
4.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-daemon-1.0.13.jar:/opt/hadoop/share/hadoop/common/lib/bcprov-jdk15on-1.70.jar:/opt/hadoop/share/hadoop/common/lib/commons-lang3-3.12.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-logging-1.2.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-http2-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jersey-core-1.19.4.jar:/opt/hadoop/share/hadoop/common/lib/kerby-config-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-rxtx-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jsr305-3.0.2.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-redis-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerby-asn1-2.0.3.jar:/opt/hadoop/share/hadoop/common/hadoop-nfs-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-kms-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-registry-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/kerb-client-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-servlet-1.19.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-cli-1.5.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/httpclient-4.5.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/woodstox-core-5.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-databind-2.12.7.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-xml-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/metrics-core-3.2.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-compress-1.24.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/zookeeper-jute-3.8.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-stomp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/ne
tty-codec-xml-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-sctp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-identity-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-util-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-json-1.20.jar:/opt/hadoop/share/hadoop/hdfs/lib/jettison-1.5.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-crypto-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-server-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-util-ajax-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-all-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-codec-1.15.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/HikariCP-4.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-util-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/dnsjava-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jline-3.9.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-core-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-security-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-http-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/token-provider-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-handler-proxy-4.1.100.Final.jar:/opt/hadoop/share
/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-shaded-guava-1.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-socks-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-handler-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-http-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-buffer-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/avro-1.9.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-server-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-haproxy-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-io-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/curator-client-5.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-configuration2-2.8.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/gson-2.9.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-smtp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-net-3.9.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-text-1.10.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/re2j-1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-pkix-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-common-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-servlet-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-admin-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-webapp-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-xdr-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/hadoop/share/
hadoop/hdfs/lib/jersey-server-1.19.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-udt-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/snappy-java-1.1.10.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-memcache-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-auth-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-io-2.14.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/httpcore-4.4.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/audience-annotations-0.12.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-annotations-2.12.7.jar:/opt/hadoop/share/hadoop/hdfs/lib/reload4j-1.2.22.jar:/opt/hadoop/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/stax2-api-4.2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/zookeeper-3.8.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsch-0.1.55.jar:/opt/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop/share/hadoop/hdfs/lib/jakarta.activation-api-1.2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/curator-recipes-5.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/failureaccess-1.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-util-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/nimbus-jose-jwt-9.31.jar:/opt/hadoop/share/hadoop/hdfs/lib/curator-framework-5.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-math3-3.6.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-beanutils-1.9.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-mqtt-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-core-2.12.7.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-simplekdc-2
.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-annotations-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-lang3-3.12.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-logging-1.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-http2-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-core-1.19.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-config-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-rxtx-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-redis-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-asn1-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoo
p-mapreduce-client-app-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-tests.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.4.0.jar:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-guice-1.19.4.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.12.7.jar:/opt/hadoop/share/hadoop/yarn/lib/jna-5.2.0.jar:/opt/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/opt/hadoop/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/hadoop/share/hadoop/yarn/lib/asm-commons-9.6.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-common-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-client-1.19.4.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-servlet-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/hadoop/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-client-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/javax.websocket-api-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-api-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/codemodel-2.6.jar:/opt/hadoop/share/hadoop/yarn/lib/bcutil-jdk15on-1.70.jar:/opt/hadoop/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-server-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/asm-tree-9.6.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-annotations-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.12.7.jar:/opt/hadoop/share/hadoop/yarn/lib/objenesis-2.6.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-servlet-4.2.3.jar:/opt/hadoop/share/hadoop/yarn/lib/jsonschema2pojo-core-1.0.2.jar:/opt/hadoop/share/hadoop/yarn/lib/javax-websocket-client-impl-9.4.53.v20
231009.jar:/opt/hadoop/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/hadoop/share/hadoop/yarn/lib/snakeyaml-2.0.jar:/opt/hadoop/share/hadoop/yarn/lib/javax.websocket-client-api-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-plus-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-client-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-4.2.3.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-jndi-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.2.jar:/opt/hadoop/share/hadoop/yarn/lib/bcpkix-jdk15on-1.70.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-base-2.12.7.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-client-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-services-core-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-services-api-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-router-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-api-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-globalpolicygenerator-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-l
auncher-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.4.0.jar:: | |
2024-04-23 16:27:31,417 INFO org.apache.flink.client.cli.CliFrontend [] - -------------------------------------------------------------------------------- | |
2024-04-23 16:27:31,419 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: blob.server.port, 6124 | |
2024-04-23 16:27:31,419 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.memory.process.size, 1728m | |
2024-04-23 16:27:31,419 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.bind-host, 0.0.0.0 | |
2024-04-23 16:27:31,419 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.execution.failover-strategy, region | |
2024-04-23 16:27:31,419 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.rpc.address, jobmanager | |
2024-04-23 16:27:31,419 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: sql-client.verbose, true | |
2024-04-23 16:27:31,419 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.memory.process.size, 1600m | |
2024-04-23 16:27:31,419 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.rpc.port, 6123 | |
2024-04-23 16:27:31,419 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: query.server.port, 6125 | |
2024-04-23 16:27:31,419 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: rest.bind-address, 0.0.0.0 | |
2024-04-23 16:27:31,419 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.bind-host, 0.0.0.0 | |
2024-04-23 16:27:31,419 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: parallelism.default, 1 | |
2024-04-23 16:27:31,419 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.numberOfTaskSlots, 1 | |
2024-04-23 16:27:31,419 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: rest.address, 0.0.0.0 | |
2024-04-23 16:27:31,420 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: fs.s3a.aws.credentials.provider, com.amazonaws.auth.profile.ProfileCredentialsProvider | |
2024-04-23 16:27:31,420 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: sql-client.execution.result-mode, TABLEAU | |
2024-04-23 16:27:31,420 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: env.java.opts.all, --add-exports=java.base/sun.net.util=ALL-UNNAMED --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.text=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED | |
2024-04-23 16:27:31,455 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: external-resource-gpu | |
2024-04-23 16:27:31,457 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-statsd | |
2024-04-23 16:27:31,457 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-graphite | |
2024-04-23 16:27:31,457 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-influx | |
2024-04-23 16:27:31,457 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-jmx | |
2024-04-23 16:27:31,457 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-datadog | |
2024-04-23 16:27:31,457 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-slf4j | |
2024-04-23 16:27:31,457 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-prometheus | |
2024-04-23 16:27:31,458 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: flink-s3-fs-hadoop | |
2024-04-23 16:27:31,483 WARN org.apache.flink.runtime.util.HadoopUtils [] - Could not find Hadoop configuration via any of the supported methods (Flink configuration, environment variables). | |
2024-04-23 16:27:31,494 INFO org.apache.flink.runtime.security.modules.HadoopModule [] - Hadoop user set to root (auth:SIMPLE) | |
2024-04-23 16:27:31,494 INFO org.apache.flink.runtime.security.modules.HadoopModule [] - Kerberos security is disabled. | |
2024-04-23 16:27:31,500 INFO org.apache.flink.runtime.security.modules.JaasModule [] - Jaas file will be created as /tmp/jaas-4791109919404035802.conf. | |
2024-04-23 16:27:31,502 INFO org.apache.flink.client.cli.CliFrontend [] - Running 'run' command. | |
2024-04-23 16:27:31,513 INFO org.apache.flink.client.cli.CliFrontend [] - Building program from JAR file | |
2024-04-23 16:27:31,516 INFO org.apache.flink.client.ClientUtils [] - Starting program (detached: false) | |
2024-04-23 16:27:31,562 INFO org.apache.flink.client.python.PythonEnvUtils [] - Starting Python process with environment variables: {FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.18.1/flink-1.18.1-bin-scala_2.12.tgz, PATH=/opt/flink/bin:/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/hadoop/bin, FLINK_PROPERTIES= | |
jobmanager.rpc.address: jobmanager | |
fs.s3a.aws.credentials.provider: com.amazonaws.auth.profile.ProfileCredentialsProvider | |
, JAVA_HOME=/usr/lib/jvm/java-11-openjdk-arm64, CHECK_GPG=true, FLINK_PLUGINS_DIR=/opt/flink/plugins, TERM=xterm, GPG_KEY=96AE0E32CBE6E0753CE6DF6CB078D1D3253A8D82, LANG=en_US.UTF-8, HADOOP_CLASSPATH=/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*, FLINK_CONF_DIR=/opt/flink/conf, FLINK_ASC_URL=https://downloads.apache.org/flink/flink-1.18.1/flink-1.18.1-bin-scala_2.12.tgz.asc, JAVA_VERSION=jdk-11.0.22+7, PWD=/opt/flink, LANGUAGE=en_US:en, PYTHONPATH=/tmp/pyflink/9a5691df-4e5c-45d1-9d81-808ffacacabd/4045973a-f051-4d5e-8890-7e96cc337122:/opt/flink/opt/python/py4j-0.10.9.7-src.zip:/opt/flink/opt/python/pyflink.zip:/opt/flink/opt/python/cloudpickle-2.2.0-src.zip, FLINK_OPT_DIR=/opt/flink/opt, FLINK_HOME=/opt/flink, MAX_LOG_FILE_NUMBER=10, FLINK_LIB_DIR=/opt/flink/lib, PYFLINK_GATEWAY_PORT=35959, GOSU_VERSION=1.11, HOSTNAME=3080ffd34e9b, LC_ALL=en_US.UTF-8, 
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:, SHLVL=1, HOME=/root, FLINK_BIN_DIR=/opt/flink/bin}, command: python -u /tmp/hive-dwh-3/scripts/run11.py | |
2024-04-23 16:27:31,563 INFO org.apache.flink.client.python.PythonDriver [] - --------------------------- Python Process Started -------------------------- | |
2024-04-23 16:27:32,410 INFO org.apache.hadoop.hive.conf.HiveConf [] - Found configuration file null | |
2024-04-23 16:27:32,523 INFO org.apache.flink.table.catalog.hive.HiveCatalog [] - Setting hive conf dir as /tmp/hive-dwh-3/hive/conf-dwh-prod | |
2024-04-23 16:27:32,564 INFO org.apache.flink.table.catalog.hive.HiveCatalog [] - Created HiveCatalog 'dwh' | |
2024-04-23 16:27:32,572 INFO hive.metastore [] - Trying to connect to metastore with URI thrift://<path> | |
2024-04-23 16:27:32,582 INFO hive.metastore [] - Opened a connection to metastore, current connections: 1 | |
2024-04-23 16:27:32,817 INFO hive.metastore [] - Connected to metastore. | |
2024-04-23 16:27:32,868 INFO org.apache.flink.table.catalog.hive.HiveCatalog [] - Connected to Hive metastore | |
2024-04-23 16:27:32,957 INFO org.apache.flink.table.catalog.CatalogManager [] - Set the current default catalog as [dwh] and the current default database as [default]. | |
2024-04-23 16:27:33,028 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlCreateCatalog does not contain a setter for field catalogName | |
2024-04-23 16:27:33,028 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlCreateCatalog cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-23 16:27:33,029 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlCreateView does not contain a setter for field viewName | |
2024-04-23 16:27:33,029 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlCreateView cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-23 16:27:33,029 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAlterViewRename does not contain a getter for field newViewIdentifier | |
2024-04-23 16:27:33,029 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAlterViewRename does not contain a setter for field newViewIdentifier | |
2024-04-23 16:27:33,029 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlAlterViewRename cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-23 16:27:33,029 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAlterViewProperties does not contain a setter for field propertyList | |
2024-04-23 16:27:33,029 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlAlterViewProperties cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-23 16:27:33,030 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAlterViewAs does not contain a setter for field newQuery | |
2024-04-23 16:27:33,030 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlAlterViewAs cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-23 16:27:33,030 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAddPartitions does not contain a setter for field ifPartitionNotExists | |
2024-04-23 16:27:33,030 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlAddPartitions cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-23 16:27:33,030 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlDropPartitions does not contain a setter for field ifExists | |
2024-04-23 16:27:33,030 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlDropPartitions cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-23 16:27:33,031 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowPartitions does not contain a getter for field tableIdentifier | |
2024-04-23 16:27:33,031 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowPartitions does not contain a setter for field tableIdentifier | |
2024-04-23 16:27:33,031 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.dql.SqlShowPartitions cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-23 16:27:33,031 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dml.SqlTruncateTable does not contain a getter for field tableNameIdentifier | |
2024-04-23 16:27:33,031 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dml.SqlTruncateTable does not contain a setter for field tableNameIdentifier | |
2024-04-23 16:27:33,031 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.dml.SqlTruncateTable cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-23 16:27:33,032 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowFunctions does not contain a setter for field requireUser | |
2024-04-23 16:27:33,032 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.dql.SqlShowFunctions cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-23 16:27:33,032 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowProcedures does not contain a getter for field databaseName | |
2024-04-23 16:27:33,032 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowProcedures does not contain a setter for field databaseName | |
2024-04-23 16:27:33,032 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.dql.SqlShowProcedures cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-23 16:27:33,032 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlReplaceTableAs does not contain a setter for field tableName | |
2024-04-23 16:27:33,032 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlReplaceTableAs cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution. | |
2024-04-23 16:27:35,531 INFO hive.metastore [] - Trying to connect to metastore with URI thrift://<path> | |
2024-04-23 16:27:35,532 INFO hive.metastore [] - Opened a connection to metastore, current connections: 2 | |
2024-04-23 16:27:35,687 INFO hive.metastore [] - Connected to metastore. | |
2024-04-23 16:27:35,803 INFO hive.metastore [] - Closed a connection to metastore, current connections: 1 | |
2024-04-23 16:27:35,838 INFO org.apache.flink.client.python.PythonDriver [] - Creating dwh catalog | |
Creating table iris_out | |
Running insert | |
Traceback (most recent call last): | |
File "/tmp/hive-dwh-3/scripts/run11.py", line 43, in <module> | |
t_env.execute_sql("INSERT INTO default_catalog.default_database.iris_out SELECT * FROM rdg_test.iris_test LIMIT 50") | |
File "/opt/flink/opt/python/pyflink.zip/pyflink/table/table_environment.py", line 837, in execute_sql | |
File "/opt/flink/opt/python/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1322, in __call__ | |
File "/opt/flink/opt/python/pyflink.zip/pyflink/util/exceptions.py", line 146, in deco | |
File "/opt/flink/opt/python/py4j-0.10.9.7-src.zip/py4j/protocol.py", line 326, in get_return_value | |
py4j.protocol.Py4JJavaError: An error occurred while calling o8.executeSql. | |
: org.apache.flink.connectors.hive.FlinkHiveException: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "s3" | |
at org.apache.flink.connectors.hive.HiveParallelismInference.infer(HiveParallelismInference.java:98) | |
at org.apache.flink.connectors.hive.HiveTableSource.getDataStream(HiveTableSource.java:175) | |
at org.apache.flink.connectors.hive.HiveTableSource$1.produceDataStream(HiveTableSource.java:141) | |
at org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecTableSourceScan.translateToPlanInternal(CommonExecTableSourceScan.java:140) | |
at org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:167) | |
at org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:258) | |
at org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecExchange.translateToPlanInternal(StreamExecExchange.java:99) | |
at org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:167) | |
at org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:258) | |
at org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecRank.translateToPlanInternal(StreamExecRank.java:205) | |
at org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecLimit.translateToPlanInternal(StreamExecLimit.java:127) | |
at org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:167) | |
at org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:258) | |
at org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:177) | |
at org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:167) | |
at org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:85) | |
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) | |
at scala.collection.Iterator.foreach(Iterator.scala:937) | |
at scala.collection.Iterator.foreach$(Iterator.scala:937) | |
at scala.collection.AbstractIterator.foreach(Iterator.scala:1425) | |
at scala.collection.IterableLike.foreach(IterableLike.scala:70) | |
at scala.collection.IterableLike.foreach$(IterableLike.scala:69) | |
at scala.collection.AbstractIterable.foreach(Iterable.scala:54) | |
at scala.collection.TraversableLike.map(TraversableLike.scala:233) | |
at scala.collection.TraversableLike.map$(TraversableLike.scala:226) | |
at scala.collection.AbstractTraversable.map(Traversable.scala:104) | |
at org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:84) | |
at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:184) | |
at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1277) | |
at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:862) | |
at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1097) | |
at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:735) | |
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) | |
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.base/java.lang.reflect.Method.invoke(Method.java:566) | |
at org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) | |
at org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374) | |
at org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282) | |
at org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) | |
at org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79) | |
at org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238) | |
at java.base/java.lang.Thread.run(Thread.java:829) | |
Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "s3" | |
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3575) | |
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3598) | |
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:171) | |
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3702) | |
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3653) | |
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:555) | |
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:366) | |
at org.apache.flink.connectors.hive.HiveSourceFileEnumerator.getNumFiles(HiveSourceFileEnumerator.java:195) | |
at org.apache.flink.connectors.hive.HiveTableSource.lambda$getDataStream$0(HiveTableSource.java:177) | |
at org.apache.flink.connectors.hive.HiveParallelismInference.logRunningTime(HiveParallelismInference.java:107) | |
at org.apache.flink.connectors.hive.HiveParallelismInference.infer(HiveParallelismInference.java:89) | |
... 42 more | |
2024-04-23 16:27:35,838 INFO org.apache.flink.client.python.PythonDriver [] - --------------------------- Python Process Exited --------------------------- | |
2024-04-23 16:27:35,839 ERROR org.apache.flink.client.python.PythonDriver [] - Run python process failed | |
java.lang.RuntimeException: Python process exits with code: 1 | |
at org.apache.flink.client.python.PythonDriver.main(PythonDriver.java:130) ~[flink-python-1.18.1.jar:1.18.1] | |
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?] | |
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?] | |
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?] | |
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?] | |
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:105) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:851) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:245) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1095) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1189) ~[flink-dist-1.18.1.jar:1.18.1] | |
at java.security.AccessController.doPrivileged(Native Method) ~[?:?] | |
at javax.security.auth.Subject.doAs(Subject.java:423) [?:?] | |
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953) [hadoop-common-3.4.0.jar:?] | |
at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) [flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1189) [flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1157) [flink-dist-1.18.1.jar:1.18.1] | |
2024-04-23 16:27:35,844 ERROR org.apache.flink.client.cli.CliFrontend [] - Fatal error while running command line interface. | |
org.apache.flink.client.program.ProgramAbortException: java.lang.RuntimeException: Python process exits with code: 1 | |
at org.apache.flink.client.python.PythonDriver.main(PythonDriver.java:140) ~[?:?] | |
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?] | |
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?] | |
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?] | |
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?] | |
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:105) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:851) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:245) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1095) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1189) ~[flink-dist-1.18.1.jar:1.18.1] | |
at java.security.AccessController.doPrivileged(Native Method) ~[?:?] | |
at javax.security.auth.Subject.doAs(Subject.java:423) ~[?:?] | |
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953) ~[hadoop-common-3.4.0.jar:?] | |
at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) ~[flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1189) [flink-dist-1.18.1.jar:1.18.1] | |
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1157) [flink-dist-1.18.1.jar:1.18.1] | |
Caused by: java.lang.RuntimeException: Python process exits with code: 1 | |
at org.apache.flink.client.python.PythonDriver.main(PythonDriver.java:130) ~[?:?] | |
... 17 more |
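Context on the `UnsupportedFileSystemException` above: the stack trace shows the Hive connector enumerating source files through Hadoop's own `org.apache.hadoop.fs.FileSystem` (`HiveSourceFileEnumerator.getNumFiles` → `Path.getFileSystem`). That code path does not go through Flink's filesystem abstraction, so the `flink-s3-fs-hadoop` plugin — which registers `s3://` only with Flink's `org.apache.flink.core.fs.FileSystem` — is never consulted. A possible workaround (a sketch, not verified in this setup) is to map the bare `s3` scheme in the Hadoop configuration that the HiveCatalog loads, and to make sure `hadoop-aws` plus the AWS SDK are on the Hadoop classpath (they live under `share/hadoop/tools/lib`, which is not part of the `HADOOP_CLASSPATH` shown in the client log):

```xml
<!-- core-site.xml (the Hadoop conf picked up by the HiveCatalog).
     Assumes hadoop-aws and its AWS SDK dependency are on the Hadoop
     classpath; the property names below are standard Hadoop S3A keys. -->
<configuration>
  <!-- Resolve the bare "s3" scheme to the S3A implementation. -->
  <property>
    <name>fs.s3.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
  <!-- Match the credentials provider already used in FLINK_PROPERTIES. -->
  <property>
    <name>fs.s3a.aws.credentials.provider</name>
    <value>com.amazonaws.auth.profile.ProfileCredentialsProvider</value>
  </property>
</configuration>
```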
2024-04-22 23:06:12,769 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -------------------------------------------------------------------------------- | |
2024-04-22 23:06:12,771 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Preconfiguration: | |
2024-04-22 23:06:12,772 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - | |
RESOURCE_PARAMS extraction logs: | |
jvm_params: -Xmx1073741824 -Xms1073741824 -XX:MaxMetaspaceSize=268435456 | |
dynamic_configs: -D jobmanager.memory.off-heap.size=134217728b -D jobmanager.memory.jvm-overhead.min=201326592b -D jobmanager.memory.jvm-metaspace.size=268435456b -D jobmanager.memory.heap.size=1073741824b -D jobmanager.memory.jvm-overhead.max=201326592b | |
logs: WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance. | |
INFO [] - Loading configuration property: blob.server.port, 6124 | |
INFO [] - Loading configuration property: taskmanager.memory.process.size, 1728m | |
INFO [] - Loading configuration property: taskmanager.bind-host, 0.0.0.0 | |
INFO [] - Loading configuration property: jobmanager.execution.failover-strategy, region | |
INFO [] - Loading configuration property: jobmanager.rpc.address, jobmanager | |
INFO [] - Loading configuration property: sql-client.verbose, true | |
INFO [] - Loading configuration property: jobmanager.memory.process.size, 1600m | |
INFO [] - Loading configuration property: jobmanager.rpc.port, 6123 | |
INFO [] - Loading configuration property: query.server.port, 6125 | |
INFO [] - Loading configuration property: rest.bind-address, 0.0.0.0 | |
INFO [] - Loading configuration property: jobmanager.bind-host, 0.0.0.0 | |
INFO [] - Loading configuration property: parallelism.default, 1 | |
INFO [] - Loading configuration property: taskmanager.numberOfTaskSlots, 1 | |
INFO [] - Loading configuration property: rest.address, 0.0.0.0 | |
INFO [] - Loading configuration property: fs.s3a.aws.credentials.provider, com.amazonaws.auth.profile.ProfileCredentialsProvider | |
INFO [] - Loading configuration property: sql-client.execution.result-mode, TABLEAU | |
INFO [] - Loading configuration property: env.java.opts.all, --add-exports=java.base/sun.net.util=ALL-UNNAMED --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.text=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED | |
INFO [] - The derived from fraction jvm overhead memory (160.000mb (167772162 bytes)) is less than its min value 192.000mb (201326592 bytes), min value will be used instead | |
INFO [] - Final Master Memory configuration: | |
INFO [] - Total Process Memory: 1.563gb (1677721600 bytes) | |
INFO [] - Total Flink Memory: 1.125gb (1207959552 bytes) | |
INFO [] - JVM Heap: 1024.000mb (1073741824 bytes) | |
INFO [] - Off-heap: 128.000mb (134217728 bytes) | |
INFO [] - JVM Metaspace: 256.000mb (268435456 bytes) | |
INFO [] - JVM Overhead: 192.000mb (201326592 bytes) | |
2024-04-22 23:06:12,772 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -------------------------------------------------------------------------------- | |
2024-04-22 23:06:12,772 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Starting StandaloneSessionClusterEntrypoint (Version: 1.18.1, Scala: 2.12, Rev:a8c8b1c, Date:2023-12-19T22:17:36+01:00) | |
2024-04-22 23:06:12,772 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - OS current user: flink | |
2024-04-22 23:06:12,933 WARN org.apache.hadoop.util.NativeCodeLoader [] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable | |
2024-04-22 23:06:12,955 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Current Hadoop/Kerberos user: flink | |
2024-04-22 23:06:12,955 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JVM: OpenJDK 64-Bit Server VM - Ubuntu - 11/11.0.22+7-post-Ubuntu-0ubuntu222.04.1 | |
2024-04-22 23:06:12,955 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Arch: aarch64 | |
2024-04-22 23:06:12,956 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Maximum heap size: 1024 MiBytes | |
2024-04-22 23:06:12,956 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JAVA_HOME: /usr/lib/jvm/java-11-openjdk-arm64 | |
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Hadoop version: 3.4.0 | |
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JVM Options: | |
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Xmx1073741824 | |
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Xms1073741824 | |
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -XX:MaxMetaspaceSize=268435456 | |
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -XX:+IgnoreUnrecognizedVMOptions | |
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=java.base/sun.net.util=ALL-UNNAMED | |
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED | |
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED | |
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED | |
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED | |
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED
2024-04-22 23:06:12,958 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.lang=ALL-UNNAMED
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.net=ALL-UNNAMED
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.io=ALL-UNNAMED
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.nio=ALL-UNNAMED
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/sun.nio.ch=ALL-UNNAMED
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.text=ALL-UNNAMED
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.time=ALL-UNNAMED
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.util=ALL-UNNAMED
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.util.concurrent=ALL-UNNAMED
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog.file=/opt/flink/log/flink--standalonesession-0-2f6c8ea78d77.log
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog4j.configuration=file:/opt/flink/conf/log4j-console.properties
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog4j.configurationFile=file:/opt/flink/conf/log4j-console.properties
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlogback.configurationFile=file:/opt/flink/conf/logback-console.xml
2024-04-22 23:06:12,959 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Program Arguments:
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -D
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - jobmanager.memory.off-heap.size=134217728b
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -D
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - jobmanager.memory.jvm-overhead.min=201326592b
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -D
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - jobmanager.memory.jvm-metaspace.size=268435456b
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -D
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - jobmanager.memory.heap.size=1073741824b
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -D
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - jobmanager.memory.jvm-overhead.max=201326592b
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --configDir
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - /opt/flink/conf
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --executionMode
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - cluster
2024-04-22 23:06:12,960 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Classpath: /opt/flink/lib/flink-cep-1.18.1.jar:/opt/flink/lib/flink-connector-files-1.18.1.jar:/opt/flink/lib/flink-csv-1.18.1.jar:/opt/flink/lib/flink-json-1.18.1.jar:/opt/flink/lib/flink-scala_2.12-1.18.1.jar:/opt/flink/lib/flink-sql-connector-hive-2.3.9_2.12-1.18.1.jar:/opt/flink/lib/flink-sql-connector-kafka-3.0.2-1.18.jar:/opt/flink/lib/flink-sql-parquet-1.19.0.jar:/opt/flink/lib/flink-table-api-java-uber-1.18.1.jar:/opt/flink/lib/flink-table-planner-loader-1.18.1.jar:/opt/flink/lib/flink-table-runtime-1.18.1.jar:/opt/flink/lib/log4j-1.2-api-2.17.1.jar:/opt/flink/lib/log4j-api-2.17.1.jar:/opt/flink/lib/log4j-core-2.17.1.jar:/opt/flink/lib/log4j-slf4j-impl-2.17.1.jar:/opt/flink/lib/flink-dist-1.18.1.jar::/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/kerb-client-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/opt/hadoop/share/hadoop/common/lib/jersey-servlet-1.19.4.jar:/opt/hadoop/share/hadoop/common/lib/commons-cli-1.5.0.jar:/opt/hadoop/share/hadoop/common/lib/httpclient-4.5.13.jar:/opt/hadoop/share/hadoop/common/lib/woodstox-core-5.4.0.jar:/opt/hadoop/share/hadoop/common/lib/jackson-databind-2.12.7.1.jar:/opt/hadoop/share/hadoop/common/lib/jetty-xml-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/metrics-core-3.2.4.jar:/opt/hadoop/share/hadoop/common/lib/commons-compress-1.24.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/common/lib/zookeeper-jute-3.8.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-stomp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-xml-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-sctp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerb-identity-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-handler-ssl-oc
sp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jetty-util-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/jersey-json-1.20.jar:/opt/hadoop/share/hadoop/common/lib/jettison-1.5.4.jar:/opt/hadoop/share/hadoop/common/lib/kerb-crypto-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/kerb-server-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jetty-util-ajax-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-all-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/opt/hadoop/share/hadoop/common/lib/commons-codec-1.15.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerb-util-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/opt/hadoop/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop/share/hadoop/common/lib/dnsjava-3.4.0.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/common/lib/jline-3.9.0.jar:/opt/hadoop/share/hadoop/common/lib/kerb-core-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-api-1.7.36.jar:/opt/hadoop/share/hadoop/common/lib/jetty-security-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/common/lib/jetty-http-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/token-provider-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-handler-proxy-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-shaded-guava-1.2.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-socks-4.1.100.Final.jar:/opt/hadoop/shar
e/hadoop/common/lib/netty-handler-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-http-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-buffer-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/avro-1.9.2.jar:/opt/hadoop/share/hadoop/common/lib/jetty-server-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-haproxy-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jetty-io-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/curator-client-5.2.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-configuration2-2.8.0.jar:/opt/hadoop/share/hadoop/common/lib/gson-2.9.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-smtp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/commons-net-3.9.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-text-1.10.0.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar:/opt/hadoop/share/hadoop/common/lib/re2j-1.1.jar:/opt/hadoop/share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/opt/hadoop/share/hadoop/common/lib/kerby-pkix-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/kerb-common-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jetty-servlet-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/opt/hadoop/share/hadoop/common/lib/kerb-admin-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jetty-webapp-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/kerby-xdr-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/hadoop/share/hadoop/common/lib/jersey-server-1.19.4.jar:/opt/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-udt-4.1.100.Final.jar:/
opt/hadoop/share/hadoop/common/lib/snappy-java-1.1.10.4.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-memcache-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-auth-3.4.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-io-2.14.0.jar:/opt/hadoop/share/hadoop/common/lib/httpcore-4.4.13.jar:/opt/hadoop/share/hadoop/common/lib/audience-annotations-0.12.0.jar:/opt/hadoop/share/hadoop/common/lib/jackson-annotations-2.12.7.jar:/opt/hadoop/share/hadoop/common/lib/reload4j-1.2.22.jar:/opt/hadoop/share/hadoop/common/lib/checker-qual-2.5.2.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/stax2-api-4.2.1.jar:/opt/hadoop/share/hadoop/common/lib/zookeeper-3.8.3.jar:/opt/hadoop/share/hadoop/common/lib/jul-to-slf4j-1.7.36.jar:/opt/hadoop/share/hadoop/common/lib/jsch-0.1.55.jar:/opt/hadoop/share/hadoop/common/lib/jakarta.activation-api-1.2.1.jar:/opt/hadoop/share/hadoop/common/lib/curator-recipes-5.2.0.jar:/opt/hadoop/share/hadoop/common/lib/failureaccess-1.0.jar:/opt/hadoop/share/hadoop/common/lib/kerby-util-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/guava-27.0-jre.jar:/opt/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-9.31.jar:/opt/hadoop/share/hadoop/common/lib/curator-framework-5.2.0.jar:/opt/hadoop/share/hadoop/common/lib/j2objc-annotations-1.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-math3-3.6.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-beanutils-1.9.4.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-mqtt-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jackson-core-2.12.7.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerb-simplekdc-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/hadoop
-annotations-3.4.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-daemon-1.0.13.jar:/opt/hadoop/share/hadoop/common/lib/bcprov-jdk15on-1.70.jar:/opt/hadoop/share/hadoop/common/lib/commons-lang3-3.12.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-logging-1.2.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-http2-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jersey-core-1.19.4.jar:/opt/hadoop/share/hadoop/common/lib/kerby-config-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-rxtx-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jsr305-3.0.2.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-redis-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerby-asn1-2.0.3.jar:/opt/hadoop/share/hadoop/common/hadoop-nfs-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-kms-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-registry-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/kerb-client-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-servlet-1.19.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-cli-1.5.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/httpclient-4.5.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/woodstox-core-5.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-databind-2.12.7.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-xml-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/metrics-core-3.2.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-compress-1.24.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/zookeeper-jute-3.8.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-stomp-4.1.100.Final.jar:/opt/hadoop/share/had
oop/hdfs/lib/netty-codec-xml-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-sctp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-identity-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-util-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-json-1.20.jar:/opt/hadoop/share/hadoop/hdfs/lib/jettison-1.5.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-crypto-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-server-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-util-ajax-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-all-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-codec-1.15.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/HikariCP-4.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-util-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/dnsjava-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jline-3.9.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-core-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-security-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-http-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/token-provider-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-handler-proxy-4.1.100.Final.jar:/o
pt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-shaded-guava-1.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-socks-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-handler-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-http-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-buffer-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/avro-1.9.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-server-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-haproxy-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-io-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/curator-client-5.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-configuration2-2.8.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/gson-2.9.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-smtp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-net-3.9.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-text-1.10.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/re2j-1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-pkix-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-common-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-servlet-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-admin-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-webapp-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-xdr-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/op
t/hadoop/share/hadoop/hdfs/lib/jersey-server-1.19.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-udt-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/snappy-java-1.1.10.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-memcache-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-auth-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-io-2.14.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/httpcore-4.4.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/audience-annotations-0.12.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-annotations-2.12.7.jar:/opt/hadoop/share/hadoop/hdfs/lib/reload4j-1.2.22.jar:/opt/hadoop/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/stax2-api-4.2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/zookeeper-3.8.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsch-0.1.55.jar:/opt/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop/share/hadoop/hdfs/lib/jakarta.activation-api-1.2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/curator-recipes-5.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/failureaccess-1.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-util-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/nimbus-jose-jwt-9.31.jar:/opt/hadoop/share/hadoop/hdfs/lib/curator-framework-5.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-math3-3.6.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-beanutils-1.9.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-mqtt-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-core-2.12.7.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/k
erb-simplekdc-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-annotations-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-lang3-3.12.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-logging-1.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-http2-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-core-1.19.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-config-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-rxtx-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-redis-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-asn1-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.4.0.jar:/opt/hadoop/share/hadoop/
mapreduce/hadoop-mapreduce-client-app-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-tests.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.4.0.jar:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-guice-1.19.4.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.12.7.jar:/opt/hadoop/share/hadoop/yarn/lib/jna-5.2.0.jar:/opt/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/opt/hadoop/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/hadoop/share/hadoop/yarn/lib/asm-commons-9.6.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-common-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-client-1.19.4.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-servlet-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/hadoop/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-client-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/javax.websocket-api-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-api-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/codemodel-2.6.jar:/opt/hadoop/share/hadoop/yarn/lib/bcutil-jdk15on-1.70.jar:/opt/hadoop/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-server-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/asm-tree-9.6.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-annotations-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.12.7.jar:/opt/hadoop/share/hadoop/yarn/lib/objenesis-2.6.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-servlet-4.2.3.jar:/opt/hadoop/share/hadoop/yarn/lib/jsonschema2pojo-core-1.0.2.jar:/opt/hadoop/share/hadoop/yarn/lib/javax-websocket-client-
impl-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/hadoop/share/hadoop/yarn/lib/snakeyaml-2.0.jar:/opt/hadoop/share/hadoop/yarn/lib/javax.websocket-client-api-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-plus-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-client-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-4.2.3.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-jndi-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.2.jar:/opt/hadoop/share/hadoop/yarn/lib/bcpkix-jdk15on-1.70.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-base-2.12.7.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-client-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-services-core-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-services-api-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-router-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-api-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-globalpolicygenerator-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications
-unmanaged-am-launcher-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.4.0.jar::
2024-04-22 23:06:12,961 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --------------------------------------------------------------------------------
2024-04-22 23:06:12,961 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Registered UNIX signal handlers for [TERM, HUP, INT]
2024-04-22 23:06:12,967 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: blob.server.port, 6124
2024-04-22 23:06:12,968 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.memory.process.size, 1728m
2024-04-22 23:06:12,972 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.bind-host, 0.0.0.0
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.execution.failover-strategy, region
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.rpc.address, jobmanager
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: sql-client.verbose, true
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.memory.process.size, 1600m
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.rpc.port, 6123
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: query.server.port, 6125
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: rest.bind-address, 0.0.0.0
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.bind-host, 0.0.0.0
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: parallelism.default, 1
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.numberOfTaskSlots, 1
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: rest.address, 0.0.0.0
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: fs.s3a.aws.credentials.provider, com.amazonaws.auth.profile.ProfileCredentialsProvider
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: sql-client.execution.result-mode, TABLEAU
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: env.java.opts.all, --add-exports=java.base/sun.net.util=ALL-UNNAMED --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.text=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading dynamic configuration property: jobmanager.memory.off-heap.size, 134217728b
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading dynamic configuration property: jobmanager.memory.jvm-overhead.min, 201326592b
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading dynamic configuration property: jobmanager.memory.jvm-metaspace.size, 268435456b
2024-04-22 23:06:12,973 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading dynamic configuration property: jobmanager.memory.heap.size, 1073741824b
2024-04-22 23:06:12,974 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading dynamic configuration property: jobmanager.memory.jvm-overhead.max, 201326592b
2024-04-22 23:06:12,989 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Starting StandaloneSessionClusterEntrypoint.
2024-04-22 23:06:13,012 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Install default filesystem.
2024-04-22 23:06:13,020 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: external-resource-gpu
2024-04-22 23:06:13,023 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-statsd
2024-04-22 23:06:13,023 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-graphite
2024-04-22 23:06:13,023 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-influx
2024-04-22 23:06:13,023 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-jmx
2024-04-22 23:06:13,023 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-datadog
2024-04-22 23:06:13,023 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-slf4j
2024-04-22 23:06:13,023 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-prometheus
2024-04-22 23:06:13,024 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: flink-s3-fs-hadoop
2024-04-22 23:06:13,056 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Install security context.
2024-04-22 23:06:13,066 WARN org.apache.flink.runtime.util.HadoopUtils [] - Could not find Hadoop configuration via any of the supported methods (Flink configuration, environment variables).
2024-04-22 23:06:13,084 INFO org.apache.flink.runtime.security.modules.HadoopModule [] - Hadoop user set to flink (auth:SIMPLE)
2024-04-22 23:06:13,084 INFO org.apache.flink.runtime.security.modules.HadoopModule [] - Kerberos security is disabled.
2024-04-22 23:06:13,091 INFO org.apache.flink.runtime.security.modules.JaasModule [] - Jaas file will be created as /tmp/jaas-3768840321318659573.conf.
2024-04-22 23:06:13,098 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Initializing cluster services.
2024-04-22 23:06:13,104 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Using working directory: WorkingDirectory(/tmp/jm_20cc675eb4df29c116830080ce3aa143).
2024-04-22 23:06:13,276 INFO org.apache.flink.runtime.rpc.pekko.PekkoRpcServiceUtils [] - Trying to start actor system, external address jobmanager:6123, bind address 0.0.0.0:6123.
2024-04-22 23:06:13,597 INFO org.apache.pekko.event.slf4j.Slf4jLogger [] - Slf4jLogger started
2024-04-22 23:06:13,609 INFO org.apache.pekko.remote.RemoteActorRefProvider [] - Pekko Cluster not in use - enabling unsafe features anyway because `pekko.remote.use-unsafe-remote-features-outside-cluster` has been enabled.
2024-04-22 23:06:13,610 INFO org.apache.pekko.remote.Remoting [] - Starting remoting
2024-04-22 23:06:13,670 INFO org.apache.pekko.remote.Remoting [] - Remoting started; listening on addresses :[pekko.tcp://flink@jobmanager:6123]
2024-04-22 23:06:13,718 INFO org.apache.flink.runtime.rpc.pekko.PekkoRpcServiceUtils [] - Actor system started at pekko.tcp://flink@jobmanager:6123
2024-04-22 23:06:13,725 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Loading delegation token providers
2024-04-22 23:06:13,727 WARN org.apache.flink.runtime.util.HadoopUtils [] - Could not find Hadoop configuration via any of the supported methods (Flink configuration, environment variables).
2024-04-22 23:06:13,727 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Delegation token provider hadoopfs loaded and initialized
2024-04-22 23:06:13,728 WARN org.apache.flink.runtime.util.HadoopUtils [] - Could not find Hadoop configuration via any of the supported methods (Flink configuration, environment variables).
2024-04-22 23:06:13,728 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Delegation token provider hbase loaded and initialized
2024-04-22 23:06:13,728 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: external-resource-gpu
2024-04-22 23:06:13,728 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-statsd
2024-04-22 23:06:13,728 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-graphite
2024-04-22 23:06:13,728 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-influx | |
2024-04-22 23:06:13,729 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-jmx | |
2024-04-22 23:06:13,729 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-datadog | |
2024-04-22 23:06:13,729 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-slf4j | |
2024-04-22 23:06:13,729 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-prometheus | |
2024-04-22 23:06:13,729 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: flink-s3-fs-hadoop | |
2024-04-22 23:06:13,730 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Delegation token provider s3-hadoop loaded and initialized | |
2024-04-22 23:06:13,730 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Delegation token providers loaded successfully | |
2024-04-22 23:06:13,730 INFO org.apache.flink.runtime.security.token.DelegationTokenReceiverRepository [] - Loading delegation token receivers | |
2024-04-22 23:06:13,731 INFO org.apache.flink.runtime.security.token.DelegationTokenReceiverRepository [] - Delegation token receiver hadoopfs loaded and initialized | |
2024-04-22 23:06:13,732 INFO org.apache.flink.runtime.security.token.DelegationTokenReceiverRepository [] - Delegation token receiver hbase loaded and initialized | |
2024-04-22 23:06:13,732 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: external-resource-gpu | |
2024-04-22 23:06:13,732 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-statsd | |
2024-04-22 23:06:13,732 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-graphite | |
2024-04-22 23:06:13,732 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-influx | |
2024-04-22 23:06:13,732 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-jmx | |
2024-04-22 23:06:13,732 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-datadog | |
2024-04-22 23:06:13,732 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-slf4j | |
2024-04-22 23:06:13,732 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-prometheus | |
2024-04-22 23:06:13,732 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: flink-s3-fs-hadoop | |
2024-04-22 23:06:13,733 INFO org.apache.flink.runtime.security.token.DelegationTokenReceiverRepository [] - Delegation token receiver s3-hadoop loaded and initialized | |
2024-04-22 23:06:13,733 INFO org.apache.flink.runtime.security.token.DelegationTokenReceiverRepository [] - Delegation token receivers loaded successfully | |
2024-04-22 23:06:13,733 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Checking provider and receiver instances consistency | |
2024-04-22 23:06:13,733 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Provider and receiver instances are consistent | |
2024-04-22 23:06:13,733 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Obtaining delegation tokens | |
2024-04-22 23:06:13,734 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Delegation tokens obtained successfully | |
2024-04-22 23:06:13,734 WARN org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - No tokens obtained so skipping notifications | |
2024-04-22 23:06:13,742 INFO org.apache.flink.runtime.blob.BlobServer [] - Created BLOB server storage directory /tmp/jm_20cc675eb4df29c116830080ce3aa143/blobStorage | |
2024-04-22 23:06:13,743 INFO org.apache.flink.runtime.blob.BlobServer [] - Started BLOB server at 0.0.0.0:6124 - max concurrent requests: 50 - max backlog: 1000 | |
2024-04-22 23:06:13,752 INFO org.apache.flink.runtime.metrics.MetricRegistryImpl [] - No metrics reporter configured, no metrics will be exposed/reported. | |
2024-04-22 23:06:13,754 INFO org.apache.flink.runtime.rpc.pekko.PekkoRpcServiceUtils [] - Trying to start actor system, external address jobmanager:0, bind address 0.0.0.0:0. | |
2024-04-22 23:06:13,764 INFO org.apache.pekko.event.slf4j.Slf4jLogger [] - Slf4jLogger started | |
2024-04-22 23:06:13,766 INFO org.apache.pekko.remote.RemoteActorRefProvider [] - Pekko Cluster not in use - enabling unsafe features anyway because `pekko.remote.use-unsafe-remote-features-outside-cluster` has been enabled. | |
2024-04-22 23:06:13,766 INFO org.apache.pekko.remote.Remoting [] - Starting remoting | |
2024-04-22 23:06:13,782 INFO org.apache.pekko.remote.Remoting [] - Remoting started; listening on addresses :[pekko.tcp://flink-metrics@jobmanager:33691] | |
2024-04-22 23:06:13,787 INFO org.apache.flink.runtime.rpc.pekko.PekkoRpcServiceUtils [] - Actor system started at pekko.tcp://flink-metrics@jobmanager:33691 | |
2024-04-22 23:06:13,797 INFO org.apache.flink.runtime.rpc.pekko.PekkoRpcService [] - Starting RPC endpoint for org.apache.flink.runtime.metrics.dump.MetricQueryService at pekko://flink-metrics/user/rpc/MetricQueryService . | |
2024-04-22 23:06:13,811 INFO org.apache.flink.runtime.dispatcher.FileExecutionGraphInfoStore [] - Initializing FileExecutionGraphInfoStore: Storage directory /tmp/executionGraphStore-a267c8a8-20d4-4ae5-95d7-b66398065405, expiration time 3600000, maximum cache size 52428800 bytes. | |
2024-04-22 23:06:13,844 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Upload directory /tmp/flink-web-0f2d9849-2bf5-4473-bb9e-9d2bb0ee9f45/flink-web-upload does not exist. | |
2024-04-22 23:06:13,844 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Created directory /tmp/flink-web-0f2d9849-2bf5-4473-bb9e-9d2bb0ee9f45/flink-web-upload for file uploads. | |
2024-04-22 23:06:13,845 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Starting rest endpoint. | |
2024-04-22 23:06:13,945 INFO org.apache.flink.runtime.webmonitor.WebMonitorUtils [] - Determined location of main cluster component log file: /opt/flink/log/flink--standalonesession-0-2f6c8ea78d77.log | |
2024-04-22 23:06:13,945 INFO org.apache.flink.runtime.webmonitor.WebMonitorUtils [] - Determined location of main cluster component stdout file: /opt/flink/log/flink--standalonesession-0-2f6c8ea78d77.out | |
2024-04-22 23:06:13,991 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Rest endpoint listening at 0.0.0.0:8081 | |
2024-04-22 23:06:13,993 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - http://0.0.0.0:8081 was granted leadership with leaderSessionID=00000000-0000-0000-0000-000000000000 | |
2024-04-22 23:06:13,993 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Web frontend listening at http://0.0.0.0:8081. | |
2024-04-22 23:06:14,000 INFO org.apache.flink.runtime.dispatcher.runner.DefaultDispatcherRunner [] - DefaultDispatcherRunner was granted leadership with leader id 00000000-0000-0000-0000-000000000000. Creating new DispatcherLeaderProcess. | |
2024-04-22 23:06:14,002 INFO org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] - Start SessionDispatcherLeaderProcess. | |
2024-04-22 23:06:14,003 INFO org.apache.flink.runtime.resourcemanager.ResourceManagerServiceImpl [] - Starting resource manager service. | |
2024-04-22 23:06:14,008 INFO org.apache.flink.runtime.resourcemanager.ResourceManagerServiceImpl [] - Resource manager service is granted leadership with session id 00000000-0000-0000-0000-000000000000. | |
2024-04-22 23:06:14,022 INFO org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] - Recover all persisted job graphs that are not finished, yet. | |
2024-04-22 23:06:14,025 INFO org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] - Successfully recovered 0 persisted job graphs. | |
2024-04-22 23:06:14,044 INFO org.apache.flink.runtime.rpc.pekko.PekkoRpcService [] - Starting RPC endpoint for org.apache.flink.runtime.dispatcher.StandaloneDispatcher at pekko://flink/user/rpc/dispatcher_0 . | |
2024-04-22 23:06:14,047 INFO org.apache.flink.runtime.rpc.pekko.PekkoRpcService [] - Starting RPC endpoint for org.apache.flink.runtime.resourcemanager.StandaloneResourceManager at pekko://flink/user/rpc/resourcemanager_1 . | |
2024-04-22 23:06:14,055 INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager [] - Starting the resource manager. | |
2024-04-22 23:06:14,058 INFO org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManager [] - Starting the slot manager. | |
2024-04-22 23:06:14,058 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Starting tokens update task | |
2024-04-22 23:06:14,060 WARN org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - No tokens obtained so skipping notifications | |
2024-04-22 23:06:14,060 WARN org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Tokens update task not started because either no tokens obtained or none of the tokens specified its renewal date | |
2024-04-22 23:06:14,391 INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager [] - Registering TaskManager with ResourceID 172.29.0.3:34027-db24e6 (pekko.tcp://flink@172.29.0.3:34027/user/rpc/taskmanager_0) at ResourceManager | |
2024-04-22 23:06:14,402 INFO org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManager [] - Registering task executor 172.29.0.3:34027-db24e6 under 83c4a5ecb080b987fe9f924004aa292c at the slot manager. |
---
flink--standalonesession-0-3080ffd34e9b.log:
2024-04-23 16:27:06,315 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -------------------------------------------------------------------------------- | |
2024-04-23 16:27:06,317 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Preconfiguration: | |
2024-04-23 16:27:06,317 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - | |
RESOURCE_PARAMS extraction logs: | |
jvm_params: -Xmx1073741824 -Xms1073741824 -XX:MaxMetaspaceSize=268435456 | |
dynamic_configs: -D jobmanager.memory.off-heap.size=134217728b -D jobmanager.memory.jvm-overhead.min=201326592b -D jobmanager.memory.jvm-metaspace.size=268435456b -D jobmanager.memory.heap.size=1073741824b -D jobmanager.memory.jvm-overhead.max=201326592b | |
logs: WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance. | |
INFO [] - Loading configuration property: blob.server.port, 6124 | |
INFO [] - Loading configuration property: taskmanager.memory.process.size, 1728m | |
INFO [] - Loading configuration property: taskmanager.bind-host, 0.0.0.0 | |
INFO [] - Loading configuration property: jobmanager.execution.failover-strategy, region | |
INFO [] - Loading configuration property: jobmanager.rpc.address, jobmanager | |
INFO [] - Loading configuration property: sql-client.verbose, true | |
INFO [] - Loading configuration property: jobmanager.memory.process.size, 1600m | |
INFO [] - Loading configuration property: jobmanager.rpc.port, 6123 | |
INFO [] - Loading configuration property: query.server.port, 6125 | |
INFO [] - Loading configuration property: rest.bind-address, 0.0.0.0 | |
INFO [] - Loading configuration property: jobmanager.bind-host, 0.0.0.0 | |
INFO [] - Loading configuration property: parallelism.default, 1 | |
INFO [] - Loading configuration property: taskmanager.numberOfTaskSlots, 1 | |
INFO [] - Loading configuration property: rest.address, 0.0.0.0 | |
INFO [] - Loading configuration property: fs.s3a.aws.credentials.provider, com.amazonaws.auth.profile.ProfileCredentialsProvider | |
INFO [] - Loading configuration property: sql-client.execution.result-mode, TABLEAU | |
INFO [] - Loading configuration property: env.java.opts.all, --add-exports=java.base/sun.net.util=ALL-UNNAMED --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.text=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED | |
INFO [] - The derived from fraction jvm overhead memory (160.000mb (167772162 bytes)) is less than its min value 192.000mb (201326592 bytes), min value will be used instead | |
INFO [] - Final Master Memory configuration: | |
INFO [] - Total Process Memory: 1.563gb (1677721600 bytes) | |
INFO [] - Total Flink Memory: 1.125gb (1207959552 bytes) | |
INFO [] - JVM Heap: 1024.000mb (1073741824 bytes) | |
INFO [] - Off-heap: 128.000mb (134217728 bytes) | |
INFO [] - JVM Metaspace: 256.000mb (268435456 bytes) | |
INFO [] - JVM Overhead: 192.000mb (201326592 bytes) | |
2024-04-23 16:27:06,318 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -------------------------------------------------------------------------------- | |
2024-04-23 16:27:06,318 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Starting StandaloneSessionClusterEntrypoint (Version: 1.18.1, Scala: 2.12, Rev:a8c8b1c, Date:2023-12-19T22:17:36+01:00) | |
2024-04-23 16:27:06,318 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - OS current user: flink | |
2024-04-23 16:27:06,502 WARN org.apache.hadoop.util.NativeCodeLoader [] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable | |
2024-04-23 16:27:06,516 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Current Hadoop/Kerberos user: flink | |
2024-04-23 16:27:06,516 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JVM: OpenJDK 64-Bit Server VM - Ubuntu - 11/11.0.22+7-post-Ubuntu-0ubuntu222.04.1 | |
2024-04-23 16:27:06,516 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Arch: aarch64 | |
2024-04-23 16:27:06,516 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Maximum heap size: 1024 MiBytes | |
2024-04-23 16:27:06,516 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JAVA_HOME: /usr/lib/jvm/java-11-openjdk-arm64 | |
2024-04-23 16:27:06,520 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Hadoop version: 3.4.0 | |
2024-04-23 16:27:06,520 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JVM Options: | |
2024-04-23 16:27:06,520 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Xmx1073741824 | |
2024-04-23 16:27:06,520 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Xms1073741824 | |
2024-04-23 16:27:06,520 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -XX:MaxMetaspaceSize=268435456 | |
2024-04-23 16:27:06,520 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -XX:+IgnoreUnrecognizedVMOptions | |
2024-04-23 16:27:06,520 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=java.base/sun.net.util=ALL-UNNAMED | |
2024-04-23 16:27:06,520 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED | |
2024-04-23 16:27:06,520 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED | |
2024-04-23 16:27:06,520 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED | |
2024-04-23 16:27:06,520 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED | |
2024-04-23 16:27:06,520 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED | |
2024-04-23 16:27:06,520 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED | |
2024-04-23 16:27:06,521 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED | |
2024-04-23 16:27:06,521 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.lang=ALL-UNNAMED | |
2024-04-23 16:27:06,521 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.net=ALL-UNNAMED | |
2024-04-23 16:27:06,521 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.io=ALL-UNNAMED | |
2024-04-23 16:27:06,521 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.nio=ALL-UNNAMED | |
2024-04-23 16:27:06,521 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/sun.nio.ch=ALL-UNNAMED | |
2024-04-23 16:27:06,521 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.lang.reflect=ALL-UNNAMED | |
2024-04-23 16:27:06,521 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.text=ALL-UNNAMED | |
2024-04-23 16:27:06,522 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.time=ALL-UNNAMED | |
2024-04-23 16:27:06,522 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.util=ALL-UNNAMED | |
2024-04-23 16:27:06,522 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.util.concurrent=ALL-UNNAMED | |
2024-04-23 16:27:06,522 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED | |
2024-04-23 16:27:06,522 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED | |
2024-04-23 16:27:06,522 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog.file=/opt/flink/log/flink--standalonesession-0-3080ffd34e9b.log | |
2024-04-23 16:27:06,522 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog4j.configuration=file:/opt/flink/conf/log4j-console.properties | |
2024-04-23 16:27:06,522 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog4j.configurationFile=file:/opt/flink/conf/log4j-console.properties | |
2024-04-23 16:27:06,522 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlogback.configurationFile=file:/opt/flink/conf/logback-console.xml | |
2024-04-23 16:27:06,522 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Program Arguments: | |
2024-04-23 16:27:06,522 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -D | |
2024-04-23 16:27:06,523 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - jobmanager.memory.off-heap.size=134217728b | |
2024-04-23 16:27:06,523 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -D | |
2024-04-23 16:27:06,523 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - jobmanager.memory.jvm-overhead.min=201326592b | |
2024-04-23 16:27:06,523 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -D | |
2024-04-23 16:27:06,523 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - jobmanager.memory.jvm-metaspace.size=268435456b | |
2024-04-23 16:27:06,523 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -D | |
2024-04-23 16:27:06,523 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - jobmanager.memory.heap.size=1073741824b | |
2024-04-23 16:27:06,523 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -D | |
2024-04-23 16:27:06,523 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - jobmanager.memory.jvm-overhead.max=201326592b | |
2024-04-23 16:27:06,523 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --configDir | |
2024-04-23 16:27:06,523 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - /opt/flink/conf | |
2024-04-23 16:27:06,523 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --executionMode | |
2024-04-23 16:27:06,523 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - cluster | |
2024-04-23 16:27:06,523 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Classpath: /opt/flink/lib/flink-cep-1.18.1.jar:/opt/flink/lib/flink-connector-files-1.18.1.jar:/opt/flink/lib/flink-csv-1.18.1.jar:/opt/flink/lib/flink-json-1.18.1.jar:/opt/flink/lib/flink-scala_2.12-1.18.1.jar:/opt/flink/lib/flink-sql-connector-hive-2.3.9_2.12-1.18.1.jar:/opt/flink/lib/flink-sql-connector-kafka-3.0.2-1.18.jar:/opt/flink/lib/flink-sql-parquet-1.19.0.jar:/opt/flink/lib/flink-table-api-java-uber-1.18.1.jar:/opt/flink/lib/flink-table-planner-loader-1.18.1.jar:/opt/flink/lib/flink-table-runtime-1.18.1.jar:/opt/flink/lib/log4j-1.2-api-2.17.1.jar:/opt/flink/lib/log4j-api-2.17.1.jar:/opt/flink/lib/log4j-core-2.17.1.jar:/opt/flink/lib/log4j-slf4j-impl-2.17.1.jar:/opt/flink/lib/flink-dist-1.18.1.jar::/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/kerb-client-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/opt/hadoop/share/hadoop/common/lib/jersey-servlet-1.19.4.jar:/opt/hadoop/share/hadoop/common/lib/commons-cli-1.5.0.jar:/opt/hadoop/share/hadoop/common/lib/httpclient-4.5.13.jar:/opt/hadoop/share/hadoop/common/lib/woodstox-core-5.4.0.jar:/opt/hadoop/share/hadoop/common/lib/jackson-databind-2.12.7.1.jar:/opt/hadoop/share/hadoop/common/lib/jetty-xml-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/metrics-core-3.2.4.jar:/opt/hadoop/share/hadoop/common/lib/commons-compress-1.24.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/common/lib/zookeeper-jute-3.8.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-stomp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-xml-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-sctp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerb-identity-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-handler-ssl-oc
sp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jetty-util-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/jersey-json-1.20.jar:/opt/hadoop/share/hadoop/common/lib/jettison-1.5.4.jar:/opt/hadoop/share/hadoop/common/lib/kerb-crypto-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/kerb-server-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jetty-util-ajax-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-all-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/opt/hadoop/share/hadoop/common/lib/commons-codec-1.15.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerb-util-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/opt/hadoop/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop/share/hadoop/common/lib/dnsjava-3.4.0.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/common/lib/jline-3.9.0.jar:/opt/hadoop/share/hadoop/common/lib/kerb-core-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-api-1.7.36.jar:/opt/hadoop/share/hadoop/common/lib/jetty-security-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/common/lib/jetty-http-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/token-provider-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-handler-proxy-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-shaded-guava-1.2.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-socks-4.1.100.Final.jar:/opt/hadoop/shar
e/hadoop/common/lib/netty-handler-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-http-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-buffer-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/avro-1.9.2.jar:/opt/hadoop/share/hadoop/common/lib/jetty-server-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-haproxy-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jetty-io-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/curator-client-5.2.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-configuration2-2.8.0.jar:/opt/hadoop/share/hadoop/common/lib/gson-2.9.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-smtp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/commons-net-3.9.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-text-1.10.0.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar:/opt/hadoop/share/hadoop/common/lib/re2j-1.1.jar:/opt/hadoop/share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/opt/hadoop/share/hadoop/common/lib/kerby-pkix-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/kerb-common-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jetty-servlet-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/opt/hadoop/share/hadoop/common/lib/kerb-admin-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/jetty-webapp-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/common/lib/kerby-xdr-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/hadoop/share/hadoop/common/lib/jersey-server-1.19.4.jar:/opt/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-udt-4.1.100.Final.jar:/
opt/hadoop/share/hadoop/common/lib/snappy-java-1.1.10.4.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-memcache-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-auth-3.4.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-io-2.14.0.jar:/opt/hadoop/share/hadoop/common/lib/httpcore-4.4.13.jar:/opt/hadoop/share/hadoop/common/lib/audience-annotations-0.12.0.jar:/opt/hadoop/share/hadoop/common/lib/jackson-annotations-2.12.7.jar:/opt/hadoop/share/hadoop/common/lib/reload4j-1.2.22.jar:/opt/hadoop/share/hadoop/common/lib/checker-qual-2.5.2.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/stax2-api-4.2.1.jar:/opt/hadoop/share/hadoop/common/lib/zookeeper-3.8.3.jar:/opt/hadoop/share/hadoop/common/lib/jul-to-slf4j-1.7.36.jar:/opt/hadoop/share/hadoop/common/lib/jsch-0.1.55.jar:/opt/hadoop/share/hadoop/common/lib/jakarta.activation-api-1.2.1.jar:/opt/hadoop/share/hadoop/common/lib/curator-recipes-5.2.0.jar:/opt/hadoop/share/hadoop/common/lib/failureaccess-1.0.jar:/opt/hadoop/share/hadoop/common/lib/kerby-util-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/guava-27.0-jre.jar:/opt/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-9.31.jar:/opt/hadoop/share/hadoop/common/lib/curator-framework-5.2.0.jar:/opt/hadoop/share/hadoop/common/lib/j2objc-annotations-1.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-math3-3.6.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-beanutils-1.9.4.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-mqtt-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jackson-core-2.12.7.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerb-simplekdc-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/hadoop
-annotations-3.4.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-daemon-1.0.13.jar:/opt/hadoop/share/hadoop/common/lib/bcprov-jdk15on-1.70.jar:/opt/hadoop/share/hadoop/common/lib/commons-lang3-3.12.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-logging-1.2.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-http2-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jersey-core-1.19.4.jar:/opt/hadoop/share/hadoop/common/lib/kerby-config-2.0.3.jar:/opt/hadoop/share/hadoop/common/lib/netty-transport-rxtx-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/jsr305-3.0.2.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/opt/hadoop/share/hadoop/common/lib/netty-codec-redis-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/netty-resolver-4.1.100.Final.jar:/opt/hadoop/share/hadoop/common/lib/kerby-asn1-2.0.3.jar:/opt/hadoop/share/hadoop/common/hadoop-nfs-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-kms-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-registry-3.4.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/kerb-client-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-servlet-1.19.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-cli-1.5.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/httpclient-4.5.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/woodstox-core-5.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-databind-2.12.7.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-xml-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/metrics-core-3.2.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-compress-1.24.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/zookeeper-jute-3.8.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-stomp-4.1.100.Final.jar:/opt/hadoop/share/had
oop/hdfs/lib/netty-codec-xml-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-sctp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-identity-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-util-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-json-1.20.jar:/opt/hadoop/share/hadoop/hdfs/lib/jettison-1.5.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-crypto-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-server-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-util-ajax-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-all-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-codec-1.15.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-dns-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/HikariCP-4.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-util-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/dnsjava-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jline-3.9.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-core-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-security-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-http-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/token-provider-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-handler-proxy-4.1.100.Final.jar:/o
pt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-shaded-guava-1.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-socks-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-handler-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-http-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-buffer-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/avro-1.9.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-server-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-haproxy-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-io-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/curator-client-5.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-configuration2-2.8.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/gson-2.9.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-smtp-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-net-3.9.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-text-1.10.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/re2j-1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-pkix-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-common-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-servlet-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerb-admin-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-webapp-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-xdr-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/op
t/hadoop/share/hadoop/hdfs/lib/jersey-server-1.19.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-udt-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/snappy-java-1.1.10.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-memcache-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-auth-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-io-2.14.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/httpcore-4.4.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/audience-annotations-0.12.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-annotations-2.12.7.jar:/opt/hadoop/share/hadoop/hdfs/lib/reload4j-1.2.22.jar:/opt/hadoop/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/stax2-api-4.2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/zookeeper-3.8.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsch-0.1.55.jar:/opt/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop/share/hadoop/hdfs/lib/jakarta.activation-api-1.2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/curator-recipes-5.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/failureaccess-1.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-util-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/nimbus-jose-jwt-9.31.jar:/opt/hadoop/share/hadoop/hdfs/lib/curator-framework-5.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-math3-3.6.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-beanutils-1.9.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-mqtt-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-core-2.12.7.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/k
erb-simplekdc-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-annotations-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-lang3-3.12.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-logging-1.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-http2-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-core-1.19.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-config-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-transport-rxtx-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-codec-redis-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-resolver-4.1.100.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/kerby-asn1-2.0.3.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.4.0.jar:/opt/hadoop/share/hadoop/
mapreduce/hadoop-mapreduce-client-app-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-tests.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.4.0.jar:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-guice-1.19.4.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.12.7.jar:/opt/hadoop/share/hadoop/yarn/lib/jna-5.2.0.jar:/opt/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/opt/hadoop/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/hadoop/share/hadoop/yarn/lib/asm-commons-9.6.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-common-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-client-1.19.4.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-servlet-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/hadoop/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-client-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/javax.websocket-api-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-api-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/codemodel-2.6.jar:/opt/hadoop/share/hadoop/yarn/lib/bcutil-jdk15on-1.70.jar:/opt/hadoop/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/websocket-server-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/asm-tree-9.6.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-annotations-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.12.7.jar:/opt/hadoop/share/hadoop/yarn/lib/objenesis-2.6.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-servlet-4.2.3.jar:/opt/hadoop/share/hadoop/yarn/lib/jsonschema2pojo-core-1.0.2.jar:/opt/hadoop/share/hadoop/yarn/lib/javax-websocket-client-
impl-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/hadoop/share/hadoop/yarn/lib/snakeyaml-2.0.jar:/opt/hadoop/share/hadoop/yarn/lib/javax.websocket-client-api-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-plus-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-client-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-4.2.3.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-jndi-9.4.53.v20231009.jar:/opt/hadoop/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.2.jar:/opt/hadoop/share/hadoop/yarn/lib/bcpkix-jdk15on-1.70.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-base-2.12.7.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-client-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-services-core-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-services-api-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-router-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-api-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-globalpolicygenerator-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications
-unmanaged-am-launcher-3.4.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.4.0.jar:: | |
2024-04-23 16:27:06,524 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -------------------------------------------------------------------------------- | |
2024-04-23 16:27:06,524 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Registered UNIX signal handlers for [TERM, HUP, INT] | |
2024-04-23 16:27:06,531 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: blob.server.port, 6124 | |
2024-04-23 16:27:06,531 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.memory.process.size, 1728m | |
2024-04-23 16:27:06,531 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.bind-host, 0.0.0.0 | |
2024-04-23 16:27:06,531 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.execution.failover-strategy, region | |
2024-04-23 16:27:06,531 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.rpc.address, jobmanager | |
2024-04-23 16:27:06,531 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: sql-client.verbose, true | |
2024-04-23 16:27:06,531 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.memory.process.size, 1600m | |
2024-04-23 16:27:06,531 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.rpc.port, 6123 | |
2024-04-23 16:27:06,531 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: query.server.port, 6125 | |
2024-04-23 16:27:06,531 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: rest.bind-address, 0.0.0.0 | |
2024-04-23 16:27:06,532 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.bind-host, 0.0.0.0 | |
2024-04-23 16:27:06,532 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: parallelism.default, 1 | |
2024-04-23 16:27:06,532 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.numberOfTaskSlots, 1 | |
2024-04-23 16:27:06,532 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: rest.address, 0.0.0.0 | |
2024-04-23 16:27:06,532 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: fs.s3a.aws.credentials.provider, com.amazonaws.auth.profile.ProfileCredentialsProvider | |
2024-04-23 16:27:06,532 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: sql-client.execution.result-mode, TABLEAU | |
2024-04-23 16:27:06,532 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: env.java.opts.all, --add-exports=java.base/sun.net.util=ALL-UNNAMED --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.text=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED | |
2024-04-23 16:27:06,532 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading dynamic configuration property: jobmanager.memory.off-heap.size, 134217728b | |
2024-04-23 16:27:06,532 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading dynamic configuration property: jobmanager.memory.jvm-overhead.min, 201326592b | |
2024-04-23 16:27:06,532 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading dynamic configuration property: jobmanager.memory.jvm-metaspace.size, 268435456b | |
2024-04-23 16:27:06,532 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading dynamic configuration property: jobmanager.memory.heap.size, 1073741824b | |
2024-04-23 16:27:06,532 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading dynamic configuration property: jobmanager.memory.jvm-overhead.max, 201326592b | |
2024-04-23 16:27:06,543 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Starting StandaloneSessionClusterEntrypoint. | |
2024-04-23 16:27:06,564 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Install default filesystem. | |
2024-04-23 16:27:06,574 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: external-resource-gpu | |
2024-04-23 16:27:06,576 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-statsd | |
2024-04-23 16:27:06,576 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-graphite | |
2024-04-23 16:27:06,576 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-influx | |
2024-04-23 16:27:06,576 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-jmx | |
2024-04-23 16:27:06,576 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-datadog | |
2024-04-23 16:27:06,576 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-slf4j | |
2024-04-23 16:27:06,576 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-prometheus | |
2024-04-23 16:27:06,576 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: flink-s3-fs-hadoop | |
2024-04-23 16:27:06,625 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Install security context. | |
2024-04-23 16:27:06,633 WARN org.apache.flink.runtime.util.HadoopUtils [] - Could not find Hadoop configuration via any of the supported methods (Flink configuration, environment variables). | |
2024-04-23 16:27:06,655 INFO org.apache.flink.runtime.security.modules.HadoopModule [] - Hadoop user set to flink (auth:SIMPLE) | |
2024-04-23 16:27:06,656 INFO org.apache.flink.runtime.security.modules.HadoopModule [] - Kerberos security is disabled. | |
2024-04-23 16:27:06,659 INFO org.apache.flink.runtime.security.modules.JaasModule [] - Jaas file will be created as /tmp/jaas-7813621173272278956.conf. | |
2024-04-23 16:27:06,662 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Initializing cluster services. | |
2024-04-23 16:27:06,665 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Using working directory: WorkingDirectory(/tmp/jm_6f85d7d7a0d290a4a84987964d03541b). | |
2024-04-23 16:27:06,847 INFO org.apache.flink.runtime.rpc.pekko.PekkoRpcServiceUtils [] - Trying to start actor system, external address jobmanager:6123, bind address 0.0.0.0:6123. | |
2024-04-23 16:27:07,142 INFO org.apache.pekko.event.slf4j.Slf4jLogger [] - Slf4jLogger started | |
2024-04-23 16:27:07,156 INFO org.apache.pekko.remote.RemoteActorRefProvider [] - Pekko Cluster not in use - enabling unsafe features anyway because `pekko.remote.use-unsafe-remote-features-outside-cluster` has been enabled. | |
2024-04-23 16:27:07,156 INFO org.apache.pekko.remote.Remoting [] - Starting remoting | |
2024-04-23 16:27:07,215 INFO org.apache.pekko.remote.Remoting [] - Remoting started; listening on addresses :[pekko.tcp://flink@jobmanager:6123] | |
2024-04-23 16:27:07,258 INFO org.apache.flink.runtime.rpc.pekko.PekkoRpcServiceUtils [] - Actor system started at pekko.tcp://flink@jobmanager:6123 | |
2024-04-23 16:27:07,265 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Loading delegation token providers | |
2024-04-23 16:27:07,266 WARN org.apache.flink.runtime.util.HadoopUtils [] - Could not find Hadoop configuration via any of the supported methods (Flink configuration, environment variables). | |
2024-04-23 16:27:07,266 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Delegation token provider hadoopfs loaded and initialized | |
2024-04-23 16:27:07,267 WARN org.apache.flink.runtime.util.HadoopUtils [] - Could not find Hadoop configuration via any of the supported methods (Flink configuration, environment variables). | |
2024-04-23 16:27:07,267 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Delegation token provider hbase loaded and initialized | |
2024-04-23 16:27:07,267 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: external-resource-gpu | |
2024-04-23 16:27:07,267 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-statsd | |
2024-04-23 16:27:07,267 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-graphite | |
2024-04-23 16:27:07,267 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-influx | |
2024-04-23 16:27:07,267 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-jmx | |
2024-04-23 16:27:07,267 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-datadog | |
2024-04-23 16:27:07,267 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-slf4j | |
2024-04-23 16:27:07,267 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-prometheus | |
2024-04-23 16:27:07,267 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: flink-s3-fs-hadoop | |
2024-04-23 16:27:07,268 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Delegation token provider s3-hadoop loaded and initialized | |
2024-04-23 16:27:07,269 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Delegation token providers loaded successfully | |
2024-04-23 16:27:07,269 INFO org.apache.flink.runtime.security.token.DelegationTokenReceiverRepository [] - Loading delegation token receivers | |
2024-04-23 16:27:07,270 INFO org.apache.flink.runtime.security.token.DelegationTokenReceiverRepository [] - Delegation token receiver hadoopfs loaded and initialized | |
2024-04-23 16:27:07,270 INFO org.apache.flink.runtime.security.token.DelegationTokenReceiverRepository [] - Delegation token receiver hbase loaded and initialized | |
2024-04-23 16:27:07,270 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: external-resource-gpu | |
2024-04-23 16:27:07,270 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-statsd | |
2024-04-23 16:27:07,270 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-graphite | |
2024-04-23 16:27:07,270 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-influx | |
2024-04-23 16:27:07,270 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-jmx | |
2024-04-23 16:27:07,270 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-datadog | |
2024-04-23 16:27:07,270 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-slf4j | |
2024-04-23 16:27:07,270 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: metrics-prometheus | |
2024-04-23 16:27:07,270 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID found, reusing it: flink-s3-fs-hadoop | |
2024-04-23 16:27:07,271 INFO org.apache.flink.runtime.security.token.DelegationTokenReceiverRepository [] - Delegation token receiver s3-hadoop loaded and initialized | |
2024-04-23 16:27:07,271 INFO org.apache.flink.runtime.security.token.DelegationTokenReceiverRepository [] - Delegation token receivers loaded successfully | |
2024-04-23 16:27:07,271 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Checking provider and receiver instances consistency | |
2024-04-23 16:27:07,271 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Provider and receiver instances are consistent | |
2024-04-23 16:27:07,271 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Obtaining delegation tokens | |
2024-04-23 16:27:07,272 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Delegation tokens obtained successfully | |
2024-04-23 16:27:07,272 WARN org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - No tokens obtained so skipping notifications | |
2024-04-23 16:27:07,279 INFO org.apache.flink.runtime.blob.BlobServer [] - Created BLOB server storage directory /tmp/jm_6f85d7d7a0d290a4a84987964d03541b/blobStorage | |
2024-04-23 16:27:07,281 INFO org.apache.flink.runtime.blob.BlobServer [] - Started BLOB server at 0.0.0.0:6124 - max concurrent requests: 50 - max backlog: 1000 | |
2024-04-23 16:27:07,288 INFO org.apache.flink.runtime.metrics.MetricRegistryImpl [] - No metrics reporter configured, no metrics will be exposed/reported. | |
2024-04-23 16:27:07,289 INFO org.apache.flink.runtime.rpc.pekko.PekkoRpcServiceUtils [] - Trying to start actor system, external address jobmanager:0, bind address 0.0.0.0:0. | |
2024-04-23 16:27:07,296 INFO org.apache.pekko.event.slf4j.Slf4jLogger [] - Slf4jLogger started | |
2024-04-23 16:27:07,297 INFO org.apache.pekko.remote.RemoteActorRefProvider [] - Pekko Cluster not in use - enabling unsafe features anyway because `pekko.remote.use-unsafe-remote-features-outside-cluster` has been enabled. | |
2024-04-23 16:27:07,297 INFO org.apache.pekko.remote.Remoting [] - Starting remoting | |
2024-04-23 16:27:07,305 INFO org.apache.pekko.remote.Remoting [] - Remoting started; listening on addresses :[pekko.tcp://flink-metrics@jobmanager:35935] | |
2024-04-23 16:27:07,310 INFO org.apache.flink.runtime.rpc.pekko.PekkoRpcServiceUtils [] - Actor system started at pekko.tcp://flink-metrics@jobmanager:35935 | |
2024-04-23 16:27:07,326 INFO org.apache.flink.runtime.rpc.pekko.PekkoRpcService [] - Starting RPC endpoint for org.apache.flink.runtime.metrics.dump.MetricQueryService at pekko://flink-metrics/user/rpc/MetricQueryService . | |
2024-04-23 16:27:07,334 INFO org.apache.flink.runtime.dispatcher.FileExecutionGraphInfoStore [] - Initializing FileExecutionGraphInfoStore: Storage directory /tmp/executionGraphStore-202a0b20-c3e2-4b9a-824b-091d9085490c, expiration time 3600000, maximum cache size 52428800 bytes. | |
2024-04-23 16:27:07,362 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Upload directory /tmp/flink-web-4c7133aa-8c13-41b6-9e4c-077ab3af5574/flink-web-upload does not exist. | |
2024-04-23 16:27:07,362 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Created directory /tmp/flink-web-4c7133aa-8c13-41b6-9e4c-077ab3af5574/flink-web-upload for file uploads. | |
2024-04-23 16:27:07,363 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Starting rest endpoint. | |
2024-04-23 16:27:07,465 INFO org.apache.flink.runtime.webmonitor.WebMonitorUtils [] - Determined location of main cluster component log file: /opt/flink/log/flink--standalonesession-0-3080ffd34e9b.log | |
2024-04-23 16:27:07,465 INFO org.apache.flink.runtime.webmonitor.WebMonitorUtils [] - Determined location of main cluster component stdout file: /opt/flink/log/flink--standalonesession-0-3080ffd34e9b.out | |
2024-04-23 16:27:07,530 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Rest endpoint listening at 0.0.0.0:8081 | |
2024-04-23 16:27:07,531 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - http://0.0.0.0:8081 was granted leadership with leaderSessionID=00000000-0000-0000-0000-000000000000 | |
2024-04-23 16:27:07,532 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Web frontend listening at http://0.0.0.0:8081. | |
2024-04-23 16:27:07,540 INFO org.apache.flink.runtime.dispatcher.runner.DefaultDispatcherRunner [] - DefaultDispatcherRunner was granted leadership with leader id 00000000-0000-0000-0000-000000000000. Creating new DispatcherLeaderProcess. | |
2024-04-23 16:27:07,542 INFO org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] - Start SessionDispatcherLeaderProcess. | |
2024-04-23 16:27:07,544 INFO org.apache.flink.runtime.resourcemanager.ResourceManagerServiceImpl [] - Starting resource manager service. | |
2024-04-23 16:27:07,546 INFO org.apache.flink.runtime.resourcemanager.ResourceManagerServiceImpl [] - Resource manager service is granted leadership with session id 00000000-0000-0000-0000-000000000000. | |
2024-04-23 16:27:07,552 INFO org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] - Recover all persisted job graphs that are not finished, yet. | |
2024-04-23 16:27:07,552 INFO org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] - Successfully recovered 0 persisted job graphs. | |
2024-04-23 16:27:07,562 INFO org.apache.flink.runtime.rpc.pekko.PekkoRpcService [] - Starting RPC endpoint for org.apache.flink.runtime.dispatcher.StandaloneDispatcher at pekko://flink/user/rpc/dispatcher_0 . | |
2024-04-23 16:27:07,566 INFO org.apache.flink.runtime.rpc.pekko.PekkoRpcService [] - Starting RPC endpoint for org.apache.flink.runtime.resourcemanager.StandaloneResourceManager at pekko://flink/user/rpc/resourcemanager_1 . | |
2024-04-23 16:27:07,577 INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager [] - Starting the resource manager. | |
2024-04-23 16:27:07,582 INFO org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManager [] - Starting the slot manager. | |
2024-04-23 16:27:07,583 INFO org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Starting tokens update task | |
2024-04-23 16:27:07,583 WARN org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - No tokens obtained so skipping notifications | |
2024-04-23 16:27:07,583 WARN org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Tokens update task not started because either no tokens obtained or none of the tokens specified its renewal date | |
2024-04-23 16:27:07,954 INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager [] - Registering TaskManager with ResourceID 192.168.16.3:33893-b9dd2f (pekko.tcp://flink@192.168.16.3:33893/user/rpc/taskmanager_0) at ResourceManager | |
2024-04-23 16:27:07,966 INFO org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManager [] - Registering task executor 192.168.16.3:33893-b9dd2f under 9d6cb96f7917c772549c91f184deefd8 at the slot manager. |
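A plausible reading of the `UnsupportedFileSystemException` (an assumption, not verified in this gist): `HiveParallelismInference` lists table files through Hadoop's own `org.apache.hadoop.fs.FileSystem` API rather than Flink's `FileSystem` abstraction, so the `flink-s3-fs-hadoop` plugin in `plugins/` is never consulted and no handler is registered for the bare `s3` scheme on that code path. A sketch of a workaround is to map `s3` to the S3A implementation in the `core-site.xml` inside the Hive conf dir (here `/tmp/hive-dwh-3/hive/conf-dwh-prod`), which additionally requires `hadoop-aws` and the matching AWS SDK bundle jar on the Flink classpath:

```xml
<!-- Sketch only, unverified: core-site.xml entries mapping the s3 scheme
     to Hadoop's S3A filesystem. Assumes hadoop-aws and aws-java-sdk-bundle
     jars are available on the classpath (e.g. in /opt/flink/lib). -->
<configuration>
  <property>
    <name>fs.s3.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
  <!-- Mirrors the credentials provider already set in the Flink config above. -->
  <property>
    <name>fs.s3a.aws.credentials.provider</name>
    <value>com.amazonaws.auth.profile.ProfileCredentialsProvider</value>
  </property>
</configuration>
```

This would explain why the filesystem connector works (it resolves `s3://` through Flink's plugin mechanism) while the Hive source does not.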
# run10.py: copies the Hive table definition into the default catalog via
# CREATE TABLE ... LIKE, which carries over 'connector' = 'hive'.
from pyflink.table import EnvironmentSettings, StreamTableEnvironment
from pyflink.table.catalog import HiveCatalog

env_settings = EnvironmentSettings.in_streaming_mode()
t_env = StreamTableEnvironment.create(environment_settings=env_settings)

catalog_name = "dwh"
print("Creating dwh catalog")
hive_catalog = HiveCatalog(
    catalog_name,
    "default",
    "/tmp/hive-dwh-3/hive/conf-dwh-prod",
)
t_env.register_catalog(catalog_name, hive_catalog)
t_env.use_catalog(catalog_name)

print("Creating table iris_shadow")
t_env.execute_sql("""
CREATE TABLE default_catalog.default_database.iris_shadow LIKE rdg_test.iris_test
""")

print("Creating table iris_out")
t_env.execute_sql("""
CREATE TABLE default_catalog.default_database.iris_out WITH (
    'connector' = 'filesystem',
    'path' = 's3://<path>',
    'format' = 'json'
) LIKE rdg_test.iris_test (EXCLUDING ALL)
""")

print("Switching back to default")
t_env.use_catalog("default_catalog")

print("Running insert")
# Fails here: with default_catalog current, 'connector' = 'hive' on
# iris_shadow cannot be discovered; the HiveCatalog must be the active
# catalog for the hive connector factory to be found.
t_env.execute_sql("INSERT INTO iris_out SELECT * FROM iris_shadow LIMIT 50")
# run11.py: reads directly from the Hive catalog table (keeping the
# HiveCatalog current) and writes to a filesystem sink on S3.
from pyflink.table import EnvironmentSettings, StreamTableEnvironment
from pyflink.table.catalog import HiveCatalog

env_settings = EnvironmentSettings.in_streaming_mode()
t_env = StreamTableEnvironment.create(environment_settings=env_settings)

catalog_name = "dwh"
print("Creating dwh catalog")
hive_catalog = HiveCatalog(
    catalog_name,
    "default",
    "/tmp/hive-dwh-3/hive/conf-dwh-prod",
)
t_env.register_catalog(catalog_name, hive_catalog)
t_env.use_catalog(catalog_name)

print("Creating table iris_out")
t_env.execute_sql("""
CREATE TABLE default_catalog.default_database.iris_out WITH (
    'connector' = 'filesystem',
    'path' = 's3://aa.test.rdg/run11_out',
    'format' = 'json'
) LIKE rdg_test.iris_test (EXCLUDING ALL)
""")

print("Running insert")
# Fails while planning the Hive source:
# UnsupportedFileSystemException: No FileSystem for scheme "s3"
t_env.execute_sql(
    "INSERT INTO default_catalog.default_database.iris_out "
    "SELECT * FROM rdg_test.iris_test LIMIT 50"
)
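A possible explanation for the scheme error (an assumption, not verified here): the flink-s3-fs-hadoop plugin registers `s3://` with Flink's own FileSystem abstraction, but the Hive source's parallelism inference resolves split locations through Hadoop's `org.apache.hadoop.fs.FileSystem`, which does not see Flink plugins. One sketch of a workaround is to map the bare `s3` scheme in the Hadoop configuration that the HiveCatalog loads (e.g. a `core-site.xml` under `/tmp/hive-dwh-3/hive/conf-dwh-prod`), assuming `hadoop-aws` and its AWS SDK dependency are available on the classpath:

```xml
<!-- Sketch only: map the bare "s3" scheme to S3AFileSystem so Hadoop-side
     reads (e.g. HiveParallelismInference) can resolve s3:// locations.
     Assumes hadoop-aws and the AWS SDK jars are on the Flink classpath. -->
<configuration>
  <property>
    <name>fs.s3.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
</configuration>
```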