Last active
March 28, 2020 14:46
Flink interpreter properties
| Property | Default | Description |
|---|---|---|
| FLINK_HOME | | Location of the Flink installation. Must be specified; otherwise you cannot use Flink in Zeppelin |
| HADOOP_CONF_DIR | | Location of the Hadoop conf dir. Must be set when running in yarn mode |
| HIVE_CONF_DIR | | Location of the Hive conf dir. Must be set if you want to connect to the Hive metastore |
| flink.execution.mode | local | Execution mode of Flink: local, yarn, or remote |
| flink.execution.remote.host | | JobManager hostname (remote mode only) |
| flink.execution.remote.port | | JobManager port (remote mode only) |
| flink.jm.memory | 1024 | Total memory (MB) of the JobManager |
| flink.tm.memory | 1024 | Total memory (MB) of each TaskManager |
| flink.tm.slot | 1 | Number of slots per TaskManager |
| local.number-taskmanager | 4 | Total number of TaskManagers in local mode |
| flink.yarn.appName | Zeppelin Flink Session | Yarn app name |
| flink.yarn.queue | | Queue name of the yarn app |
| flink.webui.yarn.useProxy | false | Whether to use the yarn proxy url as the Flink web url, e.g. http://localhost:8088/proxy/application_1583396598068_0004 |
| flink.udf.jars | | UDF jars (comma separated). Zeppelin registers the UDFs in these jars automatically for the user; the UDF name is the class name |
| flink.execution.jars | | Additional user jars (comma separated) |
| flink.execution.packages | | Additional user packages (comma separated), e.g. org.apache.flink:flink-connector-kafka_2.11:1.10,org.apache.flink:flink-connector-kafka-base_2.11:1.10,org.apache.flink:flink-json:1.10 |
| zeppelin.flink.concurrentBatchSql.max | 10 | Max number of concurrent batch sql statements (%flink.bsql) |
| zeppelin.flink.concurrentStreamSql.max | 10 | Max number of concurrent streaming sql statements (%flink.ssql) |
| zeppelin.pyflink.python | python | Python binary executable for PyFlink |
| table.exec.resource.default-parallelism | 1 | Default parallelism for Flink sql jobs |
| zeppelin.flink.scala.color | true | Whether to display scala shell output in colored format |
| zeppelin.flink.enableHive | false | Whether to enable Hive |
| zeppelin.flink.hive.version | 2.3.4 | Hive version to connect to |
| zeppelin.flink.printREPLOutput | true | Whether to print REPL output |
| zeppelin.flink.maxResult | 1000 | Max number of rows returned by the sql interpreter |
| flink.interpreter.close.shutdown_cluster | true | Whether to shut down the application when closing the interpreter |
| zeppelin.interpreter.close.cancel_job | true | Whether to cancel the Flink job when closing the interpreter |
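As an illustration, a yarn-mode setup might combine the properties above as follows. This is only a sketch: the paths, memory sizes, and queue name are placeholder values, not defaults.

```properties
# Where Flink and the Hadoop client configs live (placeholder paths)
FLINK_HOME=/opt/flink-1.10.0
HADOOP_CONF_DIR=/etc/hadoop/conf

# Launch a yarn session instead of a local cluster
flink.execution.mode=yarn
flink.yarn.appName=Zeppelin Flink Session
flink.yarn.queue=default

# Size the cluster: 2 GB JobManager, 4 GB TaskManagers with 2 slots each
flink.jm.memory=2048
flink.tm.memory=4096
flink.tm.slot=2
```

With `flink.execution.mode=local`, `local.number-taskmanager` controls the cluster size instead, and the yarn-specific properties are ignored.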