# /etc/teleport.yaml of a Teleport cluster located in the AWS N. Virginia
# (us-east-1) region.
# By default, this file should be stored in /etc/teleport.yaml

# This section of the configuration file applies to all teleport
# services.
teleport:
  # nodename allows assigning an alternative name for this node to be
  # reached by. By default it is equal to the hostname.
  nodename: graviton
  # Data directory where the Teleport daemon keeps its data.
  # See the "Filesystem Layout" section above for more details.
  data_dir: /var/lib/teleport
  # Invitation token used to join a cluster. It is not used on
  # subsequent starts.
  auth_token: xxxx-token-xxxx
  # Optional CA pin of the auth server. This enables a more secure way of
  # adding new nodes to a cluster. See the "Adding Nodes" section above.
  ca_pin: "sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1"
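  # On a running auth server, the current CA pin can be printed with:
  #   $ tctl status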
  # When running in multi-homed or NATed environments, Teleport nodes need
  # to know which IP they will be reachable at by other nodes.
  #
  # This value can be specified as an FQDN, e.g. host.example.com
  advertise_ip: teleport.example.com
  # List of auth servers in a cluster. You will have more than one auth
  # server if you configure teleport auth to run in an HA configuration.
  auth_servers:
    - 0.0.0.0:3025
  # Teleport throttles all connections to avoid abuse. These settings allow
  # you to adjust the default limits.
  connection_limits:
    max_connections: 1000
    max_users: 250
  # Logging configuration. Possible output values are 'stdout', 'stderr' and
  # 'syslog'. Possible severity values are DEBUG, INFO, WARN and ERROR
  # (the default).
  log:
    output: stderr
    severity: DEBUG
  # Configuration for the storage back-end used for the cluster state and the
  # audit log. Several back-end types are supported. See the "High Availability"
  # section of this Admin Manual below to learn how to configure DynamoDB,
  # S3, etcd and other highly available back-ends.
  # storage:
  #   # By default teleport uses the `data_dir` directory on a local filesystem.
  #   type: dir
  #   # Array of locations where the audit log events will be stored. By
  #   # default they are stored in `/var/lib/teleport/log`.
  #   audit_events_uri: ['file:///var/lib/teleport/log', 'dynamodb://events_table_name']
  #   # Use this setting to configure teleport to store the recorded sessions in
  #   # an AWS S3 bucket. See the "Using Amazon S3" chapter for more information.
  #   audit_sessions_uri: 's3://bucket/?region=us-east-1'
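  # For an AWS deployment like this one, the cluster state itself can also be
  # kept in DynamoDB. A minimal sketch (the region and table name below are
  # placeholders, not values from this cluster):
  # storage:
  #   type: dynamodb
  #   region: us-east-1
  #   table_name: teleport_cluster_state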
  # Cipher algorithms that the server supports. This section only needs to be
  # set if you want to override the defaults.
  ciphers:
    - aes128-ctr
    - aes192-ctr
    - aes256-ctr
    - aes128-gcm@openssh.com
    # - arcfour256
    # - arcfour128
  # Key exchange algorithms that the server supports. This section only needs
  # to be set if you want to override the defaults.
  kex_algos:
    - curve25519-sha256@libssh.org
    - ecdh-sha2-nistp256
    - ecdh-sha2-nistp384
    - ecdh-sha2-nistp521
    - diffie-hellman-group14-sha1
    - diffie-hellman-group1-sha1
  # Message authentication code (MAC) algorithms that the server supports.
  # This section only needs to be set if you want to override the defaults.
  mac_algos:
    - hmac-sha2-256-etm@openssh.com
    - hmac-sha2-256
    - hmac-sha1
    - hmac-sha1-96
  # List of the supported ciphersuites. If this section is not specified,
  # only the default ciphersuites are enabled.
  ciphersuites:
    - tls-rsa-with-aes-128-cbc-sha # default
    - tls-rsa-with-aes-256-cbc-sha # default
    - tls-rsa-with-aes-128-cbc-sha256
    - tls-rsa-with-aes-128-gcm-sha256
    - tls-rsa-with-aes-256-gcm-sha384
    - tls-ecdhe-ecdsa-with-aes-128-cbc-sha
    - tls-ecdhe-ecdsa-with-aes-256-cbc-sha
    - tls-ecdhe-rsa-with-aes-128-cbc-sha
    - tls-ecdhe-rsa-with-aes-256-cbc-sha
    - tls-ecdhe-ecdsa-with-aes-128-cbc-sha256
    - tls-ecdhe-rsa-with-aes-128-cbc-sha256
    - tls-ecdhe-rsa-with-aes-128-gcm-sha256
    - tls-ecdhe-ecdsa-with-aes-128-gcm-sha256
    - tls-ecdhe-rsa-with-aes-256-gcm-sha384
    - tls-ecdhe-ecdsa-with-aes-256-gcm-sha384
    - tls-ecdhe-rsa-with-chacha20-poly1305
    - tls-ecdhe-ecdsa-with-chacha20-poly1305
# This section configures the 'auth service':
auth_service:
  # Turns the 'auth' role on. Default is 'yes'.
  enabled: yes
  # A cluster name is used as part of a signature in certificates
  # generated by this CA.
  #
  # We strongly recommend explicitly setting it to something meaningful, as it
  # becomes important when configuring trust between multiple clusters.
  #
  # By default an automatically generated name is used (not recommended).
  #
  # IMPORTANT: if you change cluster_name, it will invalidate all generated
  # certificates and keys (you may need to wipe out the /var/lib/teleport
  # directory).
  cluster_name: "teleport.example.com"
  authentication:
    # Default authentication type. Possible values are 'local', 'oidc' and
    # 'saml'. Only local authentication (Teleport's own user DB) is supported
    # in the open source version.
    type: local
    # second_factor can be 'off', 'otp', or 'u2f'.
    second_factor: otp
    # This section is used if second_factor is set to 'u2f'.
    u2f:
      # app_id must point to the URL of the Teleport Web UI (proxy) accessible
      # by the end users.
      app_id: https://localhost:3080
      # facets must list all proxy servers if more than one is deployed.
      facets:
        - https://localhost:3080
  # IP and the port to bind to. Other Teleport nodes will be connecting to
  # this port (AKA "Auth API" or "Cluster API") to validate client
  # certificates.
  listen_addr: 0.0.0.0:3025
  # The optional DNS name of the auth server if it is located behind a load
  # balancer (see the public_addr section below).
  public_addr: teleport.example.com:3025
  # Pre-defined tokens for adding new nodes to a cluster. Each token specifies
  # the role a new node will be allowed to assume. The more secure way to
  # add nodes is to use the `tctl nodes add --ttl` command to generate
  # auto-expiring tokens.
  #
  # We recommend using tools like `pwgen` to generate sufficiently random
  # tokens of 32+ byte length.
  tokens:
    - "proxy,node:xxxxx"
    - "auth:yyyy"
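  # For example, a token that expires after five minutes and only allows
  # joining as a node can be generated on the auth server with:
  #   $ tctl nodes add --ttl=5m --roles=node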
  # Optional setting for configuring session recording. Possible values are:
  #   "node"  : sessions will be recorded on the node level (the default)
  #   "proxy" : recording on the proxy level, see the "recording proxy mode" section
  #   "off"   : session recording is turned off
  session_recording: "node"
  # This setting determines if a Teleport proxy performs strict host key
  # checks. Only applicable if session_recording=proxy; see "recording proxy
  # mode" for details.
  proxy_checks_host_keys: yes
  # Determines if SSH sessions to cluster nodes are forcefully terminated
  # after no activity from a client (idle client).
  # Examples: "30m", "1h" or "1h30m"
  client_idle_timeout: never
  # Determines if clients will be forcefully disconnected when their
  # certificates expire in the middle of an active SSH session (default is 'no').
  disconnect_expired_cert: no
  # License file to start the auth server with. Note that this setting is
  # ignored in open-source Teleport and is required only for the Teleport Pro,
  # Business and Enterprise subscription plans.
  #
  # The path can be either absolute or relative to the configured `data_dir`
  # and should point to the license file obtained from the Teleport Download
  # Portal.
  #
  # If not set, by default Teleport will look for the `license.pem` file in
  # the configured `data_dir`.
  license_file: /var/lib/teleport/license.pem
  # DEPRECATED in Teleport 3.2 (moved to the proxy_service section)
  # kubeconfig_file: /home/ubuntu/.kube/config
# This section configures the 'node service':
ssh_service:
  # Turns the 'ssh' role on. Default is 'yes'.
  enabled: yes
  # IP and the port for the SSH service to bind to.
  listen_addr: 0.0.0.0:3022
  # The optional public address of the SSH service. This is useful if
  # administrators want to allow users to connect to nodes directly, bypassing
  # a Teleport proxy (see the public_addr section below).
  public_addr: teleport.example.com:3022
  # See the explanation of labels in the "Labeling Nodes" section below.
  labels:
    role: master
    type: postgres
  # List of commands to execute periodically. Their output will be used as
  # node labels. See the "Labeling Nodes" section below for more information
  # and more examples.
  commands:
    # This command will add a label 'arch=x86_64' to a node.
    - name: arch
      command: ['/bin/uname', '-p']
      period: 1h0m0s
  # Enables reading ~/.tsh/environment before creating a session. By default
  # set to false; can be set to true here or as a command line flag.
  permit_user_env: false
  # Configures PAM integration. See below for more details.
  pam:
    enabled: no
    service_name: teleport
# This section configures the 'proxy service':
proxy_service:
  # Turns the 'proxy' role on. Default is 'yes'.
  enabled: yes
  # SSH forwarding/proxy address. Command line (CLI) clients always begin
  # their SSH sessions by connecting to this port.
  listen_addr: 0.0.0.0:3023
  # Reverse tunnel listening address. An auth server (CA) can establish an
  # outbound (from behind the firewall) connection to this address.
  # This will allow users of the outside CA to connect to behind-the-firewall
  # nodes.
  tunnel_listen_addr: 0.0.0.0:3024
  # The HTTPS listen address to serve the Web UI and also to authenticate
  # command line (CLI) users via password+HOTP.
  web_listen_addr: 0.0.0.0:3080
  # The DNS name of the proxy HTTPS endpoint as accessible by cluster users.
  # Defaults to the proxy's hostname if not specified. If running multiple
  # proxies behind a load balancer, this name must point to the load balancer
  # (see the public_addr section below).
  public_addr: teleport.example.com:3080
  # The DNS name of the proxy SSH endpoint as accessible by cluster clients.
  # Defaults to the proxy's hostname if not specified. If running multiple
  # proxies behind a load balancer, this name must point to the load balancer.
  # Use a TCP load balancer because this port uses the SSH protocol.
  ssh_public_addr: teleport.example.com:3023
  # TLS certificate for the HTTPS connection. Configuring these properly is
  # critical for Teleport security.
  https_key_file: /var/lib/teleport/webproxy_key.pem
  https_cert_file: /var/lib/teleport/webproxy_cert.pem
  # This section configures the Kubernetes proxy service.
  kubernetes:
    # Turns the 'kubernetes' proxy on. Default is 'no'.
    enabled: yes
    # Kubernetes proxy listen address.
    listen_addr: 0.0.0.0:3026
    # The DNS name of the Kubernetes proxy server that is accessible by
    # cluster clients. If running multiple proxies behind a load balancer,
    # this name must point to the load balancer.
    public_addr: ['teleport.example.com:3026']
    # This setting is not required if the Teleport proxy service is
    # deployed inside a Kubernetes cluster. Otherwise, Teleport proxy
    # will use the credentials from this file:
    kubeconfig_file: /home/ubuntu/.kube/config
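# With the file saved as /etc/teleport.yaml, the daemon picks it up by
# default; the path can also be passed explicitly:
#   $ teleport start --config=/etc/teleport.yaml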