Install kubeadm in self-hosted mode
# System preparation
## Update system:
sudo apt-get update && sudo apt-get dist-upgrade -y
## Prepare for Docker-CE install:
sudo apt-get install -y \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg-agent \
  software-properties-common
## Add GPG key for Docker-CE repo:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
## Apt repository add for Docker-CE:
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
## Install docker-ce, docker-ce-cli, containerd:
sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install -y \
  docker-ce \
  docker-ce-cli \
  containerd.io
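## Optional: hold the Docker packages at their installed versions and confirm the daemon is up
## (a suggested sanity check; not part of the required flow):
# sudo apt-mark hold docker-ce docker-ce-cli containerd.io
# sudo systemctl enable --now docker
# sudo docker version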
# Prepare for Kubernetes tools:
## Add GPG key for Kubernetes repo:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
## Apt repository add for Kubernetes tools:
sudo bash -c 'cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF'
## Apt update:
sudo apt-get update
## Apt Install Kubernetes tools:
sudo apt-get install -y \
  kubelet \
  kubeadm \
  kubectl \
  ipvsadm \
  jq
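## Optional: hold the Kubernetes packages at their installed versions so an unattended apt
## upgrade does not move the cluster to a new release unexpectedly, then confirm the versions:
# sudo apt-mark hold kubelet kubeadm kubectl
# kubeadm version -o short
# kubectl version --client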
# Calico preparation:
## Obtain the Calico manifest (the one included in this gist):
## You can either download the Calico manifest from this gist and edit it to your needs, or simply apply it directly as described further below:
# curl -fsSL https://gist.githubusercontent.com/v1k0d3n/5e6fcc7ff929a827d1edd94df0bdc49d/raw/0438346eddaf2d99d5500b362f2b30f7921c811a/sdn-calico.yaml -o /home/"${USER}"/calico.yaml
## Replace the default pod CIDR with a custom pod CIDR:
export CALICO_POD_CIDR="192.168.0.0/16"
export POD_CIDR="10.25.0.0/22"
sed -i -e "s?${CALICO_POD_CIDR}?${POD_CIDR}?g" /home/"${USER}"/calico.yaml
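## Optional: confirm the substitution took effect (assumes the manifest was saved to
## /home/${USER}/calico.yaml as shown above):
# grep -n "${POD_CIDR}" /home/"${USER}"/calico.yaml
# grep -c "${CALICO_POD_CIDR}" /home/"${USER}"/calico.yaml   # should print 0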
## FYI: newer versions are available at the following URLs:
# curl -fsSL https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml -o /home/"${USER}"/calico-rbac-kdd.yaml
# curl -fsSL https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml -o /home/"${USER}"/calico.yaml
# Configure Docker:
## Newer versions of kubeadm have problems when Docker uses the `cgroupfs` cgroup driver. Configure Docker for the `systemd` cgroup driver and restart it:
sudo bash -c 'cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF'
sudo systemctl restart docker
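## Optional: verify Docker picked up the systemd cgroup driver and the overlay2 storage driver
## (the format fields below should print "systemd overlay2"):
# sudo docker info --format '{{.CgroupDriver}} {{.Driver}}'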
# Swapoff:
sudo swapoff -a
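## Note: swapoff -a does not persist across reboots. To make it permanent, comment out any
## swap entries in /etc/fstab (the sed below is a best-effort example; review your fstab first):
# sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab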
# Initialize Kubernetes:
## First option: kubeadm initialization command:
curl https://gist.githubusercontent.com/v1k0d3n/5e6fcc7ff929a827d1edd94df0bdc49d/raw/f1ee3fdabd1905d87ae959285e44b6f6adf69b71/kubeadm-init.yaml -O
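## Optional: pre-pull the control plane images using the downloaded configuration; this is not
## required, but it surfaces image or registry problems before running the init command below:
# sudo kubeadm config images pull --config kubeadm-init.yaml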
sudo kubeadm init --config kubeadm-init.yaml
## FYI: if you do not want to use the configuration YAML, you can instead initialize with:
# sudo kubeadm init --pod-network-cidr="${POD_CIDR}"
## Second option: fully flag-based kubeadm initialization:
# sudo kubeadm init \
# --apiserver-advertise-address="192.168.4.81" \
# --apiserver-bind-port="8443" \
# --kubernetes-version="v1.13.0" \
# --node-name="kubenode01.flagship.sh" \
# --pod-network-cidr="10.25.0.0/22" \
# --service-cidr="10.96.0.0/12" \
# --token="783bde.3f89s0fje9f38fhf" \
# --service-dns-domain="cluster.local" \
# --token-ttl="24h0m0s"
## Allow ${USER} to control Kubernetes:
mkdir -p /home/"${USER}"/.kube
sudo cp /etc/kubernetes/admin.conf /home/"${USER}"/.kube/config
sudo chown "${USER}":"${USER}" /home/"${USER}"/.kube/config
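## Optional: confirm kubectl can reach the cluster with the copied kubeconfig:
# kubectl cluster-info
# kubectl get nodes -o wide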
## Apply Calico Configuration:
# If you chose to download and edit the manifest with custom CIDR ranges above, apply your local copy instead:
# kubectl apply -f /home/"${USER}"/calico.yaml
kubectl apply -f https://gist.githubusercontent.com/v1k0d3n/5e6fcc7ff929a827d1edd94df0bdc49d/raw/0438346eddaf2d99d5500b362f2b30f7921c811a/sdn-calico.yaml
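## Optional: wait for the Calico pods to come up before continuing (the label and DaemonSet
## name come from the Calico manifest included in this gist):
# kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
# kubectl -n kube-system rollout status ds/calico-node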
## Untaint the node for AIO workloads:
kubectl taint nodes --all node-role.kubernetes.io/master-
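## Optional: verify the master taint was removed:
# kubectl describe nodes | grep -i taints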
# kubeadm self-hosted initialization:
## Move control plane to self-hosted mode:
sudo kubeadm alpha selfhosting pivot
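## Optional: confirm the control plane components are now running as self-hosted DaemonSets
## and that the static pod manifests were moved out of /etc/kubernetes/manifests:
# kubectl -n kube-system get ds,pods | grep self-hosted
# ls /etc/kubernetes/manifests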
# Remove all Kubernetes components after testing:
## Reset the cluster:
sudo kubeadm reset --force
sudo ipvsadm --clear
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
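## Optional: also clean up leftover CNI configuration and the local kubeconfig copy
## (paths assume the defaults used above):
# sudo rm -rf /etc/cni/net.d
# rm -f /home/"${USER}"/.kube/config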
# Other helpful commands:
kubectl -n kube-system get cm kubeadm-config -oyaml
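## A few more commands that can help when inspecting a kubeadm-built cluster (the
## version-suffixed ConfigMap name depends on the Kubernetes minor release):
# kubeadm config view
# kubectl -n kube-system get cm kubelet-config-1.13 -o yaml
# kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp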
# For audit logs, create the following directory:
sudo mkdir -p /var/log/kubernetes/apiserver
# INITIALIZATION CONFIGURATION:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.4.81
  bindPort: 6443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: dcebae.9432563132ffacba
  ttl: 1h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: kubenode01.flagship.sh
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
# CLUSTER CONFIGURATION:
# INCLUDED STILL: apiServer, controllerManager, scheduler, etcd, dns, networking
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
clusterName: kubernetes
imageRepository: k8s.gcr.io
useHyperKubeImage: true
certificatesDir: /etc/kubernetes/pki
controlPlaneEndpoint: ""
# KUBE-APISERVER CONFIGURATION:
apiServer:
  timeoutForControlPlane: 4m0s
  extraArgs:
    allow-privileged: "true"
    anonymous-auth: "true"
    audit-log-path: "/var/log/kubernetes/apiserver/audit.log"
    authorization-mode: "Node,RBAC"
    # NOTES: USED IN FLAGSHIP DEVELOPMENT MACOS-BASED CLUSTERS
    # basic-auth-file: "/srv/kubernetes/secrets/basic_auth.csv"
    profiling: "false"
    advertise-address: "192.168.4.81"
    client-ca-file: "/etc/kubernetes/pki/ca.crt"
    cloud-provider:
    enable-admission-plugins: "NodeRestriction"
    enable-bootstrap-token-auth: "true"
    etcd-cafile: "/etc/kubernetes/pki/etcd/ca.crt"
    etcd-certfile: "/etc/kubernetes/pki/apiserver-etcd-client.crt"
    etcd-keyfile: "/etc/kubernetes/pki/apiserver-etcd-client.key"
    etcd-servers: "https://127.0.0.1:2379"
    insecure-port: "0"
    kubelet-client-certificate: "/etc/kubernetes/pki/apiserver-kubelet-client.crt"
    kubelet-client-key: "/etc/kubernetes/pki/apiserver-kubelet-client.key"
    kubelet-preferred-address-types: "InternalIP,ExternalIP,Hostname"
    proxy-client-cert-file: "/etc/kubernetes/pki/front-proxy-client.crt"
    proxy-client-key-file: "/etc/kubernetes/pki/front-proxy-client.key"
    requestheader-allowed-names: "front-proxy-client"
    requestheader-client-ca-file: "/etc/kubernetes/pki/front-proxy-ca.crt"
    requestheader-extra-headers-prefix: "X-Remote-Extra-"
    requestheader-group-headers: "X-Remote-Group"
    requestheader-username-headers: "X-Remote-User"
    secure-port: "6443"
    service-account-key-file: "/etc/kubernetes/pki/sa.pub"
    service-cluster-ip-range: "10.96.0.0/22"
    service-node-port-range: "10000-32767"
    storage-backend: "etcd3"
    tls-cert-file: "/etc/kubernetes/pki/apiserver.crt"
    tls-private-key-file: "/etc/kubernetes/pki/apiserver.key"
# NOTES: CIS REASONS FOR CHANGES
# anonymous-auth: "false" / CIS: 1.1.1 / causes api/scheduler to periodically bounce
# allow-privileged: "false" / CIS: 1.7.1 / causes daemonsets to not deploy by default
# KUBE-CONTROLLER CONFIGURATION:
controllerManager:
  extraArgs:
    address: "127.0.0.1"
    allocate-node-cidrs: "true"
    authentication-kubeconfig: /etc/kubernetes/controller-manager.conf
    authorization-kubeconfig: /etc/kubernetes/controller-manager.conf
    client-ca-file: /etc/kubernetes/pki/ca.crt
    cloud-provider:
    cluster-cidr: "10.25.0.0/22"
    cluster-signing-cert-file: /etc/kubernetes/pki/ca.crt
    cluster-signing-key-file: /etc/kubernetes/pki/ca.key
    configure-cloud-routes: "false"
    controllers: "*,bootstrapsigner,tokencleaner"
    kubeconfig: /etc/kubernetes/controller-manager.conf
    leader-elect: "true"
    node-cidr-mask-size: "22"
    requestheader-client-ca-file: /etc/kubernetes/pki/front-proxy-ca.crt
    root-ca-file: /etc/kubernetes/pki/ca.crt
    service-account-private-key-file: /etc/kubernetes/pki/sa.key
    service-cluster-ip-range: "10.96.0.0/16"
    use-service-account-credentials: "true"
# KUBE-SCHEDULER CONFIGURATION:
scheduler:
  extraArgs:
    address: "127.0.0.1"
    kubeconfig: /etc/kubernetes/scheduler.conf
    leader-elect: "true"
# KUBE-DNS CONFIGURATION:
dns:
  type: CoreDNS
# KUBE-ETCD CONFIGURATION:
etcd:
  local:
    imageRepository: k8s.gcr.io
    imageTag: 3.2.24
    dataDir: /var/lib/etcd
    # extraArgs:
    #   name: "flagship-etcd"
  # EXTERNAL ETCD SUPPORT:
  # external:
  #   caFile: /etc/kubernetes/pki/etcd/ca.crt
  #   certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
  #   keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
  #   endpoints:
  #   - https://192.168.4.81:2379
  #   - https://127.0.0.1:2379
networking:
  dnsDomain: cluster.local
  podSubnet: 10.25.0.0/22
  serviceSubnet: 10.96.0.0/12
---
# KUBE-PROXY CONFIGURATION:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 5
clusterCIDR: 10.25.0.0/22
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
metricsBindAddress: 127.0.0.1:10249
mode: ipvs
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpIdleTimeout: 250ms
---
# KUBELET CONFIGURATION:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests
syncFrequency: 1m0s
fileCheckFrequency: 20s
httpCheckFrequency: 20s
address: 0.0.0.0
port: 10250
# DEFAULTS:
# tlsCertFile: /var/lib/kubelet/pki/kubelet.crt
# tlsPrivateKeyFile: /var/lib/kubelet/pki/kubelet.key
tlsCertFile: /etc/kubernetes/pki/apiserver-kubelet-client.crt
tlsPrivateKeyFile: /etc/kubernetes/pki/apiserver-kubelet-client.key
rotateCertificates: true
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
  webhook:
    enabled: true
    cacheTTL: 2m0s
  anonymous:
    enabled: false
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 10m0s
    cacheUnauthorizedTTL: 30s
registryPullQPS: 5
registryBurst: 10
eventRecordQPS: 5
eventBurst: 10
enableDebuggingHandlers: true
healthzPort: 10248
healthzBindAddress: 127.0.0.1
oomScoreAdj: -999
clusterDomain: cluster.local
clusterDNS:
- 10.96.0.10
streamingConnectionIdleTimeout: 4h0m0s
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m0s
nodeLeaseDurationSeconds: 40
imageMinimumGCAge: 2m0s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m0s
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
runtimeRequestTimeout: 2m0s
hairpinMode: promiscuous-bridge
maxPods: 110
podPidsLimit: -1
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
maxOpenFiles: 1000000
contentType: application/vnd.kubernetes.protobuf
kubeAPIQPS: 5
kubeAPIBurst: 10
serializeImagePulls: true
evictionHard:
  "imagefs.available": "15%"
  "memory.available": 100Mi
  "nodefs.available": "10%"
  "nodefs.inodesFree": "5%"
evictionPressureTransitionPeriod: 5m0s
enableControllerAttachDetach: true
makeIPTablesUtilChains: true
iptablesMasqueradeBit: 14
iptablesDropBit: 15
failSwapOn: true
containerLogMaxSize: 10Mi
containerLogMaxFiles: 5
configMapAndSecretChangeDetectionStrategy: Watch
enforceNodeAllocatable:
- pods
# Calico Version v3.5.0
# https://docs.projectcalico.org/v3.5/releases#v3.5.0
# This manifest includes the following component versions:
# calico/node:v3.5.0
# calico/cni:v3.5.0
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# You must set a non-zero value for Typha replicas below.
typha_service_name: "calico-typha"
# Configure the Calico backend to use.
calico_backend: "bird"
# Configure the MTU to use
veth_mtu: "1440"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.0",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "host-local",
"subnet": "usePodCidr"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
}
]
}
---
# This manifest creates a Service, which will be backed by Calico's Typha daemon.
# Typha sits in between Felix and the API server, reducing Calico's load on the API server.
apiVersion: v1
kind: Service
metadata:
name: calico-typha
namespace: kube-system
labels:
k8s-app: calico-typha
spec:
ports:
- port: 5473
protocol: TCP
targetPort: calico-typha
name: calico-typha
selector:
k8s-app: calico-typha
---
# This manifest creates a Deployment of Typha to back the above service.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: calico-typha
namespace: kube-system
labels:
k8s-app: calico-typha
spec:
# Number of Typha replicas. To enable Typha, set this to a non-zero value *and* set the
# typha_service_name variable in the calico-config ConfigMap above.
#
# We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is essential
# (when using the Kubernetes datastore). Use one replica for every 100-200 nodes. In
# production, we recommend running at least 3 replicas to reduce the impact of rolling upgrade.
replicas: 1
revisionHistoryLimit: 2
template:
metadata:
labels:
k8s-app: calico-typha
annotations:
# This, along with the CriticalAddonsOnly toleration below, marks the pod as a critical
# add-on, ensuring it gets priority scheduling and that its resources are reserved
# if it ever gets evicted.
scheduler.alpha.kubernetes.io/critical-pod: ''
cluster-autoscaler.kubernetes.io/safe-to-evict: 'true'
spec:
nodeSelector:
beta.kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
# Since Calico can't network a pod until Typha is up, we need to run Typha itself
# as a host-networked pod.
serviceAccountName: calico-node
containers:
- image: calico/typha:v3.5.0
name: calico-typha
ports:
- containerPort: 5473
name: calico-typha
protocol: TCP
env:
# Enable "info" logging by default. Can be set to "debug" to increase verbosity.
- name: TYPHA_LOGSEVERITYSCREEN
value: "info"
# Disable logging to file and syslog since those don't make sense in Kubernetes.
- name: TYPHA_LOGFILEPATH
value: "none"
- name: TYPHA_LOGSEVERITYSYS
value: "none"
# Monitor the Kubernetes API to find the number of running instances and rebalance
# connections.
- name: TYPHA_CONNECTIONREBALANCINGMODE
value: "kubernetes"
- name: TYPHA_DATASTORETYPE
value: "kubernetes"
- name: TYPHA_HEALTHENABLED
value: "true"
# Uncomment these lines to enable prometheus metrics. Since Typha is host-networked,
# this opens a port on the host, which may need to be secured.
#- name: TYPHA_PROMETHEUSMETRICSENABLED
# value: "true"
#- name: TYPHA_PROMETHEUSMETRICSPORT
# value: "9093"
livenessProbe:
exec:
command:
- calico-typha
- check
- liveness
periodSeconds: 30
initialDelaySeconds: 30
readinessProbe:
exec:
command:
- calico-typha
- check
- readiness
periodSeconds: 10
---
# This manifest creates a Pod Disruption Budget for Typha to allow K8s Cluster Autoscaler to evict
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: calico-typha
namespace: kube-system
labels:
k8s-app: calico-typha
spec:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: calico-typha
---
# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
annotations:
# This, along with the CriticalAddonsOnly toleration below,
# marks the pod as a critical add-on, ensuring it gets
# priority scheduling and that its resources are reserved
# if it ever gets evicted.
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
beta.kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure calico-node gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
initContainers:
# This container installs the Calico CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: calico/cni:v3.5.0
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# Set the hostname based on the k8s node name.
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
containers:
# Runs calico/node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: calico/node:v3.5.0
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Typha support: controlled by the ConfigMap.
- name: FELIX_TYPHAK8SSERVICENAME
valueFrom:
configMapKeyRef:
name: calico-config
key: typha_service_name
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: "true"
# Set based on the k8s node name.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Always"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
value: "10.25.0.0/22"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
httpGet:
path: /liveness
port: 9099
host: localhost
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/calico-node
- -bird-ready
- -felix-ready
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
volumes:
# Used by calico/node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system
---
# Create all the CustomResourceDefinitions needed for
# Calico policy and networking mode.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: FelixConfiguration
plural: felixconfigurations
singular: felixconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPPeer
plural: bgppeers
singular: bgppeer
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPConfiguration
plural: bgpconfigurations
singular: bgpconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: hostendpoints.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: HostEndpoint
plural: hostendpoints
singular: hostendpoint
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: ClusterInformation
plural: clusterinformations
singular: clusterinformation
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworksets.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkSet
plural: globalnetworksets
singular: globalnetworkset
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networkpolicies.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkPolicy
plural: networkpolicies
singular: networkpolicy
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: calico-node
rules:
# The CNI plugin needs to get pods, nodes, and namespaces.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
# Used to discover service IPs for advertisement.
- watch
- list
# Used to discover Typhas.
- get
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Needed for clearing NodeNetworkUnavailable flag.
- patch
# Calico stores some configuration information in node annotations.
- update
# Watch for changes to Kubernetes NetworkPolicies.
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
# Used by Calico for policy information.
- apiGroups: [""]
resources:
- pods
- namespaces
- serviceaccounts
verbs:
- list
- watch
# The CNI plugin patches pods/status.
- apiGroups: [""]
resources:
- pods/status
verbs:
- patch
# Calico monitors various CRDs for config.
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- felixconfigurations
- bgppeers
- globalbgpconfigs
- bgpconfigurations
- ippools
- globalnetworkpolicies
- globalnetworksets
- networkpolicies
- clusterinformations
- hostendpoints
verbs:
- get
- list
- watch
# Calico must create and update some CRDs on startup.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
- felixconfigurations
- clusterinformations
verbs:
- create
- update
# Calico stores some configuration information on the node.
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- watch
# These permissions are only required for upgrade from v2.6, and can
# be removed after upgrade or on fresh installations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- bgpconfigurations
- bgppeers
verbs:
- create
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system
---

v1k0d3n commented Feb 10, 2019

Notes: kubeadm still seems to have issues with its self-hosted mode of operation (tested as an AIO node via HyperKit).

Every 2.0s: kubectl get pods -o wide --all-namespaces                                                                                                                                                   kubeadm-testing: Sun Feb 10 04:08:39 2019

NAMESPACE     NAME                                        READY   STATUS             RESTARTS   AGE     IP             NODE              NOMINATED NODE   READINESS GATES
kube-system   calico-node-4c56t                           1/1     Running            0          15m     192.168.4.81   kubeadm-testing   <none>           <none>
kube-system   calico-typha-658bcf6d77-67vtb               1/1     Running            0          15m     192.168.4.81   kubeadm-testing   <none>           <none>
kube-system   coredns-86c58d9df4-blvd6                    1/1     Running            0          21m     10.25.0.9      kubeadm-testing   <none>           <none>
kube-system   coredns-86c58d9df4-xjbc4                    1/1     Running            0          21m     10.25.0.8      kubeadm-testing   <none>           <none>
kube-system   etcd-kubeadm-testing                        1/1     Running            0          20m     192.168.4.81   kubeadm-testing   <none>           <none>
kube-system   kube-proxy-7d56g                            1/1     Running            0          21m     192.168.4.81   kubeadm-testing   <none>           <none>
kube-system   self-hosted-kube-apiserver-4d8tb            1/1     Running            1          3m44s   192.168.4.81   kubeadm-testing   <none>           <none>
kube-system   self-hosted-kube-controller-manager-q5cqs   1/1     Running            1          2m28s   192.168.4.81   kubeadm-testing   <none>           <none>
kube-system   self-hosted-kube-scheduler-dbjtr            0/1     CrashLoopBackOff   3          97s     192.168.4.81   kubeadm-testing   <none>           <none>

The scheduler always ends up in a bad state. It seems this can be resolved manually, but out of the box the logs give a good idea of what's going on:

ubuntu@kubeadm-testing:~$ kubectl logs pod/self-hosted-kube-scheduler-dbjtr -n kube-system
I0210 04:09:10.016183       1 serving.go:318] Generated self-signed cert in-memory
W0210 04:09:10.824199       1 authentication.go:248] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLE_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
ubuntu@kubeadm-testing:~$
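
For reference, the manual workaround suggested by the log message above would look something like this (the rolebinding name is arbitrary; it grants the default service account in kube-system read access to the extension-apiserver-authentication ConfigMap):

kubectl create rolebinding -n kube-system self-hosted-scheduler-auth-reader \
  --role=extension-apiserver-authentication-reader \
  --serviceaccount=kube-system:default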

I've been playing with this for the better part of the day, and it's still just too fragile for production. Unfortunately, as a result, we'll have to build around it and some of the other client-side libraries in Kubernetes until we can figure out how to get our changes upstream into kubeadm. I really like the project, the people are great to work with, and it's ultimately the direction for Kubernetes going forward; we, and our use cases for containerized voice, video, and network services (for NFV), are definitely an outlier for now.
