$ vagrant init fedora/25-atomic-host
$ vagrant up
$ vagrant ssh
Temporarily enable updates-testing; this step can go away once the package gets enough karma:
# sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/fedora-updates-testing.repo
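If you want to sanity-check the sed expression before touching the real repo config, you can run it against a throwaway file. The stanza below is a made-up minimal example, not the actual contents of fedora-updates-testing.repo:

```shell
# Try the substitution on a scratch copy first (the stanza is a
# simplified stand-in for the real repo file).
tmpfile=$(mktemp)
printf '[updates-testing]\nenabled=0\n' > "$tmpfile"
sed -i 's/enabled=0/enabled=1/' "$tmpfile"
result=$(cat "$tmpfile")
rm -f "$tmpfile"
echo "$result"
```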
# rpm-ostree install --reboot kubernetes-client kubernetes-node etcd
As of this commit, Fedora Atomic no longer includes Kubernetes in the image. The rpm-ostree
command above uses package layering to add the needed packages.
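After the reboot, the layered packages should have put their binaries on the host's PATH. A quick sanity check along these lines can confirm that (the binary names are assumptions based on the packages installed above):

```shell
# Report any expected binaries that didn't land on PATH after layering.
missing=""
for bin in kubectl kubelet etcd; do
  command -v "$bin" >/dev/null 2>&1 || missing="$missing $bin"
done
if [ -z "$missing" ]; then
  echo "all expected binaries present"
else
  echo "missing:$missing"
fi
```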
These containers come from this repo; I'd like to move the repo and its Docker namespace to projectatomic.
# mkdir -p /etc/kubernetes/manifests/
# vi /etc/kubernetes/manifests/apiserver-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
spec:
  containers:
  - image: jasonbrooks/kubernetes-apiserver:f25
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/ssl
      name: etcssl
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/ssl
    name: etcssl
# vi /etc/kubernetes/manifests/scheduler-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
spec:
  containers:
  - image: jasonbrooks/kubernetes-scheduler:f25
    livenessProbe:
      httpGet:
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
  hostNetwork: true
# vi /etc/kubernetes/manifests/controller-mgr-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
spec:
  containers:
  - image: jasonbrooks/kubernetes-controller-manager:f25
    livenessProbe:
      httpGet:
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    volumeMounts:
    - mountPath: /etc/ssl
      name: etcssl
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/ssl
    name: etcssl
These pod manifests are based on the ones from this RHEL doc. I'm not mounting the host's /etc/kubernetes
into the containers because, by default, the host won't have any configs there, and the containers wouldn't start without them. You can see which arguments a component is currently running with:

# ps ax | grep apiserver
 2320 ?        Ssl    0:09 /usr/bin/kube-apiserver --admission-control=NamespaceLifecycle,
NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --allow-privileged=false
--etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=127.0.0.1 --logtostderr=true
--service-cluster-ip-range=10.254.0.0/16 --v=0

To customize the default values, add a command section like the following. Arguments you don't provide will be taken from the config files inside the container image:
# vi /etc/kubernetes/manifests/apiserver-pod.yaml
...
spec:
  containers:
  - image: jasonbrooks/kubernetes-apiserver:f25
    command:
    - /usr/bin/kube-apiserver-docker.sh
    - --allow-privileged=true
    livenessProbe:
...
# ps ax | grep apiserver
621 ? Ssl 0:00 /usr/bin/kube-apiserver --admission-control=NamespaceLifecycle,
NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --allow-privileged=true
--etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=127.0.0.1 --logtostderr=true
--service-cluster-ip-range=10.254.0.0/16 --v=0
# sed -i 's#KUBELET_ARGS=""#KUBELET_ARGS="--register-node=true --config=/etc/kubernetes/manifests/"#g' \
/etc/kubernetes/kubelet
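The sed call above uses # as the s/// delimiter so the slashes in --config=/etc/kubernetes/manifests/ don't need escaping. Here's the same substitution exercised against a scratch file; the single KUBELET_ARGS line is an assumed stand-in for the real /etc/kubernetes/kubelet:

```shell
# '#' as the delimiter avoids escaping every '/' in the replacement path.
cfg=$(mktemp)
printf 'KUBELET_ARGS=""\n' > "$cfg"
sed -i 's#KUBELET_ARGS=""#KUBELET_ARGS="--register-node=true --config=/etc/kubernetes/manifests/"#g' "$cfg"
kubelet_args=$(cat "$cfg")
rm -f "$cfg"
echo "$kubelet_args"
```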
# mkdir /var/lib/kubelet
# for SERVICES in docker etcd kube-proxy kubelet; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl is-active $SERVICES
done
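The loop above keeps going even if a restart fails. A slightly more defensive sketch is below; 'run' is a stub standing in for systemctl so the control flow can be exercised off the host (swap it for systemctl on the actual machine):

```shell
# 'run' is a stand-in for systemctl here; replace it on the real host.
run() { echo "systemctl $*"; }
failed=""
for svc in docker etcd kube-proxy kubelet; do
  if run restart "$svc"; then
    run enable "$svc"
    run is-active "$svc"
  else
    # Record the failure instead of silently continuing.
    failed="$failed $svc"
  fi
done
[ -z "$failed" ] && echo "all services restarted"
```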