@sleshJdev
Created March 15, 2021 22:31
k8s single-node cluster on vagrant ubuntu vm
# -*- mode: ruby -*-
# vi: set ft=ruby :
$script = <<-SCRIPT
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo add-apt-repository "deb [arch=amd64] https://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common gnupg2 nginx docker.io kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo groupadd -f docker
sudo usermod -aG docker vagrant
# Note: group membership only takes effect on the next login; newgrp has no
# lasting effect in a non-interactive provisioning script.
newgrp docker
IPADDR=$(ip -4 address show dev eth1 | grep inet | awk '{print $2}' | cut -f1 -d/)
NODENAME=$(hostname -s)
echo "IPADDR: ${IPADDR}, NODENAME: ${NODENAME}"
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-cert-extra-sans=${IPADDR} --node-name=${NODENAME}
sudo --user=vagrant mkdir -p /home/vagrant/.kube
sudo cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
sudo chown $(id -u vagrant):$(id -g vagrant) /home/vagrant/.kube/config
sleep 10s
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sleep 5s
kubectl taint nodes --all node-role.kubernetes.io/master-
SCRIPT
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
config.vm.boot_timeout = 300
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://vagrantcloud.com/search.
config.vm.box = "bento/ubuntu-20.04"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
config.vm.box_check_update = false
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# NOTE: This will enable public access to the opened port
# config.vm.network "forwarded_port", guest: 80, host: 8080
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine and only allow access
# via 127.0.0.1 to disable public access
config.vm.network "forwarded_port", guest: 80, host: 9999, host_ip: "127.0.0.1"
config.vm.network "forwarded_port", guest: 6443, host: 6443, host_ip: "127.0.0.1"
config.vm.network "forwarded_port", guest: 8001, host: 8001, host_ip: "127.0.0.1"
for port in 30001..30010
config.vm.network "forwarded_port", guest: port, host: port, host_ip: "127.0.0.1"
end
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"
config.vm.network "private_network", type: "dhcp"
# Create a public network, which generally corresponds to a bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network "public_network"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
config.vm.provider "virtualbox" do |vb|
# Display the VirtualBox GUI when booting the machine
# vb.gui = true
# Customize the amount of memory on the VM:
vb.memory = "3072"
vb.cpus = 3
end
#
# View the documentation for the provider you are using for more
# information on available options.
# Enable provisioning with a shell script. Additional provisioners such as
# Ansible, Chef, Docker, Puppet and Salt are also available. Please see the
# documentation for more information about their specific syntax and use.
config.vm.provision "shell", inline: $script, privileged: false
config.vagrant.plugins = ["vagrant-scp"]
end
@sleshJdev (author) commented:
After you've created the VM with vagrant up --provision, you have to configure kubectl to access the cluster from your host machine.
First, copy the cluster config file into $HOME/.kube/config by executing vagrant scp :$HOME/.kube/config $HOME/.kube/config (this needs the vagrant-scp plugin, which the Vagrantfile installs). Then open $HOME/.kube/config on your host machine and replace the server address (in my case it's server: https://10.0.2.15:6443) with https://{insert here $IPADDR value from guest vm}:6443.
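The address swap can also be scripted with sed. Below is a minimal sketch that operates on a scratch file, so it is safe to run as-is; 192.168.56.10 is a made-up placeholder (substitute the IPADDR value echoed by the provisioning script, and point CONFIG at your real $HOME/.kube/config):

```shell
# Placeholder address -- replace with the IPADDR printed on the guest VM.
NEW_ADDR="192.168.56.10"

# Stand-in for $HOME/.kube/config; 10.0.2.15 is the NAT address kubeadm
# wrote into admin.conf on the guest.
CONFIG="$(mktemp)"
echo "server: https://10.0.2.15:6443" > "$CONFIG"

# Rewrite only the server line, leaving the rest of the file untouched.
sed -i -E "s#(server: https://)[0-9.]+(:6443)#\1${NEW_ADDR}\2#" "$CONFIG"
cat "$CONFIG"   # prints: server: https://192.168.56.10:6443
```

Note that sed -i edits in place with GNU sed; on macOS (BSD sed) use sed -i '' instead.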

If all the commands ran without errors, then running kubectl get pods --all-namespaces on your host machine will show something like:

$ kubectl get pods --all-namespaces 
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-74ff55c5b-8lqhh           1/1     Running   0          14m
kube-system   coredns-74ff55c5b-sdkbd           1/1     Running   0          14m
kube-system   etcd-vagrant                      1/1     Running   0          14m
kube-system   kube-apiserver-vagrant            1/1     Running   0          14m
kube-system   kube-controller-manager-vagrant   1/1     Running   0          14m
kube-system   kube-flannel-ds-997bq             1/1     Running   0          14m
kube-system   kube-proxy-wh4bq                  1/1     Running   0          14m
kube-system   kube-scheduler-vagrant            1/1     Running   0          14m
