playbook kind
# Prerequisites for kind on an Ubuntu box
- check permissions
- kubectl
- Docker on the laptop/VM
Reference: https://kind.sigs.k8s.io/
-------------------------
Install kubectl
Check here:
https://kubernetes.io/docs/tasks/tools/install-kubectl/
It can also be installed with brew on macOS or with Linuxbrew.
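For reference, a minimal sketch of the Linux install described on that page (the exact URL/version may differ; check the page above):
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
kubectl version --client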
------------------------
Install Octant or k9s for an easier experience managing the k8s cluster.
==========k9s==========
snap install k9s # on Ubuntu 20.04 (my case) snap comes installed by default
Check by invoking k9s from the CLI; if it fails, try exporting the path to the config file:
export KUBECONFIG=$HOME/.kube/config
Another common issue is "permission denied" on .k9s; try:
mkdir ~/.k9s
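A quick sanity check once k9s is installed (assuming the kubeconfig above is in place):
k9s version  # prints build info
k9s info     # shows the configuration and log paths k9s is using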
=================
****************** to Install KIND on linux *************************************
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.9.0/kind-linux-amd64
chmod +x ./kind
mv ./kind /usr/local/bin/kind
which kind
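Optionally confirm the binary works before creating anything:
kind version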
first test:
kind create cluster
then:
kind get clusters
# by default this creates a single-node cluster named kind
Check that it is running as a Docker container on the host machine:
docker ps -a
then:
kind delete cluster
---------create kind cluster with name and config----------
Use this manifest as a reference; yours can be nicer:
File: calico-kind.yaml
----------------------
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4  # always check the api version in the docs
networking:
  disableDefaultCNI: true    # disable kindnet
  podSubnet: 192.168.0.0/16  # set to Calico's default subnet
nodes:  # one control-plane node and two worker nodes
- role: control-plane
- role: worker
- role: worker
Apply the manifest in your kind environment:
kind create cluster --name calicokind --config calico-kind.yaml
kubectl config view
#Export the kubeconfig of the newly created kind cluster ("kind get kubeconfig" prints the file contents, so save it to a file first)
kind get kubeconfig --name calicokind > ~/.kube/calicokind.config
export KUBECONFIG=~/.kube/calicokind.config
kubectl config view
kubectl get nodes
kubectl get pods -A
root@srvhost:~# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-f9fd979d6-c79j2 0/1 Pending 0 177m
kube-system coredns-f9fd979d6-cstpc 0/1 Pending 0 177m
kube-system etcd-calicokind-control-plane 1/1 Running 0 177m
kube-system kube-apiserver-calicokind-control-plane 1/1 Running 0 177m
kube-system kube-controller-manager-calicokind-control-plane 1/1 Running 0 177m
kube-system kube-proxy-hzjwq 1/1 Running 0 177m
kube-system kube-scheduler-calicokind-control-plane 1/1 Running 0 177m
local-path-storage local-path-provisioner-78776bfc44-g7znt 0/1 Pending 0 177m
#you can also query just the kube-system ns, i.e. the specific system namespace
#check the docs on kubernetes.io to go deeper on kube-system, but for us it is just a matter of the CNI and the containerized services that depend on it
kubectl get ns
kubectl get pods -n kube-system
#you will see coredns in Pending state because kindnet was disabled (there is no CNI in the cluster yet)
Install the CNI, in this case Calico (if you need a specific version, check https://docs.projectcalico.org/releases for the manifest):
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
root@srvhost:~# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-744cfdf676-zpffd 0/1 Pending 0 3s
kube-system calico-node-84jnk 0/1 Init:0/3 0 3s
kube-system coredns-f9fd979d6-c79j2 0/1 Pending 0 3h2m
kube-system coredns-f9fd979d6-cstpc 0/1 Pending 0 3h2m
kube-system etcd-calicokind-control-plane 1/1 Running 0 3h2m
kube-system kube-apiserver-calicokind-control-plane 1/1 Running 0 3h2m
kube-system kube-controller-manager-calicokind-control-plane 1/1 Running 0 3h2m
kube-system kube-proxy-hzjwq 1/1 Running 0 3h2m
kube-system kube-scheduler-calicokind-control-plane 1/1 Running 0 3h2m
local-path-storage local-path-provisioner-78776bfc44-g7znt 0/1 Pending 0 3h2m
kubectl -n kube-system get pods | grep calico-node
root@srvhost:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-744cfdf676-zpffd 1/1 Running 0 3m58s
calico-node-84jnk 1/1 Running 0 3m58s
coredns-f9fd979d6-c79j2 1/1 Running 0 3h6m
coredns-f9fd979d6-cstpc 1/1 Running 0 3h6m
etcd-calicokind-control-plane 1/1 Running 0 3h6m
kube-apiserver-calicokind-control-plane 1/1 Running 0 3h6m
kube-controller-manager-calicokind-control-plane 1/1 Running 0 3h6m
kube-proxy-hzjwq 1/1 Running 0 3h6m
kube-scheduler-calicokind-control-plane 1/1 Running 0 3h6m
(you will see that coredns is now Running)
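Once the calico pods are Running, the nodes should also move from NotReady to Ready (they stay NotReady while no CNI is installed):
kubectl get nodes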
Optional
kind delete cluster --name calicokind
------------CNI Antrea---------
https://antrea.io/docs/v0.11.1/getting-started/
Apply the manifest called antrea-kind.yaml:
File: antrea-kind.yaml
----------------------
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4  # always check the api version in the docs
networking:
  disableDefaultCNI: true    # disable kindnet
  podSubnet: 192.168.0.0/16  # set to Antrea's default subnet
nodes:  # one control-plane node and two worker nodes
- role: control-plane
- role: worker
- role: worker
https://antrea.io/docs/v0.11.1/kind/
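Assuming the cluster is created from that manifest the same way as the Calico one, e.g.:
kind create cluster --name antreakind --config antrea-kind.yaml
docker ps -a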
In my case I used one control-plane node and one worker, so my output looks like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b7e28a6d0493 kindest/node:v1.19.1 "/usr/local/bin/entr…" About a minute ago Up About a minute antreakind-worker
2a1d1484f828 kindest/node:v1.19.1 "/usr/local/bin/entr…" About a minute ago Up About a minute 127.0.0.1:45109->6443/tcp antreakind-control-plane
****** NOT for kind clusters (regular clusters use antrea.yml) ******
Check the release you want and set TAG; in my case the release is v0.12.0, i.e. export TAG=v0.12.0
kubectl apply -f https://github.com/vmware-tanzu/antrea/releases/download/$TAG/antrea.yml
***********
Get this script: kind-fix-networking.sh
wget https://raw.githubusercontent.com/vmware-tanzu/antrea/master/hack/kind-fix-networking.sh
According to the docs, this script fixes an eth TX problem on Docker; see https://github.com/vmware-tanzu/antrea/blob/master/docs/kind.md:
File: kind-fix-networking.sh
----------------------------
#!/usr/bin/env bash

# Copyright 2019 Antrea Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This script is required for Antrea to work properly in a Kind cluster on Linux. It takes care of
# disabling TX hardware checksum offload for the veth interface (in the host's network namespace) of
# each Kind Node. This is required when using OVS in userspace mode. Refer to
# https://github.com/vmware-tanzu/antrea/issues/14 for more information.

# The script uses the antrea/ethtool Docker image (so that ethtool does not need to be installed on
# the Linux host).

set -eo pipefail

for node in "$@"; do
  peerIdx=$(docker exec "$node" ip link | grep eth0 | awk -F[@:] '{ print $3 }' | cut -c 3-)
  peerName=$(docker run --net=host antrea/ethtool:latest ip link | grep ^"$peerIdx": | awk -F[:@] '{ print $2 }' | cut -c 2-)
  echo "Disabling TX checksum offload for node $node ($peerName)"
  docker run --net=host --privileged antrea/ethtool:latest ethtool -K "$peerName" tx off
  # Workaround for https://github.com/vmware-tanzu/antrea/issues/324
  docker exec "$node" sysctl -w net.ipv4.tcp_retries2=4
done
Run the fix script manually against the kind nodes,
according to this procedure: https://antrea.io/docs/v0.11.1/kind/
Check your path & permissions (chmod +x kind-fix-networking.sh) on the script:
kind get nodes --name antreakind | xargs ./kind-fix-networking.sh
output sample:
kind get nodes --name antreakind | xargs ./kind-fix-networking.sh
Disabling TX checksum offload for node antreakind-worker (veth9016bb5)
Actual changes:
tx-checksumming: off
tx-checksum-ip-generic: off
tx-checksum-sctp: off
tcp-segmentation-offload: off
tx-tcp-segmentation: off [requested on]
tx-tcp-ecn-segmentation: off [requested on]
tx-tcp-mangleid-segmentation: off [requested on]
tx-tcp6-segmentation: off [requested on]
net.ipv4.tcp_retries2 = 4
Disabling TX checksum offload for node antreakind-control-plane (veth3af0c96)
Actual changes:
tx-checksumming: off
tx-checksum-ip-generic: off
tx-checksum-sctp: off
tcp-segmentation-offload: off
tx-tcp-segmentation: off [requested on]
tx-tcp-ecn-segmentation: off [requested on]
tx-tcp-mangleid-segmentation: off [requested on]
tx-tcp6-segmentation: off [requested on]
net.ipv4.tcp_retries2 = 4
---------------------------------------------------------------------------------
Then install Antrea from the kind manifest:
kubectl apply -f https://github.com/vmware-tanzu/antrea/releases/download/v0.12.0/antrea-kind.yml
output sample:
customresourcedefinition.apiextensions.k8s.io/antreaagentinfos.clusterinformation.antrea.tanzu.vmware.com created
customresourcedefinition.apiextensions.k8s.io/antreacontrollerinfos.clusterinformation.antrea.tanzu.vmware.com created
customresourcedefinition.apiextensions.k8s.io/clusternetworkpolicies.security.antrea.tanzu.vmware.com created
customresourcedefinition.apiextensions.k8s.io/externalentities.core.antrea.tanzu.vmware.com created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.security.antrea.tanzu.vmware.com created
customresourcedefinition.apiextensions.k8s.io/tiers.security.antrea.tanzu.vmware.com created
customresourcedefinition.apiextensions.k8s.io/traceflows.ops.antrea.tanzu.vmware.com created
serviceaccount/antctl created
serviceaccount/antrea-agent created
serviceaccount/antrea-controller created
clusterrole.rbac.authorization.k8s.io/aggregate-antrea-policies-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-antrea-policies-view created
clusterrole.rbac.authorization.k8s.io/aggregate-traceflows-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-traceflows-view created
clusterrole.rbac.authorization.k8s.io/antctl created
clusterrole.rbac.authorization.k8s.io/antrea-agent created
clusterrole.rbac.authorization.k8s.io/antrea-controller created
clusterrolebinding.rbac.authorization.k8s.io/antctl created
clusterrolebinding.rbac.authorization.k8s.io/antrea-agent created
clusterrolebinding.rbac.authorization.k8s.io/antrea-controller created
configmap/antrea-ca created
configmap/antrea-config-4h2tb7btgk created
service/antrea created
deployment.apps/antrea-controller created
apiservice.apiregistration.k8s.io/v1alpha1.stats.antrea.tanzu.vmware.com created
apiservice.apiregistration.k8s.io/v1beta1.controlplane.antrea.tanzu.vmware.com created
apiservice.apiregistration.k8s.io/v1beta1.networking.antrea.tanzu.vmware.com created
apiservice.apiregistration.k8s.io/v1beta1.system.antrea.tanzu.vmware.com created
apiservice.apiregistration.k8s.io/v1beta2.controlplane.antrea.tanzu.vmware.com created
daemonset.apps/antrea-agent created
mutatingwebhookconfiguration.admissionregistration.k8s.io/crdmutator.antrea.tanzu.vmware.com created
validatingwebhookconfiguration.admissionregistration.k8s.io/crdvalidator.antrea.tanzu.vmware.com created
------------------------------------------------------
You should see something like this:
root@srvhost:~# kubectl get pods -n kube-system | grep antrea
antrea-agent-b5kpf 2/2 Running 0 3m16s
antrea-agent-zfkh9 2/2 Running 0 3m16s
antrea-controller-94889cf46-v6rnm 1/1 Running 0 3m16s
etcd-antreakind-control-plane 1/1 Running 0 9m42s
kube-apiserver-antreakind-control-plane 1/1 Running 0 9m42s
kube-controller-manager-antreakind-control-plane 1/1 Running 0 9m42s
kube-scheduler-antreakind-control-plane 1/1 Running 0 9m42s
root@srvhost:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
antrea-agent-b5kpf 2/2 Running 0 3m34s
antrea-agent-zfkh9 2/2 Running 0 3m34s
antrea-controller-94889cf46-v6rnm 1/1 Running 0 3m34s
coredns-f9fd979d6-7z64x 1/1 Running 0 9m54s
coredns-f9fd979d6-qnpvh 1/1 Running 0 9m54s
etcd-antreakind-control-plane 1/1 Running 0 10m
kube-apiserver-antreakind-control-plane 1/1 Running 0 10m
kube-controller-manager-antreakind-control-plane 1/1 Running 0 10m
kube-proxy-968vj 1/1 Running 0 9m41s
kube-proxy-xwqmn 1/1 Running 0 9m54s
kube-scheduler-antreakind-control-plane 1/1 Running 0 10m
++++ coredns has changed its status from Pending to Running as well ++++
root@srvhost:~# kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/antrea-agent-b5kpf 2/2 Running 0 25m
pod/antrea-agent-zfkh9 2/2 Running 0 25m
pod/antrea-controller-94889cf46-v6rnm 1/1 Running 0 25m
pod/coredns-f9fd979d6-7z64x 1/1 Running 0 31m
pod/coredns-f9fd979d6-qnpvh 1/1 Running 0 31m
pod/etcd-antreakind-control-plane 1/1 Running 0 31m
pod/kube-apiserver-antreakind-control-plane 1/1 Running 0 31m
pod/kube-controller-manager-antreakind-control-plane 1/1 Running 0 31m
pod/kube-proxy-968vj 1/1 Running 0 31m
pod/kube-proxy-xwqmn 1/1 Running 0 31m
pod/kube-scheduler-antreakind-control-plane 1/1 Running 0 31m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/antrea ClusterIP 10.96.66.183 <none> 443/TCP 25m
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 31m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/antrea-agent 2 2 2 2 2 kubernetes.io/os=linux 25m
daemonset.apps/kube-proxy 2 2 2 2 2 kubernetes.io/os=linux 31m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/antrea-controller 1/1 1 1 25m
deployment.apps/coredns 2/2 2 2 31m
NAME DESIRED CURRENT READY AGE
replicaset.apps/antrea-controller-94889cf46 1 1 1 25m
replicaset.apps/coredns-f9fd979d6 2 2 2 31m
''''''''''''''''''''''''''''''''explore Antrea''''''''''''''''''''''''''''''
Select the agent pod from the previous output; just check the name, since the last part of the name will be different for you.
Run an ls to check that it is OK:
kubectl exec -n kube-system -it pod/antrea-agent-b5kpf -- ls /
Defaulting container name to antrea-agent.
Use 'kubectl describe pod/antrea-agent-b5kpf -n kube-system' to see all of the containers in this pod.
bin dev home lib lib64 media opt root sbin sys usr
boot etc host lib32 libx32 mnt proc run srv tmp var
Enter a shell in the antrea-agent pod:
kubectl exec -n kube-system -it pod/antrea-agent-b5kpf -- /bin/bash
Defaulting container name to antrea-agent.
Use 'kubectl describe pod/antrea-agent-b5kpf -n kube-system' to see all of the containers in this pod.
root@antreakind-control-plane:/# antrea-agent --help
The Antrea agent runs on each node.
Usage:
antrea-agent [flags]
Flags:
--add_dir_header If true, adds the file directory to the header
--alsologtostderr log to standard error as well as files
--config string The path to the configuration file
-h, --help help for antrea-agent
--log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--log_file string If non-empty, use this log file
--log_file_max_num uint16 Maximum number of log files per severity level to be kept. Value 0 means unlimited.
--log_file_max_size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
--logtostderr log to standard error instead of files (default true)
--skip_headers If true, avoid header prefixes in the log messages
--skip_log_headers If true, avoid headers when opening log files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level number for the log level verbosity
--version version for antrea-agent
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
(...output truncated...)
root@antreakind-control-plane:/# ovs-
ovs-appctl ovs-docker ovs-dpctl-top ovs-parse-backtrace ovs-pki ovs-tcpundump ovs-vsctl
ovs-bugtool ovs-dpctl ovs-ofctl ovs-pcap ovs-tcpdump ovs-vlan-test ovs-vswitchd
root@antreakind-control-plane:/# ovs-vsctl show
d01fff9d-faf3-439f-bb1e-42fe3cfb40a4
Bridge br-int
datapath_type: system
Port antrea-gw0
Interface antrea-gw0
type: internal
Port antrea-tun0
Interface antrea-tun0
type: geneve
options: {csum="true", key=flow, remote_ip=flow}
Port coredns--975ed0
Interface coredns--975ed0
Bridge br-phy
fail_mode: standalone
datapath_type: netdev
Port eth0
Interface eth0
Port br-phy
Interface br-phy
type: internal
ovs_version: "2.14.0"
--------------
root@antreakind-control-plane:/# ovs-vsctl list-ports br-int
antrea-gw0
antrea-tun0
coredns--975ed0
root@antreakind-control-plane:/# ovs-vsctl list-br
br-int
br-phy
root@antreakind-control-plane:/# ovs-vsctl list-ports br-int
antrea-gw0
antrea-tun0
coredns--975ed0
root@antreakind-control-plane:/# ovs-vsctl list-ifaces br-int
antrea-gw0
antrea-tun0
coredns--975ed0
----------------------------
root@antreakind-control-plane:/# ovs-vsctl show
d01fff9d-faf3-439f-bb1e-42fe3cfb40a4
Bridge br-int
datapath_type: system
Port antrea-gw0
Interface antrea-gw0
type: internal
Port antrea-tun0
Interface antrea-tun0
type: geneve
options: {csum="true", key=flow, remote_ip=flow}
Port coredns--975ed0
Interface coredns--975ed0
Bridge br-phy
fail_mode: standalone
datapath_type: netdev
Port eth0
Interface eth0
Port br-phy
Interface br-phy
type: internal
ovs_version: "2.14.0"
------------
Alternative to the previous list of commands, aka the short way:
kubectl exec -n kube-system -it pod/antrea-agent-b5kpf -c antrea-agent ovs-vsctl show
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
d01fff9d-faf3-439f-bb1e-42fe3cfb40a4
Bridge br-int
datapath_type: netdev
Port antrea-gw0
Interface antrea-gw0
type: internal
Port antrea-tun0
Interface antrea-tun0
type: geneve
options: {csum="true", key=flow, remote_ip=flow}
Port coredns--975ed0
Interface coredns--975ed0
Bridge br-phy
fail_mode: standalone
datapath_type: netdev
Port eth0
Interface eth0
Port br-phy
Interface br-phy
type: internal
ovs_version: "2.14.0"
NOTE: kubectl prints the deprecation warning above, but the -n kube-system namespace flag is still required, since the pod lives in that namespace.
By the way, the command issued in the antrea-agent container belongs to Open vSwitch; check https://www.openvswitch.org/support/dist-docs/ovs-vsctl.8.html for deeper details on the ovs-vsctl utility, since Antrea is heavily based on OVS (took me a while to get here :) but I made it).
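Following the deprecation warning, the non-deprecated form is the same command with a -- separator before the command to run:
kubectl exec -n kube-system -it pod/antrea-agent-b5kpf -c antrea-agent -- ovs-vsctl show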
------------------------------
short list of what I used during my exploration and tests
-----------------------------
ovs-ofctl show br-int
ovs-ofctl dump-ports br-int
ovs-vsctl list-br
ovs-vsctl list-ports br-int
ovs-vsctl list-ifaces br-int
ovs-ofctl dump-flows br-int
ovs-ofctl snoop br-int
ovs-vsctl add-br br-int -- set bridge br-int datapath_type=geneve
ovs-vsctl del-br br-int
ovs-vsctl del-controller br-int
ovs-vsctl set Bridge br-int stp_enable=true
ovs-vsctl add-port br-int ge-1/1/1 type=pronto options:link_speed=1G
ovs-vsctl del-port br-int ge-1/1/
ovs-ofctl add-flow br-int in_port=1,actions=output:2
ovs-ofctl mod-flows br-int in_port=1,dl_type=0x0800,nw_src=100.10.0.1,actions=output:2
ovs-ofctl add-flow br-int in_port=1,actions=output:2,3,4
ovs-ofctl add-flow br-int in_port=1,actions=output:4
ovs-ofctl del-flows br-int
ovs-ofctl mod-port br-int 1 no-flood
ovs-ofctl add-flow br-int in_port=1,dl_type=0x0800,nw_src=192.168.1.241,actions=output:3
ovs-ofctl add-flow br-int in_port=4,dl_type=0x0800,dl_src=60:eb:69:d2:9c:dd,nw_src=198.168.0.2,nw_dst=192.168.0.55,actions=output:1
ovs-ofctl mod-flows br-int in_port=4,dl_type=0x0800,nw_src=192.168.0.45,actions=output:3
ovs-ofctl del-flows br-int in_port=1
-------------------------------------------
root@antreakind-control-plane:/# ip -c a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ovs-netdev: <BROADCAST,MULTICAST,PROMISC> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 66:20:56:e6:18:21 brd ff:ff:ff:ff:ff:ff
3: br-phy: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.2/16 scope global br-phy
valid_lft forever preferred_lft forever
5: coredns--975ed0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
link/ether 56:af:b7:bc:a1:b6 brd ff:ff:ff:ff:ff:ff link-netnsid 1
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether c2:d0:42:47:9a:9b brd ff:ff:ff:ff:ff:ff
7: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
link/ether 7e:6f:3c:4c:84:6f brd ff:ff:ff:ff:ff:ff
8: antrea-gw0: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
link/ether ca:83:4c:9f:48:01 brd ff:ff:ff:ff:ff:ff
9: eth0@if10: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
------------
MetalLB with kind
On the same kind cluster, let's deploy just MetalLB to expose services from the cluster to the host network; check https://metallb.universe.tf/installation/
First part is the installation of the artifacts------
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
namespace/metallb-system created
root@srvhost:~# kubectl get ns
NAME STATUS AGE
default Active 7h36m
kube-node-lease Active 7h36m
kube-public Active 7h36m
kube-system Active 7h36m
local-path-storage Active 7h36m
metallb-system Active 18s
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
kubectl -n metallb-system get all
NAME READY STATUS RESTARTS AGE
pod/controller-65db86ddc6-5zqrf 1/1 Running 0 57s
pod/speaker-8cjtq 0/1 CreateContainerConfigError 0 57s
pod/speaker-9gmzm 0/1 CreateContainerConfigError 0 57s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/speaker 2 2 0 2 0 kubernetes.io/os=linux 57s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/controller 1/1 1 1 57s
NAME DESIRED CURRENT READY AGE
replicaset.apps/controller-65db86ddc6 1 1 1 57s
# the speaker pods stay in CreateContainerConfigError until the memberlist secret exists, so create it:
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
root@srvhost:~# kubectl -n metallb-system get all
NAME READY STATUS RESTARTS AGE
pod/controller-65db86ddc6-5zqrf 1/1 Running 0 2m24s
pod/speaker-8cjtq 1/1 Running 0 2m24s
pod/speaker-9gmzm 1/1 Running 0 2m24s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/speaker 2 2 2 2 2 kubernetes.io/os=linux 2m24s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/controller 1/1 1 1 2m24s
NAME DESIRED CURRENT READY AGE
replicaset.apps/controller-65db86ddc6 1 1 1 2m24s
Then comes the configuration of MetalLB.
From the output of this command, verify the network CIDR assigned to the nodes on the br-XXXX interface on the host:
root@srvhost:~# ip -c a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 6c:0b:84:03:6c:9f brd ff:ff:ff:ff:ff:ff
altname enp0s25
inet 192.168.0.13/24 brd 192.168.0.255 scope global eno1
valid_lft forever preferred_lft forever
inet6 fe80::6e0b:84ff:fe03:6c9f/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b4:6c:5b:ca brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: br-596125009eac: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:fc:cd:e1:d3 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-596125009eac
valid_lft forever preferred_lft forever
inet6 fc00:f853:ccd:e793::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::42:fcff:fecd:e1d3/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::1/64 scope link
valid_lft forever preferred_lft forever
10: veth3af0c96@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-596125009eac state UP group default
link/ether f6:ef:23:10:6a:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::f4ef:23ff:fe10:6a43/64 scope link
valid_lft forever preferred_lft forever
12: veth9016bb5@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-596125009eac state UP group default
link/ether 8a:0f:56:b4:dd:36 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::880f:56ff:feb4:dd36/64 scope link
valid_lft forever preferred_lft forever
root@srvhost:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
antreakind-control-plane Ready master 7h31m v1.19.1 172.18.0.2 <none> Ubuntu Groovy Gorilla (development branch) 5.8.0-40-generic containerd://1.4.0
antreakind-worker Ready <none> 7h30m v1.19.1 172.18.0.3 <none> Ubuntu Groovy Gorilla (development branch) 5.8.0-40-generic containerd://1.4.0
root@srvhost:~#
Then calculate the segment to assign to the LB; in this case it will be 172.18.255.1-172.18.255.250, derived from sipcalc:
root@srvhost:~# sipcalc 172.18.0.1/16
-[ipv4 : 172.18.0.1/16] - 0
[CIDR]
Host address - 172.18.0.1
Host address (decimal) - 2886860801
Host address (hex) - AC120001
Network address - 172.18.0.0
Network mask - 255.255.0.0
Network mask (bits) - 16
Network mask (hex) - FFFF0000
Broadcast address - 172.18.255.255
Cisco wildcard - 0.0.255.255
Addresses in network - 65536
Network range - 172.18.0.0 - 172.18.255.255
Usable range - 172.18.0.1 - 172.18.255.254
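As an alternative to eyeballing the host interfaces, the subnet of the Docker network that kind creates (named "kind" on recent kind releases) can be read directly:
docker network inspect kind | grep Subnet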
See the config for MetalLB:
File: metallb-config.yaml
-------------------------
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2  # no BGP peering this time
      addresses:
      - 172.18.255.1-172.18.255.250
Then just apply the manifest:
kubectl apply -f metallb-config.yaml
root@srvhost:~# kubectl create -f metallb-config.yaml
configmap/config created
kubectl get all
Then test with an nginx container:
kubectl create deploy nginx --image nginx
NAME READY STATUS RESTARTS AGE
pod/nginx-6799fc88d8-zsmdt 0/1 ContainerCreating 0 2s
Expose the deployment
kubectl expose deploy nginx --port 80 --type LoadBalancer
root@srvhost:~# kubectl expose deploy nginx --port 80 --type LoadBalancer
service/nginx exposed
root@srvhost:~# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-6799fc88d8-zsmdt 1/1 Running 0 91s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h45m
service/nginx LoadBalancer 10.96.75.187 172.18.255.1 80:31725/TCP 4s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 91s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-6799fc88d8 1 1 1 91s
In this case the exposed service got the 172.18.255.1 IP for the LoadBalancer service type through MetalLB.
curl http://172.18.255.1
-------------------------------------
root@srvhost:~# curl http://172.18.255.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
-------------------------
Cleanup
*kubectl delete deploy nginx
*kubectl delete service nginx
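Optionally delete the kind cluster as well once you are done:
*kind delete cluster --name antreakind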