OVN policy with port groups
Ingress Policy Testing
======================
* Create namespace "ns1"
kubectl create -f ~/policy/ns1.yaml
# Should create an empty address_set for that namespace
ovn-nbctl find address_set external-ids:name="ns1"
kubectl delete -f ~/policy/ns1.yaml
# Should delete the address set
kubectl create -f ~/policy/ns1.yaml
# Should create it again
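# For reference, ns1.yaml is not included in this gist; a hypothetical minimal
# equivalent would just be a Namespace manifest, e.g. applied inline:
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: ns1
EOF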
* Create policy
kubectl create -f ~/policy/policy1.yaml
# For each ingress rule, an address_set should be created for its pod selectors
sudo ovn-nbctl find address_set external-ids:name="ns1.policy1.ingress.0"
sudo ovn-nbctl find address_set external-ids:name="ns1.policy1.ingress.1"
# Create a port_group for the policy
sudo ovn-nbctl find port_group external_ids:name=ns1_policy1
# Create ACLs in that port_group
sudo ovn-nbctl find acl external-ids:namespace="ns1" external-ids:policy="policy1" external-ids:policy_type=Ingress | grep match | sort
# Should return:
match : "ip4.src == {172.16.1.0/24} && tcp && tcp.dst==10 && outport == \"$ns1_policy1\""
match : "ip4.src == {172.16.1.0/24} && udp && udp.dst==11 && outport == \"$ns1_policy1\""
match : "ip4.src == {$a15401877322708248246} && tcp && tcp.dst==20 && outport == \"$ns1_policy1\""
match : "ip4.src == {$a15401878422219876457} && tcp && tcp.dst==10 && outport == \"$ns1_policy1\""
match : "ip4.src == {$a15401878422219876457} && udp && udp.dst==11 && outport == \"$ns1_policy1\""
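# The actual policy1.yaml is also not included here, but the expected ACL
# matches above imply a shape roughly like the sketch below: two ingress rules,
# the first allowing tcp/10 and udp/11 from an ipBlock plus pod and namespace
# selectors, the second allowing tcp/20 from a pod selector. All label
# keys/values in this sketch are hypothetical placeholders; the real file may
# differ.
cat <<'EOF' | kubectl create -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy1
  namespace: ns1
spec:
  podSelector:
    matchLabels:
      app: policypod          # hypothetical label carried by policypod1/2
  ingress:
  - from:
    - ipBlock:
        cidr: 172.16.1.0/24
    - podSelector:
        matchLabels:
          role: frontend1
    - podSelector:
        matchLabels:
          role: frontend2
    - namespaceSelector:
        matchLabels:
          team: project       # hypothetical label shared by the project namespaces
    ports:
    - protocol: TCP
      port: 10
    - protocol: UDP
      port: 11
  - from:
    - podSelector:
        matchLabels:
          role: frontend2
    ports:
    - protocol: TCP
      port: 20
EOF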
* Delete the policy
kubectl delete -f ~/policy/policy1.yaml
# For each ingress rule, the address_set should be deleted, and the policy's port_group (with its ACLs) should be deleted.
sudo ovn-nbctl find address_set external-ids:name="ns1.policy1.ingress.0"
sudo ovn-nbctl find address_set external-ids:name="ns1.policy1.ingress.1"
sudo ovn-nbctl find port_group external_ids:name=ns1_policy1
sudo ovn-nbctl find acl external-ids:namespace="ns1" external-ids:policy="policy1" external-ids:policy_type=Ingress
kubectl create -f ~/policy/policy1.yaml
* Create 2 pods to which the policy needs to be applied.
kubectl create -f ~/policy/policypod1.yaml
kubectl create -f ~/policy/policypod2.yaml
# The pod IP addresses should populate the namespace address_set.
sudo ovn-nbctl find address_set external-ids:name="ns1"
# Port_group "ns1_policy1" should have the 2 logical ports added to its ports column
sudo ovn-nbctl find port_group external_ids:name=ns1_policy1
# The ingressDefaultDeny port_group should have the same ports added.
sudo ovn-nbctl find port_group name=ingressDefaultDeny
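# To eyeball the ports column across all groups at once, listing the whole
# table is a quick alternative to find:
sudo ovn-nbctl list port_group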
# Delete the above pods to see them removed from the port_groups
kubectl delete -f ~/policy/policypod1.yaml
kubectl delete -f ~/policy/policypod2.yaml
# Re-create the pods
kubectl create -f ~/policy/policypod1.yaml
kubectl create -f ~/policy/policypod2.yaml
# You should not be able to curl $IP:80 for these pods from master.
# Ctrl-C the watcher and start it again. The same ACLs should be present.
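# For reference, policypod1.yaml/policypod2.yaml are assumed to be ordinary
# pods in ns1 carrying the label matched by policy1's spec.podSelector (the
# label below is the same hypothetical placeholder used in the policy sketch
# above):
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: policypod1
  namespace: ns1
  labels:
    app: policypod
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
EOF
# One way to run the curl check from the master node (expect a timeout, not a
# response, while the policy is in place):
POD_IP=$(kubectl get pod policypod1 -n ns1 -o jsonpath='{.status.podIP}')
curl --max-time 5 "http://$POD_IP:80" || echo "blocked, as expected"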
* Create 2 pods with labels
role: frontend1
kubectl create -f ~/policy/testpod1.yaml
kubectl create -f ~/policy/testpod2.yaml
# The address_set "ns1.policy1.ingress.0" should be updated with 2
# IP addresses.
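# testpod1.yaml/testpod2.yaml are assumed to be minimal pods in ns1 carrying
# the "role: frontend1" peer label; a hypothetical equivalent:
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: testpod1
  namespace: ns1
  labels:
    role: frontend1
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
EOF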
* Create 2 pods with labels
role: frontend2
kubectl create -f ~/policy/testpod3.yaml
kubectl create -f ~/policy/testpod4.yaml
# The address_set "ns1.policy1.ingress.1" should be updated with 2 IP
# addresses.
# The same IP addresses should be added to "ns1.policy1.ingress.0"
* Delete the 2 pods with labels "role: frontend1"
kubectl delete -f ~/policy/testpod1.yaml
kubectl delete -f ~/policy/testpod2.yaml
# The address_set "ns1.policy1.ingress.0" should be updated to have
# 2 less IP addresses.
kubectl delete -f ~/policy/testpod3.yaml
kubectl delete -f ~/policy/testpod4.yaml
# Both address_sets "ns1.policy1.ingress.[01]" should be updated to have
# no IP addresses
* Create 2 pods with labels
role: frontend1
kubectl create -f ~/policy/testpod1.yaml
kubectl create -f ~/policy/testpod2.yaml
# The address_set "ns1.policy1.ingress.0" should be updated with 2
# IP addresses.
* Create 2 pods with labels
role: frontend2
kubectl create -f ~/policy/testpod3.yaml
kubectl create -f ~/policy/testpod4.yaml
# The address_set "ns1.policy1.ingress.1" should be updated with 2 IP
# addresses.
# The same IP addresses should be added to "ns1.policy1.ingress.0"
Namespace Selectors
===================
* Create new namespaces
kubectl create -f ~/policy/namespaceproject1.yaml
kubectl create -f ~/policy/namespaceproject1twin.yaml
kubectl create -f ~/policy/namespaceproject2.yaml
# 3 new address_sets, "project1", "project1twin", and "project2", should be created.
# The ACLs (for all pods in ns1 selected by policy1) should be updated with these address_sets
ovn-nbctl find acl external-ids:namespace="ns1" external-ids:policy="policy1" external-ids:policy_type=Ingress | grep match | sort
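# The namespaceproject*.yaml files are assumed to be Namespace manifests whose
# labels match policy1's namespaceSelector; a hypothetical example for one of
# them (the "team: project" label is the placeholder used in the policy sketch
# above):
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: project1
  labels:
    team: project
EOF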
* Delete one namespace and re-add it
kubectl delete -f ~/policy/namespaceproject1.yaml
# The ACLs should no longer reference that namespace's address_set
kubectl create -f ~/policy/namespaceproject1.yaml
# The ACLs should reference the address_set again.
* Create new pods in those namespaces.
kubectl create -f ~/policy/nspod1.yaml
kubectl create -f ~/policy/nspod2.yaml
# The address_sets "project1" and "project2" should get updated
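# nspod1.yaml/nspod2.yaml are assumed to be plain pods placed in those peer
# namespaces, e.g.:
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nspod1
  namespace: project1
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
EOF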
And finally
===========
* Delete the policy pods
kubectl delete -f ~/policy/policypod1.yaml
kubectl delete -f ~/policy/policypod2.yaml
# Their logical ports should be removed from the port_groups
ovn-nbctl find port_group external_ids:name=ns1_policy1
ovn-nbctl find port_group name=ingressDefaultDeny
* Create them back
kubectl create -f ~/policy/policypod1.yaml
kubectl create -f ~/policy/policypod2.yaml
* Delete the policy
kubectl delete -f ~/policy/policy1.yaml
# port_group should be deleted
sudo ovn-nbctl find port_group external_ids:name=ns1_policy1
# The ingressDefaultDeny port_group should not have any ports
sudo ovn-nbctl find port_group name=ingressDefaultDeny
# All address_sets created for that policy (the 2 associated with its 2 ingress rules) should get deleted
sudo ovn-nbctl find address_set external-ids:name="ns1.policy1.ingress.0"
sudo ovn-nbctl find address_set external-ids:name="ns1.policy1.ingress.1"
# Create it again
kubectl create -f ~/policy/policy1.yaml
sudo ovn-nbctl find address_set external-ids:name="ns1.policy1.ingress.0"
sudo ovn-nbctl find address_set external-ids:name="ns1.policy1.ingress.1"
sudo ovn-nbctl find port_group name=ingressDefaultDeny
sudo ovn-nbctl find port_group external_ids:name=ns1_policy1
sudo ovn-nbctl find acl external-ids:namespace="ns1" external-ids:policy="policy1" external-ids:policy_type=Ingress | grep match | sort
Cleanup
=======
kubectl delete -f ~/policy/policy1.yaml
# Delete the local peer pods
kubectl delete -f ~/policy/testpod1.yaml
kubectl delete -f ~/policy/testpod2.yaml
kubectl delete -f ~/policy/testpod3.yaml
kubectl delete -f ~/policy/testpod4.yaml
# Delete the peer namespaces (and the pods in them)
kubectl delete -f ~/policy/nspod1.yaml
kubectl delete -f ~/policy/nspod2.yaml
kubectl delete -f ~/policy/namespaceproject1.yaml
kubectl delete -f ~/policy/namespaceproject1twin.yaml
kubectl delete -f ~/policy/namespaceproject2.yaml
# Delete namespace
kubectl delete -f ~/policy/ns1.yaml