We're going to use SR-IOV technology with OpenShift Virtualization. Today we'll look at the SR-IOV network attachment definitions, spin up VMs that use them, and then make a SIP phone call between the VMs (because we're all bored of ping tests!)
Notably, this gist does not cover installing or configuring the SR-IOV Operator.
The NetworkAttachmentDefinition (which expresses an intent to attach an additional network to pods in Kubernetes, typically visible as an extra network interface in the pod) that we use here is the one provided by default by the SR-IOV network operator.
We're going to take samples from the Kubevirt upstream repo, and we'll modify those to fit.
Let's get our sample manifests...
$ git clone https://github.com/kubevirt/kubevirt.git --depth 50
$ cd kubevirt/examples
We'll tweak the sample in place, then copy it into a couple of new files, giving the second one a unique name...
$ sed -i -e "s|registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel|kubevirt/fedora-cloud-container-disk-demo:latest|" vmi-sriov.yaml
$ sed -i -e "s|sriov/sriov-network|sriov-net|" vmi-sriov.yaml
$ cp vmi-sriov.yaml doug-sriov.yaml
$ cp vmi-sriov.yaml doug-sriov2.yaml
$ sed -i -e "s|vmi-sriov|vmi-sriov2|" doug-sriov2.yaml
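If you want to sanity-check those substitutions before creating anything against the cluster, the same sed logic can be exercised on a scratch file. This is just a sketch: the /tmp path is arbitrary, and the scratch contents simply mirror the strings the commands above rewrite. Note the -i and -e flags written separately; a joined -ie makes GNU sed treat the e as a backup suffix and leave *.yamle files behind.

```shell
# Scratch file containing the three strings the seds above rewrite.
cat > /tmp/vmi-scratch.yaml <<'EOF'
image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel
networkName: sriov/sriov-network
name: vmi-sriov
EOF

# Same substitutions as above, with -i and -e split so GNU sed edits
# in place without creating backup files.
sed -i -e "s|registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel|kubevirt/fedora-cloud-container-disk-demo:latest|" /tmp/vmi-scratch.yaml
sed -i -e "s|sriov/sriov-network|sriov-net|" /tmp/vmi-scratch.yaml
sed -i -e "s|vmi-sriov|vmi-sriov2|" /tmp/vmi-scratch.yaml

cat /tmp/vmi-scratch.yaml
```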
And we'll look at the name...
$ cat doug-sriov.yaml | grep -A2 multus
  - multus:
      networkName: sriov-net
    name: sriov-net
Cool, let's take a look at this net-attach-def; it's the one the SR-IOV network operator provides by default.
$ oc get net-attach-def sriov-net
NAME        AGE
sriov-net   56d
You might also want to describe it:
$ oc describe net-attach-def sriov-net
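For reference, a net-attach-def rendered by the SR-IOV network operator looks roughly like the sketch below. Treat it as an illustration only: the resourceName annotation value and the dhcp IPAM here are assumptions that depend on how your SriovNetwork CR was configured.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net
  annotations:
    # Ties this attachment to an SR-IOV device plugin resource pool;
    # the actual value depends on your SriovNetwork CR.
    k8s.v1.cni.cncf.io/resourceName: openshift.io/sriovnic
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "sriov-net",
    "type": "sriov",
    "ipam": { "type": "dhcp" }
  }'
```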
Create your VMs...
$ oc create -f doug-sriov.yaml
$ oc create -f doug-sriov2.yaml
You can watch the state of it being created:
$ watch -n1 oc get virtualmachineinstance.kubevirt.io
If the PHASE doesn't eventually change to Running, you might have a problem; do an oc describe on the custom resource like so:
$ oc describe virtualmachineinstance.kubevirt.io/vmi-sriov
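If you're scripting this rather than eyeballing watch output, a small retry helper can poll the phase for you. This is just a sketch; the oc invocation shown in the comment assumes the same VMI name as above.

```shell
# Poll a command until it prints the expected string (up to ~60s).
wait_for() {
  local expected=$1; shift
  for _ in $(seq 1 60); do
    if [ "$("$@")" = "$expected" ]; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Against a real cluster you would use it like:
#   wait_for Running oc get vmi vmi-sriov -o jsonpath='{.status.phase}'
```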
Now you can console in!
$ virtctl console vmi-sriov2
Use username fedora and password fedora to log in, and then you can sudo su - if you need to.
First, I'll try a ping over SR-IOV. I log in to instance #2 first and get the IPs, just using ip a.
[root@cnvqe-01 examples]# virtctl console vmi-sriov2
vmi-sriov2 login: fedora
Password: fedora
[fedora@vmi-sriov2 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:00:00:e6:73:ff brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 10.0.2.2/24 brd 10.0.2.255 scope global dynamic noprefixroute eth0
       valid_lft 86313394sec preferred_lft 86313394sec
    inet6 fe80::ff:fee6:73ff/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ba:da:7a:f3:be:39 brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    altname ens1
    inet 10.46.41.244/24 brd 10.46.41.255 scope global dynamic noprefixroute eth1
       valid_lft 1595sec preferred_lft 1595sec
    inet 10.46.41.111/24 brd 10.46.41.255 scope global secondary dynamic eth1
       valid_lft 1671sec preferred_lft 1671sec
    inet6 2620:52:0:2e29:a6ee:dcaa:5e7d:c2c9/64 scope global dynamic noprefixroute
       valid_lft 2591797sec preferred_lft 604597sec
    inet6 fe80::9be:3888:5562:4316/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
In this case eth1 is our SR-IOV interface, so from the other VM I'll try to ping its address, 10.46.41.244.
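As an aside, if you ever want to grab that address non-interactively (say, from a provisioning script), the ip a output parses easily. Here's a sketch run against a saved copy of the eth1 lines above; the /tmp path is arbitrary, and inside the VM you'd pipe ip -4 addr show eth1 into the same awk instead.

```shell
# Saved copy of the eth1 lines from the output above.
cat > /tmp/ip-a.txt <<'EOF'
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ba:da:7a:f3:be:39 brd ff:ff:ff:ff:ff:ff
    inet 10.46.41.244/24 brd 10.46.41.255 scope global dynamic noprefixroute eth1
    inet 10.46.41.111/24 brd 10.46.41.255 scope global secondary dynamic eth1
EOF

# First global IPv4 address, with the prefix length stripped.
awk '/inet .*scope global/ {sub(/\/.*/, "", $2); print $2; exit}' /tmp/ip-a.txt
# -> 10.46.41.244
```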
[root@cnvqe-01 examples]# virtctl console vmi-sriov
Successfully connected to vmi-sriov console. The escape sequence is ^]
vmi-sriov login: fedora
Password:
Last login: Wed Sep 23 19:12:56 on ttyS0
[fedora@vmi-sriov ~]$ ping -c5 10.46.41.244
PING 10.46.41.244 (10.46.41.244) 56(84) bytes of data.
64 bytes from 10.46.41.244: icmp_seq=1 ttl=64 time=0.071 ms
64 bytes from 10.46.41.244: icmp_seq=2 ttl=64 time=0.116 ms
64 bytes from 10.46.41.244: icmp_seq=3 ttl=64 time=0.070 ms
64 bytes from 10.46.41.244: icmp_seq=4 ttl=64 time=0.093 ms
64 bytes from 10.46.41.244: icmp_seq=5 ttl=64 time=0.095 ms
--- 10.46.41.244 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4125ms
rtt min/avg/max/mdev = 0.070/0.089/0.116/0.017 ms
Oh yeah, if you want, you can also see which networks the pod is connected to by looking at the k8s.v1.cni.cncf.io/network-status annotation, like so...
$ oc get pods
NAME                             READY   STATUS    RESTARTS   AGE
virt-launcher-vmi-sriov-425k2    2/2     Running   0          33m
virt-launcher-vmi-sriov2-j8fb2   2/2     Running   0          10m
$ oc describe pod virt-launcher-vmi-sriov2-j8fb2 | grep -A14 network-status
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.131.1.165"
                    ],
                    "default": true,
                    "dns": {}
                },{
                    "name": "sriov-net",
                    "interface": "net1",
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks: [{"interface":"net1","name":"sriov-net","namespace":"default"}]
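If you're scripting against that annotation (say, to feed the SR-IOV interface name into other tooling), it's plain JSON, so it parses easily. Here's a sketch against a saved copy of the sample above; the /tmp path is arbitrary, and in a live cluster you'd feed it the real annotation instead (something along the lines of oc get pod <name> -o jsonpath with the annotation key's dots escaped).

```shell
# Saved copy of the network-status annotation JSON from above.
cat > /tmp/network-status.json <<'EOF'
[{
    "name": "openshift-sdn",
    "interface": "eth0",
    "ips": ["10.131.1.165"],
    "default": true,
    "dns": {}
},{
    "name": "sriov-net",
    "interface": "net1",
    "dns": {}
}]
EOF

# Pull out the interface that belongs to the sriov-net attachment.
python3 - <<'EOF'
import json

with open("/tmp/network-status.json") as f:
    status = json.load(f)

sriov = next(n for n in status if n["name"] == "sriov-net")
print(sriov["interface"])  # -> net1
EOF
```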
Making a SIP call with Asterisk
In an ideal world, you'd probably have an image for this (and/or other configuration management, service discovery, etc.). In my case, for a demo, I installed and configured Asterisk by hand.
I installed the basic components we need:
yum install -y asterisk-pjsip asterisk asterisk-sounds-core-en-ulaw
Next, we're going to set up our /etc/asterisk/pjsip.conf file on both VMs. This creates a SIP trunk between the two machines.
Note that the IPs correspond to each VM's SR-IOV interface address; adjust them to match the addresses you discovered earlier. (The repeated [alice] and [bob] section names are intentional; each section has a different type=.)
[transport-udp]
type=transport
protocol=udp
bind=0.0.0.0

[alice]
type=endpoint
transport=transport-udp
context=endpoints
disallow=all
allow=ulaw
aors=alice

[alice]
type=identify
endpoint=alice
match=10.46.41.249/255.255.255.255

[alice]
type=aor
contact=sip:anyuser@10.46.41.249:5060

[bob]
type=endpoint
transport=transport-udp
context=endpoints
disallow=all
allow=ulaw
aors=bob

[bob]
type=identify
endpoint=bob
match=10.46.41.107/255.255.255.255

[bob]
type=aor
contact=sip:anyuser@10.46.41.107:5060
Once you've got that in place on both VMs, console in and issue:
# asterisk -rx 'pjsip reload'
Next, we're going to create a file, /etc/asterisk/extensions.conf, which is our "dialplan" -- this tells Asterisk how to behave when a call comes in on our trunk. In our case, we're going to have it answer the call, play a sound file (a spoken digit), and then hang up.
Create the file as so:
[endpoints]
exten => _X.,1,NoOp()
 same => n,Answer()
 same => n,SayDigits(1)
 same => n,Hangup()
Next, tell Asterisk to reload the dialplan with:
# asterisk -rx 'dialplan reload'
Now, from the first VM, go ahead and console in and run asterisk -rvvv to get an Asterisk console; we'll turn on some debugging output, and then we'll originate a phone call:
vmi-sriov*CLI> pjsip set logger on
vmi-sriov*CLI> rtp set debug on
vmi-sriov*CLI> channel originate PJSIP/333@bob application saydigits 1
You should see a ton of output now! You'll see the SIP messages that initiate the phone call, and then information about the RTP (Real-time Transport Protocol) packets carrying the voice media between the machines!