@dougbtv
Last active March 7, 2017 19:39
Trying kuryr kubernetes

Using CentOS 7

[stack@droctagon4 devstack]$ cat /etc/redhat-release 
CentOS Linux release 7.3.1611 (Core) 

Following the steps from the video in this article.

I clone devstack and create a stack user...

Here's my local.conf

[stack@droctagon4 devstack]$ cat local.conf 
[[local|localrc]]

LOGFILE=devstack.log
LOG_COLOR=False

# HOST_IP=CHANGEME
# Credentials
ADMIN_PASSWORD=pass
MYSQL_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
# Enable Keystone v3
IDENTITY_API_VERSION=3

# Q_PLUGIN=ml2
# Q_ML2_TENANT_NETWORK_TYPE=vxlan

# LBaaSv2 service and Haproxy agent
enable_plugin neutron-lbaas \
 git://git.openstack.org/openstack/neutron-lbaas
enable_service q-lbaasv2
NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"

enable_plugin kuryr-kubernetes \
 https://git.openstack.org/openstack/kuryr-kubernetes refs/changes/45/376045/12

enable_service docker
enable_service etcd
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler
enable_service kubelet
enable_service kuryr-kubernetes

# [[post-config|/$Q_PLUGIN_CONF_FILE]]
# [securitygroup]
# firewall_driver = openvswitch

And we need a Dockerfile and this simple Python server:

[stack@droctagon4 devstack]$ cat demo/Dockerfile 
FROM alpine
RUN apk add --no-cache python bash openssh-client curl
COPY server.py /server.py
ENTRYPOINT ["python", "server.py"]

[stack@droctagon4 devstack]$ cat demo/server.py 
import BaseHTTPServer as http
import platform

class Handler(http.BaseHTTPRequestHandler):
  def do_GET(self):
    self.send_response(200)
    self.send_header('Content-Type', 'text/plain')
    self.end_headers()
    self.wfile.write("%s\n" % platform.node())

if __name__ == '__main__':
  httpd = http.HTTPServer(('', 8080), Handler)
  httpd.serve_forever()
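For reference (not part of the original session): the server above is Python 2, which matches what `apk add python` installs in the alpine image. If you'd rather run it under Python 3, a sketch of the same server — `BaseHTTPServer` became `http.server`, and `wfile.write()` takes bytes:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import platform

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        # Python 3's wfile wants bytes, not str
        self.wfile.write(("%s\n" % platform.node()).encode())

def serve(port=8080):
    # same behavior as the Python 2 version: serve the hostname on every GET
    HTTPServer(('', port), Handler).serve_forever()
```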

Run ./stack.sh, and get a total run time of 2890 seconds (about 48 minutes).

This is your host IP address: 192.168.1.26
This is your host IPv6 address: ::1
Horizon is now available at http://192.168.1.26/dashboard
Keystone is serving at http://192.168.1.26/identity/
The default users are: admin and demo
The password: pass

Getting pretty far along with the video, until the VM part...

[stack@droctagon4 devstack]$ source openrc 
[stack@droctagon4 devstack]$ kuryr_conf=/etc/kuryr/kuryr.conf 
[stack@droctagon4 devstack]$ cat /etc/kuryr/kuryr.conf | grep -A15 neutron_defaults | grep "^project"
project = e99e8bdb250448ce9d09407e54314417
[stack@droctagon4 devstack]$ kuryr_project=e99e8bdb250448ce9d09407e54314417
[stack@droctagon4 devstack]$ openstack project show e99e8bdb250448ce9d09407e54314417
[stack@droctagon4 devstack]$ openstack subnet list --network private -c ID -c Name
[stack@droctagon4 devstack]$ cat /etc/kuryr/kuryr.conf | grep -A15 neutron_defaults | grep "^pod_subnet"
pod_subnet = 0f7f4331-1f0a-445c-aa33-d3b3c6bb803d
[stack@droctagon4 devstack]$ pod_subnet=0f7f4331-1f0a-445c-aa33-d3b3c6bb803d
[stack@droctagon4 devstack]$ openstack subnet show $pod_subnet
[stack@droctagon4 devstack]$ cat /etc/kuryr/kuryr.conf | grep -A15 neutron_defaults | grep "^service_subnet"
service_subnet = 14a38682-e719-4625-8793-dee32d6bd267
[stack@droctagon4 devstack]$ service_subnet=14a38682-e719-4625-8793-dee32d6bd267
[stack@droctagon4 devstack]$ openstack subnet show $service_subnet
[stack@droctagon4 devstack]$ ps -eoargs | grep '^/hyperkube' | egrep 'service-cluster-ip-range=\S+'
[stack@droctagon4 devstack]$ openstack server list -c Name -c Networks -c 'Image Name'

{......... that was blank .............}
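Aside: the repeated `grep -A15 neutron_defaults | grep "^..."` chains above can be replaced by parsing kuryr.conf as the INI file it is. A sketch with `configparser`, assuming the section and option names shown in the session (`[neutron_defaults]`, `project`, `pod_subnet`, `service_subnet`):

```python
import configparser

def neutron_defaults(path='/etc/kuryr/kuryr.conf'):
    """Pull the neutron_defaults values kuryr-kubernetes was configured with."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    section = cfg['neutron_defaults']
    return {
        'project': section.get('project'),
        'pod_subnet': section.get('pod_subnet'),
        'service_subnet': section.get('service_subnet'),
    }
```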

So I guess let's spin up something like what is shown.

[stack@droctagon4 devstack]$ curl -o /tmp/cirros.qcow2 http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
[stack@droctagon4 devstack]$ glance image-create --name cirros --disk-format qcow2  --container-format bare  --file /tmp/cirros.qcow2 --progress
[stack@droctagon4 devstack]$ nova boot --flavor m1.tiny --image cirros testvm
[stack@droctagon4 devstack]$ openstack subnet list --network private -c ID -c Name
+--------------------------------------+---------------------+
| ID                                   | Name                |
+--------------------------------------+---------------------+
| 0f7f4331-1f0a-445c-aa33-d3b3c6bb803d | private-subnet      |
| 14a38682-e719-4625-8793-dee32d6bd267 | k8s-service-subnet  |
| 5510dcd9-44e9-4c3d-9b91-e568d6126b0f | ipv6-private-subnet |
+--------------------------------------+---------------------+
[stack@droctagon4 devstack]$ openstack server list -c Name -c Networks -c 'Image Name'
+--------+--------------------------------------------------------+------------+
| Name   | Networks                                               | Image Name |
+--------+--------------------------------------------------------+------------+
| testvm | private=10.0.0.8, fd6a:f736:7b67:0:f816:3eff:fefb:ad90 | cirros     |
+--------+--------------------------------------------------------+------------+

Looks pretty close. Continuing on...

Sooooo... striking out again, no kubelet, huh?

[stack@droctagon4 devstack]$ kubectl get nodes
{ ...empty... }

And in screen 28 for kubelet... Just an empty prompt.

[stack@droctagon4 devstack]$ 

And of course if I try to run a pod, it fails to schedule.

[stack@droctagon4 devstack]$ kubectl get nodes
[stack@droctagon4 devstack]$ kubectl run demo --image=demo:demo
deployment "demo" created
[stack@droctagon4 devstack]$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
demo-2945424114-xp6gu   0/1       Pending   0          21s
[stack@droctagon4 devstack]$ kubectl describe pod demo
Name:		demo-2945424114-xp6gu
Namespace:	default
Node:		/
Labels:		pod-template-hash=2945424114
		run=demo
Status:		Pending
IP:		
Controllers:	ReplicaSet/demo-2945424114
Containers:
  demo:
    Image:	demo:demo
    Port:	
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-m6yia (ro)
    Environment Variables:	<none>
Conditions:
  Type		Status
  PodScheduled 	False 
Volumes:
  default-token-m6yia:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-m6yia
QoS Class:	BestEffort
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From			SubobjectPath	Type		Reason			Message
  ---------	--------	-----	----			-------------	--------	------			-------
  27s		12s		6	{default-scheduler }			Warning		FailedScheduling	no nodes available to schedule pods
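Which tracks with the empty `kubectl get nodes`: with no registered node, the scheduler has nowhere to put the pod, hence `no nodes available to schedule pods`. A hypothetical little helper (not from the session) for checking `kubectl get nodes --no-headers` output for a schedulable node before bothering with `kubectl run`:

```python
def has_ready_node(kubectl_output):
    """True if any line of `kubectl get nodes --no-headers` shows a Ready node.

    Simplistic check: assumes columns NAME STATUS AGE [VERSION] and ignores
    compound statuses like Ready,SchedulingDisabled.
    """
    for line in kubectl_output.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] == 'Ready':
            return True
    return False
```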