Graham Whaley (grahamwhaley), Intel
Busy busy with Kata Containers
Duplicate an elasticsearch index with logstash (last active Aug 16, 2019)

I wanted to copy an index in my elasticsearch DB, so I could try re-opening it in Kibana whilst ticking the 'not a time series database' option. So, how to copy an index?

I found a useful post that copies an index from one elastic instance to another, written for an older version of logstash. Updating that to logstash ~7.3, and modifying it to copy an index to a new index with a different name in the same DB, I came up with:

# Logstash config to copy one index to another
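The config body itself did not survive this copy. A minimal sketch of what such a logstash pipeline looks like, assuming a local elastic on port 9200 and placeholder index names `src-index` and `dst-index` (all names here are illustrative, not from the original gist):

```conf
input {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "src-index"
    # read every document in the source index
    query => '{ "query": { "match_all": {} } }'
    # docinfo exposes the original _index/_id under [@metadata]
    docinfo => true
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "dst-index"
    # preserve the original document ids in the copy
    document_id => "%{[@metadata][_id]}"
  }
}
```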

Simple elastic/json setup

I've had to do this twice now, as I lost my info from the first time around. So, let's write it down...

Run up elastic in docker

First, let's run up elastic. Elastic give you info on how to do this on their site. I ended up firing up a small docker compose file:

version: '2'
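Only the first line of the compose file survived the copy. A minimal single-node sketch of what it likely contained, assuming the official elastic/kibana images (the image tags and the kibana service are assumptions, for local experiments only):

```yaml
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
    environment:
      # single node, no clustering - local experiments only
      - discovery.type=single-node
    ports:
      - "9200:9200"
  kibana:
    # the kibana image defaults to http://elasticsearch:9200,
    # which resolves to the service above on the compose network
    image: docker.elastic.co/kibana/kibana:7.3.0
    ports:
      - "5601:5601"
```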
Quick script to apply kata_deploy to k8s (last active Jul 16, 2019)
# Copyright (c) 2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# applies kata-deploy to the default kubectl cluster
set -e
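The body of the script did not survive the copy. A sketch of the kata-deploy apply sequence it likely performed, assuming the YAML layout of the kata-containers packaging repo at the time (the URLs and file names below are assumptions, check against the current repo):

```
#!/bin/bash
# Copyright (c) 2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# applies kata-deploy to the default kubectl cluster
set -e

repo="https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy"

# RBAC first, then the kata-deploy DaemonSet itself
kubectl apply -f "${repo}/kata-rbac.yaml"
kubectl apply -f "${repo}/kata-deploy.yaml"

# register the kata RuntimeClass(es) so pods can request the kata runtime
kubectl apply -f "${repo}/k8s-1.14/kata-runtimeClasses.yaml"
```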

How to use jenkinsfile-runner

Developing pipeline jenkinsfiles can be painful. It is slightly easier if you use the jenkinsfile runner to run them locally.

Here is the script I have been using to do that for some in-dev Jenkinsfiles. Writing it down, as there are some subtleties here that are going to be easy to lose and hard to re-create.
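The script itself was lost in this copy. A minimal sketch of driving jenkinsfile-runner via its docker image, assuming the `jenkins/jenkinsfile-runner` image and a `Jenkinsfile` sitting in the current directory (any plugins the pipeline needs would have to be baked into a derived image):

```
#!/bin/bash
set -e
# Run the Jenkinsfile in the current directory through jenkinsfile-runner.
# The runner picks up /workspace/Jenkinsfile inside the container.
docker run --rm \
    -v "$(pwd)":/workspace \
    jenkins/jenkinsfile-runner
```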


How to config k8s to have many pods (on a node)

I had a need (understand, this is for some testing, not for a real deployment ;-)) to run a lot of pods (>=1k of them) on a single k8s node. I had the hardware available - 88 cores and 377Gb of RAM - but k8s has some inbuilt limits by default: it will not let you launch more than 110 pods per node, and if you get past that, you'll hit a network limit at about 250 pods... so, before I forget, here is how to configure to run more...


In your kubeadm init file, something like:
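The snippet did not survive the copy. A sketch of the two knobs involved, assuming a kubeadm v1beta1-era config file (field placement is from memory, check against your kubeadm version): raise the kubelet's `maxPods` above its 110 default, and widen the per-node pod CIDR past the default /24 (~250 usable addresses), which is what causes the second wall:

```yaml
# kubeadm init --config=<this file>
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    # default /24 per node only allows ~250 pod IPs; /22 gives ~1000
    node-cidr-mask-size: "22"
networking:
  # the pod subnet must be big enough to carve /22s out of
  podSubnet: "10.244.0.0/16"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# default is 110 pods per node
maxPods: 1100
```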


Prereqs for Kata containers on Kubernetes workshop

To fully participate in the workshop, you will require access to a Kubernetes cluster with the following properties:

  • Version >= v1.12 kubernetes
  • That is scratch/disposable (i.e. not a live cluster you care about)
  • That supports virtualisation (Kata runs containers in VMs, so the kubelet must be running on a node that is able to run a VM. - see below)
  • Preferably with CRI-O installed as the CRI runtime on the cluster, along with the matching runc as the default container runtime.
    • Kata Containers also works with containerd, but the workshop demo will be conducted on a stack configured with CRI-O. You are welcome to follow along with containerd and adapt as necessary. See this page
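To check the virtualisation requirement on a candidate node, one common test (for Intel/AMD hardware on Linux) is to count the CPUs advertising the vmx/svm flags; a count of 0 means the node cannot run Kata's VMs:

```shell
# count CPUs advertising hardware virtualisation support (vmx=Intel, svm=AMD);
# 0 means Kata's VMs will not run on this node
grep -cE '(vmx|svm)' /proc/cpuinfo || true
```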

Running Kata Containers in Minikube

minikube is an easy way to try out a kubernetes (k8s) cluster locally. It runs a single-node k8s stack inside a local VM.

Kata Containers is an OCI compatible container runtime that runs container workloads inside VMs.

Wouldn't it be nice if you could use kata under minikube to get an easy out of the box experience to try out Kata? Well, turns out with a little bit of config and setup that is already supported, you can!
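The invocation details were lost in this copy. The shape of the setup (flags from memory, so treat them as assumptions and check the Kata minikube guide) is to boot minikube on a VM driver that can pass nested virtualisation through, with a container runtime that supports RuntimeClass, and then kata-deploy into the resulting cluster:

```
# boot minikube in a KVM guest with containerd as the runtime, so the
# kata RuntimeClass can be used; the kvm2 driver passes nested VT through
minikube start \
    --vm-driver kvm2 \
    --memory 6144 \
    --container-runtime containerd \
    --bootstrapper kubeadm
```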


I needed to try and grab the golang GODEBUG gc and scheduler trace debug info from the Kata proxy. But how? These are enabled via the environment and/or command line, and the proxy is launched by the runtime, which in turn is launched by dockerd. Well, this might just require some patches...

In the proxy itself:

diff --git a/proxy.go b/proxy.go
index 2a51f16..4a054bb 100644
--- a/proxy.go
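The rest of the patch did not survive the copy, but for reference, the GODEBUG values in question are normally set through the environment when you can launch the binary directly; gctrace=1 prints a line per garbage collection and schedtrace=&lt;ms&gt; periodically dumps scheduler state, both to stderr (the binary name below is illustrative):

```
# run a Go binary with GC and scheduler tracing enabled
GODEBUG=gctrace=1,schedtrace=1000 ./kata-proxy
```

The patches are only needed because the proxy is not launched directly by the user, so there is no shell in which to set this variable.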
Configuring filebeat and logstash to pass raw JSON to elastic (last active Aug 6, 2019)


Over on Kata Containers we want to store some metrics results into Elasticsearch so we can have some nice views and analysis. Our results are generated as JSON, and we have trialled injecting them directly into Elastic using curl, which worked OK. As Kata is under the OSF umbrella, we will likely end up using the existing ELK infrastructure. One requirement of that is that we route our JSON data through logstash. To do that from the build machines, the obvious choice is to use filebeat.

The flow
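The config details were lost in this copy. A minimal sketch of the two ends of that flow, assuming the results are one JSON document per line and logstash listens on the usual beats port 5044 (the paths, hostnames and index name are placeholders):

```conf
# ---- filebeat.yml (the shipper, on the build machine) ----
filebeat.inputs:
- type: log
  paths:
    - /var/lib/metrics/results/*.json
  # each line is already a complete JSON document; decode it rather
  # than shipping it as an opaque message string
  json.keys_under_root: true
  json.add_error_key: true
output.logstash:
  hosts: ["logstash.example.com:5044"]

# ---- logstash pipeline (the receiving end) ----
input { beats { port => 5044 } }
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "metrics-results"
  }
}
```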

Joining to Jenkins (last active Aug 31, 2018)

Setting up Jenkins with

First, install the Jenkins jcloud plugin into your Jenkins master.

In Jenkins UI, go to 'Manage Jenkins/Configure System'. Scroll down to the 'Cloud' section. 'Add a new cloud' and choose 'Cloud(jcloud)'.

...and fail - the jcloud plugin is not listed as one of the valid providers...

Let's try the openstack plugin for kicks, as that also sits atop jcloud, and we use that plugin already for Kata.
