
## Debugging Clear Containers Kernel and Rootfs using qemu-lite
egernst / Debug CC QEMU pc-lite.md (last active April 28, 2017; forked from mcastelino)

Debug the kernel and rootfs for Clear Containers with pc-lite:

```shell
sudo qemu-lite-system-x86_64 \
  -machine pc-lite,accel=kvm,kernel_irqchip,nvdimm \
  -cpu host -m 256,maxmem=1G,slots=2 \
  -smp 2 -no-user-config -nodefaults \
  -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard \
  -kernel ./vmlinux-4.9.4-53.container \
  -append "reboot=k panic=1 rw tsc=reliable no_timer_check \
```
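The invocation above boots the kernel directly. For kernel debugging, QEMU can additionally expose a gdb stub; the flags below are standard QEMU options, not part of the original gist, so treat this as a sketch:

```shell
# Add to the qemu command line above:
#   -s   expose a gdbserver on tcp::1234
#   -S   freeze the CPU at startup until gdb issues "continue"
# Then attach from the host against the same (uncompressed) kernel image:
gdb ./vmlinux-4.9.4-53.container -ex 'target remote :1234'
```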
## Debugging Clear Containers
egernst / debug-of-clear-containers.md (last active March 13, 2017)

### Debug with your own rootfs

An easy setup for building your own kernel, rootfs, and image is available at:

```shell
$ git clone https://github.com/clearcontainers/osbuilder.git
```

### Connect to mini-OS

To aid in debugging the mini-OS itself, you can create a debug console service and add it to the rootfs. Directions for this can be found at: https://github.com/jcvenegas/cc-oci-runtime/commit/3334a2bc7c915e

## dpdk-debug
egernst / dpdk-debug.md (last active March 14, 2017)

### Baseline

Make the necessary changes in vm.json/hypervisor.args to move from pc to pc-lite:

```diff
diff --git a/hypervisor.args b/hypervisor.args
-pc,accel=kvm,kernel_irqchip,nvdimm
+pc-lite,accel=kvm,kernel_irqchip,nvdimm
diff --git a/vm.json b/vm.json
-                       "path": "/usr/share/clear-containers/vmlinuz-4.9.4-53.container",
+                       "path": "/usr/share/clear-containers/vmlinux.container",
```
## Memory holding with QEMU and the pc machine type
egernst / pc-memory.md (last active March 15, 2017)

### Reproducing memory issues with pc and OVS/DPDK

### Set up OVS

This assumes you already have OVS and DPDK installed on your system.

```shell
sudo mkdir -p /var/run/openvswitch
sudo killall ovsdb-server ovs-vswitchd
sudo rm -f /var/run/openvswitch/vhost-user*
sudo rm -f /etc/openvswitch/conf.db

export DB_SOCK=/var/run/openvswitch/db.sock
```
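The gist preview ends at the socket export. A typical continuation, following the upstream OVS-DPDK bring-up guide (paths and options below are the stock upstream ones, not taken from this gist), would be:

```shell
# Recreate the database and start the daemons with DPDK enabled
# (schema path is the upstream default; adjust for your install prefix):
sudo ovsdb-tool create /etc/openvswitch/conf.db \
    /usr/share/openvswitch/vswitch.ovsschema
sudo ovsdb-server --remote=punix:$DB_SOCK \
    --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
    --pidfile --detach
sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
sudo ovs-vswitchd unix:$DB_SOCK --pidfile --detach
```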
## Clear Containers, DPDK and OVS
egernst / cc-dpdk-ovs.md (last active March 23, 2017)

### Get the necessary sources

- For the plugin: `go get github.com/clearcontainers/ovsdpdk`
- For the OVS-DPDK-enabled CC runtime: `go get github.com/01org/cc-oci-runtime` (TODO: detail out getting the branch)

### Build the runtime

You will likely need a number of packages. On Fedora 25, for example, run:

```shell
sudo dnf install \
glib2-devel-2.50.3-1.fc25.x86_64 \
```
## Basic VPP setup for vhost-user and VMs
egernst / vpp-getting-started-with-vhostuser-VMs.md (last active November 30, 2017)

### Basic VPP setup for vhost-user interfaces and VMs

There are many examples of using VPP with tap interfaces/namespaces. After reviewing these, along with the details at https://wiki.fd.io/view/VPP/Tutorial_Routing_and_Switching, we came up with the following setup for testing inter-VM connectivity using VPP's vhost-user interfaces.

### First-time setup

Grab the VPP packages, per the directions from FD.io at https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages:

```shell
$ sudo rm /etc/apt/sources.list.d/99fd.io.list
$ echo "deb [trusted=yes] https://nexus.fd.io/content/repositories/fd.io.ubuntu.xenial.main/ ./" | sudo tee -a /etc/apt/sources.list.d/99fd.io.list
```
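With the repository configured, installing the packages is the usual apt flow. The package names below follow the FD.io instructions for the xenial packages of that era and are an assumption, not part of the gist:

```shell
# Refresh the package index and pull in VPP from the FD.io repo
# (package names may have changed since the 17.x releases):
sudo apt-get update
sudo apt-get install -y vpp vpp-lib vpp-plugins
```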
## Container for DPDK test applications
egernst / container-dpdk.txt (last active May 16, 2017)
An example Dockerfile that builds an image with a few tools and DPDK installed is shown below. It will be updated with usage for the VPP CNM plugin and Clear Containers.
```dockerfile
FROM ubuntu:16.04
ARG DPDK_VER=17.02
ARG DPDK_TARGET=x86_64-native-linuxapp-gcc
```
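The Dockerfile preview is truncated above. Once the image builds, a typical workflow looks like this (the tag `dpdk-test` is just an example, not from the gist):

```shell
# Build the image from the directory containing the Dockerfile,
# then start an interactive container ("dpdk-test" is an arbitrary tag):
docker build -t dpdk-test .
docker run -it --rm dpdk-test /bin/bash
```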
## SR-IOV with Docker CNM plugin
egernst / sriov-with-cnm-plugin.md (last active May 11, 2017)

### Using a Docker CNM plugin to play with SR-IOV

This gist describes the setup necessary for testing SR-IOV-based connectivity between two physical boxes, each set up as described here and directly connected via their respective SR-IOV-enabled NICs.

### Set up the host system's packages

For this scenario, I'm setting up two Ubuntu 16.04 systems, each with an SR-IOV-enabled interface as well as a second port for accessing the SUT. To set up:
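The preview cuts off before the package list. For illustration only, enabling virtual functions on an SR-IOV-capable port on such a host usually goes through the kernel's sysfs interface; the interface name `eno2` and the VF count below are placeholders, not values from the gist:

```shell
# Enable 4 virtual functions on the SR-IOV-capable port
# ("eno2" is a placeholder; substitute your interface name):
echo 4 | sudo tee /sys/class/net/eno2/device/sriov_numvfs
# Confirm the VFs appeared on the PCI bus:
lspci | grep -i "virtual function"
```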

## COR edition: SR-IOV with Docker CNM plugin
egernst / cor-sriov-with-cnm-plugin.md (last active December 8, 2017)

### Using a Docker CNM plugin to play with SR-IOV

This gist describes the setup necessary for testing SR-IOV-based connectivity between two physical boxes, each set up as described here and directly connected via their respective SR-IOV-enabled NICs.

### Set up the host system's packages

For this scenario, I'm setting up two Ubuntu 16.04 systems, each with an SR-IOV-enabled interface as well as a second port for accessing the SUT. To set up:

```shell
#!/bin/bash
# Build a cscope index over all Go sources under $GOPATH/src.
go_src=$GOPATH/src
find "$go_src" -name "*.go" -print > cscope.files
if cscope -b -k; then
    echo "Done"
else
    echo "Failed"
fi
```
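As a quick sanity check of the indexing half, the `find`-based file listing can be exercised in a throwaway directory without cscope installed; the scratch paths below are illustrative, not from the gist:

```shell
# Build a scratch tree with one Go file and list it the same way
# the script above does (no cscope needed for this part):
tmp=$(mktemp -d)
mkdir -p "$tmp/src/demo"
echo 'package demo' > "$tmp/src/demo/a.go"
find "$tmp/src" -name "*.go" -print > "$tmp/cscope.files"
count=$(grep -c . "$tmp/cscope.files")
echo "indexed $count file(s)"
rm -rf "$tmp"
```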