
Konstantinos Karampogias karampok

karampok / ipv6-ping-linklocal-bridge.sh
Last active October 4, 2022 09:15
ipv6-ping-linklocal-bridge
# setup: two network namespaces connected by a veth pair
ip netns add radio
ip netns add ocp
ip link add veth1 netns radio type veth peer name veth2 netns ocp

# First network namespace == radio
ip netns exec radio bash
ip link set dev veth1 up

# ip link show veth1 now reports:
# veth1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
#     link/ether 6e:90:92:21:48:c2 brd ff:ff:ff:ff:ff:ff link-netns ocp
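The interface's fe80:: link-local address is derived from that MAC via modified EUI-64 (RFC 4291): flip the universal/local bit of the first octet and insert ff:fe in the middle. A minimal sketch of the derivation (mac_to_ll is a hypothetical helper, not part of the gist; it does not zero-compress groups):

```shell
# Sketch: derive an interface's EUI-64 link-local address from its MAC.
# Assumption: standard modified-EUI-64 rules; leading zeros inside a
# group are kept as-is rather than compressed.
mac_to_ll() {
  set -- $(echo "$1" | tr ':' ' ')
  # flip the universal/local bit (0x02) of the first octet
  first=$(printf '%02x' $(( 0x$1 ^ 0x02 )))
  # insert ff:fe between the two MAC halves
  printf 'fe80::%s%s:%sff:fe%s:%s%s\n' "$first" "$2" "$3" "$4" "$5" "$6"
}

mac_to_ll 6e:90:92:21:48:c2   # -> fe80::6c90:92ff:fe21:48c2
```

To actually ping that address from the peer namespace you must scope it with the egress interface, e.g. ping -6 fe80::6c90:92ff:fe21:48c2%veth2 from inside the ocp namespace (link-local addresses are only meaningful per interface).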

Question

If hugepages are enabled on the node, the output of oc adm top node appears to show hugepages as used memory, i.e. the reported value is (memory used + hugepages). Is that expected?

Answer

No; the output of oc adm top node does not include hugepages in its used-memory calculation.
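One way to check the hugepage reservation yourself is to read the standard HugePages_Total and Hugepagesize counters from /proc/meminfo and compare the result against the node's reported used memory. A small sketch (hugepage_bytes is a hypothetical helper; the file path is a parameter so it can be run against a sample):

```shell
# Sketch: compute the bytes reserved by hugepages from a meminfo-format file.
# HugePages_Total (count) and Hugepagesize (kB) are standard /proc/meminfo fields.
hugepage_bytes() {
  awk '/^HugePages_Total:/ {n=$2}
       /^Hugepagesize:/    {sz=$2}
       END {print n * sz * 1024}' "$1"
}

# e.g. on a node: hugepage_bytes /proc/meminfo
```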

08:36:15.3 msg="Request:
&RunPodSandboxRequest{Config:
&PodSandboxConfig {
Metadata:&PodSandboxMetadata{Name:p2c-pod,Uid:2616dd35-67ae-4a59-88f9-18812ad7b12c,Namespace:p2c,Attempt:0,}
Hostname:p2c-pod,LogDirectory:/var/log/pods/p2c_p2c-pod_2616dd35-67ae-4a59-88f9-18812ad7b12c,
DnsConfig:&DNSConfig{Servers:[fd02::a],
Searches:[p2c.svc.cluster.local svc.cluster.local cluster.local bcn.hub-virtual.lab],
Options:[ndots:5],},
PortMappings:[]*PortMapping{},
Labels:map[string]string{color: red,io.kubernetes.pod.name: p2c-pod,io.kubernetes.pod.namespace: p2c,io.kubernetes.pod.uid: 2616dd35-67ae-4a59-88f9-18812ad7b12c,},
Nov 16 08:36:17 openshift-worker-0.hub-virtual.lab crio[2694]: time="2021-11-16 08:36:17.372819882Z" level=debug msg="Request:
&ImageStatusRequest{Image:&ImageSpec{Image:konstantinos-kvm.cloud.lab.eng.bos.redhat.com:5000/trbsht:latest,Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration:
{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"color\":\"red\"},\"name\":\"p2c-pod\",\"namespace\":\"p2c\"},\"spec\":{\"containers\":[{\"args\":[\"--cpu\",\"2\"],\"command\":[\"stress-ng\"],\"image\":\"konstantinos-kvm.cloud.lab.eng.bos.redhat.com:5000/stress-ng\",\"name\":\"bee-one\",\"resources\":{\"requests\":{\"cpu\":2,\"memory\":\"500Mi\"}},\"volumeMounts\":[{\"mountPath\":\"/opt\",\"name\":\"envs\"}]},{\"args\":[\"--cpu\",\"1\"],\"command\":[\"stress-ng\"],\"image\":\"konstantinos-kvm.cloud.lab.eng.bos.redhat.com:5000/stress-ng\",\"name\":\"bee-two\",\"resources\":{\"requests\":{\"cpu\":1,\"memory\":\"500Mi\"}},\"volumeMounts\":[{\"mountPath\":\"/opt\",
karampok / tail.go
Created January 21, 2020 13:14
Tailer
func (w *Watcher) Initialize() error {
	lines := make(chan string)
	// goroutine that reads lines from the file and sends them on the channel
	go func(file string) {
		defer close(lines)
		// NOTE: implementing tail -f properly is challenging;
		// here is a link with a nice explanation:
		// https://github.com/fstab/grok_exporter/wiki/tailer-(tail-%E2%80%90f)
		// In all cases, if a custom-made tail implementation is needed
// UnionFetcher fetches from east and west and combines the results. If one
// fetcher fails, only the other fetcher's reply is returned.
type UnionFetcher struct {
East, West PathFetcher
}
// GetPaths gets paths from both fetchers.
func (f *UnionFetcher) GetPaths(ctx context.Context, req *sciond.PathReq,
earlyReplyInterval time.Duration, logger log.Logger) (*sciond.PathReply, error) {
type myFrame struct {
epoch uint16
seq uint32
}
// Matches reports whether x is a frame with this epoch and seq (body commented out).
func (f myFrame) Matches(x interface{}) bool {
	//b := []byte(fmt.Sprintf("%v", x))
	//epoch := common.Order.UintN(b[1:3], 2)
	//seq := common.Order.UintN(b[3:6], 3)
	// more logic here
package main
import (
"context"
"log"
"net/http"
"os"
"os/signal"
"time"
)
karampok / grootfs-issue.md
Created November 29, 2018 12:51
Grootfs issue

How to create a container directly in Garden:

bosh ssh -d cloudfoundry cell/X
sudo -i && cd /root
wget https://github.com/contraband/gaol/releases/download/2016-8-22/gaol_linux
chmod +x gaol_linux
./gaol_linux -t /var/vcap/data/garden/garden.sock  create -n testme
./gaol_linux -t /var/vcap/data/garden/garden.sock list
./gaol_linux -t /var/vcap/data/garden/garden.sock  shell testme
./gaol_linux -t /var/vcap/data/garden/garden.sock  destroy testme
karampok / cc.md
Last active August 16, 2018 14:30

Deploying CFCR with a cloud provider:

  1. Given a BOSH director
  2. Create service accounts for the master and worker (link) and save their names into a vars file:

cat <deploymentName.cc>.vars
    cfcr_master_service_account_address: bbl-158293-cfcr-master@cf-pcf-kubo.iam.gserviceaccount.com
    cfcr_worker_service_account_address: bbl-158293-cfcr-worker@cf-pcf-kubo.iam.gserviceaccount.com
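Since both addresses follow the same <prefix>-cfcr-<role>@<project>.iam.gserviceaccount.com pattern, the vars file can be generated rather than hand-written. A sketch, assuming that naming convention holds (make_cc_vars is a hypothetical helper, not part of the original notes):

```shell
# Sketch: emit the CFCR vars file for a given environment prefix and
# GCP project, matching the pattern shown in the example above.
make_cc_vars() {
  prefix=$1; project=$2
  printf 'cfcr_master_service_account_address: %s-cfcr-master@%s.iam.gserviceaccount.com\n' "$prefix" "$project"
  printf 'cfcr_worker_service_account_address: %s-cfcr-worker@%s.iam.gserviceaccount.com\n' "$prefix" "$project"
}

make_cc_vars bbl-158293 cf-pcf-kubo
```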