Nithish Murcherla nithu0115

@nithu0115
nithu0115 / gist:f8deed67bf365cb9f4e49014679e3303
Last active December 19, 2018 03:48
Amazon Linux2 - Using a secondary volume for docker overlay2 filesystem
## ECS AMI - Amzn Linux2 - Using a secondary volume for docker overlay2 filesystem
#### — Replace the device names below with the ones appropriate for your instance; they vary by instance type and volume type - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html
[1] https://github.com/docker/docker-ce/blob/v18.06.1-ce/components/packaging/rpm/systemd/docker.service#L4
[2] https://github.com/aws/amazon-ecs-init/blob/master/packaging/amazon-linux-ami/ecs.service#L19
UserData for 'c5d' with an instance store as the secondary volume
#!/bin/bash -x
## disable systemd from starting docker and ecs during cloud-init
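## What follows is a hedged sketch, not the original gist (the script is truncated here):
## it assumes the c5d instance store appears as /dev/nvme1n1 - adjust the device name
## per the device-naming doc linked above.
systemctl disable --now docker ecs
## format the instance store and mount it where docker keeps its overlay2 data
mkfs -t xfs /dev/nvme1n1
mkdir -p /var/lib/docker
mount /dev/nvme1n1 /var/lib/docker
## re-enable and start docker and the ECS agent now that the secondary volume is in place
systemctl enable --now docker ecs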

As of February 14, 2019, Spark does not integrate with EKS out of the box: the Spark binary would need to use aws-iam-authenticator to fetch credentials to authenticate to the EKS cluster, and that support only lands in the kubernetes-client 4.1.2 release. Ref: fabric8io/kubernetes-client#1358

In order to integrate the Spark binary with EKS, we have to do a custom build against the fabric8 kubernetes-client 4.1-SNAPSHOT version (see the build sketch after the prerequisites below).

Prerequisites:

Java 8

Apache Maven 3.x or later
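A hedged build sketch (the commands and the pom location are assumptions, not from the original note): clone Spark, point its fabric8 kubernetes-client dependency at 4.1-SNAPSHOT, and build the Kubernetes profile.

# clone the Spark source and switch into it
git clone https://github.com/apache/spark.git && cd spark
# bump the fabric8 kubernetes-client dependency to 4.1-SNAPSHOT in
# resource-managers/kubernetes/core/pom.xml (the exact property name varies by Spark release),
# then build with the Kubernetes profile enabled, skipping tests
./build/mvn -Pkubernetes -DskipTests clean package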

@nithu0115
nithu0115 / gist:1f09ce1414f0430bdfe337d2e7461ce6
Created October 5, 2019 00:07
Title: Gather logs from Worker Nodes without SSH'ing into the instance using `kubectl proxy`
Services: eks; kubernetes
Summary
Gather logs from Worker Nodes without SSH'ing into the instance
Q) How can you get logs from a worker node when customers cannot SSH into the instance (no key pair is associated, for security reasons) and you want to avoid detaching the volume and attaching it to another instance to troubleshoot?
Solution:
To gather logs, first run `kubectl proxy` against the cluster; a sketch of the full flow follows below.
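A minimal sketch of the approach, assuming the API server's node proxy `logs` endpoint is reachable and `kubectl proxy` listens on its default 127.0.0.1:8001 (the node selection and file name below are only examples):

# open a local proxy to the EKS API server
kubectl proxy &
# pick a node to inspect (here, simply the first node in the cluster)
NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
# list the files the kubelet exposes under /var/log on that node ...
curl -s http://127.0.0.1:8001/api/v1/nodes/$NODE/proxy/logs/
# ... and pull an individual log file, e.g. the messages file
curl -s http://127.0.0.1:8001/api/v1/nodes/$NODE/proxy/logs/messages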
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mlsmaycon1
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-nginx
data:
  default.conf: |
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
@nithu0115
nithu0115 / test.sh
Last active December 26, 2019 17:29
#!/bin/bash -x
## Probe the EKS API server directly: fetch the cluster CA and endpoint, then call
## the API with a bearer token from `aws eks get-token`, retrying until it succeeds.
export C=extravagant-outfit-1571795722
PAYLOAD=$(aws eks describe-cluster --name $C --query 'cluster.{CA: certificateAuthority.data,Endpoint: endpoint}' --region us-east-2)
echo $PAYLOAD | jq -rc .CA | base64 -d > /tmp/public_cert
ENDPOINT=$(echo $PAYLOAD | jq -rc .Endpoint)
until curl -iv --cacert /tmp/public_cert -H "Authorization: Bearer $(aws eks get-token --cluster-name $C | jq -rc .status.token)" $ENDPOINT/api/v1/namespaces/default/pods/; do
  echo "command error"
  sleep 10s
done
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-node-sa
  namespace: kube-system
---
index a79f5a89cab1..768db651373f 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -750,6 +750,10 @@ static int nf_ct_resolve_clash(struct net *net, struct sk_buff *skb,
 	const struct nf_conntrack_l4proto *l4proto;
 	enum ip_conntrack_info oldinfo;
 	struct nf_conn *loser_ct = nf_ct_get(skb, &oldinfo);
+
+	// Added by nithish
+	pr_debug("nf_ct_resolve_clash: %p clashes with %p\n", loser_ct, ct);
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: startup-script
  labels:
    app: startup-script
spec:
  template:
    metadata:
      labels:
@nithu0115
nithu0115 / cni-bandwidth.yaml
Created May 26, 2020 02:58
CNI bandwidth testing
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/ingress-bandwidth: 1M
    kubernetes.io/egress-bandwidth: 1M
  name: iperf-server
  labels:
    app: iperf-server
spec:
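A hedged usage sketch (not from the gist, and the pod spec above is truncated in the source): with the bandwidth annotations above, the CNI bandwidth plugin should shape the pod's traffic to roughly 1 Mbit/s. The client image name is an assumption; any image that ships iperf3 works, provided the iperf-server pod runs an iperf3 server on the default port 5201.

# look up the server pod's IP
SERVER_IP=$(kubectl get pod iperf-server -o jsonpath='{.status.podIP}')
# launch a throwaway client pod and measure throughput against the server for 10 seconds
kubectl run iperf-client --rm -it --restart=Never --image=networkstatic/iperf3 -- -c "$SERVER_IP" -t 10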