Greg Bray (gbrayut)

👨‍💻
Living life one byte at a time
gbrayut / gke-cgroupmode-test.yaml
Created March 29, 2024 22:44
KCC GKE cgroupMode Testing
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeNetwork
metadata:
  annotations:
    cnrm.cloud.google.com/management-conflict-prevention-policy: "none"
    cnrm.cloud.google.com/deletion-policy: "abandon"
  name: default
spec:
  description: Default network for the project
---
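# Sketch, not part of the original gist (the preview is truncated above): the
# cgroupMode setting itself would sit on a KCC ContainerNodePool under
# nodeConfig.linuxNodeConfig, roughly as below. Pool name, cluster name, and
# machine type are illustrative placeholders.
apiVersion: container.cnrm.cloud.google.com/v1beta1
kind: ContainerNodePool
metadata:
  name: cgroupv2-test-pool
spec:
  location: us-central1
  clusterRef:
    name: gke-iowa
  nodeCount: 1
  nodeConfig:
    machineType: e2-standard-4
    linuxNodeConfig:
      cgroupMode: CGROUP_MODE_V2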
gbrayut / 01-systemd-unit.sh
Created March 30, 2024 04:49
Configure static ipv6 ULA address
# Create systemd unit
cat << EOF > /etc/systemd/system/theg2-ipv6-ula.service
[Unit]
Description=Add ipv6 static ULA
After=network-online.target
Requires=network-online.target
[Service]
Type=oneshot
ExecStart=/sbin/ip address add fd0b:dead:b0b1::123 dev wlan0
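[Install]
WantedBy=multi-user.target
EOF

# Note: the gist preview is truncated above; the [Install] section, closing EOF,
# and the commands below are an assumed completion that reloads systemd and
# enables the unit so the ULA address is re-added on every boot.
systemctl daemon-reload
systemctl enable --now theg2-ipv6-ula.service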
gbrayut / test.sh
Created April 23, 2024 16:52
LinkedIn TLS Error *.azureedge.net
$ echo "GET /" | openssl s_client -showcerts -servername www.linkedin.com -connect www.linkedin.com:443 | openssl x509 -noout -text
depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root G2
verify return:1
depth=1 C = US, O = Microsoft Corporation, CN = Microsoft Azure RSA TLS Issuing CA 04
verify return:1
depth=0 C = US, ST = WA, L = Redmond, O = Microsoft Corporation, CN = *.azureedge.net
verify return:1
DONE
Certificate:
Data:
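# Sketch (assumed follow-up, not in the gist): print only the Subject Alternative
# Names of the certificate actually served, to check whether www.linkedin.com is
# covered by the *.azureedge.net cert above. Requires OpenSSL 1.1.1+ for -ext.
echo "GET /" | openssl s_client -servername www.linkedin.com -connect www.linkedin.com:443 2>/dev/null \
  | openssl x509 -noout -ext subjectAltName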
gbrayut / 00-setup.sh
Last active April 30, 2024 02:02
GKE 1.27 nvidia-smi -p2p testing
# Add a GPU node pool with automatic driver installation. Manual driver installation is required before GKE 1.27: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers
# If you used the installation DaemonSet to manually install GPU drivers on or before January 25, 2023, you might need to re-apply the DaemonSet to get a version that ignores nodes that use automatic driver installation.
# COS-based L4 GPUs via g2-standard-24 VMs: https://cloud.google.com/compute/docs/accelerator-optimized-machines#g2-vms
gcloud beta container --project "gregbray-vpc" node-pools create "nvidia-l4-cos" --cluster "gke-iowa" --region "us-central1" \
--machine-type "g2-standard-24" --accelerator type=nvidia-l4,count=2,gpu-driver-version=default \
--image-type "COS_CONTAINERD" --disk-type "pd-balanced" --disk-size "100" \
--num-nodes "1" --enable-autoscaling --min-nodes=1 --max-nodes=1 \
--max-pods-per-node "110" --node-locations "us-central1-a"
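# Sketch (assumed follow-up, not shown in this gist preview): once the pool is up,
# a throwaway pod requesting both L4 GPUs can run the P2P topology checks. The pod
# name and image tag are illustrative; GKE mounts the installed drivers into pods
# that request nvidia.com/gpu, and the nvidia/cuda images have them on PATH.
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nvidia-smi-p2p
spec:
  restartPolicy: Never
  containers:
  - name: nvidia-smi
    image: nvidia/cuda:12.2.0-base-ubuntu22.04
    command: ["sh", "-c", "nvidia-smi topo -m && nvidia-smi topo -p2p r"]
    resources:
      limits:
        nvidia.com/gpu: 2
EOF
# Then inspect the results after the pod completes:
kubectl logs nvidia-smi-p2p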