apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: loki-store
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 25G
---
global:
  writeToFile: false
  requestTimeout: 15s
  indexerConfig:
    enabled: true
    esServers: ["https://search-perfscale-dev-chmf5l4sh66lvxbnadi4bznl3a.us-west-2.es.amazonaws.com"]
    defaultIndex: ripsaw-kube-burner
    type: elastic
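The global: block above is a kube-burner configuration fragment; a complete config also needs a jobs: section, which is not shown here. A hedged usage sketch, assuming the PVC and config are saved as pvc.yaml and config.yaml (both file names are assumptions):

oc apply -f pvc.yaml
kube-burner init -c config.yaml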
1. Pull the origin-tests image
podman pull quay.io/openshift/origin-tests
.
<snip>
.
Writing manifest to image destination
Storing signatures
0b30a9e03d14b0152319438abcac4ebb14f967675aab66dc8da0a8f011132c42
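A hedged follow-up to confirm the pull and poke around inside the image (assumes the default latest tag and that the image provides a shell):

podman images quay.io/openshift/origin-tests
podman run --rm -it quay.io/openshift/origin-tests /bin/sh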
mffiedler / create_crd.sh
Created March 3, 2021 14:16
Shell script to create 200 openshift CRDs
#!/usr/bin/bash
for i in {0..199}; do
cat <<EOF | oc create -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  generation: 1
  name: svtconfig${i}.svt${i}.io
spec:
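The preview cuts off at spec:. A minimal sketch of one plausible completion, plus the heredoc terminator and loop close - the group, kind, and plural are assumptions inferred from the metadata.name pattern, and the wide-open schema is also an assumption:

  group: svt${i}.io
  scope: Namespaced
  names:
    kind: SvtConfig${i}
    singular: svtconfig${i}
    plural: svtconfig${i}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
done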
mffiedler / upgrade.yaml
Created February 9, 2021 17:48
small workload for upgrades
projects:
- num: 200
  basename: svt-
  templates:
  -
    num: 6
    file: ./content/build-template.json
  -
    num: 10
    file: ./content/image-stream-template.json
mffiedler / gist:0ea5c36b0558e42bd7fddeb894e79c41
Last active June 22, 2020 20:45
Increase root FS space on
parted /dev/nvme0n1 (starts an interactive session)
p
respond Fix twice when prompted (parted offers to move the backup GPT header to the end of the grown disk and to use the newly added space)
q (return to bash)
fdisk /dev/nvme0n1
n (new partition - keep hitting enter to take the defaults)
w (write the partition table; fdisk exits after writing)
q (return to bash if fdisk is still open)
partprobe
Run lsblk and make sure the new partition (e.g. nvme0n1p4) exists
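A hedged, non-interactive sketch of the same procedure using sgdisk instead of the parted/fdisk prompts (assumes a GPT-partitioned disk with sgdisk installed; the device name is carried over from above):

sgdisk -e /dev/nvme0n1        # move the backup GPT header to the end of the grown disk (what parted's Fix does)
sgdisk -n 0:0:0 /dev/nvme0n1  # new partition: next free number, default start, all remaining space
partprobe
lsblk /dev/nvme0n1            # confirm the new partition (e.g. nvme0n1p4) exists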
mffiedler / master-vert-100.yaml
Created March 13, 2020 14:31
cluster loader profile for 100 node cluster on AWS
projects:
- num: 1500
  basename: mastervert
  templates:
  -
    num: 3
    file: ./content/build-config-template.json
  -
    num: 6
    file: ./content/build-template.json
In general, when things are left over from failed installs or failed cluster destroys, you need to go through resource by resource and look for your label (e.g. mffiedler). Oftentimes deleting the VPC will reap child resources, but to be thorough, go through (in this order):
S3: S3 bucket - this can be difficult to find. There could be two (one whose name starts with terraform and one for the image-registry) - use the install log or the cluster creation time to find them
EC2: Instances
EC2: Load Balancers (also search on the VPC ID for ELBs that show up - there are sometimes "hidden" ELBs in the same VPC)
VPC: NAT Gateways (delete them one by one; they take time to actually delete and can hold up subsequent deletes, so keep refreshing)
VPC: After waiting you can try to delete the VPC itself, but it will likely complain about interfaces in use
VPC: If the VPC did not delete cleanly, you likely have to go to the security group it complains about, try to delete it, and then delete any resources it thinks are in use
VPC: Security group - search by label or sec
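A hedged sketch for hunting the leftovers from the CLI instead of the console. OpenShift installer resources are tagged kubernetes.io/cluster/<infra-id>; the region, the infra ID (mffiedler-abc12), and the VPC ID below are placeholder assumptions - substitute your own:

aws ec2 describe-instances --region us-east-2 --filters "Name=tag-key,Values=kubernetes.io/cluster/mffiedler-abc12" --query 'Reservations[].Instances[].InstanceId'
aws ec2 describe-vpcs --region us-east-2 --filters "Name=tag-key,Values=kubernetes.io/cluster/mffiedler-abc12" --query 'Vpcs[].VpcId'
aws elbv2 describe-load-balancers --region us-east-2 --query "LoadBalancers[?VpcId=='vpc-0123456789abcdef0'].LoadBalancerArn"
aws s3 ls | grep mffiedler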
mffiedler / big_pv.txt
Last active September 18, 2019 14:54
1. Remove existing project with the PV if necessary
2. cd svt/openshift_scalability/content/quickstarts
3. oc process -f rails-postgresql-pv.json -p VOLUME_CAPACITY=1000Gi | oc create -f -
(you can change the PV size, but it should not matter much - the amount of data actually written to disk is what matters)
4. wait a few minutes until the postgresql pod is running
5. oc rsh <pod_name>
6. mkdir -p /var/lib/pgsql/data/xtra
7. for i in {1..110}; do echo $i; dd if=/dev/zero of=/var/lib/pgsql/data/xtra/file_$i.txt count=1048576 bs=8192; done
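Each dd call writes 1048576 blocks x 8192 bytes = 8 GiB per file, so the full loop of 110 files puts roughly 880 GiB of data on the volume.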