
jcnars / snippets_deleting_PR.txt
Created July 8, 2021 02:13
While troubleshooting the CLA warnings, I used a `git reset --hard` that inadvertently closed the PR.
The default interactive shell is now zsh.
To update your account to use zsh, please run `chsh -s /bin/zsh`.
For more details, please visit https://support.apple.com/kb/HT208050.
jcnarasimhan-macbookpro:bms-toolkit jcnarasimhan$ git brancj
git: 'brancj' is not a git command. See 'git --help'.
The most similar command is
branch
✔ ~/050521-mntrl-instlln/bms-toolkit [host-provision ↑·1|✚ 1…19]
15:19 $ ./host-provision.sh --inventory-file inventory_files/inventory_linuxserver44.orcl_orcl --hostprovision-ansible-user ansible8
Command used:
./host-provision.sh --inventory-file inventory_files/inventory_linuxserver44.orcl_orcl --hostprovision-ansible-user ansible8
Running with parameters from command line or environment variables:
INVENTORY_FILE=inventory_files/inventory_linuxserver44.orcl_orcl
ORA_HOSTPROVISION_ANSIBLE_USER=ansible8
jcnars / gist:6858344c22cef0d94b851a17de68f1c8
Created September 2, 2021 05:03
Latest run, with changes to incorporate review comments
✔ ~/050521-mntrl-instlln/bms-toolkit [host-provision L|✚ 2…19]
21:47 $ ./host-provision.sh --comma-separated-dbhosts 172.16.30.1 --instance-ssh-user ansible10
Command used:
./host-provision.sh --comma-separated-dbhosts 172.16.30.1 --instance-ssh-user ansible10
Running with parameters from command line or environment variables:
INSTANCE_SSH_USER=ansible10
INVENTORY_FILE=172.16.30.1,
ORA_CS_HOSTS=172.16.30.1
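The trailing comma in `INVENTORY_FILE=172.16.30.1,` is what lets Ansible's `-i` option treat the value as an inline host list instead of an inventory file path. A minimal sketch of how the wrapper might derive it from `ORA_CS_HOSTS` (the variable names match the log above; the derivation itself is an assumption, not the script's actual code):

```shell
# Sketch (assumption): build an inline-inventory string from the host list.
# A single trailing comma makes "ansible -i VALUE" parse VALUE as hosts,
# not as a file path.
ORA_CS_HOSTS="172.16.30.1"
INVENTORY_FILE="${ORA_CS_HOSTS%,},"   # strip any trailing comma, then add exactly one
echo "INVENTORY_FILE=${INVENTORY_FILE}"
```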
✔ ~/DriveFS/My Drive/bmaas/host_provisioning/bms-toolkit [host-provision|✚ 6]
13:53 $ runlocalssh ./host-provision.sh --dbhost_ip 172.16.30.1 --instance-ssh-user ansible9
Command used:
/usr/local/google/home/jcnarasimhan/DriveFS/My Drive/bmaas/host_provisioning/bms-toolkit/host-provision.sh --dbhost_ip 172.16.30.1 --instance-ssh-user ansible9
Running with parameters from command line or environment variables:
INSTANCE_SSH_USER=ansible9
INVENTORY_FILE=172.16.30.1,
ORA_CS_HOSTS=172.16.30.1
✔ ~/learn_prow_local/test-infra [master ↓·167|✔]
20:21 $ minikube start
😄 minikube v1.23.2 on Debian rodete
✨ Automatically selected the docker driver
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=15900MB) ...
🐳 Preparing Kubernetes v1.22.2 on Docker 20.10.8 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
✔ ~/learn_prow_local/test-infra [master ↓·167|✔]
20:15 $ minikube start --kubernetes-version=1.21.5
😄 minikube v1.23.2 on Debian rodete
✨ Automatically selected the docker driver
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=15900MB) ...
🐳 Preparing Kubernetes v1.21.5 on Docker 20.10.8 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
jcnarasimhan@jon2:~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ kubectl delete pod test-pods-namespace-lo-podname -n test-pods ; kubectl apply -f ~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle/pod-test.yml; kubectl get pods -n test-pods
pod "test-pods-namespace-lo-podname" deleted
storageclass.storage.k8s.io/localdisk unchanged
persistentvolume/host-pv unchanged
persistentvolumeclaim/host-pvc unchanged
pod/test-pods-namespace-lo-podname created
NAME                             READY   STATUS    RESTARTS   AGE
test-pods-namespace-lo-podname   0/1     Pending   0          1s
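The three kubectl commands above are chained with `;`, so the apply and the get still run even if the delete fails (for example, when the pod does not exist yet). Chaining with `&&` would stop at the first failure instead. A self-contained illustration of the difference, using plain shell with no cluster needed:

```shell
# ';' keeps going after a failure; '&&' short-circuits on one.
false ; echo "after ';' we still run"
false && echo "after '&&' we never run"
echo "done"
```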
jcnarasimhan@jon2:~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ kubectl describe pods -n test-pods
jcnarasimhan@jon2:~/DriveFS/My Drive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ !grep
grep -v \# ../test_locally.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pods-namespace-lo-podname
  namespace: test-pods
spec:
  hostNetwork: true
jcnarasimhan@jon2:~/DriveFS/My Drive/bmaas/test_automation/oss_prow_onboarding$ !grep
grep -v \# test_locally.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pods-namespace-lo-podname
  namespace: test-pods
spec:
  hostNetwork: true
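The `grep -v \#` used above strips the commented-out lines of the manifest before displaying it (the backslash only stops the shell from treating `#` as the start of a shell comment). Note that `-v` drops every line that contains the pattern anywhere, so a line with an inline comment disappears entirely, not just its comment. A self-contained illustration:

```shell
# grep -v drops every line that CONTAINS the pattern, so an inline
# comment removes the whole line, not just the comment part.
printf '%s\n' '# a comment' 'kind: Pod' 'hostNetwork: true  # inline' \
  | grep -v '#'
```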
jcnarasimhan@jon2:~/DriveFS/My Drive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ kubectl create configmap rac-inventory-syd2 --from-file="/usr/local/google/home/jcnarasimhan/DriveFS/My Drive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle/inventory_rac_syd2" -n test-pods
configmap/rac-inventory-syd2 created
jcnarasimhan@jon2:~/DriveFS/My Drive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ kubectl create configmap rac-asm-syd2 --from-file="/usr/local/google/home/jcnarasimhan/DriveFS/My Drive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle/asm_config_rac_syd2.json" -n test-pods
configmap/rac-asm-syd2 created
jcnarasimhan@jon2:~/DriveFS/My Drive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ kubectl create configmap rac-datamounts-syd2 --from-file="/usr/local/google/home/jcnarasimhan/DriveFS/My Drive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle/data_