@jcnars
Created November 9, 2021 16:45
jcnarasimhan@jon2:~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ kubectl delete pod test-pods-namespace-lo-podname -n test-pods ; kubectl apply -f ~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle/pod-test.yml; kubectl get pods -n test-pods
pod "test-pods-namespace-lo-podname" deleted
storageclass.storage.k8s.io/localdisk unchanged
persistentvolume/host-pv unchanged
persistentvolumeclaim/host-pvc unchanged
pod/test-pods-namespace-lo-podname created
NAME READY STATUS RESTARTS AGE
test-pods-namespace-lo-podname 0/1 Pending 0 1s
jcnarasimhan@jon2:~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ kubectl describe pods -n test-pods
Name:           test-pods-namespace-lo-podname
Namespace:      test-pods
Priority:       0
Node:           <none>
Labels:         <none>
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Containers:
  alpineinstance:
    Image:      quay.io/ansible/ansible-runner:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sh
      -c
    Args:
      echo ; echo "starting - listing /root before adding host to known_hosts"; ls -al "/root"; mkdir /root/.ssh; chmod 0700 /root/.ssh; ssh-keyscan -tecdsa 172.16.30.1 > /root/.ssh/known_hosts; echo; echo "listing /root/.ssh after adding host to known_hosts"; ls -al "/root/.ssh"; cd /root;git clone https://github.com/google/bms-toolkit.git; ls -l /root; sleep 600; echo; echo done;
    Requests:
      cpu:        3
      memory:     2Gi
    Environment:  <none>
    Mounts:
      /etc/.ssh from id-rsa-bms-tk-key (rw)
      /etc/running_cleanup_install_oracle from static-testing-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vkj7s (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  id-rsa-bms-tk-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  id-rsa-bms-ansible9-syd1
    Optional:    false
  static-testing-dir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  host-pvc
    ReadOnly:   false
  default-token-vkj7s:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vkj7s
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  15s (x2 over 15s)  default-scheduler  0/2 nodes are available: 2 persistentvolumeclaim "host-pvc" not found.
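
The FailedScheduling message means the scheduler cannot find a PersistentVolumeClaim named host-pvc in the Pod's own namespace. The Pod was created with -n test-pods, while the PVC manifest carries no namespace, so the claim most likely landed in default; PVC references are namespace-local, hence "not found". Listing the claim in both namespaces should confirm the mismatch, for example:

kubectl get pvc -n test-pods
kubectl get pvc -n default
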
jcnarasimhan@jon2:~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ kubectl get nodes -o wide
NAME                                                  STATUS   ROLES    AGE    VERSION             INTERNAL-IP    EXTERNAL-IP    OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gke-bmaas-testing-prow-c-default-pool-2b0e9e71-48bo   Ready    <none>   4d1h   v1.20.10-gke.1600   10.210.15.57   34.95.43.100   Container-Optimized OS from Google   5.4.120+         containerd://1.4.4
gke-bmaas-testing-prow-c-default-pool-2b0e9e71-nmyw   Ready    <none>   4d1h   v1.20.10-gke.1600   10.210.15.56   34.95.8.38     Container-Optimized OS from Google   5.4.120+         containerd://1.4.4
jcnarasimhan@jon2:~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
host-pv 1Gi RWO Recycle Bound default/host-pvc localdisk 5m55s
jcnarasimhan@jon2:~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
host-pvc Bound host-pv 1Gi RWO localdisk 6m4s
jcnarasimhan@jon2:~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ # could be namespace issue
jcnarasimhan@jon2:~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ # commented namespace
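
Commenting out the Pod's namespace (as in the manifest below) keeps the Pod, the claim, and the PV binding all in default. The other direction, keeping everything in test-pods, would mean adding namespace: test-pods to each namespaced object; StorageClass and PersistentVolume are cluster-scoped and take no namespace. A rough sketch of that variant, reusing the names from pod-test.yml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: host-pvc
  namespace: test-pods    # pin the claim to the namespace the Pod runs in
spec:
  storageClassName: localdisk
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pods-namespace-lo-podname
  namespace: test-pods    # matches the -n test-pods used on the kubectl commands
# spec unchanged from the manifest below
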
jcnarasimhan@jon2:~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ cat pod-test.yml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localdisk
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: host-pv
spec:
  storageClassName: localdisk
  persistentVolumeReclaimPolicy: Recycle
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /local_test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: host-pvc
spec:
  storageClassName: localdisk
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pods-namespace-lo-podname
  # namespace: test-pods
spec:
  hostNetwork: true
  containers:
  - name: alpineinstance
    image: quay.io/ansible/ansible-runner:latest
    command:
    - /bin/sh
    - -c
    args:
    - echo ; echo "starting - listing /root before adding host to known_hosts";
      ls -al "/root";
      mkdir /root/.ssh;
      chmod 0700 /root/.ssh;
      ssh-keyscan -tecdsa 172.16.30.1 > /root/.ssh/known_hosts;
      echo; echo "listing /root/.ssh after adding host to known_hosts";
      ls -al "/root/.ssh";
      cd /root;git clone https://github.com/google/bms-toolkit.git;
      ls -l /root;
      sleep 600;
      echo; echo done;
    resources:
      requests:
        memory: "2.0Gi"
        cpu: "3.0"
    volumeMounts:
    - name: id-rsa-bms-tk-key
      mountPath: /etc/.ssh
    - name: static-testing-dir
      mountPath: /etc/running_cleanup_install_oracle
  volumes:
  - name: id-rsa-bms-tk-key
    secret:
      secretName: id-rsa-bms-ansible9-syd1
      defaultMode: 0400
  - name: static-testing-dir
    persistentVolumeClaim:
      claimName: host-pvc
jcnarasimhan@jon2:~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ kubectl delete pod test-pods-namespace-lo-podname -n test-pods ;
pod "test-pods-namespace-lo-podname" deleted
jcnarasimhan@jon2:~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ kubectl apply -f ~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle/pod-test.yml
storageclass.storage.k8s.io/localdisk unchanged
persistentvolume/host-pv unchanged
persistentvolumeclaim/host-pvc unchanged
pod/test-pods-namespace-lo-podname created
jcnarasimhan@jon2:~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-pods-namespace-lo-podname 0/1 CreateContainerError 0 13s
jcnarasimhan@jon2:~/mydrive/bmaas/test_automation/oss_prow_onboarding/running_cleanup_install_oracle$ kubectl describe pods
Name:           test-pods-namespace-lo-podname
Namespace:      default
Priority:       0
Node:           gke-bmaas-testing-prow-c-default-pool-2b0e9e71-48bo/10.210.15.57
Start Time:     Tue, 09 Nov 2021 08:40:56 -0800
Labels:         <none>
Annotations:    <none>
Status:         Pending
IP:             10.210.15.57
IPs:
  IP:  10.210.15.57
Containers:
  alpineinstance:
    Container ID:
    Image:          quay.io/ansible/ansible-runner:latest
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      /bin/sh
      -c
    Args:
      echo ; echo "starting - listing /root before adding host to known_hosts"; ls -al "/root"; mkdir /root/.ssh; chmod 0700 /root/.ssh; ssh-keyscan -tecdsa 172.16.30.1 > /root/.ssh/known_hosts; echo; echo "listing /root/.ssh after adding host to known_hosts"; ls -al "/root/.ssh"; cd /root;git clone https://github.com/google/bms-toolkit.git; ls -l /root; sleep 600; echo; echo done;
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        3
      memory:     2Gi
    Environment:  <none>
    Mounts:
      /etc/.ssh from id-rsa-bms-tk-key (rw)
      /etc/running_cleanup_install_oracle from static-testing-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hmnph (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  id-rsa-bms-tk-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  id-rsa-bms-ansible9-syd1
    Optional:    false
  static-testing-dir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  host-pvc
    ReadOnly:   false
  default-token-hmnph:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hmnph
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  24s               default-scheduler  Successfully assigned default/test-pods-namespace-lo-podname to gke-bmaas-testing-prow-c-default-pool-2b0e9e71-48bo
  Normal   Pulled     19s               kubelet            Successfully pulled image "quay.io/ansible/ansible-runner:latest" in 4.053418026s
  Warning  Failed     19s               kubelet            Error: failed to generate container "e9758aa77c0b804e0052340cf23a6fc857fdbe0edea1a5d5618cbe7fbca9b23b" spec: failed to generate spec: failed to mkdir "/local_test": mkdir /local_test: read-only file system
  Normal   Pulled     19s               kubelet            Successfully pulled image "quay.io/ansible/ansible-runner:latest" in 174.279052ms
  Warning  Failed     19s               kubelet            Error: failed to generate container "ba90421ce95cff74df7acdafdf36a5941a2ce1e12303ffaa355d2ef411330faa" spec: failed to generate spec: failed to mkdir "/local_test": mkdir /local_test: read-only file system
  Normal   Pulling    8s (x3 over 23s)  kubelet            Pulling image "quay.io/ansible/ansible-runner:latest"
  Normal   Pulled     8s                kubelet            Successfully pulled image "quay.io/ansible/ansible-runner:latest" in 146.695961ms
  Warning  Failed     8s                kubelet            Error: failed to generate container "1be7400cd32e19290c992ea518428e546b5c289d8f55d6f83a8ed4e85dc5e47c" spec: failed to generate spec: failed to mkdir "/local_test": mkdir /local_test: read-only file system
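
The failed mkdir "/local_test" is the container runtime on the node trying to create the hostPath directory for the mount, and these GKE nodes run Container-Optimized OS (see the kubectl get nodes output above), whose root filesystem is mounted read-only, so a new directory directly under / cannot be created. A plausible workaround, assuming a node path that is writable on COS such as /tmp, is to point the PersistentVolume's hostPath there (or to drop the PV/PVC and use an emptyDir volume if the data does not need to outlive the Pod):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: host-pv
spec:
  storageClassName: localdisk
  persistentVolumeReclaimPolicy: Recycle
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/local_test    # assumption: /tmp is writable on the COS node, unlike /
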