@CaptTofu
Created November 5, 2017 16:15
kubectl describe pod zone1-main-x-80-replica-0
Name: zone1-main-x-80-replica-0
Namespace: default
Node: gke-example-default-pool-15f6fa98-vfnj/10.142.0.3
Start Time: Sun, 05 Nov 2017 11:10:16 -0500
Labels: app=vitess
cell=zone1
component=vttablet
controller-revision-hash=zone1-main-x-80-replica-3616661517
keyspace=main
shard=x-80
type=replica
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"StatefulSet","namespace":"default","name":"zone1-main-x-80-replica","uid":"cb5eed8c-c243-11e7-9405-42010a8...
kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for init container init-vtdataroot; cpu request for init container init-tablet-uid
pod.alpha.kubernetes.io/initialized=true
Status: Running
IP: 10.0.3.6
Created By: StatefulSet/zone1-main-x-80-replica
Controlled By: StatefulSet/zone1-main-x-80-replica
Init Containers:
init-vtdataroot:
Container ID: docker://603fa71cea17303ce1dd72dc849a883f547a09e8161ded0bf28d23dde1b4bd19
Image: vitess/lite:latest
Image ID: docker-pullable://vitess/lite@sha256:af0dc3c976433c3330c86ee3da86b2eb34be582e57e8e3c078ec38e111870bc7
Port: <none>
Command:
bash
-c
set -ex; mkdir -p $VTDATAROOT/tmp; chown vitess:vitess $VTDATAROOT $VTDATAROOT/tmp;
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 05 Nov 2017 11:10:54 -0500
Finished: Sun, 05 Nov 2017 11:10:54 -0500
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dc83c (ro)
/vt/vtdataroot from vtdataroot (rw)
init-tablet-uid:
Container ID: docker://730d139e9c9ac92d9fcbe202a41d4c3fe7936063771142285d342bf4e229f41a
Image: vitess/lite:latest
Image ID: docker-pullable://vitess/lite@sha256:af0dc3c976433c3330c86ee3da86b2eb34be582e57e8e3c078ec38e111870bc7
Port: <none>
Command:
bash
-c
set -ex
# Split pod name (via hostname) into prefix and ordinal index.
hostname=$(hostname -s)
[[ $hostname =~ ^(.+)-([0-9]+)$ ]] || exit 1
pod_prefix=${BASH_REMATCH[1]}
pod_index=${BASH_REMATCH[2]}
# Prepend cell name since tablet UIDs must be globally unique.
uid_name=zone1-$pod_prefix
# Take MD5 hash of cellname-podprefix.
uid_hash=$(echo -n $uid_name | md5sum | awk "{print \$1}")
# Take first 24 bits of hash, convert to decimal.
# Shift left 2 decimal digits, add in index.
tablet_uid=$((16#${uid_hash:0:6} * 100 + $pod_index))
# Save UID for other containers to read.
mkdir -p $VTDATAROOT/init
echo $tablet_uid > $VTDATAROOT/init/tablet-uid
# Tell MySQL what hostname to report in SHOW SLAVE HOSTS.
# Orchestrator looks there, so it should match -tablet_hostname above.
echo report-host=$hostname.vttablet > $VTDATAROOT/init/report-host.cnf
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 05 Nov 2017 11:10:55 -0500
Finished: Sun, 05 Nov 2017 11:10:55 -0500
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dc83c (ro)
/vt/vtdataroot from vtdataroot (rw)
Containers:
vttablet:
Container ID: docker://b2abb9fd981760711f6c783e05a90b529e49d1501a31422ce536014e9499fc60
Image: vitess/lite:latest
Image ID: docker-pullable://vitess/lite@sha256:af0dc3c976433c3330c86ee3da86b2eb34be582e57e8e3c078ec38e111870bc7
Ports: 15002/TCP, 16002/TCP
Command:
bash
-c
set -ex
eval exec /vt/bin/vttablet $(cat <<END_OF_COMMAND
-topo_implementation "etcd"
-etcd_global_addrs "http://etcd-global:4001"
-log_dir "$VTDATAROOT/tmp"
-alsologtostderr
-port 15002
-grpc_port 16002
-service_map "grpc-queryservice,grpc-tabletmanager,grpc-updatestream"
-tablet-path "zone1-$(cat $VTDATAROOT/init/tablet-uid)"
-tablet_hostname "$(hostname).vttablet"
-init_keyspace "main"
-init_shard "-80"
-init_tablet_type "replica"
-health_check_interval "5s"
-mysqlctl_socket "$VTDATAROOT/mysqlctl.sock"
-db-config-app-uname "vt_app"
-db-config-app-dbname "vt_main"
-db-config-app-charset "utf8"
-db-config-dba-uname "vt_dba"
-db-config-dba-dbname "vt_main"
-db-config-dba-charset "utf8"
-db-config-repl-uname "vt_repl"
-db-config-repl-dbname "vt_main"
-db-config-repl-charset "utf8"
-db-config-filtered-uname "vt_filtered"
-db-config-filtered-dbname "vt_main"
-db-config-filtered-charset "utf8"
-enable_semi_sync
-enable_replication_reporter
-orc_api_url "http://orchestrator/api"
-orc_discover_interval "5m"
-restore_from_backup
-backup_storage_implementation="gcs"
-gcs_backup_storage_bucket="kubecon-2017-backups"
END_OF_COMMAND
)
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Sun, 05 Nov 2017 11:13:00 -0500
Finished: Sun, 05 Nov 2017 11:13:02 -0500
Ready: False
Restart Count: 4
Limits:
cpu: 1
memory: 1Gi
Requests:
cpu: 1
memory: 1Gi
Liveness: http-get http://:15002/debug/vars delay=60s timeout=10s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/dev/log from syslog (rw)
/etc/ssl/certs/ca-certificates.crt from certs (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dc83c (ro)
/vt/vtdataroot from vtdataroot (rw)
mysql:
Container ID: docker://f0220f6a514c362335fda249d95f27356cafaa2be500c414c2c363a353a1c06d
Image: vitess/lite:latest
Image ID: docker-pullable://vitess/lite@sha256:af0dc3c976433c3330c86ee3da86b2eb34be582e57e8e3c078ec38e111870bc7
Port: <none>
Command:
bash
-c
set -ex
eval exec /vt/bin/mysqlctld $(cat <<END_OF_COMMAND
-log_dir "$VTDATAROOT/tmp"
-alsologtostderr
-tablet_uid "$(cat $VTDATAROOT/init/tablet-uid)"
-socket_file "$VTDATAROOT/mysqlctl.sock"
-db-config-dba-uname "vt_dba"
-db-config-dba-charset "utf8"
-init_db_sql_file "$VTROOT/config/init_db.sql"
END_OF_COMMAND
)
State: Running
Started: Sun, 05 Nov 2017 11:10:56 -0500
Ready: True
Restart Count: 0
Limits:
cpu: 1
memory: 1Gi
Requests:
cpu: 1
memory: 1Gi
Environment:
EXTRA_MY_CNF: /vt/vtdataroot/init/report-host.cnf:/vt/config/mycnf/master_mysql56.cnf
Mounts:
/dev/log from syslog (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dc83c (ro)
/vt/vtdataroot from vtdataroot (rw)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
vtdataroot:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: vtdataroot-zone1-main-x-80-replica-0
ReadOnly: false
syslog:
Type: HostPath (bare host directory volume)
Path: /dev/log
certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs/ca-certificates.crt
default-token-dc83c:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dc83c
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
2m 2m 4 default-scheduler Warning FailedScheduling PersistentVolumeClaim is not bound: "vtdataroot-zone1-main-x-80-replica-0" (repeated 5 times)
2m 2m 1 default-scheduler Normal Scheduled Successfully assigned zone1-main-x-80-replica-0 to gke-example-default-pool-15f6fa98-vfnj
2m 2m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "syslog"
2m 2m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "certs"
2m 2m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-dc83c"
2m 2m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "pvc-cba3d75d-c243-11e7-9405-42010a8e00e1"
2m 2m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.initContainers{init-vtdataroot} Normal Pulling pulling image "vitess/lite:latest"
1m 1m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.initContainers{init-vtdataroot} Normal Pulled Successfully pulled image "vitess/lite:latest"
1m 1m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.initContainers{init-vtdataroot} Normal Created Created container
1m 1m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.initContainers{init-vtdataroot} Normal Started Started container
1m 1m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.initContainers{init-tablet-uid} Normal Pulling pulling image "vitess/lite:latest"
1m 1m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.initContainers{init-tablet-uid} Normal Pulled Successfully pulled image "vitess/lite:latest"
1m 1m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.initContainers{init-tablet-uid} Normal Started Started container
1m 1m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.initContainers{init-tablet-uid} Normal Created Created container
1m 1m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.containers{mysql} Normal Created Created container
1m 1m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.containers{mysql} Normal Started Started container
1m 1m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.containers{mysql} Normal Pulling pulling image "vitess/lite:latest"
1m 1m 1 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.containers{mysql} Normal Pulled Successfully pulled image "vitess/lite:latest"
1m <invalid> 5 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.containers{vttablet} Normal Created Created container
1m <invalid> 5 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.containers{vttablet} Normal Started Started container
1m <invalid> 5 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.containers{vttablet} Normal Pulling pulling image "vitess/lite:latest"
1m <invalid> 5 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.containers{vttablet} Normal Pulled Successfully pulled image "vitess/lite:latest"
1m <invalid> 13 kubelet, gke-example-default-pool-15f6fa98-vfnj spec.containers{vttablet} Warning BackOff Back-off restarting failed container
1m <invalid> 13 kubelet, gke-example-default-pool-15f6fa98-vfnj Warning FailedSync Error syncing pod
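As an aside, the tablet UID that appears in the `-tablet-path` flag above can be reproduced offline. This is a minimal standalone sketch of the same derivation performed by the init-tablet-uid container, run against the pod name from this gist (no cluster access needed):

```shell
# Re-derive the tablet UID the way the init-tablet-uid script does:
# split the pod name into prefix and ordinal, prepend the cell name,
# hash it, take the first 24 bits, shift left two decimal digits,
# and add the ordinal back in.
hostname=zone1-main-x-80-replica-0
[[ $hostname =~ ^(.+)-([0-9]+)$ ]] || exit 1
pod_prefix=${BASH_REMATCH[1]}   # zone1-main-x-80-replica
pod_index=${BASH_REMATCH[2]}    # 0
# Cell name is prepended because tablet UIDs must be globally unique.
uid_name=zone1-$pod_prefix
uid_hash=$(echo -n "$uid_name" | md5sum | awk '{print $1}')
# First 6 hex digits = 24 bits; *100 leaves the last two decimal
# digits free for the StatefulSet ordinal.
tablet_uid=$((16#${uid_hash:0:6} * 100 + pod_index))
echo "$tablet_uid"
```

For ordinal 0 the result always ends in `00`, which makes it easy to spot which pod in the StatefulSet a given tablet UID belongs to.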
kubectl get pods
NAME READY STATUS RESTARTS AGE
etcd-global-30b7r 1/1 Running 0 3m
etcd-global-ggx3m 1/1 Running 0 3m
etcd-global-nggnw 1/1 Running 0 3m
etcd-zone1-0c8ft 1/1 Running 0 3m
etcd-zone1-glnc9 1/1 Running 0 3m
etcd-zone1-sqpjg 1/1 Running 0 3m
orchestrator-4292490586-dg5qp 2/2 Running 0 3m
vtctld-3332466448-20zfz 1/1 Running 0 3m
vtgate-zone1-4079315694-5c9q7 1/1 Running 0 3m
vtgate-zone1-4079315694-8tjmg 1/1 Running 0 3m
vtgate-zone1-4079315694-x3kgc 1/1 Running 0 3m
zone1-main-80-x-replica-0 1/2 CrashLoopBackOff 5 3m
zone1-main-80-x-replica-1 1/2 Error 5 3m
zone1-main-x-80-replica-0 1/2 CrashLoopBackOff 5 3m
zone1-main-x-80-replica-1 1/2 CrashLoopBackOff 5 3m
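For reference, both the vttablet and mysqlctld containers above assemble their flags with the same `eval exec $(cat <<END_OF_COMMAND ...)` idiom. A small sketch of that pattern in isolation (the `/vt/vtdataroot` value is assumed to match the pod's mount path; `echo` stands in for the real binary):

```shell
# The heredoc lets flag values use shell substitutions ($VTDATAROOT,
# $(cat ...), $(hostname)); the unquoted delimiter means they expand
# when the heredoc is read.
VTDATAROOT=/vt/vtdataroot   # assumed, matching the pod's mount path
cmd=$(cat <<END_OF_COMMAND
-log_dir "$VTDATAROOT/tmp"
-port 15002
END_OF_COMMAND
)
# In the pod this would be: eval exec /vt/bin/vttablet $cmd
# `eval` re-splits the expanded text into argv words and strips the
# quotes; `exec` replaces the shell so the binary becomes PID 1's child
# and receives container signals directly.
echo $cmd
```

The word-splitting step is why the pattern needs `eval`: without it, the entire heredoc body would be passed to the binary as a single argument.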