@CaptTofu
Created November 5, 2017 00:18
kubectl get nodes
NAME                           STATUS                     AGE       VERSION
kubernetes-master              Ready,SchedulingDisabled   4h        v1.6.12
kubernetes-minion-group-2ck6   Ready                      4h        v1.6.12
kubernetes-minion-group-7qpx   Ready                      4h        v1.6.12
kubernetes-minion-group-jjd6   Ready                      4h        v1.6.12
patg@dynbox:~/kubernetes/vitess/examples/kubernetes/statefulset$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
etcd-global-0jp7l               1/1       Running   0          1m
etcd-global-1hk60               1/1       Running   0          1m
etcd-global-gswb8               1/1       Running   0          1m
etcd-zone1-38gk5                1/1       Running   0          1m
etcd-zone1-4q2js                1/1       Running   0          1m
etcd-zone1-mv8zm                1/1       Running   0          1m
orchestrator-4053657919-7c4kk   2/2       Running   0          1m
vtctld-776985178-625nh          1/1       Running   0          1m
vtgate-zone1-3491021033-12mcs   1/1       Running   0          1m
vtgate-zone1-3491021033-25lv2   1/1       Running   0          1m
vtgate-zone1-3491021033-jn9jd   1/1       Running   0          1m
zone1-main-80-x-replica-0       0/2       Pending   0          1m
zone1-main-x-80-replica-0       0/2       Pending   0          1m
./init-vitess.sh
+ ./kvtctl.sh InitShardMaster -force main/-80 zone1-1504789100
Starting port forwarding to vtctld...
E1104 20:09:22.329764 10702 main.go:61] Remote error: rpc error: code = Unknown desc = node doesn't exist
+ ./kvtctl.sh InitShardMaster -force main/80- zone1-1015307700
Starting port forwarding to vtctld...
E1104 20:09:24.278449 11601 main.go:61] Remote error: rpc error: code = Unknown desc = node doesn't exist
++ cat schema.sql
+ ./kvtctl.sh ApplySchema -sql 'CREATE TABLE messages (
  page BIGINT(20) UNSIGNED,
  time_created_ns BIGINT(20) UNSIGNED,
  message VARCHAR(10000),
  PRIMARY KEY (page, time_created_ns)
) ENGINE=InnoDB' main
Starting port forwarding to vtctld...
E1104 20:09:25.637080 12577 main.go:61] Remote error: rpc error: code = Unknown desc = unable to get shard names for keyspace: main, error: node doesn't exist
++ cat vschema.json
+ ./kvtctl.sh ApplyVSchema -vschema '{
  "sharded": true,
  "vindexes": {
    "hash": {
      "type": "hash"
    }
  },
  "tables": {
    "messages": {
      "column_vindexes": [
        {
          "column": "page",
          "name": "hash"
        }
      ]
    }
  }
}' main
Starting port forwarding to vtctld...
Uploaded VSchema object:
{
  "sharded": true,
  "vindexes": {
    "hash": {
      "type": "hash"
    }
  },
  "tables": {
    "messages": {
      "column_vindexes": [
        {
          "column": "page",
          "name": "hash"
        }
      ]
    }
  }
}
If this is not what you expected, check the input data (as JSON parsing will skip unexpected fields).
+ ./kvtctl.sh Backup zone1-1504789101
Starting port forwarding to vtctld...
E1104 20:09:30.830605 14548 main.go:61] Remote error: rpc error: code = Unknown desc = node doesn't exist
+ ./kvtctl.sh Backup zone1-1015307701
Starting port forwarding to vtctld...
E1104 20:09:32.366231 15513 main.go:61] Remote error: rpc error: code = Unknown desc = node doesn't exist
kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
etcd-global-0jp7l               1/1       Running   0          17m
etcd-global-1hk60               1/1       Running   0          17m
etcd-global-gswb8               1/1       Running   0          17m
etcd-zone1-38gk5                1/1       Running   0          17m
etcd-zone1-4q2js                1/1       Running   0          17m
etcd-zone1-mv8zm                1/1       Running   0          17m
loadtest-3js0j                  1/1       Running   0          <invalid>
orchestrator-4053657919-7c4kk   2/2       Running   0          17m
vtctld-776985178-625nh          1/1       Running   0          17m
vtgate-zone1-3491021033-12mcs   1/1       Running   0          17m
vtgate-zone1-3491021033-25lv2   1/1       Running   0          17m
vtgate-zone1-3491021033-jn9jd   1/1       Running   0          17m
zone1-main-80-x-replica-0       0/2       Pending   0          17m
zone1-main-x-80-replica-0       0/2       Pending   0          17m
kubectl describe pod zone1-main-80-x-replica-0
Name:           zone1-main-80-x-replica-0
Namespace:      default
Node:           <none>
Labels:         app=vitess
                cell=zone1
                component=vttablet
                keyspace=main
                shard=80-x
                type=replica
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"StatefulSet","namespace":"default","name":"zone1-main-80-x-replica","uid":"ae4781cf-c1bb-11e7-b855-42010a8...
                kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for init container init-vtdataroot; cpu request for init container init-tablet-uid
                pod.alpha.kubernetes.io/initialized=true
                pod.beta.kubernetes.io/hostname=zone1-main-80-x-replica-0
                pod.beta.kubernetes.io/subdomain=vttablet
Status:         Pending
IP:
Created By:     StatefulSet/zone1-main-80-x-replica
Controlled By:  StatefulSet/zone1-main-80-x-replica
Init Containers:
  init-vtdataroot:
    Image:  vitess/lite:latest
    Port:   <none>
    Command:
      bash
      -c
      set -ex; mkdir -p $VTDATAROOT/tmp; chown vitess:vitess $VTDATAROOT $VTDATAROOT/tmp;
    Requests:
      cpu:  100m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-c1bq3 (ro)
      /vt/vtdataroot from vtdataroot (rw)
  init-tablet-uid:
    Image:  vitess/lite:latest
    Port:   <none>
    Command:
      bash
      -c
      set -ex
      # Split pod name (via hostname) into prefix and ordinal index.
      hostname=$(hostname -s)
      [[ $hostname =~ ^(.+)-([0-9]+)$ ]] || exit 1
      pod_prefix=${BASH_REMATCH[1]}
      pod_index=${BASH_REMATCH[2]}
      # Prepend cell name since tablet UIDs must be globally unique.
      uid_name=zone1-$pod_prefix
      # Take MD5 hash of cellname-podprefix.
      uid_hash=$(echo -n $uid_name | md5sum | awk "{print \$1}")
      # Take first 24 bits of hash, convert to decimal.
      # Shift left 2 decimal digits, add in index.
      tablet_uid=$((16#${uid_hash:0:6} * 100 + $pod_index))
      # Save UID for other containers to read.
      mkdir -p $VTDATAROOT/init
      echo $tablet_uid > $VTDATAROOT/init/tablet-uid
      # Tell MySQL what hostname to report in SHOW SLAVE HOSTS.
      # Orchestrator looks there, so it should match -tablet_hostname above.
      echo report-host=$hostname.vttablet > $VTDATAROOT/init/report-host.cnf
    Requests:
      cpu:  100m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-c1bq3 (ro)
      /vt/vtdataroot from vtdataroot (rw)
Containers:
  vttablet:
    Image:  vitess/lite:latest
    Ports:  15002/TCP, 16002/TCP
    Command:
      bash
      -c
      set -ex
      eval exec /vt/bin/vttablet $(cat <<END_OF_COMMAND
      -topo_implementation "etcd"
      -etcd_global_addrs "http://etcd-global:4001"
      -log_dir "$VTDATAROOT/tmp"
      -alsologtostderr
      -port 15002
      -grpc_port 16002
      -service_map "grpc-queryservice,grpc-tabletmanager,grpc-updatestream"
      -tablet-path "zone1-$(cat $VTDATAROOT/init/tablet-uid)"
      -tablet_hostname "$(hostname).vttablet"
      -init_keyspace "main"
      -init_shard "80-"
      -init_tablet_type "replica"
      -health_check_interval "5s"
      -mysqlctl_socket "$VTDATAROOT/mysqlctl.sock"
      -db-config-app-uname "vt_app"
      -db-config-app-dbname "vt_main"
      -db-config-app-charset "utf8"
      -db-config-dba-uname "vt_dba"
      -db-config-dba-dbname "vt_main"
      -db-config-dba-charset "utf8"
      -db-config-repl-uname "vt_repl"
      -db-config-repl-dbname "vt_main"
      -db-config-repl-charset "utf8"
      -db-config-filtered-uname "vt_filtered"
      -db-config-filtered-dbname "vt_main"
      -db-config-filtered-charset "utf8"
      -enable_semi_sync
      -enable_replication_reporter
      -orc_api_url "http://orchestrator/api"
      -orc_discover_interval "5m"
      -restore_from_backup
      -backup_storage_implementation="gcs"
      -gcs_backup_storage_bucket="kubecon-2017-backups"
      END_OF_COMMAND
      )
    Limits:
      cpu:     500m
      memory:  1Gi
    Requests:
      cpu:     500m
      memory:  1Gi
    Liveness:     http-get http://:15002/debug/vars delay=60s timeout=10s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /dev/log from syslog (rw)
      /etc/ssl/certs/ca-certificates.crt from certs (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-c1bq3 (ro)
      /vt/vtdataroot from vtdataroot (rw)
  mysql:
    Image:  vitess/lite:latest
    Port:   <none>
    Command:
      bash
      -c
      set -ex
      eval exec /vt/bin/mysqlctld $(cat <<END_OF_COMMAND
      -log_dir "$VTDATAROOT/tmp"
      -alsologtostderr
      -tablet_uid "$(cat $VTDATAROOT/init/tablet-uid)"
      -socket_file "$VTDATAROOT/mysqlctl.sock"
      -db-config-dba-uname "vt_dba"
      -db-config-dba-charset "utf8"
      -init_db_sql_file "$VTROOT/config/init_db.sql"
      END_OF_COMMAND
      )
    Limits:
      cpu:     500m
      memory:  1Gi
    Requests:
      cpu:     500m
      memory:  1Gi
    Environment:
      EXTRA_MY_CNF:  /vt/vtdataroot/init/report-host.cnf:/vt/config/mycnf/master_mysql56.cnf
    Mounts:
      /dev/log from syslog (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-c1bq3 (ro)
      /vt/vtdataroot from vtdataroot (rw)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  vtdataroot:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  vtdataroot-zone1-main-80-x-replica-0
    ReadOnly:   false
  syslog:
    Type:  HostPath (bare host directory volume)
    Path:  /dev/log
  certs:
    Type:  HostPath (bare host directory volume)
    Path:  /etc/ssl/certs/ca-certificates.crt
  default-token-c1bq3:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-c1bq3
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  FirstSeen  LastSeen   Count  From               SubObjectPath  Type     Reason            Message
  ---------  --------   -----  ----               -------------  ----     ------            -------
  20m        20m        4      default-scheduler                 Warning  FailedScheduling  [SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "vtdataroot-zone1-main-80-x-replica-0", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "vtdataroot-zone1-main-80-x-replica-0", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "vtdataroot-zone1-main-80-x-replica-0", which is unexpected.]
  20m        <invalid>  73     default-scheduler                 Warning  FailedScheduling  No nodes are available that match all of the following predicates:: Insufficient cpu (3).
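The last event is the real blocker: the replica pods stay Pending because no node has enough free CPU (the earlier PVC-not-bound warnings are typically transient while dynamic provisioning completes). Kubernetes computes a pod's effective CPU request as the larger of its biggest init-container request and the sum of its app-container requests; a quick sanity check with the figures from the describe output above:

```python
# CPU requests from the `kubectl describe pod` output above, in millicores.
init_requests_m = [100, 100]   # init-vtdataroot, init-tablet-uid
app_requests_m = [500, 500]    # vttablet, mysql

# Effective pod request = max(largest init request, sum of app requests).
effective_request_m = max(max(init_requests_m), sum(app_requests_m))
print(f"each replica pod asks the scheduler for {effective_request_m}m CPU")
```

So each replica pod needs a full CPU, and with etcd, vtgate, vtctld, and orchestrator already spread across the minions, none of the three schedulable nodes has that much left, hence "Insufficient cpu (3)" (the parenthesized number is the count of nodes failing the predicate).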
patg@dynbox:~/kubernetes/vitess/examples/kubernetes/statefulset$
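For reference, the init-tablet-uid script shown in the pod spec above can be mirrored in Python (a sketch for illustration only; `tablet_uid` is a hypothetical helper, not part of Vitess). It explains where tablet aliases like zone1-1504789100 in the InitShardMaster and Backup calls come from: the last two decimal digits are the StatefulSet ordinal, the rest is derived from an MD5 hash of the cell name plus pod prefix.

```python
import hashlib
import re

def tablet_uid(cell: str, pod_name: str) -> int:
    # Same regex the init script uses to split the pod name into a
    # prefix and an ordinal index (e.g. "zone1-main-80-x-replica", "0").
    m = re.match(r"^(.+)-([0-9]+)$", pod_name)
    if m is None:
        raise ValueError(f"unexpected pod name: {pod_name}")
    pod_prefix, pod_index = m.group(1), int(m.group(2))
    # Prepend the cell name, MD5-hash it, keep the first 24 bits,
    # shift left two decimal digits, and add the ordinal.
    uid_name = f"{cell}-{pod_prefix}"
    uid_hash = hashlib.md5(uid_name.encode()).hexdigest()
    return int(uid_hash[:6], 16) * 100 + pod_index

# Pods of the same StatefulSet share the hash-derived prefix, so their
# tablet UIDs differ only in the last two decimal digits:
print(tablet_uid("zone1", "zone1-main-80-x-replica-0"))
print(tablet_uid("zone1", "zone1-main-80-x-replica-1"))
```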