@cweibel
Created January 29, 2020 22:11
vSphere KubeCF: unable to pull app images back from the bits-service registry
Every 2.0s: kubectl -n kubecf-eirini get pods Chriss-MacBook-Pro.local: Wed Jan 29 17:05:34 2020
NAME READY STATUS RESTARTS AGE
cf-env-pickles-d51729e679-0 0/1 ImagePullBackOff 0 34m
Every 2.0s: kubectl -n kubecf get pods Chriss-MacBook-Pro.local: Wed Jan 29 17:05:13 2020
NAME READY STATUS RESTARTS AGE
cf-operator-7c6c74766d-9pxkv 1/1 Running 2 64m
cf-operator-quarks-job-67d7b6fdd6-4ghcx 1/1 Running 2 64m
kubecf-adapter-0 4/4 Running 0 58m
kubecf-api-0 15/15 Running 1 58m
kubecf-bits-0 6/6 Running 0 58m
kubecf-bosh-dns-59cd464989-987bb 1/1 Running 0 58m
kubecf-bosh-dns-59cd464989-f848f 1/1 Running 0 58m
kubecf-cc-worker-0 4/4 Running 0 58m
kubecf-credhub-0 5/5 Running 0 58m
kubecf-database-0 2/2 Running 0 58m
kubecf-diego-api-0 6/6 Running 2 58m
kubecf-doppler-0 9/9 Running 0 58m
kubecf-eirini-0 9/9 Running 0 58m
kubecf-log-api-0 7/7 Running 0 58m
kubecf-nats-0 4/4 Running 0 58m
kubecf-router-0 5/5 Running 0 58m
kubecf-routing-api-0 4/4 Running 0 58m
kubecf-scheduler-0 8/8 Running 0 58m
kubecf-singleton-blobstore-0 6/6 Running 0 58m
kubecf-tcp-router-0 5/5 Running 0 58m
kubecf-uaa-0 6/6 Running 0 58m
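To pick the failing pod out of listings like the above, the STATUS column can be filtered. A minimal sketch over a captured header and rows (in real use you would pipe `kubectl get pods` output instead of `printf`):

```shell
# Print name and status of every pod whose STATUS is not Running.
# The sample lines mirror the captures above; pipe real `kubectl get pods`
# output into the awk filter instead.
printf '%s\n' \
  'NAME                          READY  STATUS            RESTARTS  AGE' \
  'cf-env-pickles-d51729e679-0   0/1    ImagePullBackOff  0         34m' \
  'kubecf-api-0                  15/15  Running           1         58m' |
  awk 'NR > 1 && $3 != "Running" { print $1, $3 }'
# -> cf-env-pickles-d51729e679-0 ImagePullBackOff
```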
Install commands used:
helm install cf-operator \
--namespace kubecf \
https://github.com/cloudfoundry-incubator/cf-operator/releases/download/v1.0.0/cf-operator-v1.0.0-1.g424dd0b3.tgz
helm install kubecf \
--namespace kubecf \
--values /Users/chris/projects/kubecf/kubecf/values.yaml \
https://scf-v3.s3.amazonaws.com/kubecf-0.1.0-c674c02.tgz
➜ ~ git:(master) ✗ kubectl describe pod cf-env-pickles-d51729e679-0 -n kubecf-eirini
Name: cf-env-pickles-d51729e679-0
Namespace: kubecf-eirini
Priority: 0
Node: 6ecce249-b951-4e45-bc01-2ce8c15520e5.k8s/10.128.57.198
Start Time: Wed, 29 Jan 2020 16:30:58 -0500
Labels: cloudfoundry.org/app_guid=d2bdfd3c-1ba1-4656-b63d-d9ed7efc860f
cloudfoundry.org/guid=d2bdfd3c-1ba1-4656-b63d-d9ed7efc860f
cloudfoundry.org/process_type=web
cloudfoundry.org/rootfs-version=
cloudfoundry.org/source_type=APP
cloudfoundry.org/version=4996bed4-2381-46e8-8249-ffe40e8ce929
controller-revision-hash=cf-env-pickles-d51729e679-76c86c6f7b
statefulset.kubernetes.io/pod-name=cf-env-pickles-d51729e679-0
Annotations: cloudfoundry.org/application_id: d2bdfd3c-1ba1-4656-b63d-d9ed7efc860f
cloudfoundry.org/process_guid: d2bdfd3c-1ba1-4656-b63d-d9ed7efc860f-4996bed4-2381-46e8-8249-ffe40e8ce929
seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status: Pending
IP: 10.244.1.160
IPs:
IP: 10.244.1.160
Controlled By: StatefulSet/cf-env-pickles-d51729e679
Containers:
opi:
Container ID:
Image: 127.0.0.1:32123/cloudfoundry/268c8482-568b-4096-b163-8587f3b9cce2:4553e4dff16dcfb4c80f759b06e02560b40dc0b3
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
Command:
dumb-init
--
/lifecycle/launch
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
ephemeral-storage: 1024M
memory: 256M
Requests:
cpu: 30m
ephemeral-storage: 1024M
memory: 256M
Liveness: tcp-socket :8080 delay=0s timeout=1s period=10s #success=1 #failure=4
Readiness: tcp-socket :8080 delay=0s timeout=1s period=10s #success=1 #failure=1
Environment:
VCAP_SERVICES: {}
VCAP_APP_PORT: 8080
TMPDIR: /home/vcap/tmp
PATH: /usr/local/bin:/usr/bin:/bin
USER: vcap
MEMORY_LIMIT: 256m
VCAP_APPLICATION: {"cf_api":"https://api.10.128.57.200.xip.io","limits":{"fds":16384,"mem":256,"disk":1024},"application_name":"cf-env","application_uris":["cf-env.10.128.57.200.xip.io"],"name":"cf-env","space_name":"pickles","space_id":"b7426297-91c0-40cf-93af-0e872877ffe7","organization_id":"4a0f6958-a774-4b6f-aa60-a7cb8deb6311","organization_name":"drnic","uris":["cf-env.10.128.57.200.xip.io"],"process_id":"d2bdfd3c-1ba1-4656-b63d-d9ed7efc860f","process_type":"web","application_id":"d2bdfd3c-1ba1-4656-b63d-d9ed7efc860f","version":"4996bed4-2381-46e8-8249-ffe40e8ce929","application_version":"4996bed4-2381-46e8-8249-ffe40e8ce929"}
CF_INSTANCE_ADDR: 0.0.0.0:8080
VCAP_APP_HOST: 0.0.0.0
LANG: en_US.UTF-8
HOME: /home/vcap/app
CF_INSTANCE_PORTS: [{"external":8080,"internal":8080}]
PORT: 8080
CF_INSTANCE_PORT: 8080
START_COMMAND: rackup -p $PORT
POD_NAME: cf-env-pickles-d51729e679-0 (v1:metadata.name)
CF_INSTANCE_IP: (v1:status.podIP)
CF_INSTANCE_INTERNAL_IP: (v1:status.podIP)
EIRINI_SSH_KEY: <set to the key 'public_key' in secret 'd2bdfd3c-1ba1-4656-b63d-d9ed7efc860f-4996bed4-2381-46e8-8249-ffe40e8ce929-0-ssh-key-meta'> Optional: false
EIRINI_HOST_KEY: <set to the key 'private_key' in secret 'd2bdfd3c-1ba1-4656-b63d-d9ed7efc860f-4996bed4-2381-46e8-8249-ffe40e8ce929-0-ssh-key-meta'> Optional: false
Mounts: <none>
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes: <none>
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned kubecf-eirini/cf-env-pickles-d51729e679-0 to 6ecce249-b951-4e45-bc01-2ce8c15520e5.k8s
Normal Pulling 5m35s (x4 over 7m3s) kubelet, 6ecce249-b951-4e45-bc01-2ce8c15520e5.k8s Pulling image "127.0.0.1:32123/cloudfoundry/268c8482-568b-4096-b163-8587f3b9cce2:4553e4dff16dcfb4c80f759b06e02560b40dc0b3"
Warning Failed 5m35s (x4 over 7m3s) kubelet, 6ecce249-b951-4e45-bc01-2ce8c15520e5.k8s Failed to pull image "127.0.0.1:32123/cloudfoundry/268c8482-568b-4096-b163-8587f3b9cce2:4553e4dff16dcfb4c80f759b06e02560b40dc0b3": rpc error: code = Unknown desc = failed to resolve image "127.0.0.1:32123/cloudfoundry/268c8482-568b-4096-b163-8587f3b9cce2:4553e4dff16dcfb4c80f759b06e02560b40dc0b3": no available registry endpoint: failed to do request: Head https://127.0.0.1:32123/v2/cloudfoundry/268c8482-568b-4096-b163-8587f3b9cce2/manifests/4553e4dff16dcfb4c80f759b06e02560b40dc0b3: dial tcp 127.0.0.1:32123: connect: connection refused
Warning Failed 5m35s (x4 over 7m3s) kubelet, 6ecce249-b951-4e45-bc01-2ce8c15520e5.k8s Error: ErrImagePull
Warning Failed 5m23s (x6 over 7m3s) kubelet, 6ecce249-b951-4e45-bc01-2ce8c15520e5.k8s Error: ImagePullBackOff
Normal BackOff 119s (x20 over 7m3s) kubelet, 6ecce249-b951-4e45-bc01-2ce8c15520e5.k8s Back-off pulling image "127.0.0.1:32123/cloudfoundry/268c8482-568b-4096-b163-8587f3b9cce2:4553e4dff16dcfb4c80f759b06e02560b40dc0b3"
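The events show the kubelet pulling from `127.0.0.1:32123`, i.e. each node's own loopback, so the pull can only succeed if the bits registry NodePort is actually reachable on the pulling node itself. Splitting the image reference makes the registry endpoint explicit; a sketch using the reference copied from the events above:

```shell
# Split an Eirini staged-app image reference into registry endpoint,
# repository, and tag. The registry endpoint is contacted first
# (HEAD /v2/<repo>/manifests/<tag>), and here it is the node's loopback.
image='127.0.0.1:32123/cloudfoundry/268c8482-568b-4096-b163-8587f3b9cce2:4553e4dff16dcfb4c80f759b06e02560b40dc0b3'
registry=${image%%/*}   # host:port before the first slash
rest=${image#*/}
repo=${rest%:*}
tag=${rest##*:}
echo "registry=$registry"   # -> registry=127.0.0.1:32123
echo "repo=$repo"
echo "tag=$tag"
```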
bits-service registry logs:
{"level":"error","ts":1580334540.4490805,"caller":"http/server.go:3010","msg":"http: TLS handshake error from 10.244.2.128:64311: remote error: tls: bad certificate"}
{"level":"info","ts":1580334729.1817782,"caller":"middlewares/logger_middleware.go:31","msg":"HTTP Request started","request-id":4170885844708979246,"vcap-request-id":"","host":"10.128.57.200:32123","method":"GET","path":"/v2/","user-agent":"docker/18.09.0 go/go1.10.4 git-commit/4d60db4 kernel/4.4.0-31-generic os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))"}
{"level":"info","ts":1580334729.1819937,"caller":"middlewares/logger_middleware.go:62","msg":"HTTP Request completed","request-id":4170885844708979246,"vcap-request-id":"","host":"10.128.57.200:32123","method":"GET","path":"/v2/","status-code":400,"body-size":112,"duration":0.000235648,"user-agent":"docker/18.09.0 go/go1.10.4 git-commit/4d60db4 kernel/4.4.0-31-generic os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))"}
{"level":"info","ts":1580334729.2078195,"caller":"middlewares/logger_middleware.go:31","msg":"HTTP Request started","request-id":3077636147492460219,"vcap-request-id":"","host":"10.128.57.200:32123","method":"GET","path":"/v2/cloudfoundry/268c8482-568b-4096-b163-8587f3b9cce2/manifests/4553e4dff16dcfb4c80f759b06e02560b40dc0b3","user-agent":"docker/18.09.0 go/go1.10.4 git-commit/4d60db4 kernel/4.4.0-31-generic os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))"}
{"level":"info","ts":1580334729.2081017,"caller":"middlewares/logger_middleware.go:62","msg":"HTTP Request completed","request-id":3077636147492460219,"vcap-request-id":"","host":"10.128.57.200:32123","method":"GET","path":"/v2/cloudfoundry/268c8482-568b-4096-b163-8587f3b9cce2/manifests/4553e4dff16dcfb4c80f759b06e02560b40dc0b3","status-code":400,"body-size":112,"duration":0.000298801,"user-agent":"docker/18.09.0 go/go1.10.4 git-commit/4d60db4 kernel/4.4.0-31-generic os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))"}
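Note the asymmetry: the registry logs show requests arriving on `10.128.57.200:32123` (a node IP), while the kubelet gets `connection refused` on `127.0.0.1:32123`, which suggests the NodePort answers on node IPs but not on the pulling node's loopback. A quick probe sketch, to be run on the affected node (port taken from the registry NodePort in values.yaml; uses bash's `/dev/tcp`):

```shell
# Check whether anything accepts TCP connections on the registry NodePort.
# Run on the node that fails the pull; swap host for a node IP to compare.
host=127.0.0.1
port=32123
if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
  echo "open"
else
  echo "closed"   # corresponds to the kubelet's 'connection refused'
fi
```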
values.yaml:
system_domain: 10.128.57.200.xip.io
# Set or override job properties. The first level of the map is the instance group name. The second
# level of the map is the job name. E.g.:
# properties:
#   adapter:
#     adapter:
#       scalablesyslog:
#         adapter:
#           logs:
#             addr: kubecf-log-api:8082
#
# Eirini Persistence Broker setup example:
#
# properties:
#   eirini:
#     eirini-persi-broker:
#       eirini-persi-broker:
#         service_plans:
#         - id: "default"
#           name: "default"
#           description: "Persistence storage service broker for applications."
#           free: true
#           kube_storage_class: "default"
#           default_size: "1Gi"
properties: {}
kube:
  # The storage class to be used for the instance groups that need it (e.g. bits, database and
  # singleton-blobstore). If it's not set, the default storage class will be used.
  storage_class: vsphere
  # The service_cluster_ip_range and pod_cluster_ip_range are used by the internal security group
  # definition to allow apps to communicate with internal service brokers (e.g. credhub).
  # service_cluster_ip_range can be fetched with the following command, assuming that the API
  # server started with the `--service-cluster-ip-range` flag:
  #   kubectl cluster-info dump --output yaml \
  #     | awk 'match($0, /service-cluster-ip-range=(.*)/, range) { print range[1] }'
  # The default value for `--service-cluster-ip-range` is 10.0.0.0/24.
  service_cluster_ip_range: 10.245.0.0/24
  # pod_cluster_ip_range can be fetched with the following command, assuming that the controller
  # manager started with the `--cluster-cidr` flag:
  #   kubectl cluster-info dump --output yaml \
  #     | awk 'match($0, /cluster-cidr=(.*)/, range) { print range[1] }'
  # There is no default value for `--cluster-cidr`.
  pod_cluster_ip_range: 10.244.0.0/16
releases:
  # The defaults for all releases, where we do not otherwise override them.
  defaults:
    url: docker.io/cfcontainerization
    stemcell:
      os: SLE_15_SP1
      version: 15.1-7.0.0_374.gb8e8e6af
  app-autoscaler:
    version: 3.0.0
    stemcell:
      os: opensuse-42.3
      version: 36.g03b4653-30.80-7.0.0_367.g6b06e343
  bits-service:
    stemcell:
      os: opensuse-42.3
      version: 36.g03b4653-30.80-7.0.0_348.gc8fb3864
  brain-tests:
    stemcell:
      os: opensuse-42.3
      version: 36.g03b4653-30.80-7.0.0_372.ge3509601
  cf-mysql:
    version: 36.19.0
    stemcell:
      os: opensuse-42.3
      version: 36.g03b4653-30.80-7.0.0_360.g0ec8d681
  postgres:
    version: "39"
    stemcell:
      os: opensuse-42.3
      version: 36.g03b4653-30.80-7.0.0_367.g6b06e343
  sle15:
    url: registry.suse.com/cap-staging
    version: "10.70"
    stemcell:
      os: SLE-12SP4
      version: 11.g2837aef-0.233-7.0.0_372.ge3509601
multi_az: false
high_availability: false
# Sizing takes precedence over the high_availability property. I.e. setting the instance count
# for an instance group greater than 1 will make it highly available.
sizing:
  adapter:
    instances: ~
  api:
    instances: ~
  asactors:
    instances: ~
  asapi:
    instances: ~
  asmetrics:
    instances: ~
  asnozzle:
    instances: ~
  auctioneer:
    instances: ~
  bits:
    instances: ~
  cc_worker:
    instances: ~
  credhub:
    instances: ~
  diego_api:
    instances: ~
  diego_cell:
    instances: ~
  doppler:
    instances: ~
  eirini:
    instances: ~
  log_api:
    instances: ~
  nats:
    instances: ~
  router:
    instances: ~
  routing_api:
    instances: ~
  scheduler:
    instances: ~
  uaa:
    instances: ~
  tcp_router:
    instances: ~
services:
  router:
    annotations: ~
    type: LoadBalancer
    externalIPs: []
    clusterIP: ~
  ssh-proxy:
    annotations: ~
    type: LoadBalancer
    externalIPs: []
    clusterIP: ~
  tcp-router:
    annotations: ~
    type: LoadBalancer
    externalIPs: []
    clusterIP: ~
    port_range:
      start: 20000
      end: 20008
#services:
#  router:
#    type: NodePort
##    externalIPs: []
##    clusterIP: 10.245.0.220
#  ssh-proxy:
#    type: NodePort
##    externalIPs: []
##    clusterIP: 10.245.0.221
#  tcp-router:
#    type: NodePort
##    externalIPs: []
##    clusterIP: 10.245.0.222
#    port_range:
#      start: 20000
#      end: 20008
features:
  eirini:
    enabled: true
    registry:
      service:
        nodePort: 32123
  ingress:
    enabled: true #false
    tls:
      crt: ~
      key: ~
    annotations: {}
    labels: {}
  # TODO: suse_buildpacks should default to true after the opensuse-stemcell-based images are
  # published to docker.io.
  suse_buildpacks: false
  autoscaler:
    enabled: false
  sle15_stack:
    enabled: true
  # external_database disables the embedded database and allows using an external, already seeded,
  # database.
  # The database type can be either 'mysql' or 'postgres'.
  external_database:
    enabled: false
    type: ~
    host: ~
    port: ~
    databases:
      uaa:
        name: uaa
        password: ~
        username: ~
      cc:
        name: cloud_controller
        password: ~
        username: ~
      bbs:
        name: diego
        password: ~
        username: ~
      routing_api:
        name: routing-api
        password: ~
        username: ~
      policy_server:
        name: network_policy
        password: ~
        username: ~
      silk_controller:
        name: network_connectivity
        password: ~
        username: ~
      locket:
        name: locket
        password: ~
        username: ~
      credhub:
        name: credhub
        password: ~
        username: ~
# Enable or disable instance groups for the different test suites.
# Only smoke tests should be run in production environments.
testing:
  brain_tests:
    enabled: false
  cf_acceptance_tests:
    enabled: false
  smoke_tests:
    enabled: true
operations:
  # A list of configmap names that should be applied to the BOSH manifest.
  custom: []
k8s-host-url: ""
k8s-service-token: ""
k8s-service-username: ""
k8s-node-ca: ""
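The comments in values.yaml fetch the cluster CIDRs with GNU awk's three-argument `match()`, which mawk and BusyBox awk lack. A sed equivalent works there; a sketch over a sample line standing in for real `kubectl cluster-info dump --output yaml` output:

```shell
# Extract the CIDR from an API-server flag line. The sample line is a stand-in
# for real `kubectl cluster-info dump --output yaml` output; pipe that instead.
echo '      "--service-cluster-ip-range=10.245.0.0/24",' |
  sed -n 's|.*service-cluster-ip-range=\([0-9./]*\).*|\1|p'
# -> 10.245.0.0/24
```

Using `|` as the sed delimiter keeps the `/` in the CIDR literal without escaping.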