OpenShift console
W0205 09:56:22.104151 25403 start_master.go:269] assetConfig.loggingPublicURL: invalid value '', Details: required to view aggregated container logs in the console
W0205 09:56:22.104286 25403 start_master.go:269] assetConfig.metricsPublicURL: invalid value '', Details: required to view cluster metrics in the console
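The two warnings above mean the master's `assetConfig` section leaves `loggingPublicURL` and `metricsPublicURL` empty, so the console cannot link to aggregated logs or cluster metrics. A minimal sketch of the relevant `master-config.yaml` fields — the hostnames below are placeholders, not values from this deployment:

```yaml
# Hypothetical excerpt of master-config.yaml; replace the example
# hostnames with the routes actually exposed in your cluster.
assetConfig:
  # URL of the Kibana (aggregated logging) route
  loggingPublicURL: "https://kibana.example.com"
  # URL of the Hawkular Metrics endpoint
  metricsPublicURL: "https://hawkular-metrics.example.com/hawkular/metrics"
```

After editing the config, restart the master for the console to pick up the new URLs.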
I0205 09:56:22.760115 25403 plugins.go:71] No cloud provider specified.
I0205 09:56:22.928115 25403 start_master.go:380] Starting master on 0.0.0.0:8443 (v1.1.1)
I0205 09:56:22.928153 25403 start_master.go:381] Public master address is https://openshift.abecorn.com:8443
I0205 09:56:22.928173 25403 start_master.go:385] Using images from "openshift/origin-<component>:v1.1.1"
2016-02-05 09:56:23.948835 I | etcdserver: recovered store from snapshot at index 2050206
2016-02-05 09:56:23.948875 I | etcdserver: name = openshift.local
2016-02-05 09:56:23.948884 I | etcdserver: data dir = /home/dpeterson/openshift/openshift.local.etcd
2016-02-05 09:56:23.948894 I | etcdserver: member dir = /home/dpeterson/openshift/openshift.local.etcd/member
2016-02-05 09:56:23.948901 I | etcdserver: heartbeat = 100ms
2016-02-05 09:56:23.948908 I | etcdserver: election = 1000ms
2016-02-05 09:56:23.948916 I | etcdserver: snapshot count = 0
2016-02-05 09:56:23.948932 I | etcdserver: advertise client URLs = https://23.25.149.227:4001
2016-02-05 09:56:23.948973 I | etcdserver: loaded cluster information from store: <nil>
2016-02-05 09:56:24.121021 I | etcdserver: restarting member 64099f818e6c8fac in cluster 281ef8b5d3391491 at commit index 2058458
2016-02-05 09:56:24.121426 I | raft: 64099f818e6c8fac became follower at term 56
2016-02-05 09:56:24.121451 I | raft: newRaft 64099f818e6c8fac [peers: [64099f818e6c8fac], term: 56, commit: 2058458, applied: 2050206, lastindex: 2058458, lastterm: 56]
2016-02-05 09:56:24.121688 I | etcdserver: set snapshot count to default 10000
2016-02-05 09:56:24.121720 I | etcdserver: starting server... [version: 2.1.2, cluster version: 2.1.0]
I0205 09:56:24.122138 25403 etcd.go:68] Started etcd at 23.25.149.227:4001
I0205 09:56:24.150228 25403 run_components.go:181] Using default project node label selector:
2016-02-05 09:56:25.621872 I | raft: 64099f818e6c8fac is starting a new election at term 56
2016-02-05 09:56:25.621978 I | raft: 64099f818e6c8fac became candidate at term 57
2016-02-05 09:56:25.622050 I | raft: 64099f818e6c8fac received vote from 64099f818e6c8fac at term 57
2016-02-05 09:56:25.622085 I | raft: 64099f818e6c8fac became leader at term 57
2016-02-05 09:56:25.622108 I | raft: raft.node: 64099f818e6c8fac elected leader 64099f818e6c8fac at term 57
2016-02-05 09:56:25.622730 I | etcdserver: published {Name:openshift.local ClientURLs:[https://23.25.149.227:4001]} to cluster 281ef8b5d3391491
W0205 09:56:25.648728 25403 controller.go:290] Resetting endpoints for master service "kubernetes" to &{{ } {kubernetes default 8194cdb4-bffd-11e5-b1b3-bc5ff4ca49a7 198893 0 2016-01-21 00:12:02 -0500 EST <nil> <nil> map[] map[]} [{[{23.25.149.227 <nil>}] [] [{https 8443 TCP} {dns 53 UDP} {dns-tcp 53 TCP}]}]}
I0205 09:56:25.958887 25403 master.go:237] Started Kubernetes API at 0.0.0.0:8443/api/v1
I0205 09:56:25.958933 25403 master.go:237] Started Kubernetes API Extensions at 0.0.0.0:8443/apis/extensions/v1beta1
I0205 09:56:25.958940 25403 master.go:237] Started Origin API at 0.0.0.0:8443/oapi/v1
I0205 09:56:25.958945 25403 master.go:237] Started OAuth2 API at 0.0.0.0:8443/oauth
I0205 09:56:25.958950 25403 master.go:237] Started Login endpoint at 0.0.0.0:8443/login
I0205 09:56:25.958955 25403 master.go:237] Started Web Console 0.0.0.0:8443/console/
I0205 09:56:25.958960 25403 master.go:237] Started Swagger Schema API at 0.0.0.0:8443/swaggerapi/
2016-02-05 09:56:27.485469 I | skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:53 [rcache 0]
2016-02-05 09:56:27.485499 I | skydns: ready for queries on cluster.local. for udp4://0.0.0.0:53 [rcache 0]
I0205 09:56:27.585719 25403 run_components.go:176] DNS listening at 0.0.0.0:53
I0205 09:56:27.585863 25403 start_master.go:507] Controllers starting (*)
E0205 09:56:27.623191 25403 serviceaccounts_controller.go:218] serviceaccounts "default" already exists
E0205 09:56:27.655935 25403 serviceaccounts_controller.go:218] serviceaccounts "builder" already exists
I0205 09:56:27.752371 25403 start_node.go:180] Starting a node connected to https://23.25.149.227:8443
I0205 09:56:27.760154 25403 plugins.go:71] No cloud provider specified.
I0205 09:56:27.760182 25403 start_node.go:257] Starting node openshift.master1 (v1.1.1)
I0205 09:56:27.795730 25403 node.go:54] Connecting to Docker at unix:///var/run/docker.sock
I0205 09:56:27.805838 25403 manager.go:128] cAdvisor running in container: "/user.slice"
I0205 09:56:27.879036 25403 node.go:236] Started Kubernetes Proxy on 0.0.0.0
I0205 09:56:28.048762 25403 nodecontroller.go:133] Sending events to api server.
I0205 09:56:28.051282 25403 start_master.go:562] Started Kubernetes Controllers
W0205 09:56:28.099499 25403 nodecontroller.go:572] Missing timestamp for Node openshift.master1. Assuming now as a timestamp.
I0205 09:56:28.099604 25403 event.go:216] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"openshift.master1", UID:"openshift.master1", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'RegisteredNode' Node openshift.master1 event: Registered Node openshift.master1 in NodeController
I0205 09:56:28.294544 25403 fs.go:108] Filesystem partitions: map[/dev/mapper/rhel-root:{mountpoint:/ major:253 minor:0 fsType: blockSize:0} /dev/sda2:{mountpoint:/boot major:8 minor:2 fsType: blockSize:0} /dev/mapper/rhel-home:{mountpoint:/home major:253 minor:2 fsType: blockSize:0} dockervg1-docker--pool:{mountpoint: major:253 minor:4 fsType:devicemapper blockSize:1024}]
I0205 09:56:28.533042 25403 manager.go:163] Machine: {NumCores:8 CpuFrequency:2500000 MemoryCapacity:67354550272 MachineID:4bfe313b4ee24553a4aece272787a43a SystemUUID:00020003-0004-0005-0006-000700080009 BootID:e942ff28-8f14-4596-980d-7c0b7531afa6 Filesystems:[{Device:dockervg1-docker--pool Capacity:1748672446464} {Device:/dev/mapper/rhel-root Capacity:53660876800} {Device:/dev/sda2 Capacity:517713920} {Device:/dev/mapper/rhel-home Capacity:536608768000}] DiskMap:map[253:12:{Name:dm-12 Major:253 Minor:12 Size:107374182400 Scheduler:none} 253:40:{Name:dm-40 Major:253 Minor:40 Size:107374182400 Scheduler:none} 253:8:{Name:dm-8 Major:253 Minor:8 Size:107374182400 Scheduler:none} 253:13:{Name:dm-13 Major:253 Minor:13 Size:107374182400 Scheduler:none} 253:48:{Name:dm-48 Major:253 Minor:48 Size:107374182400 Scheduler:none} 253:51:{Name:dm-51 Major:253 Minor:51 Size:107374182400 Scheduler:none} 253:53:{Name:dm-53 Major:253 Minor:53 Size:107374182400 Scheduler:none} 253:54:{Name:dm-54 Major:253 Minor:54 Size:107374182400 Scheduler:none} 253:9:{Name:dm-9 Major:253 Minor:9 Size:107374182400 Scheduler:none} 253:25:{Name:dm-25 Major:253 Minor:25 Size:107374182400 Scheduler:none} 253:6:{Name:dm-6 Major:253 Minor:6 Size:107374182400 Scheduler:none} 253:61:{Name:dm-61 Major:253 Minor:61 Size:107374182400 Scheduler:none} 253:20:{Name:dm-20 Major:253 Minor:20 Size:107374182400 Scheduler:none} 253:50:{Name:dm-50 Major:253 Minor:50 Size:107374182400 Scheduler:none} 253:52:{Name:dm-52 Major:253 Minor:52 Size:107374182400 Scheduler:none} 253:60:{Name:dm-60 Major:253 Minor:60 Size:107374182400 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:5000981078016 Scheduler:cfq} 253:2:{Name:dm-2 Major:253 Minor:2 Size:536870912000 Scheduler:none} 253:41:{Name:dm-41 Major:253 Minor:41 Size:107374182400 Scheduler:none} 253:42:{Name:dm-42 Major:253 Minor:42 Size:107374182400 Scheduler:none} 253:43:{Name:dm-43 Major:253 Minor:43 Size:107374182400 Scheduler:none} 253:7:{Name:dm-7 Major:253 Minor:7 Size:107374182400 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:33822867456 Scheduler:none} 253:10:{Name:dm-10 Major:253 Minor:10 Size:107374182400 Scheduler:none} 253:21:{Name:dm-21 Major:253 Minor:21 Size:107374182400 Scheduler:none} 253:36:{Name:dm-36 Major:253 Minor:36 Size:107374182400 Scheduler:none} 253:4:{Name:dm-4 Major:253 Minor:4 Size:1748672446464 Scheduler:none} 253:45:{Name:dm-45 Major:253 Minor:45 Size:107374182400 Scheduler:none} 253:0:{Name:dm-0 Major:253 Minor:0 Size:53687091200 Scheduler:none} 253:23:{Name:dm-23 Major:253 Minor:23 Size:107374182400 Scheduler:none} 253:64:{Name:dm-64 Major:253 Minor:64 Size:107374182400 Scheduler:none} 253:11:{Name:dm-11 Major:253 Minor:11 Size:107374182400 Scheduler:none} 253:26:{Name:dm-26 Major:253 Minor:26 Size:107374182400 Scheduler:none} 253:3:{Name:dm-3 Major:253 Minor:3 Size:4378853376 Scheduler:none} 253:5:{Name:dm-5 Major:253 Minor:5 Size:1748672446464 Scheduler:none}] NetworkDevices:[{Name:enp1s0f0 MacAddress:bc:5f:f4:ca:49:a7 Speed:1000 Mtu:1500} {Name:enp1s0f1 MacAddress:bc:5f:f4:cb:a2:2b Speed:4294967295 Mtu:1500} {Name:virbr0 MacAddress:52:54:00:3f:16:1b Speed:0 Mtu:1500} {Name:virbr0-nic MacAddress:52:54:00:3f:16:1b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:34324250624 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:2 Threads:[2] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:3 Threads:[3] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:10485760 Type:Unified Level:3}]} {Id:1 Memory:34359738368 Cores:[{Id:0 Threads:[4] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:1 Threads:[5] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:2 Threads:[6] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:3 Threads:[7] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:10485760 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown}
I0205 09:56:28.555232 25403 manager.go:169] Version: {KernelVersion:3.10.0-327.4.5.el7.x86_64 ContainerOsVersion:Unknown DockerVersion:1.8.2-el7 CadvisorVersion: CadvisorRevision:}
I0205 09:56:28.556283 25403 server.go:819] Watching apiserver
I0205 09:56:28.583451 25403 start_master.go:581] Started Origin Controllers
2016-02-05 09:56:28.626401 I | http: TLS handshake error from 108.171.131.168:41559: EOF
2016-02-05 09:56:28.646662 I | http: TLS handshake error from 108.171.131.168:48563: EOF
2016-02-05 09:56:28.649466 I | http: TLS handshake error from 108.171.131.168:57448: EOF
2016-02-05 09:56:28.653388 I | http: TLS handshake error from 108.171.131.168:36293: EOF
I0205 09:56:28.659092 25403 replication_controller.go:422] Waiting for pods controller to sync, requeuing rc abecornlandingpageservice-1
I0205 09:56:28.660140 25403 replication_controller.go:422] Waiting for pods controller to sync, requeuing rc itemrepoclientservice-1
I0205 09:56:28.660260 25403 replication_controller.go:422] Waiting for pods controller to sync, requeuing rc itemrepoclientservice-2
I0205 09:56:28.660376 25403 replication_controller.go:422] Waiting for pods controller to sync, requeuing rc tradeclientservice-1
I0205 09:56:28.660481 25403 replication_controller.go:422] Waiting for pods controller to sync, requeuing rc tradeclientservice-2
2016-02-05 09:56:28.665027 I | http: TLS handshake error from 108.171.131.168:49004: EOF
2016-02-05 09:56:28.682658 I | http: TLS handshake error from 108.171.131.168:56343: EOF
I0205 09:56:29.037547 25403 plugins.go:56] Registering credential provider: .dockercfg
I0205 09:56:29.093875 25403 server.go:781] Started kubelet
E0205 09:56:29.094063 25403 kubelet.go:856] Image garbage collection failed: unable to find data for container /
I0205 09:56:29.095007 25403 server.go:104] Starting to listen on 0.0.0.0:10250
I0205 09:56:29.114821 25403 kubelet.go:877] Running in container "/kubelet"
I0205 09:56:29.271146 25403 kubelet.go:988] Node openshift.master1 was previously registered
I0205 09:56:30.321284 25403 factory.go:194] System is using systemd
I0205 09:56:31.104103 25403 factory.go:236] Registering Docker factory
I0205 09:56:31.106913 25403 factory.go:93] Registering Raw factory
I0205 09:56:33.358839 25403 manager.go:1006] Started watching for new ooms in manager
I0205 09:56:33.359058 25403 oomparser.go:199] OOM parser using kernel log file: "/var/log/messages"
I0205 09:56:33.359907 25403 manager.go:250] Starting recovery of all containers
I0205 09:56:33.712867 25403 manager.go:255] Recovery completed
I0205 09:56:33.959650 25403 manager.go:118] Starting to sync pod status with apiserver
I0205 09:56:33.959704 25403 kubelet.go:2116] Starting kubelet main sync loop.
W0205 09:56:34.070935 25403 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to apply oom score -900 to PID 382
I0205 09:56:34.107753 25403 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
W0205 09:56:34.673066 25403 manager.go:1457] No ref for pod '"dfd8b31123a8393bbcff8fe577425fb8dc43b055e2db5db61c35d19f083a551c abecorn/batchservice-2-h9ypv"'
W0205 09:56:36.187232 25403 manager.go:284] Status is up-to-date; skipping: "9fae70c3-cbcc-11e5-9ee0-bc5ff4ca49a7" {status:{Phase:Pending Conditions:[{Type:Ready Status:False LastProbeTime:{Time:{sec:0 nsec:0 loc:<nil>}} LastTransitionTime:{Time:{sec:63590280996 nsec:187045077 loc:0x4c5b1c0}} Reason:ContainersNotReady Message:containers with unready status: [wildfly-batchservice]}] Message: Reason: HostIP:23.25.149.227 PodIP: StartTime:0xc20fe04dc0 ContainerStatuses:[{Name:wildfly-batchservice State:{Waiting:0xc2148e2fc0 Running:<nil> Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:172.30.250.187:5000/abecorn/batchservice@sha256:6fcb012f6806797b9468658b224b9020a316b826456edc3e05dfa08fcfc4bc6c ImageID: ContainerID:}]} version:1 podName:batchservice-2-h9ypv podNamespace:abecorn}
I0205 09:56:44.062870 25403 kubelet.go:1745] volume "1c0fb32e-cbcc-11e5-9ee0-bc5ff4ca49a7/luceneindexes", still has a container running "1c0fb32e-cbcc-11e5-9ee0-bc5ff4ca49a7", skipping teardown