
@amitkumarj441
Last active June 23, 2017 13:22
oc describe output and getting Kibana running
[root@viaq openshift-ansible]# oc get pods
NAME                       READY     STATUS              RESTARTS   AGE
docker-registry-1-deploy   0/1       Error               0          23h
registry-console-1-rjb0f   1/1       Running             1          23h
router-1-deploy            0/1       Error               0          23h
router-2-16rnl             0/1       ContainerCreating   0          7s
router-2-deploy            1/1       Running             0          10s
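
The new router pod is still in ContainerCreating at this point. To follow the rollout until it goes Ready, the -w (watch) flag on oc get can be used (a minimal sketch; it streams updates until interrupted):

# Watch pod status in the current project until Ctrl-C
oc get pods -w
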
[root@viaq openshift-ansible]# oc describe po router-2-16rnl
Name:              router-2-16rnl
Namespace:         default
Security Policy:   hostnetwork
Node:              viaq.logging.test/172.16.93.5
Start Time:        Fri, 23 Jun 2017 12:19:41 +0000
Labels:            deployment=router-2
                   deploymentconfig=router
                   router=router
Status:            Running
IP:                172.16.93.5
Controllers:       ReplicationController/router-2
Containers:
  router:
    Container ID:   docker://1f5146c2d5a7236a7b801514a5c7eb496af78dc1c6e6881e147778923a53a068
    Image:          openshift/origin-haproxy-router:v1.5.1
    Image ID:       docker-pullable://docker.io/openshift/origin-haproxy-router@sha256:c9374f410e32907be1fa1d14d77e58206ef0be949a63a635e6f3bafa77b35726
    Ports:          80/TCP, 443/TCP, 1936/TCP
    Requests:
      cpu:      100m
      memory:   256Mi
    State:          Running
      Started:      Fri, 23 Jun 2017 12:20:50 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://localhost:1936/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://localhost:1936/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Volume Mounts:
      /etc/pki/tls/private from server-certificate (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from router-token-n7m5k (ro)
    Environment Variables:
      DEFAULT_CERTIFICATE_DIR:                 /etc/pki/tls/private
      ROUTER_EXTERNAL_HOST_HOSTNAME:
      ROUTER_EXTERNAL_HOST_HTTPS_VSERVER:
      ROUTER_EXTERNAL_HOST_HTTP_VSERVER:
      ROUTER_EXTERNAL_HOST_INSECURE:           false
      ROUTER_EXTERNAL_HOST_INTERNAL_ADDRESS:
      ROUTER_EXTERNAL_HOST_PARTITION_PATH:
      ROUTER_EXTERNAL_HOST_PASSWORD:
      ROUTER_EXTERNAL_HOST_PRIVKEY:            /etc/secret-volume/router.pem
      ROUTER_EXTERNAL_HOST_USERNAME:
      ROUTER_EXTERNAL_HOST_VXLAN_GW_CIDR:
      ROUTER_SERVICE_HTTPS_PORT:               443
      ROUTER_SERVICE_HTTP_PORT:                80
      ROUTER_SERVICE_NAME:                     router
      ROUTER_SERVICE_NAMESPACE:                default
      ROUTER_SUBDOMAIN:
      STATS_PASSWORD:                          vrYQX3qGVS
      STATS_PORT:                              1936
      STATS_USERNAME:                          admin
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  server-certificate:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  router-certs
  router-token-n7m5k:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  router-token-n7m5k
QoS Class:     Burstable
Tolerations:   <none>
Events:
  FirstSeen  LastSeen  Count  From                         SubObjectPath            Type    Reason     Message
  ---------  --------  -----  ----                         -------------            ------  ------     -------
  1m         1m        1      {default-scheduler }                                  Normal  Scheduled  Successfully assigned router-2-16rnl to viaq.logging.test
  1m         1m        1      {kubelet viaq.logging.test}  spec.containers{router}  Normal  Pulling    pulling image "openshift/origin-haproxy-router:v1.5.1"
  24s        24s       1      {kubelet viaq.logging.test}  spec.containers{router}  Normal  Pulled     Successfully pulled image "openshift/origin-haproxy-router:v1.5.1"
  24s        24s       1      {kubelet viaq.logging.test}  spec.containers{router}  Normal  Created    Created container with docker id 1f5146c2d5a7; Security:[seccomp=unconfined]
  24s        24s       1      {kubelet viaq.logging.test}  spec.containers{router}  Normal  Started    Started container with docker id 1f5146c2d5a7
[root@viaq openshift-ansible]#
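
Both probes above target the router's stats port. The same endpoint can be checked by hand from the node to confirm the router is answering (a sketch; assumes curl is available on the host):

# Query the haproxy router health endpoint used by the liveness/readiness probes
curl -v http://localhost:1936/healthz
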
[root@viaq openshift-ansible]# oc project logging
Now using project "logging" on server "https://viaq.logging.test:8443".
[root@viaq openshift-ansible]# oc status -v
In project logging on server https://viaq.logging.test:8443

svc/logging-es - 172.30.39.221:9200 -> restapi
  dc/logging-es-7vo926zw deploys docker.io/openshift/origin-logging-elasticsearch:v1.5.1
    deployment #2 deployed 20 hours ago - 1 pod
    deployment #1 failed 23 hours ago: config change

svc/logging-es-cluster - 172.30.63.148:9300
  dc/logging-es-7vo926zw deploys docker.io/openshift/origin-logging-elasticsearch:v1.5.1
    deployment #2 deployed 20 hours ago - 1 pod
    deployment #1 failed 23 hours ago: config change

https://kibana.viaq.logging.test (reencrypt) (svc/logging-kibana)
  dc/logging-kibana deploys
    docker.io/openshift/origin-logging-kibana:v1.5.1
      deployment #2 deployed 19 hours ago - 1 pod
      deployment #1 failed 23 hours ago: config change
    docker.io/openshift/origin-logging-auth-proxy:v1.5.1
      deployment #2 deployed 19 hours ago - 1 pod
      deployment #1 failed 23 hours ago: config change

svc/logging-mux - 172.30.212.89 ports 24284->mux-forward, 23456->tcp-json
  dc/logging-mux deploys docker.io/openshift/origin-logging-fluentd:v1.5.1
    deployment #1 failed about an hour ago: config change

dc/logging-curator deploys docker.io/openshift/origin-logging-curator:v1.5.1
  deployment #1 deployed 23 hours ago - 1 pod

pod/logging-fluentd-024kj runs docker.io/openshift/origin-logging-fluentd:v1.5.1

Warnings:
  * pod/logging-curator-1-hklth has restarted 22 times

Info:
  * pod/logging-es-7vo926zw-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/logging-es-7vo926zw-1-deploy --liveness ...
  * pod/logging-fluentd-024kj has no liveness probe to verify pods are still running.
    try: oc set probe pod/logging-fluentd-024kj --liveness ...
  * pod/logging-kibana-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/logging-kibana-1-deploy --liveness ...
  * pod/logging-mux-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/logging-mux-1-deploy --liveness ...
  * dc/logging-curator has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/logging-curator --readiness ...
  * dc/logging-curator has no liveness probe to verify pods are still running.
    try: oc set probe dc/logging-curator --liveness ...
  * dc/logging-es-7vo926zw has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/logging-es-7vo926zw --readiness ...
  * dc/logging-es-7vo926zw has no liveness probe to verify pods are still running.
    try: oc set probe dc/logging-es-7vo926zw --liveness ...
  * dc/logging-kibana has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/logging-kibana --readiness ...
  * dc/logging-kibana has no liveness probe to verify pods are still running.
    try: oc set probe dc/logging-kibana --liveness ...
  * dc/logging-mux has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/logging-mux --readiness ...
  * dc/logging-mux has no liveness probe to verify pods are still running.
    try: oc set probe dc/logging-mux --liveness ...

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
[root@viaq openshift-ansible]#
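
Two things in that status output are worth acting on: the curator pod that has restarted 22 times, and the missing probes. A sketch of both follow-ups is below; the "kibana" container name, TCP port 5601, and the 30s delay are assumptions, not values taken from this cluster:

# Why does curator keep restarting? Check current and previous container logs
oc logs pod/logging-curator-1-hklth
oc logs -p pod/logging-curator-1-hklth

# Acting on one of the probe suggestions above
# (container name and port are assumptions; adjust to the real Kibana container)
oc set probe dc/logging-kibana -c kibana --readiness --open-tcp=5601 --initial-delay-seconds=30
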
[root@viaq openshift-ansible]# oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-1-deploy   0/1       Error     0          23h
registry-console-1-rjb0f   1/1       Running   1          23h
router-1-deploy            0/1       Error     0          23h
router-2-16rnl             1/1       Running   0          4m
[root@viaq openshift-ansible]# oc project logging
Now using project "logging" on server "https://viaq.logging.test:8443".
[root@viaq openshift-ansible]# oc status -v
In project logging on server https://viaq.logging.test:8443

svc/logging-es - 172.30.39.221:9200 -> restapi
  dc/logging-es-7vo926zw deploys docker.io/openshift/origin-logging-elasticsearch:v1.5.1
    deployment #2 deployed 20 hours ago - 1 pod
    deployment #1 failed 23 hours ago: config change

svc/logging-es-cluster - 172.30.63.148:9300
  dc/logging-es-7vo926zw deploys docker.io/openshift/origin-logging-elasticsearch:v1.5.1
    deployment #2 deployed 20 hours ago - 1 pod
    deployment #1 failed 23 hours ago: config change

https://kibana.viaq.logging.test (reencrypt) (svc/logging-kibana)
  dc/logging-kibana deploys
    docker.io/openshift/origin-logging-kibana:v1.5.1
      deployment #2 deployed 19 hours ago - 1 pod
      deployment #1 failed 23 hours ago: config change
    docker.io/openshift/origin-logging-auth-proxy:v1.5.1
      deployment #2 deployed 19 hours ago - 1 pod
      deployment #1 failed 23 hours ago: config change

svc/logging-mux - 172.30.212.89 ports 24284->mux-forward, 23456->tcp-json
  dc/logging-mux deploys docker.io/openshift/origin-logging-fluentd:v1.5.1
    deployment #1 failed about an hour ago: config change

dc/logging-curator deploys docker.io/openshift/origin-logging-curator:v1.5.1
  deployment #1 deployed 23 hours ago - 1 pod

pod/logging-fluentd-024kj runs docker.io/openshift/origin-logging-fluentd:v1.5.1

Warnings:
  * pod/logging-curator-1-hklth has restarted 22 times

Info:
  * pod/logging-es-7vo926zw-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/logging-es-7vo926zw-1-deploy --liveness ...
  * pod/logging-fluentd-024kj has no liveness probe to verify pods are still running.
    try: oc set probe pod/logging-fluentd-024kj --liveness ...
  * pod/logging-kibana-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/logging-kibana-1-deploy --liveness ...
  * pod/logging-mux-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/logging-mux-1-deploy --liveness ...
  * dc/logging-curator has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/logging-curator --readiness ...
  * dc/logging-curator has no liveness probe to verify pods are still running.
    try: oc set probe dc/logging-curator --liveness ...
  * dc/logging-es-7vo926zw has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/logging-es-7vo926zw --readiness ...
  * dc/logging-es-7vo926zw has no liveness probe to verify pods are still running.
    try: oc set probe dc/logging-es-7vo926zw --liveness ...
  * dc/logging-kibana has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/logging-kibana --readiness ...
  * dc/logging-kibana has no liveness probe to verify pods are still running.
    try: oc set probe dc/logging-kibana --liveness ...
  * dc/logging-mux has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/logging-mux --readiness ...
  * dc/logging-mux has no liveness probe to verify pods are still running.
    try: oc set probe dc/logging-mux --liveness ...

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
[root@viaq openshift-ansible]# vim /etc/hosts
[root@viaq openshift-ansible]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.93.5 viaq.logging.test openshift.logging.test kibana.logging.test mux.logging.test viaq
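
Note the mismatch: the route reported by oc status is kibana.viaq.logging.test, while /etc/hosts only maps kibana.logging.test. If the browser on this host should reach the route directly, an extra alias pointing at the node IP is one way to bridge that (a sketch, assuming browsing happens from this same machine):

# Map the Kibana route hostname to the node running the router
echo '172.16.93.5 kibana.viaq.logging.test' >> /etc/hosts
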
[root@viaq openshift-ansible]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.93.2     0.0.0.0         UG    100    0        0 ens33
10.128.0.0      0.0.0.0         255.252.0.0     U     0      0        0 tun0
172.16.93.0     0.0.0.0         255.255.255.0   U     100    0        0 ens33
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.30.0.0      0.0.0.0         255.255.0.0     U     0      0        0 tun0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
[root@viaq openshift-ansible]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:3d:9b:c2 brd ff:ff:ff:ff:ff:ff
    inet 172.16.93.5/24 brd 172.16.93.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe3d:9bc2/64 scope link
       valid_lft forever preferred_lft forever
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 0e:0a:39:70:dc:9d brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN qlen 1000
    link/ether 4e:e9:a9:96:3a:40 brd ff:ff:ff:ff:ff:ff
7: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:76:1c:9d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:76:1c:9d brd ff:ff:ff:ff:ff:ff
9: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:bc:52:1d:e6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
10: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65470 qdisc noqueue master ovs-system state UNKNOWN qlen 1000
    link/ether b6:f8:9b:ba:ad:23 brd ff:ff:ff:ff:ff:ff
11: tun0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN qlen 1000
    link/ether d2:78:ae:67:e3:06 brd ff:ff:ff:ff:ff:ff
    inet 10.128.0.1/23 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::d078:aeff:fe67:e306/64 scope link
       valid_lft forever preferred_lft forever
12: vethd78006bc@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP
    link/ether c2:32:32:37:8a:c1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::c032:32ff:fe37:8ac1/64 scope link
       valid_lft forever preferred_lft forever
13: veth220a264b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP
    link/ether ee:3c:31:b2:e7:8a brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ec3c:31ff:feb2:e78a/64 scope link
       valid_lft forever preferred_lft forever
14: veth6087c303@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP
    link/ether fe:21:9f:75:00:39 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::fc21:9fff:fe75:39/64 scope link
       valid_lft forever preferred_lft forever
16: vethe1a8b91b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP
    link/ether 96:c4:a6:f7:4b:cf brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::94c4:a6ff:fef7:4bcf/64 scope link
       valid_lft forever preferred_lft forever
18: veth6a8b54d8@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP
    link/ether a2:e0:36:35:cb:5b brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::a0e0:36ff:fe35:cb5b/64 scope link
       valid_lft forever preferred_lft forever
[root@viaq openshift-ansible]# vim /etc/hosts
[root@viaq openshift-ansible]# firefox
library/ playbooks/ roles/
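
The route can also be exercised without a browser; --resolve pins the route hostname to the node IP for this one request, and -k skips verification of the re-encrypt route's certificate (a sketch, assuming curl on the host):

# Hit the Kibana route through the haproxy router listening on 172.16.93.5:443
curl -k --resolve kibana.viaq.logging.test:443:172.16.93.5 https://kibana.viaq.logging.test/
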
[root@viaq openshift-ansible]# oc project logging ng
error: Only one argument is supported (project name or context name).
See 'oc project -h' for help and examples.
[root@viaq openshift-ansible]# oc project logging
Already on project "logging" on server "https://viaq.logging.test:8443".
[root@viaq openshift-ansible]# oc login --username=root --password=cncfk8s
Login successful.
You don't have any projects. You can try to create a new project, by running
oc new-project <projectname>
[root@viaq openshift-ansible]# oc login --username=system:root
Authentication required for https://viaq.logging.test:8443 (openshift)
Username: system:root
Password:
error: username system:root is invalid for basic auth
[root@viaq openshift-ansible]# oc login --username=root
Logged into "https://viaq.logging.test:8443" as "root" using existing credentials.
You don't have any projects. You can try to create a new project, by running
oc new-project <projectname>
[root@viaq openshift-ansible]# oc login --username=system:admin
Logged into "https://viaq.logging.test:8443" as "system:admin" using existing credentials.
You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-system
    logging
    management-infra
    mux-undefined
    openshift
    openshift-infra

Using project "default".
[root@viaq openshift-ansible]# oadm policy add-cluster-role-to-user cluster-admin admin
cluster role "cluster-admin" added: "admin"
[root@viaq openshift-ansible]#
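
With cluster-admin now granted to the admin user, logging in as that user should list every project; a quick check along these lines (a sketch; assumes the admin user's password is already set up in the configured identity provider):

# Verify the new cluster-admin binding from the admin user's point of view
oc login -u admin https://viaq.logging.test:8443
oc get projects
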