- livenessprobe.yaml
- httpGet : sends an HTTP request to check health (the check is considered failed if the HTTP response status code is not in the 200 or 300 range)
- the port, path, headers, and whether to use HTTPS can be specified as well
- tcpSocket : checks health by opening a TCP socket connection
- if the TCP connection cannot be established, the application's health check is considered failed
- exec : checks health by running a command inside the container
- if the command exits with a non-zero code, the application's health check is considered failed (minimal tcpSocket and exec sketches follow the httpGet example below)
apiVersion: v1
kind: Pod
metadata:
  name: livenessprobe
spec:
  containers:
  - name: livenessprobe
    image: nginx
    livenessProbe:
      httpGet:
        port: 80
        path: /index.html
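- (Reference) The livenessProbe can also use tcpSocket or exec instead of httpGet. The fragments below are a minimal sketch, not part of the lab's livenessprobe.yaml; the port and command are only illustrative.
# tcpSocket variant: the check passes if a TCP connection to port 80 can be opened
livenessProbe:
  tcpSocket:
    port: 80
# exec variant: the check passes if the command exits with code 0
livenessProbe:
  exec:
    command:
    - cat
    - /usr/share/nginx/html/index.html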
- Create and verify
curl -s -O https://raw.githubusercontent.com/gasida/DKOS/main/3/livenessprobe.yaml
kubectl apply -f livenessprobe.yaml && kubectl get events --sort-by=.metadata.creationTimestamp -w
# Verify
kubectl describe pod livenessprobe | grep Liveness
Liveness: http-get http://:80/index.html delay=0s timeout=1s period=10s #success=1 #failure=3
kubectl logs livenessprobe -f
10.0.2.15 - - [18/Jun/2021:23:48:01 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
10.0.2.15 - - [18/Jun/2021:23:48:11 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
...
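- (Reference) The delay/timeout/period/#success/#failure values in the describe output above are the probe defaults. They can be tuned in the pod spec; the snippet below is a minimal sketch with illustrative values, not the configuration used in this lab.
livenessProbe:
  httpGet:
    port: 80
    path: /index.html
  initialDelaySeconds: 5   # delay   : wait 5s after container start before the first probe
  timeoutSeconds: 1        # timeout : each probe times out after 1s
  periodSeconds: 10        # period  : probe every 10s
  successThreshold: 1      # #success: one success marks the probe as passing
  failureThreshold: 3      # #failure: three consecutive failures trigger a container restart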
# Delete index.html to make the health check fail
# After the check fails failureThreshold (3) times in a row, the container is restarted (RESTARTS increases to 1); the restarted container has index.html again, so the livenessProbe succeeds and the pod is Running again
kubectl exec livenessprobe -- rm /usr/share/nginx/html/index.html && kubectl logs livenessprobe -f
...
10.0.2.15 - - [21/Jun/2021:01:00:25 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
10.0.2.15 - - [21/Jun/2021:01:00:35 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
2021/06/21 01:00:45 [error] 30#30: *3 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 10.0.2.15, server: localhost, request: "GET /index.html HTTP/1.1", host: "172.16.46.3:80"
10.0.2.15 - - [21/Jun/2021:01:00:45 +0000] "GET /index.html HTTP/1.1" 404 153 "-" "kube-probe/1.21" "-"
2021/06/21 01:00:55 [error] 30#30: *4 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 10.0.2.15, server: localhost, request: "GET /index.html HTTP/1.1", host: "172.16.46.3:80"
10.0.2.15 - - [21/Jun/2021:01:00:55 +0000] "GET /index.html HTTP/1.1" 404 153 "-" "kube-probe/1.21" "-"
2021/06/21 01:01:05 [error] 30#30: *5 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 10.0.2.15, server: localhost, request: "GET /index.html HTTP/1.1", host: "172.16.46.3:80"
10.0.2.15 - - [21/Jun/2021:01:01:05 +0000] "GET /index.html HTTP/1.1" 404 153 "-" "kube-probe/1.21" "-"
2021/06/21 01:01:05 [notice] 1#1: signal 3 (SIGQUIT) received, shutting down
2021/06/21 01:01:05 [notice] 30#30: gracefully shutting down
2021/06/21 01:01:05 [notice] 30#30: exiting
2021/06/21 01:01:05 [notice] 30#30: exit
2021/06/21 01:01:05 [notice] 1#1: signal 17 (SIGCHLD) received from 30
2021/06/21 01:01:05 [notice] 1#1: worker process 30 exited with code 0
2021/06/21 01:01:05 [notice] 1#1: exit
## Check the logs again - the container has been restarted and index.html is back, so the check passes again!
[root@k8s-m ~ (kube:default)]# kubectl logs livenessprobe -f
...
2021/06/21 01:01:08 [notice] 1#1: using the "epoll" event method
2021/06/21 01:01:08 [notice] 1#1: nginx/1.21.0
2021/06/21 01:01:08 [notice] 1#1: built by gcc 8.3.0 (Debian 8.3.0-6)
2021/06/21 01:01:08 [notice] 1#1: OS: Linux 5.4.0-74-generic
2021/06/21 01:01:08 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/06/21 01:01:08 [notice] 1#1: start worker processes
2021/06/21 01:01:08 [notice] 1#1: start worker process 31
10.0.2.15 - - [21/Jun/2021:01:01:15 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
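- (Reference) The restart can also be confirmed with kubectl get pod; after the restart the RESTARTS column should read 1 (the AGE value below is illustrative).
kubectl get pod livenessprobe
NAME            READY   STATUS    RESTARTS   AGE
livenessprobe   1/1     Running   1          3m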
# Delete the pod before the next exercise
kubectl delete pod --all
- readinessprobe-service.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readinessprobe
  labels:
    readinessprobe: first
spec:
  containers:
  - name: readinessprobe
    image: nginx
    readinessProbe:
      httpGet:
        port: 80
        path: /
---
apiVersion: v1
kind: Service
metadata:
  name: readinessprobe-service
spec:
  ports:
  - name: nginx
    port: 80
    targetPort: 80
  selector:
    readinessprobe: first
  type: ClusterIP
- Create and verify
#
curl -s -O https://raw.githubusercontent.com/gasida/DKOS/main/3/readinessprobe-service.yaml
kubectl apply -f readinessprobe-service.yaml && kubectl get events --sort-by=.metadata.creationTimestamp -w
#
kubectl describe pod readinessprobe | grep Readiness
Readiness: http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
kubectl logs readinessprobe -f
10.0.2.15 - - [19/Jun/2021:00:23:14 +0000] "GET / HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
10.0.2.15 - - [19/Jun/2021:00:23:19 +0000] "GET / HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
...
kubectl get service readinessprobe-service -o wide
NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
readinessprobe-service   ClusterIP   10.98.51.202   <none>        80/TCP    20s   readinessprobe=first
kubectl get endpoints readinessprobe-service
NAME                     ENDPOINTS         AGE
readinessprobe-service   172.16.46.17:80   42s
# Test access through the Service
curl <CLUSTER-IP>
[root@k8s-m ~ (kube:default)]# curl 10.98.51.202
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
# Delete index.html and check the result
# Unlike the livenessProbe, the RESTARTS count does not increase; the number of READY containers simply drops by one
kubectl exec readinessprobe -- rm /usr/share/nginx/html/index.html && kubectl logs readinessprobe -f
...
# The number before the slash in READY changes to 0
kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
readinessprobe   0/1     Running   0          6m36s
# The pod's IP is removed from the Service endpoints
[root@k8s-m ~ (kube:default)]# kubectl get service readinessprobe-service -o wide
NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE     SELECTOR
readinessprobe-service   ClusterIP   10.98.51.202   <none>        80/TCP    7m18s   readinessprobe=first
[root@k8s-m ~ (kube:default)]# kubectl get endpoints readinessprobe-service
NAME                     ENDPOINTS   AGE
readinessprobe-service               7m24s
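- (Reference) Because the container is not restarted, the pod stays NotReady until the check passes again. As a sketch, recreating index.html should bring the endpoint back after the next successful probe (the echoed content is arbitrary).
kubectl exec readinessprobe -- sh -c 'echo ok > /usr/share/nginx/html/index.html'
kubectl get pod readinessprobe                   # READY should return to 1/1
kubectl get endpoints readinessprobe-service     # the pod IP should reappear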
# Delete the created resources before the next exercise
kubectl delete -f readinessprobe-service.yaml