OCP4 Day 2
OpenShift Day 2 guidance:
-----------------------------------------------------------------------------------------------
Configure the OpenShift ingress operator to use the node label "infra=true" and run router pods only on infra nodes
Edit the default IngressController config in openshift-ingress-operator:
# oc edit ingresscontrollers.operator.openshift.io/default -n openshift-ingress-operator
In the spec: section, add the following:
---
nodePlacement:
  nodeSelector:
    matchLabels:
      infra: "true"
replicas: 3
---
Save the config
Label the infra nodes with the "infra=true" label
# oc label node nodeinfra{1..3}.example.com infra=true
Check the router pod status; if it has not changed, delete the running pods (see the commands below)
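For example, to check the router pods and restart them (the pod name is a placeholder and will differ per cluster):
# oc get pods -n openshift-ingress -o wide
# oc delete pod <router-default-pod-name> -n openshift-ingress
The deleted pods are recreated by the router deployment and should now be scheduled on the infra nodes.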
-----------------------------------------------------------------------------------------------
Create Machine Config Pools for infra
Edit the infra and logging nodes so they take the infra role
# oc edit node <node-infra-name>
Change node-role.kubernetes.io/worker: "" to node-role.kubernetes.io/infra: "" (or use the oc label commands shown below)
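As an alternative sketch (using the same node name placeholder), the role labels can be switched with oc label instead of editing the node object:
# oc label node <node-infra-name> node-role.kubernetes.io/infra=
# oc label node <node-infra-name> node-role.kubernetes.io/worker-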
Create a new MachineConfigPool with the YAML below:
---
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  labels:
    machineconfiguration.openshift.io/mco-built-in: ''
  name: infra
spec:
  # spec.configuration is maintained by the Machine Config Operator;
  # it is shown here as captured from a running cluster and can be omitted on create
  configuration:
    name: rendered-infra-4cb6680af8b16b6224e36c11490ff22f
    source:
      - apiVersion: machineconfiguration.openshift.io/v1
        kind: MachineConfig
        name: 00-worker
      - apiVersion: machineconfiguration.openshift.io/v1
        kind: MachineConfig
        name: 01-worker-container-runtime
      - apiVersion: machineconfiguration.openshift.io/v1
        kind: MachineConfig
        name: 01-worker-kubelet
      - apiVersion: machineconfiguration.openshift.io/v1
        kind: MachineConfig
        name: 99-worker-16878f5b-194c-4257-a49d-94de6792b7c8-registries
      - apiVersion: machineconfiguration.openshift.io/v1
        kind: MachineConfig
        name: 99-worker-ssh
  machineConfigSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ''
  paused: false
---
Wait until the process completes (see the commands below to watch progress)
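For example, progress can be watched with:
# watch oc get machineconfigpool
# oc get nodes -l node-role.kubernetes.io/infra=
The infra pool is done when its UPDATED column shows True and all selected nodes are listed.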
-----------------------------------------------------------------------------------------------
Configure Image Registry operator
Create a PV for image-registry pod storage:
Create a file named nfs-pv.yaml with the following content:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvregistry
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: thin
  nfs:
    path: /nfs/registry
    server: 192.168.112.123
# oc apply -f nfs-pv.yaml
Edit the Image Registry Operator config to consume the PV:
# oc edit configs.imageregistry.operator.openshift.io
Add storage section under spec:
storage:
  pvc:
    claim:
Save the config; the operator automatically creates the "image-registry-storage" PVC and binds it to the PV above (see the checks below)
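For example, to verify the claim and the registry pods:
# oc get pvc image-registry-storage -n openshift-image-registry
# oc get pods -n openshift-image-registry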
-----------------------------------------------------------------------------------------------
Configure the monitoring components to run on infra nodes (assuming the monitoring nodes carry the label "monitoring=true" and vSphere volumes are used)
Create the ConfigMap in the openshift-monitoring project:
# oc -n openshift-monitoring create configmap cluster-monitoring-config
Edit the config map :
# oc -n openshift-monitoring edit configmap cluster-monitoring-config
Add the following to the ConfigMap:
---------------------------------------
data:
  config.yaml: |
    prometheusOperator:
      nodeSelector:
        monitoring: "true"
    prometheusK8s:
      nodeSelector:
        monitoring: "true"
      volumeClaimTemplate:
        spec:
          storageClassName: thin
          volumeMode: Filesystem
          resources:
            requests:
              storage: 100Gi
    alertmanagerMain:
      nodeSelector:
        monitoring: "true"
      volumeClaimTemplate:
        metadata:
          name: alert-main
        spec:
          storageClassName: thin
          resources:
            requests:
              storage: 2Gi
    kubeStateMetrics:
      nodeSelector:
        monitoring: "true"
    grafana:
      nodeSelector:
        monitoring: "true"
    telemeterClient:
      nodeSelector:
        monitoring: "true"
    k8sPrometheusAdapter:
      nodeSelector:
        monitoring: "true"
    openshiftStateMetrics:
      nodeSelector:
        monitoring: "true"
    thanosQuerier:
      nodeSelector:
        monitoring: "true"
---------------------------------------
Save the config and watch the Prometheus components (see the commands below)
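For example, to watch the pods being rescheduled onto the monitoring nodes and to check the new PVCs:
# watch oc get pods -n openshift-monitoring -o wide
# oc get pvc -n openshift-monitoring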
-----------------------------------------------------------------------------------------------
Configure logging to consume persistent storage; in this case the Local Storage Operator is used
Create local storage
Add an additional 200 GB disk from vCenter to all 3 logging nodes
Follow the next steps from here:
https://docs.openshift.com/container-platform/4.3/storage/persistent-storage/persistent-storage-local.html#local-storage-install_persistent-storage-local
# oc new-project local-storage
Deploy the Local Storage Operator from OperatorHub into the local-storage project
Create the storage once the Local Storage Operator is installed:
# vi local-volume-pv.yaml
---
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
name: "localdisks"
namespace: "local-storage"
spec:
nodeSelector:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- prodcluster-log1.dc.pq.example.id
- prodcluster-log2.dc.pq.example.id
- prodcluster-log3.dc.pq.example.id
storageClassDevices:
- storageClassName: "gp2"
volumeMode: Filesystem
fsType: xfs
devicePaths:
- /dev/sdb
---
# oc create -f local-volume-pv.yaml
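For example, to confirm the operator created a PV from /dev/sdb on each node (names will vary):
# oc get pods -n local-storage
# oc get pv
# oc get storageclass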
Deploy OpenShift Logging
Follow all the steps provided here:
https://docs.openshift.com/container-platform/4.3/logging/cluster-logging-deploying.html
# cd /root/yaml-collection
# oc create -f eo-namespace.yaml
# oc create -f clo-namespace.yaml
# oc create -f eo-og.yaml
# oc create -f eo-sub.yaml
# oc project openshift-operators-redhat
# oc create -f eo-rbac.yaml
#
When creating the ClusterLogging instance, use the YAML below:
---
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
name: "instance"
namespace: "openshift-logging"
spec:
managementState: "Managed"
logStore:
type: "elasticsearch"
elasticsearch:
nodeCount: 3
storage:
storageClassName: gp2
size: 200G
redundancyPolicy: "SingleRedundancy"
visualization:
type: "kibana"
kibana:
replicas: 1
curation:
type: "curator"
curator:
schedule: "30 3 * * *"
collection:
logs:
type: "fluentd"
fluentd: {}
---
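For example, assuming the YAML above was saved as cluster-logging.yaml:
# oc create -f cluster-logging.yaml
# oc get pods -n openshift-logging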
-----------------------------------------------------------------------------------------------
Configure htpasswd user
Create a new htpasswd file with the htpasswd command:
# htpasswd -c users.htpasswd -b admin MyPassword
Go to web console, Administration > Cluster Settings > Global Configuration > OAuth
Add a new identity provider of type HTPasswd, point it to the users.htpasswd file downloaded from the server, and click Create
To add another user for htpasswd login, edit the htpasswd secret in the openshift-config namespace, append the new htpasswd entry, and save (see the example below)
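A sketch, assuming the secret created by the console is named htpass-secret (the actual name depends on what was entered in the identity provider form) and a recent oc client (older clients use plain --dry-run):
# oc get secret htpass-secret -n openshift-config -o go-template='{{index .data "htpasswd"}}' | base64 -d > users.htpasswd
# htpasswd -b users.htpasswd newuser NewPassword
# oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run=client -o yaml -n openshift-config | oc replace -f -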
-----------------------------------------------------------------------------------------------
Remove kubeadmin user
For additional security, the kube:admin user has to be removed
# oc delete secret kubeadmin -n kube-system
-----------------------------------------------------------------------------------------------
Add Project Wide default nodeSelector
Create a new YAML file, project-creation-template.yaml:
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  creationTimestamp: null
  name: project-request
objects:
- apiVersion: project.openshift.io/v1
  kind: Project
  metadata:
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
      openshift.io/node-selector: node-role.kubernetes.io/worker=
    creationTimestamp: null
    name: ${PROJECT_NAME}
  spec: {}
  status: {}
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    creationTimestamp: null
    name: admin
    namespace: ${PROJECT_NAME}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: admin
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ${PROJECT_ADMIN_USER}
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION
- name: PROJECT_ADMIN_USER
- name: PROJECT_REQUESTING_USER
# oc create -f project-creation-template.yaml -n openshift-config
# oc edit projects.config.openshift.io cluster
Add the lines below under spec:
projectRequestTemplate:
  name: project-request
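To verify, create a throwaway project (test-template is just an example name) and check that the node-selector annotation was applied:
# oc new-project test-template
# oc get project test-template -o yaml | grep node-selector
# oc delete project test-template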
------------------------------------------------------------------------------------------------------
Replace Router Ingress Certificate
# oc create configmap custom-ca --from-file=ca-bundle.crt=/home/ocpsvc/certs/rootca.crt -n openshift-config
# oc patch proxy/cluster --type=merge --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}'
Create a bundle file with the custom certificate and the chain CA in the following order:
wildcard certificate
intermediate CA (if available)
root CA
# cat apps.ms.dev.corp.company.co.crt intermediate.cer rootca.crt > bundle-cert.crt
# oc create secret tls custom-ca --cert=/home/ocpsvc/certs/bundle-cert.crt --key=/home/ocpsvc/certs/apps.ms.dev.corp.company.co.id.key -n openshift-ingress
# oc patch ingresscontroller.operator default --type=merge -p '{"spec":{"defaultCertificate": {"name": "custom-ca"}}}' -n openshift-ingress-operator
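The router pods roll out with the new certificate; for example (replace <any-route-hostname> with an exposed route of the cluster):
# oc get pods -n openshift-ingress -w
# echo | openssl s_client -connect <any-route-hostname>:443 -servername <any-route-hostname> 2>/dev/null | openssl x509 -noout -subject -issuer -dates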
-------------------------------------------------------------------------------------------------------
LDAP Group Sync
Use the ldapsearch command to query the members of the group:
ldapsearch -v -o ldif-wrap=no -xWLLL -D "ocp.svc@dev.corp.company.co.id" -h dev.corp.company.co.id -b "OU=Security Group,OU=Kantor Pusat,DC=dev,DC=corp,DC=company,DC=co,DC=id" "cn=allow_access_ocp4x" member
Create the sync config file config.yml:
---
kind: LDAPSyncConfig
apiVersion: v1
url: ldap://dev.corp.company.co.id
insecure: true
bindDN: 'CN=Openshift 4x,CN=Users,DC=dev,DC=corp,DC=company,DC=co,DC=id'
bindPassword: 'password'
rfc2307:
  groupsQuery:
    baseDN: "CN=allow_access_ocp4x,OU=Functional Group,OU=Security Group,OU=Kantor Pusat,DC=dev,DC=corp,DC=company,DC=co,DC=id"
    scope: sub
    derefAliases: never
    filter: (objectClass=*)
    pageSize: 0
  groupUIDAttribute: dn
  groupNameAttributes: [ cn ]
  groupMembershipAttributes: [ member ]
  usersQuery:
    baseDN: "DC=dev,DC=corp,DC=company,DC=co,DC=id"
    scope: sub
    derefAliases: never
    pageSize: 0
  userUIDAttribute: dn
  userNameAttributes: [ sAMAccountName ]
---
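Run the sync with the config above; --confirm actually applies the changes, and the resulting groups can be checked afterwards:
# oc adm groups sync --sync-config=config.yml --confirm
# oc get groups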