@praveenkumar · Created October 17, 2019 10:08
Details of the resources consumed by OpenShift Operators. This gist thread will be used to find out which operators are present as part of CRC and what can be done to reduce the memory/CPU footprint on the host.

praveenkumar commented Dec 20, 2019

Now try to deploy the Knative serving and eventing operators on this cluster.

$ cat sub.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: knative-eventing-operator
  namespace: openshift-operators 
spec:
  channel: alpha
  name: knative-eventing-operator
  source: community-operators
  sourceNamespace: openshift-marketplace 

$ cat sub-serving.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: knative-serving-operator
  namespace: openshift-operators 
spec:
  channel: alpha
  name: knative-serving-operator
  source: community-operators
  sourceNamespace: openshift-marketplace 

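Before applying, the channels a package actually offers can be checked against the marketplace catalog (a quick sanity check; the alpha channel named above may differ between catalog versions):

$ oc get packagemanifest knative-serving-operator -n openshift-marketplace -o jsonpath='{.status.channels[*].name}'
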
$ oc apply -f sub.yaml
$ oc apply -f sub-serving.yaml
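
To confirm the installs completed, check the ClusterServiceVersions that the subscriptions generate (a minimal sanity check):

$ oc get csv -n openshift-operators    # each CSV should reach phase Succeeded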

Wait for some time, then check which pods are running.

$ oc get pods -A
NAMESPACE                                               NAME                                                              READY   STATUS    RESTARTS   AGE
istio-operator                                          istio-operator-86f7d5ffd8-ssmwq                                   1/1     Running   0          108m
istio-system                                            cluster-local-gateway-58758f588-wzkm2                             1/1     Running   0          106m
istio-system                                            istio-ingressgateway-54968c8854-fx97t                             1/1     Running   0          106m
istio-system                                            istio-pilot-577ff6784c-jrjjb                                      1/1     Running   0          107m
knative-eventing                                        eventing-controller-5d5f979874-96kpb                              1/1     Running   0          114m
knative-eventing                                        eventing-webhook-75bcb6d4bb-j5tml                                 1/1     Running   0          114m
knative-eventing                                        imc-controller-69c54bfdc8-pzxlr                                   1/1     Running   0          113m
knative-eventing                                        imc-dispatcher-94bc9f6b6-c2kfl                                    1/1     Running   0          113m
knative-eventing                                        sources-controller-5c6df78ffb-g9qhj                               1/1     Running   0          114m
knative-serving                                         activator-6c94b9ff47-9xwlp                                        1/1     Running   0          106m
knative-serving                                         autoscaler-64c549bcf4-lhfrx                                       1/1     Running   0          106m
knative-serving                                         controller-564487997f-mh2l2                                       1/1     Running   0          106m
knative-serving                                         knative-openshift-ingress-c7fd864df-7k6xw                         1/1     Running   0          106m
knative-serving                                         networking-certmanager-8466b656dc-ktf9d                           1/1     Running   0          106m
knative-serving                                         networking-istio-59f5c789d-9j69g                                  1/1     Running   0          106m
knative-serving                                         webhook-77ff9bd584-wwz4j                                          1/1     Running   0          106m
[...]
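
If the cluster monitoring stack is available (CRC disables monitoring by default to save memory, so this may not work out of the box), live per-pod usage can be sampled as well:

$ oc adm top pods -A    # current CPU/memory consumption per pod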

Try to deploy a sample Knative service and check that it responds.

$ cat service.yaml 
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Go Sample v1"

$ oc apply -f service.yaml
$ oc get ksvc helloworld-go
NAME            URL                                             LATESTCREATED         LATESTREADY           READY   REASON
helloworld-go   http://helloworld-go.default.apps-crc.testing   helloworld-go-lk6g9   helloworld-go-lk6g9   True    

$ curl http://helloworld-go.default.apps-crc.testing
Hello Go Sample v1!
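
A footprint-relevant detail: Knative scales the revision back down to zero after its idle window (about a minute by default), so the sample pod only consumes resources while serving traffic. This can be observed with a watch:

$ oc get pods -n default -w    # the helloworld-go-... pod terminates shortly after traffic stops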

$ oc describe node
[...]
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests          Limits
  --------           --------          ------
  cpu                3135m (89%)       4700m (134%)
  memory             6435496192 (87%)  5140890112 (70%)
  ephemeral-storage  0 (0%)            0 (0%)
Events:
  Type    Reason                   Age                 From                         Message
  ----    ------                   ----                ----                         -------
  Normal  NodeHasSufficientMemory  49m (x37 over 20h)  kubelet, crc-rk2fc-master-0  Node crc-rk2fc-master-0 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    49m (x37 over 20h)  kubelet, crc-rk2fc-master-0  Node crc-rk2fc-master-0 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     49m (x37 over 20h)  kubelet, crc-rk2fc-master-0  Node crc-rk2fc-master-0 status is now: NodeHasSufficientPID
  Normal  NodeReady                49m (x5 over 20h)   kubelet, crc-rk2fc-master-0  Node crc-rk2fc-master-0 status is now: NodeReady
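
To attribute those requests to individual pods without any metrics stack, a custom-columns query over the pod specs works (a sketch; the column names are arbitrary, and containers without explicit requests show <none>):

$ oc get pods -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,MEM_REQ:.spec.containers[*].resources.requests.memory'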

Check the load on the CRC VM (the shell below was likely entered via oc debug node/crc-rk2fc-master-0, then chrooted into the host).

sh-4.2# chroot /host 
sh-4.4# free -h
              total        used        free      shared  buff/cache   available
Mem:          7.4Gi       4.3Gi       885Mi       248Mi       2.3Gi       3.4Gi
Swap:            0B          0B          0B

sh-4.4# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda3        30G   19G   12G  62% /
tmpfs           3.8G     0  3.8G   0% /sys/fs/cgroup
devtmpfs        3.7G     0  3.7G   0% /dev
tmpfs           3.8G   84K  3.8G   1% /dev/shm
tmpfs           3.8G  8.6M  3.8G   1% /run
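
To see which host processes account for the used memory, sort by resident set size from the same chroot (kubelet, etcd, and the kube-apiserver typically dominate on a single-node master):

sh-4.4# ps aux --sort=-rss | head -n 10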
