Jay Boyd jboyd01

  • HCL
  • Wilmington, NC
jboyd01 / a.1 Overview
Last active October 20, 2017 19:45
exercising Service Catalog within OpenShift
You can validate Service Catalog using a couple of different approaches. The Service Catalog E2E suite is one (see the instructions
for syncing Service Catalog to OpenShift). Additionally, you can use the OpenShift web console or the CLI to add
applications exposed by the Template Service Broker or Ansible Service Broker. Note that some templates create applications
directly and some go through the Service Catalog: Python and HTTPD don't use Service Catalog, while most templates with "persistent"
in the name do. For those that don't, no ServiceInstance, ServiceClass, or ServiceBinding is created. Once you add applications
to your project, you should be able to see them via the CLI (oc get serviceinstances --all-namespaces).
In the OpenShift web console (https://127.0.0.1:8443), log in as Developer (any password works), select MyProject (or any
other project), then "Add to Project" and "Browse Catalog". I believe the Ansible Service Broker offerings have (APB) in the
name.
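The CLI check above can be scripted; here is a small sketch (the helper name is mine, not part of the gist) that reports whether a template actually produced Service Catalog resources:

```shell
# count_resources: count rows of `oc get` tabular output minus the header
# line, so 0 means the template bypassed Service Catalog entirely.
count_resources() {
  awk 'NR > 1 && NF > 0 { n++ } END { print n + 0 }'
}

# On a live cluster (assumed reachable):
#   oc get serviceinstances --all-namespaces | count_resources
```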
jboyd01 / 0. Using Prometheus metrics with Service Catalog
Last active December 4, 2017 17:03
Using Prometheus metrics with Service Catalog
We want to expose metrics from Service Catalog via Prometheus to enable monitoring, track key metrics, and provide the ability
to alert on specific conditions.
Prometheus provides a client API that enables you to register an HTTP handler (i.e., /metrics) that automatically exposes
Prometheus metrics objects. Many core components within Kubernetes already do this, including cAdvisor, the Kubelet,
the Scheduler, the Proxy, and many more. The Prometheus server can easily be configured with scrape configurations that
poll the /metrics endpoints and provide a centralized UI for discovery and analysis. Advanced analytics and graphing tools
such as Grafana can consume Prometheus data and are often used to augment monitoring.
Kubernetes API servers expose Prometheus metrics out of the box; Prometheus must be
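A minimal Prometheus scrape configuration for such an endpoint might look like the following sketch (the job name and target address are assumptions, not taken from the actual deployment):

```yaml
scrape_configs:
- job_name: service-catalog-controller-manager   # hypothetical job name
  metrics_path: /metrics
  static_configs:
  - targets:
    - controller-manager.kube-service-catalog.svc:8080   # assumed address
```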
jboyd01 / error
Created January 9, 2018 22:09
ansible installer error
my configuration specified:
openshift_version: latest
openshift_image_tag: latest
openshift_pkg_version: "3.6.0"
I checked out release-3.6.0 of the openshift-ansible repository prior to running.
I ran $ ansible-playbook /home/jaboyd/go/src/github.com/openshift-ansible/playbooks/byo/config.yml \
-i hosts \
-e @svc-cat-install
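If the failure is a version mismatch, mixing "latest" tags with a pinned package version is a common cause; a sketch of consistent pinning for a 3.6 install (values are assumptions, adjust to the exact release you need):

```yaml
# Pin all three variables to the same release instead of mixing "latest"
# with a fixed version. Note that openshift_pkg_version is conventionally
# prefixed with a hyphen in openshift-ansible inventories.
openshift_version: "3.6.0"
openshift_image_tag: "v3.6.0"
openshift_pkg_version: "-3.6.0"
```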
jboyd01 / gist:3b69bf8d09e074ed2ddc8b83faffebc4
Last active August 6, 2018 21:24
OSBAPI Feature Proposal: Validating WebHooks for Create/Update/Delete of Instances & Bindings
Purpose
Allow brokers to register callbacks (webhooks) that can be used for validation prior to the platform attempting to Create, Update or Delete (CUD) a Service Instance or Binding.
There will likely be a lot of discussion on the actual implementation details; initially this proposal will just focus on surfacing the issue and proposing the use of pre-action validation so Brokers have an opportunity to indicate to the platform that an action will or will not be accepted for processing. Once the SIG has discussed it and given general agreement, we'll drill into a detailed design.
This feature allows a broker to register webhooks for precheck validation of Instances and Bindings. That is, if indicated by the broker, the Platform will invoke a validating webhook just prior to invoking the actual call to create, update, or delete an Instance or Binding. The webhook will be invoked with the same parameters and payload as the actual create/update/delete operation, but this operation is a dry run for Broker validation.
Spec:
  Cluster Service Class External Name:  example-starter-pack-service
  Cluster Service Class Ref:
    Name:  4f6e6cf6-ffdd-425f-a2c7-3c9258ad246a
  Cluster Service Plan External Name:  default
  Cluster Service Plan Ref:
    Name:  86064792-7ea2-467b-af93-ac9694d96d5b
  External ID:  20da0666-a570-11e8-97f3-0242ac110003
jboyd01 / gist:1aa48ab55276748577f28434ff2dc38c
Last active August 24, 2018 13:18
missing kubernetes events
I've got a BZ issue that indicates events are sometimes missing on service
instances. I've been able to replicate it. Generally you see all events, but
sometimes it seems the most recent events are not displayed. For
instance, in my controller log I pull out all occurrences of "event.go", which
shows the events that were set on (in this case) the instance. The instance was
in an error condition for a bit (set to an invalid plan), but then the last
action I took updated the plan to a valid plan:
21:41:58.603293 1 event.go:221] Event(v1.ObjectReference{Kind:"ServiceInstance", Namespace:"default", Name:"myservice2",...... ResourceVersion:"126", FieldPath:""}): type: 'Warning' reason: 'ReferencesNonexistentServicePlan' References a non-existent ClusterServicePlan {ClusterServicePlanExternalName:"defaultXXX"} on ClusterServiceClass 4f6e6cf6-ffdd-425f-a2c7-3c9258ad246a {ClusterServiceClassExternalName:"example-starter-pack-service"} or there is more than one (found: 0)
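The log filtering described above can be sketched as a small helper (the function name and file path are mine; on a live cluster the log would come from `oc logs` against the controller-manager pod):

```shell
# events_for NAME LOGFILE: print the event-recorder ("event.go") lines for
# one object from a saved controller-manager log, so you can compare what
# was recorded against what `oc describe` displays.
events_for() {
  grep 'event.go' "$2" | grep "Name:\"$1\""
}

# Example usage against a saved log:
#   oc logs controller-manager-xyz -n kube-service-catalog > cm.log
#   events_for myservice2 cm.log
```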
jboyd01 / gist:3de94cfeecb65b8591fdb22c113653ed
Created December 17, 2018 19:10
cannot set blockOwnerDeletion
cat <<'EOF' | oc create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: add-servicebindingfinalizers
rules:
- apiGroups:
  - servicecatalog.k8s.io
  resources:
  - servicebindings/finalizers
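The gist is cut off before the verbs. A minimal complete version might look like the following sketch, assuming the goal is to permit setting blockOwnerDeletion on owner references that point at ServiceBindings (Kubernetes enforces this by requiring "update" permission on the owner's finalizers subresource); the verbs are my completion, not from the gist:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: add-servicebindingfinalizers
rules:
- apiGroups:
  - servicecatalog.k8s.io
  resources:
  - servicebindings/finalizers
  verbs:
  - update
```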
jboyd01 / gather-catalog-logs.sh
Created December 20, 2018 17:31
script to pull all service catalog pod logs
#!/bin/bash
# Collect logs from every Service Catalog pod into /tmp/artifacts.
mkdir -p /tmp/artifacts
for p in $(oc get pods -n kube-service-catalog -o name -l app=apiserver); do
  pod="${p#pod/}"   # strip the "pod/" prefix from `-o name` output
  oc logs "$pod" -c apiserver -n kube-service-catalog > "/tmp/artifacts/$pod.log"
done
for p in $(oc get pods -n kube-service-catalog -o name -l app=controller-manager); do
  pod="${p#pod/}"
  oc logs "$pod" -n kube-service-catalog > "/tmp/artifacts/$pod.log"
done
jboyd01 / svcat-rbac.yaml
Last active January 30, 2019 17:51
Jan 25 2019 patch for missing rbac rules for Service Catalog - oc create -f svcat-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:service-catalog:aggregate-to-admin
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups:
  - "servicecatalog.k8s.io"
  attributeRestrictions: null
jboyd01 / enablement.md
Last active February 25, 2019 15:47
How to enable Service Catalog in OpenShift 4.0

As of Feb 24, Service Catalog is installed by two new cluster operators. It is not enabled by default; to enable it, the cluster admin must create two custom resources as follows:

cat <<EOF | oc create -f -
apiVersion: operator.openshift.io/v1
kind: ServiceCatalogAPIServer
metadata:
  name: cluster
spec:
  logLevel: "Normal"
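The gist is truncated after the first resource. Given the two operators mentioned above, the companion custom resource is presumably the controller-manager one; a sketch mirroring the API server resource (the spec fields here are assumptions, not copied from the gist):

```yaml
apiVersion: operator.openshift.io/v1
kind: ServiceCatalogControllerManager
metadata:
  name: cluster
spec:
  logLevel: "Normal"
  managementState: Managed
```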