View gist:c8dc7aed04d75e70e5dc6c1b14857087
REQUEST:
[POST] https://openwhisk-mixky.127.0.0.1.nip.io/api/v1/namespaces/whisk.system/actions/utils/echo?blocking=true&result=false
Req Headers
{
"Authorization": [
"Basic xxxxxxxxxxxxxxxxxx="
],
"Content-Type": [
"application/json"
],
View template.yaml
apiVersion: v1
kind: Template
metadata:
name: ansible-service-broker
objects:
- apiVersion: v1
kind: Service
metadata:
name: asb
labels:
View ASB_failure.md
➜  SETUP_FOLDER ./run_latest_build.sh 

Error message:

error: server took too long to respond with version information.
Starting OpenShift using docker.io/openshift/origin:latest ...
-- Checking OpenShift client ... OK
View JUG_MS.md

Open Service Broker API with the Ansible Service Broker - Simple Service Provisioning for Kubernetes and OpenShift

Provisioning services within an organization can be very time-consuming. Cloud providers do simplify the process, but each provider implements it differently.

The Open Service Broker API is a vendor-independent standard that unifies service provisioning on cloud-native platforms. Services are exposed through a Service Catalog, with which the application developer interacts to perform operations such as provisioning or binding. The talk introduces the Ansible Service Broker and shows how custom services can be published to the Kubernetes/OpenShift Service Catalog using APBs (Ansible Playbook Bundles).

View log_from_mvn.txt
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/matzew/Work/data-streaming/SIMPLE_POC/raw-strimzi-ocp-app/src/main/resources
[INFO]
[INFO] --- fabric8-maven-plugin:3.5.30:resource (default) @ raw-app ---
[INFO] F8: Running in OpenShift mode
[INFO] F8: Using docker image name of namespace: balh
[INFO] F8: Running generator java-exec
[INFO] F8: java-exec: Using Docker image fabric8/java-jboss-openjdk8-jdk:1.2 as base / builder
[INFO] F8: fmp-controller: Adding a default Deployment
[INFO] F8: fmp-service: Adding a default service 'raw-app' with ports [8080]
View LittleHack.md

Reading from an AMQP 1.0 queue (Apache QPid CPP) with Apache Beam, performing some processing, then writing out to Apache Kafka

        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation()
                .as(PipelineOptions.class);

        Pipeline pipeline = Pipeline.create(options);

        PCollection<Message> output = pipeline.apply(AmqpIO.read()
                .withAddresses(Collections.singletonList("amqp://172.17.0.2:5672/myQueue")));
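The gist preview stops after the AMQP read. A hedged sketch of the remaining steps (the broker address, topic name, and body-to-string conversion below are assumptions, not taken from the gist) could look like this, using Beam's `MapElements` and `KafkaIO`:

```java
// Continuation sketch: convert each Proton Message body to a String,
// then write the values to a Kafka topic. Requires these imports:
//   org.apache.beam.sdk.transforms.MapElements
//   org.apache.beam.sdk.values.TypeDescriptors
//   org.apache.beam.sdk.io.kafka.KafkaIO
//   org.apache.qpid.proton.amqp.messaging.AmqpValue
//   org.apache.kafka.common.serialization.StringSerializer
PCollection<String> bodies = output.apply(MapElements
        .into(TypeDescriptors.strings())
        .via((Message m) -> ((AmqpValue) m.getBody()).getValue().toString()));

bodies.apply(KafkaIO.<Void, String>write()
        .withBootstrapServers("172.17.0.3:9092")   // assumed Kafka broker address
        .withTopic("myTopic")                      // assumed topic name
        .withValueSerializer(StringSerializer.class)
        .values());                                // write values only, no keys

pipeline.run().waitUntilFinish();
```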
View Push_Sending_Details.md

The HTTP REST API receives a 'push request'; this gets processed in PushNotificationSenderEndpoint.send(), which eventually calls an async EJB (NotificationRouter.submit()) that performs a grouping/mapping and fires a CDI event per Variant.

This event is received in the MessageHolderWithVariantsProducer.queueMessageVariantForProcessing() method, which basically wraps the submitted event in a transactional JMS send; based on the variant type a different queue is selected (mainly to keep things separated). The JMS listener MessageHolderWithVariantsConsumer.onMessage() reads from these queues and fires a different CDI event, which kicks in the TokenLoader.
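The per-variant-type queue selection can be sketched as a simple mapping; the enum values and queue names below are made up for illustration and are not the actual UnifiedPush identifiers:

```java
// Minimal sketch (not actual UnifiedPush code) of routing a message to a
// queue chosen by its variant type.
public class QueueRouter {

    // Hypothetical variant types; the real project has more.
    enum VariantType { ANDROID, IOS, WEB_PUSH }

    // Map each variant type to its (hypothetical) JMS queue name.
    static String queueFor(VariantType type) {
        switch (type) {
            case ANDROID: return "AndroidPushMessageQueue";
            case IOS:     return "iOSPushMessageQueue";
            default:      return "WebPushMessageQueue";
        }
    }

    public static void main(String[] args) {
        System.out.println(queueFor(VariantType.ANDROID)); // AndroidPushMessageQueue
    }
}
```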

The TokenLoader generally iterates over all variants (for a given type; see the grouping/mapping done in NotificationRouter) and starts to query tokens from the database, as a stream.

The tokens are loaded in batches, with different default sizes per push network, to not overflow the push network (e.g. Google (non-topic case) only allows 1
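The batching idea described above can be sketched with plain Java; the batch size and the `TokenBatcher` type are illustrative, not UnifiedPush defaults:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of splitting a token stream into fixed-size batches,
// so a push network is never sent more tokens per request than it allows.
public class TokenBatcher {

    // Split the token list into batches of at most batchSize entries.
    static List<List<String>> batch(List<String> tokens, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < tokens.size(); i += batchSize) {
            batches.add(tokens.subList(i, Math.min(i + batchSize, tokens.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> tokens = List.of("t1", "t2", "t3", "t4", "t5");
        // e.g. a (hypothetical) limit of 2 tokens per request
        System.out.println(batch(tokens, 2)); // [[t1, t2], [t3, t4], [t5]]
    }
}
```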

View gist:5ec87627fcceb7ec2f0ec22df75b4804
<datasource jndi-name="java:jboss/datasources/UnifiedPushDS" enabled="true" use-java-context="true" pool-name="UnifiedPushDS">
<connection-url>jdbc:postgresql://${env.POSTGRES_SERVICE_HOST}:${env.POSTGRES_SERVICE_PORT}/${env.POSTGRES_DATABASE}</connection-url>
<driver>postgresql</driver>
<security>
<user-name>${env.POSTGRES_USER}</user-name>
<password>${env.POSTGRES_PASSWORD}</password>
</security>
<validation>
<check-valid-connection-sql>SELECT 1</check-valid-connection-sql>
<background-validation>true</background-validation>
View error.md

fatal: [localhost]: FAILED! => {"changed": false, "cmd": "oc cluster up --service-catalog=true --host-config-dir=/home/matzew/Work/aerogear/mobile-core/ui --host-data-dir=/home/matzew/Work/aerogear/mobile-core/ui/openshift-data --host-pv-dir=/home/matzew/Work/aerogear/mobile-core/ui/openshift-pvs --host-volumes-dir=/home/matzew/Work/aerogear/mobile-core/ui/openshift-volumes --routing-suffix=192.168.37.1.nip.io --public-hostname=192.168.37.1 --version=v3.7.2 --image=openshift/origin", "delta": "0:00:15.418183", "end": "2018-04-04 10:02:56.488580", "msg": "non-zero return code", "rc": 1, "start": "2018-04-04 10:02:41.070397", "stderr": "FAIL\n Error: could not reconcile service catalog cluster role system:openshift:service-catalog:aggregate-to-admin\n Caused By:\n Error: the server could not find the requested resource", "stderr_lines": ["FAIL", " Error: could not reconcile service catalog cluster role system:openshift:service-catalog:aggregate-to-admin", " Caused By:", " Error: the server could

View gist:23193e6ca991896a4d5c29a77d80444b
alias apb='docker run --rm --privileged -v $PWD:/mnt -v $HOME/.kube:/.kube -v /var/run/docker.sock:/var/run/docker.sock -u $UID docker.io/ansibleplaybookbundle/apb-tools:canary'