How to recover a broken ensemble member using the Fabric quorum
created by James Rawlings on Aug 21, 2014 10:08 AM, last modified by James Rawlings on Aug 21, 2014 10:08 AM Version 1
If you have a quorum in place, then in the event of losing a member you can restore the health of the ensemble using the following steps.
Assuming there are two working ensemble members, you can unzip a fresh distribution of JBoss Fuse 6.1 and join the existing ensemble. You can try this out locally by setting up the ensemble using vagrant images and the steps found in "How to create a fabric ensemble from CLI".
1. Ensure there are no orphaned Karaf java processes left over from the old broken ensemble member
2. Delete the existing working folder if reusing the original server
bash-3.2$ rm -rf jboss-fuse-6.1.0.redhat-379
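The two steps above can be dry-run as a short script. The distribution folder name is taken from the command above; the `pgrep` pattern and the `mkdir` stand-in are assumptions added so the sketch is self-contained:

```shell
#!/bin/sh
# Step 1: look for orphaned Karaf processes (the pattern "karaf" is an assumption)
if pgrep -f karaf > /dev/null; then
  echo "orphaned Karaf process still running - stop it before continuing"
fi

# Step 2: remove the old working folder before unzipping a fresh distribution
DIST=jboss-fuse-6.1.0.redhat-379
mkdir -p "$DIST"   # stand-in for the old install so this sketch is runnable anywhere
rm -rf "$DIST"
[ ! -d "$DIST" ] && echo "old install removed"
```

After this, unzipping a fresh JBoss Fuse 6.1 distribution into the same location gives a clean member ready to rejoin the ensemble.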
How to create a fabric ensemble from CLI
created by James Rawlings on Aug 21, 2014 9:04 AM, last modified by James Rawlings on Aug 21, 2014 9:49 AM Version 4
There are a number of options when creating a fabric ensemble. Below is one way that we have used for many test and production setups. This example sets up a three-server fabric ensemble using JBoss Fuse 6.1, the vagrant images from fabric8-devops/vagrant at master · fabric8io/fabric8-devops · GitHub, and remote containers created with fabric:container-create-ssh (see Red Hat JBoss Fuse - Console Reference).
Make sure all the prerequisites are installed, including SSH keys (if used) for bidirectional SSH between all nodes and fabric ensemble servers, and that the required ports are open (details in a followup post). This is not needed if you are using the vagrant images above.
sudo yum -y install java-1.7.0-openjdk-devel.x86_64
sudo yum -y install telnet
sudo yum -y install unzip
docker kill $(docker ps -q)
docker rm $(docker ps -a -q)
$(docker run sequenceiq/socat)
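The last line works because `$( … )` substitutes the container's stdout back into the command line, which the shell then executes (the sequenceiq/socat image prints the command it wants run). The same mechanism in pure shell, with no Docker dependency:

```shell
#!/bin/sh
# The inner command prints "echo hello from substitution";
# the outer $(...) substitutes that text, which the shell then runs.
$(echo echo hello from substitution)
```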
rawlingsj / gist:6194875d7d26be82ada2
Created April 30, 2015 08:26
nuke_openshift.sh
osc get replicationControllers | awk '{print $1}' | grep -v CONTROLLER | xargs -n 1 openshift kube resize --replicas=0 rc
osc get services | awk '{print $1}' | grep -v NAME | xargs -n 1 osc delete service
osc get pods | awk '{print $1}' | grep -v POD | xargs -n 1 osc delete pod
osc get replicationControllers | awk '{print $1}' | grep -v CONTROLLER | xargs -n 1 osc delete replicationController
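Each line above follows the same pattern: list the resources, take the first column with awk, drop the header row with `grep -v`, and hand each name to a command via `xargs -n 1`. The pattern can be exercised with synthetic input (the pod names here are made up, and `echo` stands in for the delete command):

```shell
#!/bin/sh
# Simulated "osc get pods" output: a header row plus two pods.
printf 'POD STATUS\nweb-1 Running\nweb-2 Pending\n' \
  | awk '{print $1}' \
  | grep -v POD \
  | xargs -n 1 echo would-delete
```

This prints one `would-delete <name>` line per pod, confirming the header row never reaches the delete command.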
{
  "id": "hello-openshift",
  "kind": "Pod",
  "apiVersion": "v1beta2",
  "labels": {
    "name": "hello-openshift"
  },
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
{
  "apiVersion": "v1beta3",
  "kind": "Service",
  "metadata": {
    "name": "hello-openshift"
  },
  "spec": {
    "ports": [
      {
        "name": "hello-openshift",
rawlingsj / gist:09b22ca89ae0a80a01b9
Created June 2, 2015 11:52
web-registry-service.json
{
  "apiVersion": "v1beta3",
  "kind": "Service",
  "metadata": {
    "name": "registry"
  },
  "spec": {
    "ports": [
      {
        "name": "registry",
#!/bin/bash
echo "This script will perform a docker pull for all images in a fabric8 app. For a list of apps visit http://repo1.maven.org/maven2/io/fabric8/apps"
echo "Enter the fabric8 app name:"
read APP
echo "Enter release version:"
read VERSION
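Because the script takes its inputs with `read`, it can also be driven non-interactively by piping the answers in, one per line. A standalone illustration of the mechanism (the app name and version here are made-up examples, and `echo` stands in for the pull step):

```shell
#!/bin/sh
# Answers piped on stdin are consumed by successive `read` calls,
# so the prompts need no terminal.
printf 'base\n2.2.16\n' | {
  read APP
  read VERSION
  echo "would pull images for $APP version $VERSION"
}
```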
oc delete rc -l provider=fabric8
oc delete services -l provider=fabric8
oc delete templates --all
oc delete service elasticsearch-cluster
oc delete service letschat
oc delete service sonarqube
oc delete service taiga
oc delete rc letschat
oc delete rc sonarqube
oc delete rc taiga
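The per-application service and rc deletes above can be collapsed into a loop. Shown here with `echo` as a dry run so nothing is actually deleted; drop the `echo` to run the real commands against a cluster:

```shell
#!/bin/sh
# Dry run: print the oc commands instead of executing them.
for app in letschat sonarqube taiga; do
  echo oc delete service "$app"
  echo oc delete rc "$app"
done
```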