First create a working service:
apiVersion: v1
kind: Service
metadata:
  name: echoheaders
  labels:
    app: echoheaders
spec:
  # type: NodePort
  selector:
    app: echoheaders
  ports:
  - port: 80
    targetPort: 8080  # assumes the echoserver pods below listen on 8080
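The service needs pods behind it, which these notes don't show. A minimal backing deployment sketch, assuming the stock echoserver test image (which listens on 8080); the image tag and replica count are placeholders:
# Hypothetical backing pods for the echoheaders service above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoheaders
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoheaders
  template:
    metadata:
      labels:
        app: echoheaders
    spec:
      containers:
      - name: echoheaders
        # echoserver echoes request headers back; 1.4 is an assumed tag
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080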
Zookeeper:
# A headless service to create DNS records
apiVersion: v1
kind: Service
metadata:
  name: zk
  labels:
    app: zookeeper
spec:
  clusterIP: None
  selector:
    app: zookeeper
  ports:
  - port: 2888        # standard ZooKeeper peer port
    name: peer
  - port: 3888        # standard leader-election port
    name: leader-election
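Peers find each other through the headless service; clients normally go through an ordinary service on the standard client port 2181. A minimal sketch of that client-facing service (the zk-client name is an assumption; the selector matches the labels above):
apiVersion: v1
kind: Service
metadata:
  name: zk-client
  labels:
    app: zookeeper
spec:
  selector:
    app: zookeeper
  ports:
  - port: 2181
    name: client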
Generating a root CA (the start of the script referenced below):
#!/bin/bash
mkdir ~/SSLCA/root/
cd ~/SSLCA/root/
openssl genrsa -aes256 -out rootca.key 2048
openssl req -sha256 -new -x509 -days 1826 -key rootca.key -out rootca.crt
touch certindex
echo 1000 > certserial
echo 1000 > crlnumber
echo '
[ ca ]
Run https://gist.github.com/bprashanth/d79b9810dea8b07a7bb1ccf467be5b66 (the result of some googling and fiddling with how to generate intermediates with openssl, so don't take it as an authoritative guide). That script creates 3 CSRs: one for the root, one for an intermediate, and the last for the end user. You probably don't care about most of the fields of the CSR except "Common Name", e.g.:
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
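If the end-user cert is meant for TLS termination inside the cluster (an assumption; the notes above only cover generating the chain), the usual hand-off is a kubernetes.io/tls Secret whose values are base64-encoded PEM. The name and the data values below are placeholders:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls   # hypothetical name
type: kubernetes.io/tls
data:
  # base64 of the leaf cert followed by the intermediate (placeholder value)
  tls.crt: LS0tLS1CRUdJTi4uLg==
  # base64 of the leaf's private key (placeholder value)
  tls.key: LS0tLS1CRUdJTi4uLg==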
Create a backend service that simply serves the pod name, and a frontend haproxy instance that balances based on client cookies.
# This is the backend service
apiVersion: v1
kind: Service
metadata:
  name: hostname
  annotations:
    # Enable stickiness on "SERVERID"
    serviceloadbalancer/lb.cookie-sticky-session: "true"
spec:
  selector:
    app: hostname
  ports:
  - port: 80
    targetPort: 9376  # assumes the serve_hostname pods below
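The pods behind it just have to answer with their own name. A minimal sketch assuming the stock serve_hostname test image, which serves on 9376 (the image tag is an assumption); three replicas so the cookie actually has several backends to pin a client to:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostname
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hostname
  template:
    metadata:
      labels:
        app: hostname
    spec:
      containers:
      - name: hostname
        # serve_hostname replies with the pod name on :9376
        image: gcr.io/google_containers/serve_hostname:1.1
        ports:
        - containerPort: 9376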
MongoDB is a document database that supports range and field queries.
A single server can run either standalone or as part of a replica set. A "replica set" is a set of mongod instances with one primary. The primary receives writes and serves reads, and can step down to become a secondary. Secondaries replicate the primary's oplog; if the primary goes down, the secondaries hold an election. Arbiters are used to reach a majority vote with an even number of members; they hold no data, don't need dedicated nodes, and never become primary.
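On Kubernetes the same pattern as the zk service above applies: a headless service gives every mongod a stable DNS name to put in the replica set config. A minimal sketch, assuming pods labeled app: mongo and the default mongod port:
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  clusterIP: None   # headless: one DNS record per mongod pod
  selector:
    app: mongo
  ports:
  - port: 27017
    name: mongo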
A LoadBalancer service that only routes to endpoints on the receiving node:
apiVersion: v1
kind: Service
metadata:
  name: echoheaders-lb
  annotations:
    service.alpha.kubernetes.io/only-node-local-endpoints: "true"
  labels:
    app: echoheaders-lb
spec:
  type: LoadBalancer
  selector:
    app: echoheaders  # assumed to target the echoheaders pods above
  ports:
  - port: 80
    targetPort: 8080
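On current Kubernetes the alpha annotation above has been superseded by a first-class field; the node-local behaviour is expressed with externalTrafficPolicy: Local, which also preserves the client source IP. A sketch of the equivalent service, same assumptions as above:
apiVersion: v1
kind: Service
metadata:
  name: echoheaders-lb
  labels:
    app: echoheaders-lb
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # only route to endpoints on the receiving node
  selector:
    app: echoheaders
  ports:
  - port: 80
    targetPort: 8080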
First make your service type=NodePort (like the echoheaders service below).
Then create an unmanaged instance group (in the UI at console.cloud.google.com, or with gcloud) with some pool of instances from one of your zones:
gcloud compute --project $PROJECT instance-groups unmanaged create $K8S_IG
gcloud compute --project $PROJECT instance-groups unmanaged add-instances $K8S_IG --instances $NODE,$NODE_1...
Add the Service NodePort to the InstanceGroup:
gcloud compute --project $PROJECT instance-groups set-named-ports $K8S_IG --named-ports svc1:$SVC1_NODE_PORT
apiVersion: v1
kind: Service
metadata:
  name: echoheaders
  labels:
    app: echoheaders
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080  # assumes the echoserver image
  selector:
    app: echoheaders