Here are the high-level steps to use Operator in an airgap fashion:

- Use the BOM Downloader (shell script, below) to download all the regular BOM contents. This will do the following (using HTTPS):
  - Create a directory (in the current working directory) called `halconfig`
  - Get (download and add) `halconfig/versions.yml`, which is the list of all "GA" versions (latest patch for each minor release, e.g., latest 1.19, latest 1.18, etc.)
  - For each GA version, do the following:
    - Get `halconfig/bom/<version>.yml`, which lists all container versions
    - For each container version, get the service version. For example, `clouddriver:6.5.6-40c9a8c-6a11468-rc30` would have a service version of `6.5.6`; `clouddriver:2.19.8` would have a service version of `2.19.8`
    - Get all relevant profiles for the service version (all files in `halconfig/profiles/<servicename>/<serviceversion>`). For example, `halconfig/profiles/clouddriver/6.5.6/` has these files:
      - clouddriver-bootstrap.yml
      - clouddriver-caching.yml
      - clouddriver-ro-deck.yml
      - clouddriver-ro.yml
      - clouddriver-rw.yml
      - clouddriver.yml
    - Generate a list of images for the version (e.g., `2.18.1-images`)
  - You'll end up with a `halconfig` directory that needs to be updated and re-hosted on a local Minio/S3 bucket. You probably wanna tar this up to move it around: `tar -czvf halconfig.tgz halconfig/`
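The "service version" is just the container tag with the build suffix stripped; the downloader scripts below do this with `awk -F'-'`. A minimal sketch (the `service_version` helper name is just for illustration):

```shell
#!/bin/bash
# Strip the build suffix from a container tag to get the service version
# (same awk invocation the downloader scripts below use).
service_version() {
  echo "$1" | awk -F'-' '{print $1}'
}

service_version "6.5.6-40c9a8c-6a11468-rc30"   # -> 6.5.6
service_version "2.19.8"                       # -> 2.19.8
```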
- Start a Minio instance near where Operator is gonna run
  - It must be accessible by Halyard. For example, if you're running it in Kubernetes, in the namespace `spinnaker` with a service name of `minio`, then you can (probably) access it via `http://minio.spinnaker:9000`
  - You must know the `AWS_SECRET_ACCESS_KEY` and `AWS_ACCESS_KEY_ID`, which are basically the username/password for Minio
- Take the tarball and move it somewhere accessible where you can upload it to Minio
- Extract the tarball: `tar -xzvf halconfig.tgz`
- Modify `halconfig/bom/2.19.8.yml` (or whatever version you're working with) so that the `dockerRegistry` points to your Docker registry. For example:
  - If your images are as follows:
    - `armory/deck:x` rehosted at `mydockerregistry.com/armory-images/deck`
    - `armory/gate:x` rehosted at `mydockerregistry.com/armory-images/gate`
    - `armory/orca:x` rehosted at `mydockerregistry.com/armory-images/orca`
    - etc.
  - Then you want `dockerRegistry: mydockerregistry.com/armory-images` instead of the default `docker.io/armory`
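One way to make that swap in one shot (a sketch; it assumes the BOM's default registry string is `docker.io/armory` and GNU `sed -i` syntax, and the `set_registry` helper name is just for illustration):

```shell
#!/bin/bash
# Sketch: repoint a BOM's dockerRegistry at a rehosted registry.
# Assumes the default registry string in the BOM is docker.io/armory.
set_registry() {
  local bom_file=$1
  local new_registry=$2   # e.g. mydockerregistry.com/armory-images
  sed -i "s|docker.io/armory|${new_registry}|" "${bom_file}"
}

# Usage: set_registry halconfig/bom/2.19.8.yml mydockerregistry.com/armory-images
```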
- Install the AWS CLI, add your username/password to the env, and then make sure you can access Minio. You may have to port-forward to Minio (`kubectl -n spinnaker port-forward svc/minio 9000:9000`):

export AWS_ACCESS_KEY_ID=username
export AWS_SECRET_ACCESS_KEY=password
aws --endpoint-url http://localhost:9000 s3 ls
- Make a bucket: `aws --endpoint-url http://localhost:9000 s3 mb s3://halconfig`
- Upload your modified halconfig directory to the Minio bucket: `aws --endpoint-url http://localhost:9000 s3 cp --recursive halconfig s3://halconfig/`
- Install Operator in your Kubernetes cluster:
  - Download the manifests: `curl -LO https://github.com/armory-io/spinnaker-operator/releases/latest/download/manifests.tgz`
  - Extract them: `tar -xzvf manifests.tgz`
- Install or apply the CRDs (clusterwide): `kubectl apply -f deploy/crds`
- Decide whether you wanna use the basic or cluster installer. The rest of these instructions are for cluster, but the steps are the same.
- Go into `deploy/operator/cluster/deployment.yaml` and make these modifications:
  - `configmap.yaml` (update `http://minio.spinnaker:9000` to use the cluster-internal DNS endpoint for Minio):

apiVersion: v1
kind: ConfigMap
metadata:
  name: halyard-custom-config
  namespace: spinnaker
data:
  halyard.yml: |
    halyard:
      halconfig:
        directory: /home/spinnaker/.hal
    spinnaker:
      artifacts:
        debianRepository:
        dockerRegistry:
        googleImageProject:
      config:
        input:
          bucket: halconfig
          region: us-west-2
          endpoint: http://minio.spinnaker:9000
          enablePathStyleAccess: true
          anonymousAccess: false

  - `deployment.yaml` (full modified sample below):
    - Modify the two Docker images to your local versions of them
    - Add any necessary Docker registry secrets
    - In spec.template.spec (so, the **pod spec**, not the **container spec**), add the configMap as a volume (spacing of this should be correct):

volumes:
- name: halyard
  configMap:
    name: halyard-custom-config

    - In spec.template.spec.containers[1] (the **container spec for halyard**, not the pod spec), add a volumeMount (spacing of this should be correct):

volumeMounts:
- mountPath: /opt/spinnaker/config/halyard.yml
  subPath: halyard.yml
  name: halyard

    - Also in spec.template.spec.containers[1], add these environment variables (again, spacing should be correct - if it doesn't line up, you may be in the wrong section):

env:
- name: AWS_ACCESS_KEY_ID
  value: username
- name: AWS_SECRET_ACCESS_KEY
  value: justin123
- Apply all the manifests in `deploy/operator/cluster`
- Wait for the operator pod to start; it has two containers and needs around 1GB of memory
- Exec into the halyard container of the operator pod: `kubectl -n spinnaker-operator exec -it spinnaker-operator-xyz -c halyard bash`
- Run `hal version list` and `hal version bom 2.19.8` to verify you were successful
Here are the high-level steps to use Minnaker in an airgap fashion:

- Use the Airgap ("Air-gapped") Notes BOM downloader (shell script, below) or the script linked in the AirGapped docs (note the comments for a fix on line 34 using yq) to download all the regular BOM contents.
  - Prereqs: you need to have the AWS CLI and yq installed
  - Example usage with the script: `./bomdownloader.sh 2.20.3 halconfig`
- You'll end up with a `halconfig` directory that needs to be updated and re-hosted on a local Minio/S3 bucket. You probably want to tar this up to move it around: `tar -czvf halconfig.tgz halconfig/`
- Install k3s: `curl -sfL https://get.k3s.io | sh -`
- Start a Minio instance near where Halyard is gonna run
  - It must be accessible by Halyard. For example, if you're running it in Kubernetes, in the namespace `spinnaker` with a service name of `minio`, then you can (probably) access it via `http://minio.spinnaker:9000`
  - You must know the `AWS_SECRET_ACCESS_KEY` and `AWS_ACCESS_KEY_ID`, which are basically the username/password for Minio
- Take the tarball and move it to the VM, e.g. `scp halconfig.tgz ubuntu@ip-address:/home/ubuntu/`
- Extract the tarball: `tar -xzvf halconfig.tgz`
- Modify `halconfig/bom/2.19.8.yml` (or whatever version you're working with) so that the `dockerRegistry` points to your Docker registry. For example:
  - If your images are as follows:
    - `armory/deck:x` rehosted at `mydockerregistry.com/armory-images/deck`
    - `armory/gate:x` rehosted at `mydockerregistry.com/armory-images/gate`
    - `armory/orca:x` rehosted at `mydockerregistry.com/armory-images/orca`
    - etc.
  - Then you want `dockerRegistry: mydockerregistry.com/armory-images` instead of the default `docker.io/armory`
- Install the AWS CLI, add your username/password to the env, and then make sure you can access Minio:

export AWS_ACCESS_KEY_ID=username
export AWS_SECRET_ACCESS_KEY=password
aws --endpoint-url http://localhost:9000 s3 ls
- Make a bucket: `aws --endpoint-url http://localhost:9000 s3 mb s3://halconfig`
- Upload your modified halconfig directory to the Minio bucket: `aws --endpoint-url http://localhost:9000 s3 cp --recursive halconfig s3://halconfig/`
- Before deploying Halyard, we need to make the following changes:

1. Deploy `configmap.yaml` (update `http://minio.spinnaker:9000` to use the cluster-internal DNS endpoint for Minio):
apiVersion: v1
kind: ConfigMap
metadata:
name: halyard-custom-config
namespace: spinnaker
data:
halyard.yml: |
halyard:
halconfig:
directory: /home/spinnaker/.hal
spinnaker:
artifacts:
debianRepository:
dockerRegistry:
googleImageProject:
config:
input:
bucket: halconfig
region: us-west-2
endpoint: http://minio.spinnaker:9000
enablePathStyleAccess: true
anonymousAccess: false
2. `halyard.yaml` (see this link for a sample):
1. Modify the two Docker images to your local versions of them
2. Add any necessary docker registry secrets
3. In spec.template.spec (so, the **pod spec**, not the **container spec**), add the configMap as a volume (spacing of this should be correct):
volumes:
- name: halyard
configMap:
name: halyard-custom-config
4. In spec.template.spec.containers[1] (the **container spec for halyard**, not the pod spec), add a volumeMount (spacing of this should be correct):
volumeMounts:
- mountPath: /opt/spinnaker/config/halyard.yml
subPath: halyard.yml
name: halyard
5. Also in spec.template.spec.containers[1] (the **container spec for halyard**, not the pod spec), add these environment variables (again, spacing should be correct - if it doesn't line up, you may be in the wrong section):
env:
- name: AWS_ACCESS_KEY_ID
value: username
- name: AWS_SECRET_ACCESS_KEY
value: justin123
- Wait for the operator pod to start; it has two containers and needs around 1GB of memory
- Exec into the halyard container of the operator pod: `kubectl -n spinnaker-operator exec -it spinnaker-operator-xyz -c halyard bash`
- Run `hal version list` and `hal version bom 2.19.8` to verify you were successful
BOM Downloader (HTTPS, all GA versions):
mkdir -p halconfig/bom
echo "Getting list of versions..."
curl -s https://halconfig.s3-us-west-2.amazonaws.com/versions.yml -o halconfig/versions.yml
grep 'version:' halconfig/versions.yml | awk '{print $NF}' > versions
for SPINNAKER_VERSION in $(cat versions); do
echo "Getting BOM for version ${SPINNAKER_VERSION}..."
curl -s https://halconfig.s3-us-west-2.amazonaws.com/bom/${SPINNAKER_VERSION}.yml -o halconfig/bom/${SPINNAKER_VERSION}.yml
grep '^ ' halconfig/bom/${SPINNAKER_VERSION}.yml \
| grep -v commit \
| tr '\n' ',' \
| sed 's|:,| |g' \
| tr ',' '\n' \
| grep -v dockerRegistry \
| grep -v redis \
| grep -v '^$' \
> ${SPINNAKER_VERSION}-metadata
rm -f keys images
while read p; do
SVC=$(echo $p | awk '{print $1}')
VERSION=$(echo $p | awk '{print $3}')
MAJOR_VERSION=$(echo ${VERSION} | awk -F'-' '{print $1}')
echo "Getting list of keys for ${SVC} version ${MAJOR_VERSION}"
keys=$(curl -s "https://halconfig.s3-us-west-2.amazonaws.com/?list-type=2&prefix=profiles/${SVC}/${MAJOR_VERSION}" | tr '<' '\n' | tr '>' '\n' | grep profiles | grep "${MAJOR_VERSION}/")
echo ${keys}
for key in $keys;
do echo $key >> keys;
done
echo ${SVC}:${VERSION} >> ${SPINNAKER_VERSION}-images
done <${SPINNAKER_VERSION}-metadata
for key in $(cat keys); do
mkdir -p halconfig/$(dirname ${key})
echo "Downloading ${key}"
curl -s https://halconfig.s3-us-west-2.amazonaws.com/${key} -o halconfig/${key}
done
done
apiVersion: apps/v1
kind: Deployment
metadata:
name: spinnaker-operator
spec:
replicas: 1
selector:
matchLabels:
name: spinnaker-operator
template:
metadata:
labels:
name: spinnaker-operator
spec:
serviceAccountName: spinnaker-operator
# This can be anywhere within the spec.template.spec block
volumes:
- name: halyard
configMap:
name: halyard-custom-config
containers:
- name: spinnaker-operator
image: mydockerregistry.com/armory-images/armory-operator:0.4.0
command:
- spinnaker-operator
imagePullPolicy: IfNotPresent
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: OPERATOR_NAME
value: "spinnaker-operator"
- name: halyard
image: mydockerregistry.com/armory-images/halyard-armory:operator-0.4.x
imagePullPolicy: IfNotPresent
# This can be anywhere within the spec.template.spec.containers[1] block
# should line up with "image"
env:
- name: AWS_ACCESS_KEY_ID
value: username
- name: AWS_SECRET_ACCESS_KEY
value: justin123
ports:
- containerPort: 8064
protocol: TCP
readinessProbe:
httpGet:
path: /health
port: 8064
failureThreshold: 20
periodSeconds: 5
initialDelaySeconds: 20
# This can be anywhere within the spec.template.spec.containers[1] block
# should line up with "image"
volumeMounts:
- mountPath: /opt/spinnaker/config/halyard.yml
subPath: halyard.yml
name: halyard
livenessProbe:
tcpSocket:
port: 8064
initialDelaySeconds: 30
periodSeconds: 20
apiVersion: apps/v1
kind: Deployment
metadata:
name: spinnaker-operator
spec:
replicas: 1
selector:
matchLabels:
name: spinnaker-operator
template:
metadata:
labels:
name: spinnaker-operator
spec:
serviceAccountName: spinnaker-operator
containers:
- name: spinnaker-operator
image: mydockerregistry.com/armory-images/armory-operator:0.4.0
command:
- spinnaker-operator
args:
- --disable-admission-controller
imagePullPolicy: IfNotPresent
env:
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: OPERATOR_NAME
value: "spinnaker-operator"
- name: halyard
image: mydockerregistry.com/armory-images/halyard-armory:operator-0.4.x
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8064
protocol: TCP
readinessProbe:
httpGet:
path: /health
port: 8064
failureThreshold: 20
periodSeconds: 5
initialDelaySeconds: 20
livenessProbe:
tcpSocket:
port: 8064
initialDelaySeconds: 30
periodSeconds: 20
# This can be anywhere within the spec.template.spec.containers[1] block
# should line up with "image"
env:
- name: AWS_ACCESS_KEY_ID
value: username
- name: AWS_SECRET_ACCESS_KEY
value: justin123
# This can be anywhere within the spec.template.spec.containers[1] block
# should line up with "image"
volumeMounts:
- mountPath: /opt/spinnaker/config/halyard.yml
subPath: halyard.yml
name: halyard
# This can be anywhere within the spec.template.spec bloc
# but should line up with "containers"
volumes:
- name: halyard
configMap:
name: halyard-custom-config
Open `halconfig/bom/X.Y.Z.yml` (e.g. `2.20.3.yml`) to see the list of services with their tags. Then you can tell customers to pull the images:
#!/bin/bash
# BOM for Armory Spinnaker 2.20.3
docker pull armory/clouddriver:2.20.6
docker pull armory/deck:2.20.4
docker pull armory/dinghy:2.20.3
docker pull armory/echo:2.20.9
docker pull armory/fiat:2.20.4
docker pull armory/front50:2.20.6
docker pull armory/gate:2.20.4
docker pull armory/igor:2.20.9
docker pull armory/kayenta:2.20.4
docker pull armory/monitoring-daemon:2.20.0
docker pull armory/monitoring-third-party:2.20.0
docker pull armory/orca:2.20.3
docker pull armory/rosco:2.20.4
docker pull armory/terraformer:2.20.4
docker pull gcr.io/kubernetes-spinnaker/redis-cluster:v2 # we likely need to deploy this manually
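A pull list like the one above can be generated from any BOM file by reusing the grep/tr/sed metadata pipeline from the downloader scripts (a sketch; `generate_pulls` is a hypothetical helper, and it assumes the BOM layout those scripts already expect; redis is excluded since it's pulled separately from gcr.io):

```shell
#!/bin/bash
# Sketch: emit "docker pull" commands for every service in a BOM file,
# reusing the metadata pipeline from the downloader scripts above.
generate_pulls() {
  local bom_file=$1
  local registry=${2:-armory}   # swap in your rehosted registry
  grep '^  ' "${bom_file}" \
    | grep -v commit \
    | tr '\n' ',' \
    | sed 's|:,| |g' \
    | tr ',' '\n' \
    | grep -v dockerRegistry \
    | grep -v redis \
    | grep -v '^$' \
    | while read -r svc _ version; do
        echo "docker pull ${registry}/${svc}:${version}"
      done
}

# Usage: generate_pulls halconfig/bom/2.20.3.yml mydockerregistry.com/armory-images
```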
We'd likely need to deploy Redis manually; if so, they can use whatever Redis instance they can host. Deploy `redis.yml`:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: spin
cluster: external-redis
name: external-redis
namespace: spinnaker
spec:
replicas: 1
selector:
matchLabels:
app: spin
cluster: external-redis
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: spin
cluster: external-redis
spec:
containers:
- env:
- name: MASTER
value: "true"
image: gcr.io/kubernetes-spinnaker/redis-cluster:v2
name: redis
ports:
- containerPort: 6379
protocol: TCP
readinessProbe:
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: 6379
timeoutSeconds: 1
resources: {}
---
apiVersion: v1
kind: Service
metadata:
annotations:
labels:
app: spin
cluster: external-redis
name: external-redis
namespace: spinnaker
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: spin
cluster: external-redis
sessionAffinity: None
type: ClusterIP
Update Halyard / Operator to use custom redis:
# if halyard
# edit /service-settings/redis.yml
overrideBaseUrl: redis://external-redis.namespace:6379
skipLifeCycleManagement: true
# for operator
service-settings:
redis:
overrideBaseUrl: redis://10.43.31.160:6379
skipLifeCycleManagement: true
BOM Downloader (single version):
### Specify the version you want; if you don't, this will download the latest
SPECIFY_VERSION=2.18.0
mkdir -p halconfig/bom
echo "Getting list of versions..."
curl -s https://halconfig.s3-us-west-2.amazonaws.com/versions.yml -o halconfig/versions.yml
SPINNAKER_VERSION=${SPECIFY_VERSION:-$(grep version halconfig/versions.yml | awk 'END{print $NF}')}
echo "Getting BOM for version ${SPINNAKER_VERSION}..."
curl -s https://halconfig.s3-us-west-2.amazonaws.com/bom/${SPINNAKER_VERSION}.yml -o halconfig/bom/${SPINNAKER_VERSION}.yml
grep '^ ' halconfig/bom/${SPINNAKER_VERSION}.yml \
| tr '\n' ',' \
| sed 's|:,| |g' \
| tr ',' '\n' \
| grep -v dockerRegistry | grep -v redis | grep -v '^$' \
> ${SPINNAKER_VERSION}-metadata
rm -f keys
while read p; do
SVC=$(echo $p | awk '{print $1}')
VERSION=$(echo $p | awk '{print $3}')
MAJOR_VERSION=$(echo ${VERSION} | awk -F'-' '{print $1}')
echo "Getting list of keys for ${SVC} version ${MAJOR_VERSION}"
keys=$(curl -s "https://halconfig.s3-us-west-2.amazonaws.com/?list-type=2&prefix=profiles/${SVC}/${MAJOR_VERSION}" | tr '<' '\n' | tr '>' '\n' | grep profiles | grep "${MAJOR_VERSION}/")
echo ${keys}
for key in $keys;
do echo $key >> keys;
done
done <${SPINNAKER_VERSION}-metadata
for key in $(cat keys); do
mkdir -p halconfig/$(dirname ${key})
echo "Downloading ${key}"
curl -s https://halconfig.s3-us-west-2.amazonaws.com/${key} -o halconfig/${key}
done
New Script-ey thing:
# versions.yml
aws --no-sign-request s3 cp s3://halconfig/versions.yml halconfig/versions.yml
# latest
SPINNAKER_VERSION=$(grep version halconfig/versions.yml | awk 'END{print $NF}')
aws --no-sign-request s3 cp s3://halconfig/bom/${SPINNAKER_VERSION}.yml halconfig/bom/${SPINNAKER_VERSION}.yml
# Get space-delimited metadata
grep '^ ' halconfig/bom/${SPINNAKER_VERSION}.yml \
| tr '\n' ',' \
| sed 's|:,| |g' \
| tr ',' '\n' \
| grep -v dockerRegistry | grep -v redis \
> ${SPINNAKER_VERSION}-metadata
while read p; do
SVC=$(echo $p | awk '{print $1}')
# echo $SVC
VERSION=$(echo $p | awk '{print $3}')
# echo ${VERSION}
MAJOR_VERSION=$(echo ${VERSION} | awk -F'-' '{print $1}')
# echo ${MAJOR_VERSION}
aws --no-sign-request s3 cp --recursive s3://halconfig/profiles/${SVC}/${MAJOR_VERSION} halconfig/profiles/${SVC}/${MAJOR_VERSION}
done <${SPINNAKER_VERSION}-metadata
aws --no-sign-request s3 cp --recursive s3://halconfig halconfig
aws --profile minio --endpoint-url http://localhost:9000 s3 cp --recursive halconfig s3://halconfig
export AWS_ACCESS_KEY_ID=minio
export AWS_SECRET_ACCESS_KEY=kLuXaovQnzJAU2/KrA4Hym5HFz4ZE568SAP23UTrCLUmVJDj
aws s3 --endpoint-url http://10.43.60.49:9000 mb s3://halconfig
aws s3 --endpoint-url http://10.43.60.49:9000 cp --recursive halconfig s3://halconfig
justinrlee/armory-operator:8ffa0be-dirty
halyard-custom.yml:
apiVersion: v1
kind: ConfigMap
metadata:
name: halyard-custom-config
namespace: spinnaker-operator
data:
halyard.yml: |
halyard:
halconfig:
directory: /home/spinnaker/.hal
spinnaker:
artifacts:
debianRepository:
dockerRegistry:
googleImageProject:
config:
input:
bucket: halconfig
region: us-west-2
endpoint: http://10.43.183.42:9000
enablePathStyleAccess: true
anonymousAccess: false
Look at subPath
Mounting configmap:
- mountPath: /opt/spinnaker/config
name: halyard
- name: halyard
configMap:
name: halyard
I'm pretty sure the only files we need are the following:
versions.yml
bom/2.17.1.yml
# also, anything matching this, where servicename and versionname match the bom
profiles/<servicename>/<versionname>/*
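A sketch of pulling just those files out of a full `halconfig` directory (the `minimal_halconfig` helper name is hypothetical; strictly, only the `profiles/<servicename>/<versionname>` directories matching the BOM are needed, but copying all of `profiles/` is a simple superset):

```shell
#!/bin/bash
# Sketch: assemble a minimal halconfig (versions.yml, one BOM, profiles)
# from a full halconfig directory.
minimal_halconfig() {
  local src=$1 version=$2 dest=$3
  mkdir -p "${dest}/bom"
  cp "${src}/versions.yml" "${dest}/versions.yml"
  cp "${src}/bom/${version}.yml" "${dest}/bom/${version}.yml"
  # Superset: strictly only the profiles matching the BOM are needed.
  if [ -d "${src}/profiles" ]; then
    mkdir -p "${dest}/profiles"
    cp -r "${src}/profiles/." "${dest}/profiles/"
  fi
}

# Usage: minimal_halconfig halconfig 2.17.1 halconfig-minimal
```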
Default
# kubectl exec -it halyard-0 cat /opt/spinnaker/config/halyard.yml
halyard:
halconfig:
directory: /home/spinnaker/.hal
spinnaker:
artifacts:
debianRepository:
dockerRegistry:
googleImageProject:
config:
input:
bucket: halconfig
region: us-west-2
Needs to be modified to this (mounted at `/opt/spinnaker/config/halyard.yml`):
tee halyard-modified.yml <<-EOF
halyard:
halconfig:
directory: /home/spinnaker/.hal
spinnaker:
artifacts:
debianRepository:
dockerRegistry:
googleImageProject:
config:
input:
bucket: halconfig
region: us-west-2
endpoint: http://ec2-54-213-215-151.us-west-2.compute.amazonaws.com:9000
enablePathStyleAccess: true
anonymousAccess: false
EOF
kubectl create secret generic halyard --from-file=./halyard-modified.yml
This uses a hostPath:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: halyard
namespace: spinnaker
spec:
replicas: 1
serviceName: halyard
selector:
matchLabels:
app: halyard
template:
metadata:
labels:
app: halyard
spec:
containers:
- name: halyard
image: armory/halyard-armory:1.8.0
volumeMounts:
- name: hal
mountPath: "/home/spinnaker/.hal"
- name: kube
mountPath: "/home/spinnaker/.kube"
- name: halyard
mountPath: "/opt/spinnaker/config"
env:
- name: HOME
value: "/home/spinnaker"
- name: AWS_ACCESS_KEY_ID
value: username
- name: AWS_SECRET_ACCESS_KEY
value: justin123
securityContext:
runAsUser: 1000
runAsGroup: 65535
volumes:
- name: hal
hostPath:
path: /etc/spinnaker/.hal
type: DirectoryOrCreate
- name: kube
hostPath:
path: /etc/spinnaker/.kube
type: DirectoryOrCreate
- name: halyard
secret:
secretName: halyard
items:
- key: halyard-modified.yml
path: halyard.yml
This uses a PVC:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: hal-pvc
labels:
app: halyard
namespace: spinnaker
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: halyard
namespace: spinnaker
spec:
replicas: 1
serviceName: halyard
selector:
matchLabels:
app: halyard
template:
metadata:
labels:
app: halyard
spec:
serviceAccountName: halyard
automountServiceAccountToken: true
containers:
- name: halyard
image: armory/halyard-armory:1.8.0
volumeMounts:
- name: hal
mountPath: /home/spinnaker/.hal
- name: halyard
mountPath: "/opt/spinnaker/config"
env:
- name: HOME
value: "/home/spinnaker"
- name: AWS_ACCESS_KEY_ID
value: username
- name: AWS_SECRET_ACCESS_KEY
value: justin123
securityContext:
runAsUser: 1000
runAsGroup: 65535
volumes:
- name: hal
persistentVolumeClaim:
claimName: hal-pvc
- name: halyard
secret:
secretName: halyard
items:
- key: halyard-modified.yml
path: halyard.yml
Key parts from above:
---
...
spec:
containers:
- name: halyard
image: armory/halyard-armory:1.8.0
volumeMounts:
# ... Need to mount secret in
- name: halyard
mountPath: "/opt/spinnaker/config"
env:
# ... Need access key / secret access key
- name: AWS_ACCESS_KEY_ID
value: username
- name: AWS_SECRET_ACCESS_KEY
value: justin123
volumes:
# ... Need secret as a volume
- name: halyard
secret:
secretName: halyard
items:
- key: halyard-modified.yml
path: halyard.yml
Get the BOM and related items for the latest version:
mkdir -p halconfig/bom halconfig/profiles
# versions.yml
aws --no-sign-request s3 cp s3://halconfig/versions.yml halconfig/versions.yml
# latest
LATEST=$(grep version halconfig/versions.yml | awk 'END{print $NF}')
aws --no-sign-request s3 cp s3://halconfig/bom/${LATEST}.yml halconfig/bom/${LATEST}.yml
# Get space-delimited metadata
grep '^ ' halconfig/bom/${LATEST}.yml \
| tr '\n' ',' \
| sed 's|:,| |g' \
| tr ',' '\n' \
| grep -v dockerRegistry | grep -v redis \
> ${LATEST}-metadata
while read p; do
SVC=$(echo $p | awk '{print $1}')
# echo $SVC
VERSION=$(echo $p | awk '{print $3}')
# echo ${VERSION}
MAJOR_VERSION=$(echo ${VERSION} | awk -F'-' '{print $1}')
# echo ${MAJOR_VERSION}
aws --no-sign-request s3 cp --recursive s3://halconfig/profiles/${SVC}/${MAJOR_VERSION} halconfig/profiles/${SVC}/${MAJOR_VERSION}
done <${LATEST}-metadata
If you need to pull a BOM from GCS instead, set `spinnaker.config.input.gcs.enabled: true`.
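In halyard.yml terms, that dotted property presumably expands like this (the placement alongside the existing `config.input` block is an assumption, not confirmed here):

```yaml
spinnaker:
  config:
    input:
      gcs:
        enabled: true
```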
# halyard-custom.yml:
apiVersion: v1
kind: ConfigMap
metadata:
name: halyard-custom-config
namespace: spinnaker-operator
data:
halyard.yml: |
halyard:
halconfig:
directory: /home/spinnaker/.hal
spinnaker:
artifacts:
debianRepository:
dockerRegistry:
googleImageProject:
config:
input:
bucket: halconfig
region: us-west-2
endpoint: http://10.43.183.42:9000
enablePathStyleAccess: true
anonymousAccess: false
apiVersion: apps/v1
kind: Deployment
metadata:
name: spinnaker-operator
namespace: spinnaker-operator
spec:
replicas: 1
selector:
matchLabels:
name: spinnaker-operator
template:
metadata:
labels:
name: spinnaker-operator
spec:
serviceAccountName: spinnaker-operator
containers:
- name: spinnaker-operator
image: justinrlee/armory-operator:8ffa0be-dirty
command:
- spinnaker-operator
imagePullPolicy: IfNotPresent
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: OPERATOR_NAME
value: "spinnaker-operator"
- name: halyard
image: armory/halyard-armory:operator-0.3.x
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8064
protocol: TCP
readinessProbe:
httpGet:
path: /health
port: 8064
failureThreshold: 20
periodSeconds: 5
initialDelaySeconds: 20
livenessProbe:
tcpSocket:
port: 8064
initialDelaySeconds: 30
periodSeconds: 20
env:
- name: AWS_ACCESS_KEY_ID
value: minio
- name: AWS_SECRET_ACCESS_KEY
value: 5ayj45gQTc54AL+04grsLRFDNW1WRNxyc7eaUButE+2r7uGL
volumeMounts:
- mountPath: /opt/spinnaker/config/halyard.yml
name: halconfig-volume
subPath: halyard.yml
volumes:
- configMap:
defaultMode: 420
name: halyard-custom-config
name: halconfig-volume
^ Reference kubeconfig secret
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: spinnaker
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: hal-pvc
labels:
app: halyard
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
---
apiVersion: v1
kind: ConfigMap
metadata:
name: halyard-custom-config
data:
halyard.yml: |
halyard:
halconfig:
directory: /home/spinnaker/.hal
spinnaker:
artifacts:
debianRepository:
dockerRegistry:
googleImageProject:
config:
input:
bucket: halconfig
region: us-west-2
endpoint: http://10.43.183.42:9000
# This needs to be updated with your minio path ^
enablePathStyleAccess: true
anonymousAccess: false
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: halyard
spec:
replicas: 1
serviceName: halyard
selector:
matchLabels:
app: halyard
template:
metadata:
labels:
app: halyard
spec:
containers:
- name: halyard
image: armory/halyard-armory:1.8.2
volumeMounts:
- name: hal
mountPath: /home/spinnaker
- name: halyard-custom-config
mountPath: /opt/spinnaker/config/halyard.yml
subPath: halyard.yml
env:
- name: HOME
value: "/home/spinnaker"
- name: AWS_ACCESS_KEY_ID
value: "minio_username"
- name: AWS_SECRET_ACCESS_KEY
value: "minio_password"
securityContext:
runAsUser: 1000
runAsGroup: 65535
fsGroup: 65535
volumes:
- name: hal
persistentVolumeClaim:
claimName: hal-pvc
- name: halyard-custom-config
configMap:
defaultMode: 420
name: halyard-custom-config