@away168
Created July 10, 2020 15:53

Airgap (“Air-gapped”) Notes

Operator - Summary

Here are the high-level steps to use Operator in an airgap fashion

  1. Use the BOM Downloader (shell script, below) to download all the regular BOM contents. It does the following (over HTTPS):

    1. Create a directory (in current working directory) called halconfig
    2. Get (download and add) halconfig/versions.yml, which is the list of all “GA” versions (latest patch for each minor release, e.g., latest 1.19, latest 1.18, etc.)
    3. For each GA version, do the following:
      1. Get halconfig/bom/<version>.yml, which lists all container versions
      2. For each container version, get the service version. For example:
        1. clouddriver:6.5.6-40c9a8c-6a11468-rc30 would have a service version of 6.5.6;
        2. clouddriver:2.19.8 would have a service version of 2.19.8
      3. Get all relevant profiles for the service version (all files in halconfig/profiles/<servicename>/<serviceversion>). For example:
        1. halconfig/profiles/clouddriver/6.5.6/ has these files:
          1. clouddriver-bootstrap.yml
          2. clouddriver-caching.yml
          3. clouddriver-ro-deck.yml
          4. clouddriver-ro.yml
          5. clouddriver-rw.yml
          6. clouddriver.yml
      4. Generate a list of images for the version (e.g., 2.18.1-images)
    4. You’ll end up with a halconfig directory that needs to be updated and re-hosted on a local Minio/S3 bucket. You’ll probably want to tar it up to move it around:
      1. tar -czvf halconfig.tgz halconfig/
  2. Start a Minio instance near where Operator is going to run

    1. It must be accessible by halyard. For example, if you’re running it in Kubernetes, in the namespace spinnaker with a service name of minio, then you can access it (probably) via http://minio.spinnaker:9000
    2. You must know the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID, which are basically the username/password for Minio
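If you don’t already have Minio available, a minimal in-cluster deployment might look like the following. This is a sketch: the spinnaker namespace, the credentials, and the lack of persistent storage are assumptions (data lives on the container filesystem), and newer Minio releases use MINIO_ROOT_USER/MINIO_ROOT_PASSWORD instead of MINIO_ACCESS_KEY/MINIO_SECRET_KEY.

```yaml
# Sketch only: namespace, credentials, and storage are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
  namespace: spinnaker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: minio/minio
        args: ["server", "/data"]
        env:
        - name: MINIO_ACCESS_KEY
          value: username
        - name: MINIO_SECRET_KEY
          value: password
        ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: spinnaker
spec:
  selector:
    app: minio
  ports:
  - port: 9000
    targetPort: 9000
```

With this in place, the http://minio.spinnaker:9000 endpoint referenced throughout these notes resolves inside the cluster.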
  3. Take the tarball and move it somewhere from which you can upload it to Minio.

  4. Extract the tarball:

    1. tar -xzvf halconfig.tgz
  5. Modify halconfig/bom/2.19.8.yml (or whatever version you’re working with) so that the dockerRegistry points to your docker registry. For example:

    1. If your images are as follows:
      1. armory/deck:x rehosted at mydockerregistry.com/armory-images/deck
      2. armory/gate:x rehosted at mydockerregistry.com/armory-images/gate
      3. armory/orca:x rehosted at mydockerregistry.com/armory-images/orca
      4. etc
    2. Then you want dockerRegistry: mydockerregistry.com/armory-images
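The registry rewrite can be scripted. Below is a sketch, where the heredoc is a hypothetical miniature stand-in for the real BOM file, and the sed pattern assumes dockerRegistry appears on a single line:

```shell
# Hypothetical miniature stand-in for halconfig/bom/2.19.8.yml
cat > bom-sample.yml <<'EOF'
artifactSources:
  dockerRegistry: docker.io/armory
EOF

# Point dockerRegistry at the re-hosted registry (keeps leading spaces intact)
sed -i 's|dockerRegistry:.*|dockerRegistry: mydockerregistry.com/armory-images|' bom-sample.yml

cat bom-sample.yml
```

In practice you’d run the sed against the real halconfig/bom/<version>.yml before uploading it.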
  6. Install the AWS CLI, add your Minio credentials to the environment, and make sure you can access Minio. You may have to port-forward to Minio (kubectl -n spinnaker port-forward svc/minio 9000:9000):

    export AWS_ACCESS_KEY_ID=username
    export AWS_SECRET_ACCESS_KEY=password

    Replace localhost:9000 with your Minio endpoint:

    aws --endpoint-url http://localhost:9000 s3 ls

    This will probably return an empty result, indicating no buckets exist yet.

  7. Make a bucket: aws --endpoint-url http://localhost:9000 s3 mb s3://halconfig

  8. Upload your modified halconfig directory to the minio bucket aws --endpoint-url http://localhost:9000 s3 cp --recursive halconfig s3://halconfig/

  9. Install Operator in your Kubernetes cluster:

    1. Download https://github.com/armory-io/spinnaker-operator/releases/latest/download/manifests.tgz curl -LO https://github.com/armory-io/spinnaker-operator/releases/latest/download/manifests.tgz
    2. Extract it (tar -xzvf manifests.tgz)
  10. Install the CRDs (cluster-wide):

    kubectl apply -f deploy/crds

  11. Decide whether you want to use the basic or cluster installer. The rest of these instructions are for cluster, but the steps are the same

  12. Go into deploy/operator/cluster/deployment.yaml, and make these modifications:

    1. configmap.yaml (update http://minio.spinnaker:9000 to use the cluster-internal DNS endpoint for Minio):

       apiVersion: v1
       kind: ConfigMap
       metadata:
         name: halyard-custom-config
         namespace: spinnaker
       data:
         halyard.yml: |
           halyard:
             halconfig:
               directory: /home/spinnaker/.hal
           spinnaker:
             artifacts:
               debianRepository:
               dockerRegistry:
               googleImageProject:
             config:
               input:
                 bucket: halconfig
                 region: us-west-2
                 endpoint: http://minio.spinnaker:9000
                 enablePathStyleAccess: true
                 anonymousAccess: false
    2. deployment.yaml (full modified sample below)
      1. Modify the two Docker images to your local versions of them
      2. Add any necessary Docker registry secrets
      3. In spec.template.spec (the pod spec, not the container spec), add the configMap as a volume (spacing of this should be correct):

         volumes:
         - name: halyard
           configMap:
             name: halyard-custom-config

      4. In spec.template.spec.containers[1] (the container spec for halyard, not the pod spec), add a volumeMount (spacing of this should be correct):

         volumeMounts:
         - mountPath: /opt/spinnaker/config/halyard.yml
           subPath: halyard.yml
           name: halyard

      5. Also in spec.template.spec.containers[1], add these environment variables (again, spacing should be correct - if it doesn’t line up, you may be in the wrong section):

         env:
         - name: AWS_ACCESS_KEY_ID
           value: username
         - name: AWS_SECRET_ACCESS_KEY
           value: justin123
  13. Apply all the manifests in deploy/operator/cluster

  14. Wait for the operator pod to start - it has two containers, and needs around 1GB of memory

  15. Exec into the halyard container of the operator pod:

    kubectl -n spinnaker-operator exec -it spinnaker-operator-xyz -c halyard bash

  16. Run hal version list and hal version bom 2.19.8 to verify you were successful
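The container-tag-to-service-version mapping described in step 1 can be sketched as a one-liner (the function name is mine):

```shell
# Everything after ':' and before the first '-' is the service version
tag_to_service_version() {
  echo "$1" | cut -d: -f2 | cut -d- -f1
}

tag_to_service_version "clouddriver:6.5.6-40c9a8c-6a11468-rc30"   # 6.5.6
tag_to_service_version "clouddriver:2.19.8"                       # 2.19.8
```

This is the same extraction the BOM downloader scripts below do with awk -F'-'.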

“Minnaker” - Summary

Here are the high-level steps to use Minnaker in an airgap fashion

  1. Use the BOM Downloader (shell script, below) or the script (note comments for a fix on line 34 using yq) linked in the Air-Gapped Docs to download all the regular BOM contents.

    1. Prereqs: need to have AWS cli installed and yq
    2. Example usage with the script: ./bomdownloader.sh 2.20.3 halconfig
    3. You’ll end up with a halconfig directory that needs to be updated and re-hosted on a local Minio/S3 bucket. You probably want to tar this up to move it around:
      1. tar -czvf halconfig.tgz halconfig/
  2. Install k3s: curl -sfL https://get.k3s.io | sh -

  3. Start a Minio instance near where Halyard is going to run

    1. It must be accessible by halyard. For example, if you’re running it in Kubernetes, in the namespace spinnaker with a service name of minio, then you can access it (probably) via http://minio.spinnaker:9000
    2. You must know the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID, which are basically the username/password for Minio
  4. Take the tarball and move it to the VM, e.g. scp halconfig.tgz ubuntu@ip-address:/home/ubuntu/

  5. Extract the tarball: tar -xzvf halconfig.tgz

  6. Modify halconfig/bom/2.19.8.yml (or whatever version you’re working with) so that the dockerRegistry points to your docker registry. For example:

    1. If your images are as follows:
      1. armory/deck:x rehosted at mydockerregistry.com/armory-images/deck
      2. armory/gate:x rehosted at mydockerregistry.com/armory-images/gate
      3. armory/orca:x rehosted at mydockerregistry.com/armory-images/orca
      4. etc
    2. Then you want dockerRegistry: mydockerregistry.com/armory-images
  7. Install the AWS CLI, add your Minio credentials to the environment, then make sure you can access Minio:

    export AWS_ACCESS_KEY_ID=username
    export AWS_SECRET_ACCESS_KEY=password

    Replace localhost:9000 with your Minio endpoint:

    aws --endpoint-url http://localhost:9000 s3 ls

    This will probably return an empty result, indicating no buckets exist yet.

  8. Make a bucket: aws --endpoint-url http://localhost:9000 s3 mb s3://halconfig

  9. Upload your modified halconfig directory to the minio bucket aws --endpoint-url http://localhost:9000 s3 cp --recursive halconfig s3://halconfig/

  10. Before deploying halyard we need to make the following changes:

1. Deploy `configmap.yaml` (update http://minio.spinnaker:9000 to use the cluster-internal DNS endpoint for Minio)
apiVersion: v1
kind: ConfigMap
metadata:
  name: halyard-custom-config
  namespace: spinnaker
data:
  halyard.yml: |
    halyard:
      halconfig:
        directory: /home/spinnaker/.hal
    spinnaker:
      artifacts:
        debianRepository:
        dockerRegistry:
        googleImageProject:
      config:
        input:
          bucket: halconfig
          region: us-west-2
          endpoint: http://minio.spinnaker:9000
          enablePathStyleAccess: true
          anonymousAccess: false
2. `halyard.yaml` (see the sample deployments below):
    1. Modify the two Docker images to your local versions of them
    2. Add any necessary docker registry secrets
    3. in spec.template.spec (so, the **pod spec**, not the **container spec**), add the configMap as a volume (spacing of this should be correct):
      volumes:
      - name: halyard
        configMap:
          name: halyard-custom-config
    4. In spec.template.spec.containers[1] (the **container spec for halyard**, not the pod spec), add a volumeMount (spacing of this should be correct):
          volumeMounts:
          - mountPath: /opt/spinnaker/config/halyard.yml
            subPath: halyard.yml
            name: halyard
    5. Also in spec.template.spec.containers[1] (the **container spec for halyard**, not the pod spec), add these environment variables (again, spacing should be correct - if it doesn’t line up, you may be in the wrong section)
          env:
          - name: AWS_ACCESS_KEY_ID
            value: username
          - name: AWS_SECRET_ACCESS_KEY
            value: justin123
  11. Wait for the operator pod to start - it has two containers, and needs around 1GB of memory

  12. Exec into the halyard container of the operator pod:

    kubectl -n spinnaker-operator exec -it spinnaker-operator-xyz -c halyard bash

  13. Run hal version list and hal version bom 2.19.8 to verify you were successful

BOM

All BoM Downloader (HTTPS, all GA versions):

mkdir -p halconfig/bom

echo "Getting list of versions..."
curl -s https://halconfig.s3-us-west-2.amazonaws.com/versions.yml -o halconfig/versions.yml

grep 'version:' halconfig/versions.yml | awk '{print $NF}' > versions

for SPINNAKER_VERSION in $(cat versions); do

echo "Getting BOM for version ${SPINNAKER_VERSION}..."
curl -s https://halconfig.s3-us-west-2.amazonaws.com/bom/${SPINNAKER_VERSION}.yml -o halconfig/bom/${SPINNAKER_VERSION}.yml

grep '^  ' halconfig/bom/${SPINNAKER_VERSION}.yml \
  | grep -v commit \
  | tr '\n' ',' \
  | sed 's|:,| |g' \
  | tr ',' '\n' \
  | grep -v dockerRegistry \
  | grep -v redis \
  | grep -v '^$' \
  > ${SPINNAKER_VERSION}-metadata

# Clear out scratch files from any previous run, so appends below start fresh
rm -f keys ${SPINNAKER_VERSION}-images

while read p; do
  SVC=$(echo $p | awk '{print $1}')
  VERSION=$(echo $p | awk '{print $3}')
  MAJOR_VERSION=$(echo ${VERSION} | awk -F'-' '{print $1}')

  echo "Getting list of keys for ${SVC} version ${MAJOR_VERSION}"
  keys=$(curl -s "https://halconfig.s3-us-west-2.amazonaws.com/?list-type=2&prefix=profiles/${SVC}/${MAJOR_VERSION}" | tr '<' '\n' | tr '>' '\n' | grep profiles | grep "${MAJOR_VERSION}/")
  echo ${keys}
  for key in $keys; 
    do echo $key >> keys;
  done

  echo ${SVC}:${VERSION} >> ${SPINNAKER_VERSION}-images

done <${SPINNAKER_VERSION}-metadata

for key in $(cat keys); do
  mkdir -p halconfig/$(dirname ${key})
  echo "Downloading ${key}"
  curl -s https://halconfig.s3-us-west-2.amazonaws.com/${key} -o halconfig/${key}
done

done

Sample modified deployment.yaml for cluster mode

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spinnaker-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: spinnaker-operator
  template:
    metadata:
      labels:
        name: spinnaker-operator
    spec:
      serviceAccountName: spinnaker-operator
      # This can be anywhere within the spec.template.spec block
      volumes:
      - name: halyard
        configMap:
          name: halyard-custom-config
      containers:
        - name: spinnaker-operator
          image: mydockerregistry.com/armory-images/armory-operator:0.4.0
          command:
            - spinnaker-operator
          imagePullPolicy: IfNotPresent
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "spinnaker-operator"
        - name: halyard
          image: mydockerregistry.com/armory-images/halyard-armory:operator-0.4.x
          imagePullPolicy: IfNotPresent
          # This can be anywhere within the spec.template.spec.containers[1] block
          # should line up with "image"
          env:
          - name: AWS_ACCESS_KEY_ID
            value: username
          - name: AWS_SECRET_ACCESS_KEY
            value: justin123
          ports:
            - containerPort: 8064
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /health
              port: 8064
            failureThreshold: 20
            periodSeconds: 5
            initialDelaySeconds: 20
          # This can be anywhere within the spec.template.spec.containers[1] block
          # should line up with "image"
          volumeMounts:
          - mountPath: /opt/spinnaker/config/halyard.yml
            subPath: halyard.yml
            name: halyard
          livenessProbe:
            tcpSocket:
              port: 8064
            initialDelaySeconds: 30
            periodSeconds: 20

Sample deployment.yaml for basic mode

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spinnaker-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: spinnaker-operator
  template:
    metadata:
      labels:
        name: spinnaker-operator
    spec:
      serviceAccountName: spinnaker-operator
      containers:
        - name: spinnaker-operator
          image: mydockerregistry.com/armory-images/armory-operator:0.4.0
          command:
            - spinnaker-operator
          args:
            - --disable-admission-controller
          imagePullPolicy: IfNotPresent
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "spinnaker-operator"
        - name: halyard
          image: mydockerregistry.com/armory-images/halyard-armory:operator-0.4.x
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8064
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /health
              port: 8064
            failureThreshold: 20
            periodSeconds: 5
            initialDelaySeconds: 20
          livenessProbe:
            tcpSocket:
              port: 8064
            initialDelaySeconds: 30
            periodSeconds: 20
          # This can be anywhere within the spec.template.spec.containers[1] block
          # should line up with "image"
          env:
          - name: AWS_ACCESS_KEY_ID
            value: username
          - name: AWS_SECRET_ACCESS_KEY
            value: justin123
          # This can be anywhere within the spec.template.spec.containers[1] block
          # should line up with "image"
          volumeMounts:
          - mountPath: /opt/spinnaker/config/halyard.yml
            subPath: halyard.yml
            name: halyard
      # This can be anywhere within the spec.template.spec bloc
      # but should line up with "containers"
      volumes:
      - name: halyard
        configMap:
          name: halyard-custom-config

Customer side

1. Download images and re-host them

Open halconfig/bom/X.Y.Z.yml (e.g. 2.20.3.yml) to see the list of services with their tags. Then you can tell customers to pull the images:

#!/bin/bash
# BOM for Armory Spinnaker 2.20.3

docker pull armory/clouddriver:2.20.6
docker pull armory/deck:2.20.4
docker pull armory/dinghy:2.20.3
docker pull armory/echo:2.20.9
docker pull armory/fiat:2.20.4
docker pull armory/front50:2.20.6
docker pull armory/gate:2.20.4
docker pull armory/igor:2.20.9
docker pull armory/kayenta:2.20.4
docker pull armory/monitoring-daemon:2.20.0
docker pull armory/monitoring-third-party:2.20.0
docker pull armory/orca:2.20.3
docker pull armory/rosco:2.20.4
docker pull armory/terraformer:2.20.4
docker pull gcr.io/kubernetes-spinnaker/redis-cluster:v2  # we likely need to deploy this manually

Push the images to your private registry, then update the BOM yml to point at that registry.
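Re-hosting is a tag-and-push per image. A sketch that generates the commands into a script (the registry name is a placeholder, and the image list is abbreviated; generating the commands rather than running them means no Docker daemon is needed here):

```shell
SRC=armory
DST=mydockerregistry.com/armory-images   # placeholder registry

# Abbreviated image list; in practice derive it from the BOM above
for IMAGE in clouddriver:2.20.6 deck:2.20.4 orca:2.20.3; do
  echo "docker tag ${SRC}/${IMAGE} ${DST}/${IMAGE}"
  echo "docker push ${DST}/${IMAGE}"
done > push-images.sh

cat push-images.sh
```

Review push-images.sh and run it on a host that has the images pulled and credentials for the target registry.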

2. Redis

We’d likely need to deploy Redis manually; if so, use whatever Redis instance they can host.

Deploy redis.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: spin
    cluster: external-redis
  name: external-redis
  namespace: spinnaker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spin
      cluster: external-redis
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: spin
        cluster: external-redis
    spec:
      containers:
      - env:
        - name: MASTER
          value: "true"
        image: gcr.io/kubernetes-spinnaker/redis-cluster:v2
        name: redis
        ports:
        - containerPort: 6379
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 6379
          timeoutSeconds: 1
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: spin
    cluster: external-redis
  name: external-redis
  namespace: spinnaker
spec:
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: spin
    cluster: external-redis
  sessionAffinity: None
  type: ClusterIP

Update Halyard / Operator to use custom redis:

# if halyard
# edit /service-settings/redis.yml
overrideBaseUrl: redis://external-redis.namespace:6379
skipLifeCycleManagement: true      
# for operator
    service-settings:
      redis:
        overrideBaseUrl: redis://10.43.31.160:6379
        skipLifeCycleManagement: true      
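For the operator case, the service-settings fragment above goes under spec.spinnakerConfig in the SpinnakerService manifest. A sketch (the Redis URL and metadata names are examples):

```yaml
apiVersion: spinnaker.armory.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
  namespace: spinnaker
spec:
  spinnakerConfig:
    service-settings:
      redis:
        overrideBaseUrl: redis://external-redis.spinnaker:6379
        skipLifeCycleManagement: true
```

skipLifeCycleManagement tells Spinnaker not to deploy or manage its own Redis, and overrideBaseUrl points the services at the externally hosted one.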

Everything below here is old, but maybe relevant for historical purposes

Bom Downloader (Single version)

### Specify the version you want; if you don't, this will download the latest
SPECIFY_VERSION=2.18.0

mkdir -p halconfig/bom

echo "Getting list of versions..."
curl -s https://halconfig.s3-us-west-2.amazonaws.com/versions.yml -o halconfig/versions.yml

SPINNAKER_VERSION=${SPECIFY_VERSION:-$(grep version halconfig/versions.yml | awk 'END{print $NF}')}

echo "Getting BOM for version ${SPINNAKER_VERSION}..."
curl -s https://halconfig.s3-us-west-2.amazonaws.com/bom/${SPINNAKER_VERSION}.yml -o halconfig/bom/${SPINNAKER_VERSION}.yml

grep '^  ' halconfig/bom/${SPINNAKER_VERSION}.yml \
  | tr '\n' ',' \
  | sed 's|:,| |g' \
  | tr ',' '\n' \
  | grep -v dockerRegistry | grep -v redis | grep -v '^$' \
  > ${SPINNAKER_VERSION}-metadata

rm -f keys

while read p; do
  SVC=$(echo $p | awk '{print $1}')
  VERSION=$(echo $p | awk '{print $3}')
  MAJOR_VERSION=$(echo ${VERSION} | awk -F'-' '{print $1}')

  echo "Getting list of keys for ${SVC} version ${MAJOR_VERSION}"
  keys=$(curl -s "https://halconfig.s3-us-west-2.amazonaws.com/?list-type=2&prefix=profiles/${SVC}/${MAJOR_VERSION}" | tr '<' '\n' | tr '>' '\n' | grep profiles | grep "${MAJOR_VERSION}/")
  echo ${keys}
  for key in $keys; 
    do echo $key >> keys;
  done

done <${SPINNAKER_VERSION}-metadata

for key in $(cat keys); do
  mkdir -p halconfig/$(dirname ${key})
  echo "Downloading ${key}"
  curl -s https://halconfig.s3-us-west-2.amazonaws.com/${key} -o halconfig/${key}
done

New Script-ey thing:

# versions.yml
aws --no-sign-request s3 cp s3://halconfig/versions.yml halconfig/versions.yml

# latest
SPINNAKER_VERSION=$(grep version halconfig/versions.yml | awk 'END{print $NF}')

aws --no-sign-request s3 cp s3://halconfig/bom/${SPINNAKER_VERSION}.yml halconfig/bom/${SPINNAKER_VERSION}.yml

# Get space-delimited metadata
grep '^  ' halconfig/bom/${SPINNAKER_VERSION}.yml \
  | tr '\n' ',' \
  | sed 's|:,| |g' \
  | tr ',' '\n' \
  | grep -v dockerRegistry | grep -v redis \
  > ${SPINNAKER_VERSION}-metadata

while read p; do
  SVC=$(echo $p | awk '{print $1}')
  # echo $SVC
  VERSION=$(echo $p | awk '{print $3}')
  # echo ${VERSION}
  MAJOR_VERSION=$(echo ${VERSION} | awk -F'-' '{print $1}')
  # echo ${MAJOR_VERSION}
  aws --no-sign-request s3 cp --recursive s3://halconfig/profiles/${SVC}/${MAJOR_VERSION} halconfig/profiles/${SVC}/${MAJOR_VERSION}
done <${SPINNAKER_VERSION}-metadata


aws --no-sign-request s3 cp --recursive s3://halconfig halconfig


aws --profile minio --endpoint-url http://localhost:9000 s3 cp --recursive halconfig s3://halconfig


export AWS_ACCESS_KEY_ID=minio
export AWS_SECRET_ACCESS_KEY=kLuXaovQnzJAU2/KrA4Hym5HFz4ZE568SAP23UTrCLUmVJDj
aws s3 --endpoint-url http://10.43.60.49:9000 mb s3://halconfig
aws s3 --endpoint-url http://10.43.60.49:9000 cp --recursive halconfig s3://halconfig

justinrlee/armory-operator:8ffa0be-dirty

halyard-custom.yml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: halyard-custom-config
  namespace: spinnaker-operator
data:
  halyard.yml: |
    halyard:
      halconfig:
        directory: /home/spinnaker/.hal
    spinnaker:
      artifacts:
        debianRepository:
        dockerRegistry:
        googleImageProject:
      config:
        input:
          bucket: halconfig
          region: us-west-2
          endpoint: http://10.43.183.42:9000
          enablePathStyleAccess: true
          anonymousAccess: false

Look at subPath

Mounting configmap:

        - mountPath: /opt/spinnaker/config
          name: halyard

      - name: halyard
        configMap:
          name: halyard

I'm pretty sure the only files we need are the following:

versions.yml
bom/2.17.1.yml

# also, anything matching this, where servicename and versionname match the bom
profiles/<servicename>/<versionname>/*

Default

# kubectl exec -it halyard-0 cat /opt/spinnaker/config/halyard.yml
halyard:
  halconfig:
    directory: /home/spinnaker/.hal
spinnaker:
  artifacts:
    debianRepository:
    dockerRegistry:
    googleImageProject:
  config:
    input:
      bucket: halconfig
      region: us-west-2

Needs to be modified to this: (mounted at /opt/spinnaker/config/halyard.yml)

tee halyard-modified.yml <<-EOF
halyard:
  halconfig:
    directory: /home/spinnaker/.hal
spinnaker:
  artifacts:
    debianRepository:
    dockerRegistry:
    googleImageProject:
  config:
    input:
      bucket: halconfig
      region: us-west-2
      endpoint: http://ec2-54-213-215-151.us-west-2.compute.amazonaws.com:9000
      enablePathStyleAccess: true
      anonymousAccess: false
EOF


kubectl create secret generic halyard --from-file=./halyard-modified.yml

This uses a hostpath:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: halyard
  namespace: spinnaker
spec:
  replicas: 1
  serviceName: halyard
  selector:
    matchLabels:
      app: halyard
  template:
    metadata:
      labels:
        app: halyard
    spec:
      containers:
      - name: halyard
        image: armory/halyard-armory:1.8.0
        volumeMounts:
        - name: hal
          mountPath: "/home/spinnaker/.hal"
        - name: kube
          mountPath: "/home/spinnaker/.kube"
        - name: halyard
          mountPath: "/opt/spinnaker/config"
        env:
        - name: HOME
          value: "/home/spinnaker"
        - name: AWS_ACCESS_KEY_ID
          value: username
        - name: AWS_SECRET_ACCESS_KEY
          value: justin123
      securityContext:
        runAsUser: 1000
        runAsGroup: 65535
      volumes:
      - name: hal
        hostPath:
          path: /etc/spinnaker/.hal
          type: DirectoryOrCreate
      - name: kube
        hostPath:
          path: /etc/spinnaker/.kube
          type: DirectoryOrCreate
      - name: halyard
        secret:
          secretName: halyard
          items:
          - key: halyard-modified.yml
            path: halyard.yml

this uses a pvc:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hal-pvc
  labels:
    app: halyard
  namespace: spinnaker
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: halyard
  namespace: spinnaker
spec:
  replicas: 1
  serviceName: halyard
  selector:
    matchLabels:
      app: halyard
  template:
    metadata:
      labels:
        app: halyard
    spec:
      serviceAccountName: halyard
      automountServiceAccountToken: true
      containers:
      - name: halyard
        image: armory/halyard-armory:1.8.0
        volumeMounts:
        - name: hal
          mountPath: /home/spinnaker/.hal
        - name: halyard
          mountPath: "/opt/spinnaker/config"
        env:
        - name: HOME
          value: "/home/spinnaker"
        - name: AWS_ACCESS_KEY_ID
          value: username
        - name: AWS_SECRET_ACCESS_KEY
          value: justin123
      securityContext:
        runAsUser: 1000
        runAsGroup: 65535
      volumes:
      - name: hal
        persistentVolumeClaim:
          claimName: hal-pvc
      - name: halyard
        secret:
          secretName: halyard
          items:
          - key: halyard-modified.yml
            path: halyard.yml

key parts from above:

---
...
    spec:
      containers:
      - name: halyard
        image: armory/halyard-armory:1.8.0
        volumeMounts:
        # ... Need to mount secret in
        - name: halyard
          mountPath: "/opt/spinnaker/config"
        env:
        # ... Need access key / secret access key
        - name: AWS_ACCESS_KEY_ID
          value: username
        - name: AWS_SECRET_ACCESS_KEY
          value: justin123
      volumes:
      # ... Need secret as a volume
      - name: halyard
        secret:
          secretName: halyard
          items:
          - key: halyard-modified.yml
            path: halyard.yml

Script-ey things

get bom and related items for latest versions

mkdir -p halconfig/bom halconfig/profiles
# versions.yml
aws --no-sign-request s3 cp s3://halconfig/versions.yml halconfig/versions.yml

# latest
LATEST=$(grep version halconfig/versions.yml | awk 'END{print $NF}')

aws --no-sign-request s3 cp s3://halconfig/bom/${LATEST}.yml halconfig/bom/${LATEST}.yml

# Get space-delimited metadata
grep '^  ' halconfig/bom/${LATEST}.yml \
  | tr '\n' ',' \
  | sed 's|:,| |g' \
  | tr ',' '\n' \
  | grep -v dockerRegistry | grep -v redis \
  > ${LATEST}-metadata

while read p; do
  SVC=$(echo $p | awk '{print $1}')
  # echo $SVC
  VERSION=$(echo $p | awk '{print $3}')
  # echo ${VERSION}
  MAJOR_VERSION=$(echo ${VERSION} | awk -F'-' '{print $1}')
  # echo ${MAJOR_VERSION}
  aws --no-sign-request s3 cp --recursive s3://halconfig/profiles/${SVC}/${MAJOR_VERSION} halconfig/profiles/${SVC}/${MAJOR_VERSION}
done <${LATEST}-metadata

If you need to pull a BoM from GCS instead: spinnaker.config.input.gcs.enabled: true

Airgap Operator

# halyard-custom.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: halyard-custom-config
  namespace: spinnaker-operator
data:
  halyard.yml: |
    halyard:
      halconfig:
        directory: /home/spinnaker/.hal
    spinnaker:
      artifacts:
        debianRepository:
        dockerRegistry:
        googleImageProject:
      config:
        input:
          bucket: halconfig
          region: us-west-2
          endpoint: http://10.43.183.42:9000
          enablePathStyleAccess: true
          anonymousAccess: false


apiVersion: apps/v1
kind: Deployment
metadata:
  name: spinnaker-operator
  namespace: spinnaker-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: spinnaker-operator
  template:
    metadata:
      labels:
        name: spinnaker-operator
    spec:
      serviceAccountName: spinnaker-operator
      containers:
        - name: spinnaker-operator
          image: justinrlee/armory-operator:8ffa0be-dirty
          command:
            - spinnaker-operator
          imagePullPolicy: IfNotPresent
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "spinnaker-operator"
        - name: halyard
          image: armory/halyard-armory:operator-0.3.x
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8064
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /health
              port: 8064
            failureThreshold: 20
            periodSeconds: 5
            initialDelaySeconds: 20
          livenessProbe:
            tcpSocket:
              port: 8064
            initialDelaySeconds: 30
            periodSeconds: 20
          env:
          - name: AWS_ACCESS_KEY_ID
            value: minio
          - name: AWS_SECRET_ACCESS_KEY
            value: 5ayj45gQTc54AL+04grsLRFDNW1WRNxyc7eaUButE+2r7uGL
          volumeMounts:
          - mountPath: /opt/spinnaker/config/halyard.yml
            name: halconfig-volume
            subPath: halyard.yml
      volumes:
      - configMap:
          defaultMode: 420
          name: halyard-custom-config
        name: halconfig-volume

https://github.com/armory/spinnaker-operator/blob/267a5a83cb646dc7a70bda18f80b1be1964ba680/integration_tests/testdata/spinnaker/overlay_secrets/spinnaker-template.yml

^ Reference kubeconfig secret

Full Kubernetes Statefulset pointing at Minio

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spinnaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hal-pvc
  labels:
    app: halyard
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: halyard-custom-config
data:
  halyard.yml: |
    halyard:
      halconfig:
        directory: /home/spinnaker/.hal
    spinnaker:
      artifacts:
        debianRepository:
        dockerRegistry:
        googleImageProject:
      config:
        input:
          bucket: halconfig
          region: us-west-2
          endpoint: http://10.43.183.42:9000
          # This needs to be updated with your minio path ^
          enablePathStyleAccess: true
          anonymousAccess: false
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: halyard
spec:
  replicas: 1
  serviceName: halyard
  selector:
    matchLabels:
      app: halyard
  template:
    metadata:
      labels:
        app: halyard
    spec:
      containers:
      - name: halyard
        image: armory/halyard-armory:1.8.2
        volumeMounts:
        - name: hal
          mountPath: /home/spinnaker
        - name: halyard-custom-config
          mountPath: /opt/spinnaker/config/halyard.yml
          subPath: halyard.yml
        env:
        - name: HOME
          value: "/home/spinnaker"
        - name: AWS_ACCESS_KEY_ID
          value: "minio_username"
        - name: AWS_SECRET_ACCESS_KEY
          value: "minio_password"
      securityContext:
        runAsUser: 1000
        runAsGroup: 65535
        fsGroup: 65535
      volumes:
      - name: hal
        persistentVolumeClaim:
          claimName: hal-pvc
      - name: halyard-custom-config
        configMap:
          defaultMode: 420
          name: halyard-custom-config