@vfarcic
Last active December 11, 2023 12:40
# Source: https://gist.github.com/28e2adb5946ca366d7845780608591d7
###########################################################
# Argo Workflows & Pipelines #
# CI/CD, Machine Learning, and Other Kubernetes Workflows #
# https://youtu.be/UMaivwrAyTA #
###########################################################
# Referenced videos:
# - Argo CD - Applying GitOps Principles To Manage Production Environment In Kubernetes: https://youtu.be/vpWQeoaiRM4
# - Argo Events - Event-Based Dependency Manager for Kubernetes: https://youtu.be/sUPkGChvD54
# - Argo Rollouts - Canary Deployments Made Easy In Kubernetes: https://youtu.be/84Ky0aPbHvY
# - Kaniko - Building Container Images In Kubernetes Without Docker: https://youtu.be/EgwVQN6GNJg
#########
# Setup #
#########
# It can be any Kubernetes cluster
minikube start
minikube addons enable ingress
git clone https://github.com/vfarcic/argocd-production.git
cd argocd-production
export REGISTRY_SERVER=https://index.docker.io/v1/
# Replace `[...]` with the registry username
export REGISTRY_USER=[...]
# Replace `[...]` with the registry password
export REGISTRY_PASS=[...]
# Replace `[...]` with the registry email
export REGISTRY_EMAIL=[...]
kubectl create namespace workflows
kubectl --namespace workflows \
create secret \
docker-registry regcred \
--docker-server=$REGISTRY_SERVER \
--docker-username=$REGISTRY_USER \
--docker-password=$REGISTRY_PASS \
--docker-email=$REGISTRY_EMAIL
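# The command above packs the registry credentials into a
# kubernetes.io/dockerconfigjson Secret. As a rough sketch of the document
# that ends up inside it (the demo-user/demo-pass values are made up purely
# for illustration):

```shell
# Hypothetical credentials, used only to show the structure of the Secret.
REGISTRY_SERVER=https://index.docker.io/v1/
REGISTRY_USER=demo-user
REGISTRY_PASS=demo-pass

# The `auth` field is base64("username:password").
AUTH=$(printf '%s:%s' "$REGISTRY_USER" "$REGISTRY_PASS" | base64)

# This JSON is what the Secret stores under the .dockerconfigjson key.
printf '{"auths":{"%s":{"username":"%s","password":"%s","auth":"%s"}}}\n' \
    "$REGISTRY_SERVER" "$REGISTRY_USER" "$REGISTRY_PASS" "$AUTH"
```

# Kaniko later mounts that document as config.json (via the kaniko-secret
# volume defined in the workflow) to authenticate pushes.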
# If NOT using minikube, change the value to the address of your cluster
export ARGO_WORKFLOWS_HOST=argo-workflows.$(minikube ip).nip.io
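# nip.io is a wildcard DNS service: any hostname of the form
# <name>.<ip>.nip.io resolves to <ip>, so no DNS record is needed for the
# Ingress host. A small sketch with a hypothetical cluster IP:

```shell
# Hypothetical cluster IP; `minikube ip` would normally supply it.
CLUSTER_IP=192.168.49.2

# The Ingress host resolves straight to the cluster IP via nip.io.
ARGO_WORKFLOWS_HOST=argo-workflows.$CLUSTER_IP.nip.io
echo "$ARGO_WORKFLOWS_HOST"
```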
cat argo-workflows/base/ingress_patch.json \
| sed -e "s@acme.com@$ARGO_WORKFLOWS_HOST@g" \
| tee argo-workflows/overlays/production/ingress_patch.json
kustomize build \
argo-workflows/overlays/production \
| kubectl apply --filename -
kubectl --namespace argo \
rollout status \
deployment argo-server \
--watch
cd ..
#############
# Workflows #
#############
git clone \
https://github.com/vfarcic/argo-workflows-demo.git
cd argo-workflows-demo
cat workflows/silly.yaml
cat workflows/parallel.yaml
cat workflows/dag.yaml
#############
# Templates #
#############
cat workflows/cd-mock.yaml
cat workflow-templates/container-image.yaml
kubectl --namespace workflows apply \
--filename workflow-templates/container-image.yaml
kubectl --namespace workflows \
get clusterworkflowtemplates
########################
# Submitting workflows #
########################
cat workflows/cd-mock.yaml \
| sed -e "s@value: vfarcic@value: $REGISTRY_USER@g" \
| tee workflows/cd-mock.yaml
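# Note that piping `cat` of a file back into `tee` on the same file can race:
# `tee` truncates the file and, depending on buffering, `cat` may read it
# after truncation and emit nothing. A safer equivalent (sketched on a
# throwaway demo.yaml file, not the real manifest) writes to a temporary
# file first and then moves it over the original:

```shell
# Create a throwaway file standing in for workflows/cd-mock.yaml.
cat > demo.yaml <<'EOF'
value: vfarcic/devops-toolkit
EOF

REGISTRY_USER=xyz

# Write to a temp file, then move it over the original, instead of
# tee-ing back into the file that is still being read.
sed -e "s@value: vfarcic@value: $REGISTRY_USER@g" demo.yaml > demo.yaml.tmp \
    && mv demo.yaml.tmp demo.yaml

cat demo.yaml
```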
argo --namespace workflows submit \
workflows/cd-mock.yaml
argo --namespace workflows list
argo --namespace workflows \
get @latest
argo --namespace workflows \
logs @latest \
--follow
open http://$ARGO_WORKFLOWS_HOST
kubectl --namespace workflows get pods
@iliion commented May 24, 2021

Hi. I get UNAUTHORIZED errors and I can't pull the images from the repo. I'm confused.

vagrant@vagrant:~/argo-workflows-demo$ argo --namespace workflows \
    logs @latest \
    --follow

toolkit-rvfbv-1809338610: Enumerating objects: 346, done.
Counting objects: 100% (64/64), done.
Compressing objects: 100% (51/51), done.
toolkit-rvfbv-1809338610: Total 346 (delta 26), reused 47 (delta 12), pack-reused 282
toolkit-rvfbv-1809338610: error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "vfarcic/devops-toolkit:1.0.0": POST https://index.docker.io/v2/vfarcic/devops-toolkit/blobs/uploads/: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:vfarcic/devops-toolkit Type:repository] map[Action:push Class: Name:vfarcic/devops-toolkit Type:repository]]

@vfarcic (author) commented May 24, 2021

I forgot to add the command that changes the registry user from vfarcic to your user. I just added https://gist.github.com/vfarcic/28e2adb5946ca366d7845780608591d7#file-57-argo-workflows-sh-L100. That should fix the problem. Can you try it out and let me know whether it worked?

@iliion commented May 24, 2021

It correctly edits the file after I removed the space before vfarcic and $REGISTRY_USER:

cat workflows/cd-mock.yaml \
| sed -e "s@value:vfarcic@value:$REGISTRY_USER@g" \
| tee workflows/cd-mock.yaml

Other than that, I get a new error when I submit the workflow.

argo --namespace workflows submit workflows/cd-mock.yaml
FATA[2021-05-24T13:07:58.685Z] Failed to submit workflow: templates.full.tasks.build-container-image template reference container-image.build-kaniko-git not found

This might be related to my Kustomize installation. I'm looking into it.

@vfarcic (author) commented May 24, 2021

That's strange since there is a space between value: and vfarcic. Take a look at the following commands and the output:

export REGISTRY_USER=xyz

cat workflows/cd-mock.yaml \
    | sed -e "s@value: vfarcic@value: $REGISTRY_USER@g"

The output:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: toolkit-
  labels:
    workflows.argoproj.io/archive-strategy: "false"
spec:
  entrypoint: full
  serviceAccountName: workflow
  volumes:
  - name: kaniko-secret
    secret:
      secretName: regcred
      items:
        - key: .dockerconfigjson
          path: config.json
  templates:
  - name: full
    dag:
      tasks:
      - name: build-container-image
        templateRef:
          name: container-image
          template: build-kaniko-git
          clusterScope: true
        arguments:
          parameters:
          - name: app_repo
            value: git://github.com/vfarcic/argo-workflows-demo
          - name: container_image
            value: xyz/devops-toolkit
          - name: container_tag
            value: "1.0.0"
      - name: deploy-staging
        template: echo
        arguments:
          parameters:
          - name: message
            value: Deploying to the staging cluster...
        dependencies:
        - build-container-image
      - name: tests
        template: echo
        arguments:
          parameters:
          - name: message
            value: Running integration tests (before, during, and after the deployment is finished)...
        dependencies:
        - build-container-image
      - name: deploy-production
        template: echo
        arguments:
          parameters:
          - name: message
            value: Deploying to the production cluster...
        dependencies:
        - tests
  - name: echo
    inputs:
      parameters:
      - name: message
    container:
      image: alpine
      command: [echo]
      args:
      - "{{inputs.parameters.message}}"

You can see that the output now contains value: xyz/devops-toolkit instead of value: vfarcic/devops-toolkit.

@iliion commented May 24, 2021

I did not manage to complete the tutorial.

For what it's worth:

cat workflows/cd-mock.yaml \
| sed -e "s@value: vfarcic@value: $REGISTRY_USER@g"

is working correctly.

While

cat workflows/cd-mock.yaml \
| sed -e "s@value: vfarcic@value: $REGISTRY_USER@g" \
| tee workflows/cd-mock.yaml

is deleting the contents of the file.

@vfarcic (author) commented May 24, 2021

If the first command works, the second should work as well since it is piping the output to tee that writes it into the specified file (which happens to be the same one).

Would it help if we do a screen-sharing session and take a look at it together? If that sounds good, please pick any time that suits you from https://calendly.com/vfarcic/meet.

@iliion commented May 25, 2021

Thanks very much for your availability 😃. I went ahead and almost completed it, so hopefully I won't take much more of your time.


First of all, the following command worked as expected when I used another terminal (my bad):

cat workflows/cd-mock.yaml | sed -e "s@value: vfarcic@value: $REGISTRY_USER@g" | tee workflows/cdock.yaml

Apart from that, building with kustomize generates the following error (which seems related to kubernetes-sigs/kustomize#2538):

kustomize build     argo-workflows/overlays/production     | kubectl apply --filename -

Error: accumulating resources: accumulation err='accumulating resources from '../../base': '/home/vagrant/argocd-production/argo-workflows/base' must resolve to a file': recursed accumulation of path '/home/vagrant/argocd-production/argo-workflows/base': accumulating resources: accumulation err='accumulating resources from 'github.com/argoproj/argo/manifests/base': evalsymlink failure on '/home/vagrant/argocd-production/argo-workflows/base/github.com/argoproj/argo/manifests/base' : lstat /home/vagrant/argocd-production/argo-workflows/base/github.com: no such file or directory': git cmd = '/snap/kustomize/28/usr/bin/git init': exit status 1


Another approach

I used the -k flag of kubectl for building (since Kustomize is now integrated into kubectl).

kubectl apply -k argo-workflows/overlays/production/

For this to work, one must first create the namespace using the --save-config flag, like this:

kubectl create namespace workflows --save-config

Then I followed the next steps with success.

@vfarcic (author) commented May 26, 2021

I'm not using kubectl apply -k because it has a very old version of Kustomize without any sign that it'll ever be updated. You could also try upgrading Kustomize. I'm currently using 4+.

There should be no need to create the workflows Namespace separately. You can see that https://github.com/vfarcic/argocd-production/blob/master/argo-workflows/overlays/workflows/kustomization.yaml has namespace.yaml as one of the resources. That file (https://github.com/vfarcic/argocd-production/blob/master/argo-workflows/overlays/workflows/namespace.yaml) is the manifest that defines the workflows Namespace.

@hhuangpen commented:

I followed the steps exactly. Is there any reason that open http://$ARGO_WORKFLOWS_HOST renders 502 Bad Gateway?

@vfarcic (author) commented Jan 11, 2022

That can indicate Ingress not being installed, the application not running, or quite a few other server-side issues. Can you do curl -i http://$ARGO_WORKFLOWS_HOST and paste the output?

@hhuangpen commented:

@vfarcic I figured it out. According to the Argo Workflows documentation, when creating the Nginx Ingress, nginx.ingress.kubernetes.io/backend-protocol: https needs to be added to the annotations.

Also, a client token is now needed to access argo-server.
I used "kubectl -n argo exec (argo-server pod name) -- argo auth token" to generate the token.

@MazenElzanaty commented:

Hi @vfarcic, thanks for posting this. I have one issue where I get

error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "namespace/username/test-repo:1.0.0": POST https://index.docker.io/v2/namespace/username/test-repo/blobs/uploads/: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:namespace/username/test-repo Type:repository] map[Action:push Class: Name:namespace/username/test-repo Type:repository]]

I'm using the Oracle container registry, whose username consists of a namespace and a username. I did set up my regcred correctly, though I can't seem to get it working with the template.

I have this value in the template

      - name: container_image
        value: namespace/username/test-repo

but it says https://index.docker.io/v2 instead of syd.ocir.io, which is the registry in the regcred secret.

@vfarcic (author) commented Jan 30, 2022

@MazenElzanaty Can you confirm that the secret was created in the same namespace where Workflow build is running?

@MazenElzanaty commented:

@vfarcic Yes. Actually, I think the issue is with kaniko itself.

@theodoreandrew commented:

Hi @vfarcic, I got this error when I submitted the workflow to argo:

toolkit-v4l9s-2218966670: Enumerating objects: 350, done.
Counting objects: 100% (68/68), done.
Compressing objects: 100% (54/54), done.
toolkit-v4l9s-2218966670: Total 350 (delta 28), reused 50 (delta 13), pack-reused 282
toolkit-v4l9s-2218966670: kaniko should only be run inside of a container, run with the --force flag if you are sure you want to continue

I already retried everything (deleting the workflows namespace and re-adding it), but it still doesn't work. Can you help me with this? Thanks! :)

@vfarcic (author) commented Mar 18, 2022

@theodoreandrew I heard a similar complaint a week ago and, if I remember correctly, it was reproducible on Docker Desktop Kubernetes. Where are you running it?

@theodoreandrew commented Mar 18, 2022

@vfarcic Oh, I also use Docker Desktop. I am using minikube as the VM. I am not sure if that's what you are asking, since I am also a bit new to k8s.

@vfarcic (author) commented Mar 18, 2022

Can you try it on, let's say, Rancher Desktop? I've been using it exclusively for a while now (approximately 6 months) and haven't seen any issues with it. Also, it's been working fine in "real" clusters like, for example, GKE and EKS.

If Rancher Desktop is not an option for you (even though I highly recommend it; watch https://youtu.be/evWPib0iNgY), I'll do my best to install whatever you're using and try to reproduce it. In that case, please let me know whether you're using Minikube or Docker Desktop. If it's minikube, please let me know which driver you're using (if it's the default one, you should see it in the output of minikube start).

@theodoreandrew commented:

@vfarcic I just ran minikube start and saw that it is using the docker driver.

@vfarcic (author) commented Mar 19, 2022

That (minikube with Docker) is the combination others were complaining about. The workaround is to add the --force argument, at least until a "real" fix is done (if ever).

Independently of that issue, I strongly recommend switching to Rancher Desktop as a local Kubernetes cluster.

@saroj2052 commented:

@vfarcic Would you please help me fix this issue? error: no kind "Workflow" is registered for version "argoproj.io/v1alpha1" in scheme "pkg/scheme/scheme.go:28"

@vfarcic (author) commented Sep 21, 2022

Where did you observe that error?

@saroj2052 commented Sep 22, 2022

@vfarcic I saw it in the logs of the workflow pod that is created by sensors.

❯ kubectl get workflow -n argo
NAME              STATUS      AGE
node-test-4cfkz   Succeeded   17h

❯ kubectl logs workflow/node-test-4cfkz -n argo
error: no kind "Workflow" is registered for version "argoproj.io/v1alpha1" in scheme "pkg/scheme/scheme.go:28"

@vfarcic (author) commented Sep 24, 2022

I haven't experienced that error myself. I'll do my best to reproduce it and, if I do, figure out what to do. However, I'm traveling with limited available time so I can't confirm when I'll get to it.

@saroj2052 commented Sep 24, 2022

@vfarcic When I try to get the logs of the CRD, I see a similar error. Would you please check?

@vfarcic (author) commented Sep 28, 2022

Sorry for not responding earlier. I was (and still am) traveling with little to no free time. I'll do my best to double-check it soon.

@saroj2052 commented:

@vfarcic Thanks for the response. Enjoy the trip!

@cyberslot commented:

I followed the steps exactly. Is there any reason that open http://$ARGO_WORKFLOWS_HOST renders 502 Bad Gateway?

To avoid it, set the path in the argo-server Ingress to "/argo(/|$)(.*)" rather than the default "/" (root directory). See https://argoproj.github.io/argo-workflows/argo-server/#ingress as a reference.
