@shaunc
Created December 14, 2021 02:48
Output from a helm install of the fission-all chart (v1.15.0)
# helm --debug -n fission install fission fission-charts/fission-all --version v1.15.0 --set "routerServiceType=ClusterIP" >helm_install.txt
# stderr output
install.go:178: [debug] Original chart version: "v1.15.0"
install.go:199: [debug] CHART PATH: /Users/shauncutts/Library/Caches/helm/repository/fission-all-v1.15.0.tgz
client.go:128: [debug] creating 29 resource(s)
client.go:128: [debug] creating 1 resource(s)
client.go:528: [debug] Watching for changes to Job fission-fission-all-v1.15.0 with timeout of 5m0s
client.go:556: [debug] Add/Modify event for fission-fission-all-v1.15.0: ADDED
client.go:595: [debug] fission-fission-all-v1.15.0: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:556: [debug] Add/Modify event for fission-fission-all-v1.15.0: MODIFIED
client.go:595: [debug] fission-fission-all-v1.15.0: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
client.go:556: [debug] Add/Modify event for fission-fission-all-v1.15.0: MODIFIED
client.go:299: [debug] Starting delete for "fission-fission-all-v1.15.0" Job
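# The Job watched and then deleted above appears to be the chart's post-install analytics
# hook (see HOOKS below); its "helm.sh/hook-delete-policy": hook-succeeded annotation tells
# Helm to remove it once it completes, so the delete is expected. Possible follow-up checks
# (illustrative, not part of the captured run):
# $ helm -n fission status fission
# $ kubectl -n fission get pods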
# stdout output
NAME: fission
LAST DEPLOYED: Mon Dec 13 21:45:43 2021
NAMESPACE: fission
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
routerServiceType: ClusterIP
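# The single override above could equally be supplied from a values file instead of --set
# (the file name my-values.yaml is illustrative, not from the captured run):
# $ printf 'routerServiceType: ClusterIP\n' > my-values.yaml
# $ helm --debug -n fission install fission fission-charts/fission-all --version v1.15.0 -f my-values.yaml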
COMPUTED VALUES:
analytics: true
analyticsNonHelmInstall: false
azureStorageQueue:
accountName: ""
enabled: false
key: ""
builderNamespace: fission-builder
busyboxImage: busybox
canaryDeployment:
enabled: false
controllerPort: 31313
createNamespace: true
debugEnv: false
enableIstio: false
executor:
adoptExistingResources: false
podReadyTimeout: 300s
priorityClassName: ""
terminationMessagePath: ""
terminationMessagePolicy: ""
fetcher:
image: fission/fetcher
imageTag: v1.15.0
resource:
cpu:
limits: ""
requests: 10m
mem:
limits: ""
requests: 16Mi
functionNamespace: fission-function
gaTrackingID: UA-196546703-1
image: fission/fission-bundle
imageTag: v1.15.0
influxdb:
enabled: false
image: influxdb:1.8
kafka:
authentication:
tls:
caCert: ""
enabled: false
insecureSkipVerify: false
userCert: ""
userKey: ""
brokers: broker.kafka:9092
enabled: false
logger:
enableSecurityContext: false
fluentdImage: fluent/fluent-bit
fluentdImageRepository: index.docker.io
fluentdImageTag: 1.8.8
influxdbAdmin: admin
podSecurityPolicy:
additionalCapabilities: null
enabled: false
mqt_keda:
connector_images:
aws_sqs:
image: fission/keda-aws-sqs-http-connector
tag: v0.8
awskinesis:
image: fission/keda-aws-kinesis-http-connector
tag: v0.8
gcp_pub_sub:
image: fission/keda-gcp-pubsub-http-connector
tag: v0.3
kafka:
image: fission/keda-kafka-http-connector
tag: v0.9
nats_steaming:
image: fission/keda-nats-streaming-http-connector
tag: v0.9
rabbitmq:
image: fission/keda-rabbitmq-http-connector
tag: v0.8
redis:
image: fission/keda-redis-http-connector
tag: v0.1
enabled: true
nats:
authToken: defaultFissionAuthToken
clientID: fission
clusterID: fissionMQTrigger
enabled: false
external: false
hostaddress: nats-streaming:4222
queueGroup: fission-messageQueueNatsTrigger
streamingserver:
image: nats-streaming
tag: 0.23.0
natsStreamingPort: 31316
openTelemetry:
otlpCollectorEndpoint: ""
otlpHeaders: ""
otlpInsecure: true
propagators: tracecontext,baggage
tracesSampler: parentbased_traceidratio
tracesSamplingRate: "0.1"
openTracing:
enabled: false
persistence:
accessMode: ReadWriteOnce
enabled: true
size: 8Gi
postInstallReportImage: fission/reporter
pprof:
enabled: false
preUpgradeChecks:
enabled: true
image: fission/pre-upgrade-checks
imageTag: v1.15.0
priorityClassName: ""
prometheus:
alertRelabelConfigs: null
alertmanager:
affinity: {}
baseURL: http://localhost:9093
clusterPeers: []
configFileName: alertmanager.yml
configFromSecret: ""
configMapOverrideName: ""
deploymentAnnotations: {}
dnsConfig: {}
emptyDir:
sizeLimit: ""
enabled: true
extraArgs: {}
extraEnv: {}
extraInitContainers: []
extraSecretMounts: []
image:
pullPolicy: IfNotPresent
repository: quay.io/prometheus/alertmanager
tag: v0.21.0
ingress:
annotations: {}
enabled: false
extraLabels: {}
extraPaths: []
hosts: []
path: /
pathType: Prefix
tls: []
name: alertmanager
nodeSelector: {}
persistentVolume:
accessModes:
- ReadWriteOnce
annotations: {}
enabled: true
existingClaim: ""
mountPath: /data
size: 2Gi
subPath: ""
podAnnotations: {}
podDisruptionBudget:
enabled: false
maxUnavailable: 1
podLabels: {}
podSecurityPolicy:
annotations: {}
prefixURL: ""
priorityClassName: ""
replicaCount: 1
resources: {}
securityContext:
fsGroup: 65534
runAsGroup: 65534
runAsNonRoot: true
runAsUser: 65534
service:
annotations: {}
clusterIP: ""
externalIPs: []
labels: {}
loadBalancerIP: ""
loadBalancerSourceRanges: []
servicePort: 80
sessionAffinity: None
type: ClusterIP
statefulSet:
annotations: {}
enabled: false
headless:
annotations: {}
enableMeshPeer: false
labels: {}
servicePort: 80
labels: {}
podManagementPolicy: OrderedReady
tolerations: []
useClusterRole: true
useExistingRole: false
alertmanagerFiles:
alertmanager.yml:
global: {}
receivers:
- name: default-receiver
route:
group_interval: 5m
group_wait: 10s
receiver: default-receiver
repeat_interval: 3h
configmapReload:
alertmanager:
enabled: true
extraArgs: {}
extraConfigmapMounts: []
extraVolumeDirs: []
image:
pullPolicy: IfNotPresent
repository: jimmidyson/configmap-reload
tag: v0.5.0
name: configmap-reload
resources: {}
prometheus:
enabled: true
extraArgs: {}
extraConfigmapMounts: []
extraVolumeDirs: []
image:
pullPolicy: IfNotPresent
repository: jimmidyson/configmap-reload
tag: v0.5.0
name: configmap-reload
resources: {}
enabled: false
extraScrapeConfigs: null
forceNamespace: null
global: {}
imagePullSecrets: null
kube-state-metrics:
affinity: {}
autosharding:
enabled: false
collectors:
- certificatesigningrequests
- configmaps
- cronjobs
- daemonsets
- deployments
- endpoints
- horizontalpodautoscalers
- ingresses
- jobs
- limitranges
- mutatingwebhookconfigurations
- namespaces
- networkpolicies
- nodes
- persistentvolumeclaims
- persistentvolumes
- poddisruptionbudgets
- pods
- replicasets
- replicationcontrollers
- resourcequotas
- secrets
- services
- statefulsets
- storageclasses
- validatingwebhookconfigurations
- volumeattachments
containerSecurityContext: {}
customLabels: {}
extraArgs: []
global: {}
hostNetwork: false
image:
pullPolicy: IfNotPresent
repository: k8s.gcr.io/kube-state-metrics/kube-state-metrics
tag: v2.2.0
imagePullSecrets: []
kubeTargetVersionOverride: ""
kubeconfig:
enabled: false
secret: null
metricAllowlist: []
metricAnnotationsAllowList: []
metricDenylist: []
metricLabelsAllowlist: []
namespaceOverride: ""
namespaces: ""
nodeSelector: {}
podAnnotations: {}
podDisruptionBudget: {}
podSecurityPolicy:
additionalVolumes: []
annotations: {}
enabled: false
prometheus:
monitor:
additionalLabels: {}
enabled: false
honorLabels: false
metricRelabelings: []
namespace: ""
relabelings: []
prometheusScrape: true
rbac:
create: true
useClusterRole: true
replicas: 1
resources: {}
securityContext:
enabled: true
fsGroup: 65534
runAsGroup: 65534
runAsUser: 65534
selfMonitor:
enabled: false
service:
annotations: {}
loadBalancerIP: ""
nodePort: 0
port: 8080
type: ClusterIP
serviceAccount:
annotations: {}
create: true
imagePullSecrets: []
name: null
tolerations: []
kubeStateMetrics:
enabled: true
networkPolicy:
enabled: false
nodeExporter:
dnsConfig: {}
enabled: true
extraArgs: {}
extraConfigmapMounts: []
extraHostPathMounts: []
extraInitContainers: []
hostNetwork: true
hostPID: true
hostRootfs: true
image:
pullPolicy: IfNotPresent
repository: quay.io/prometheus/node-exporter
tag: v1.1.2
name: node-exporter
nodeSelector: {}
pod:
labels: {}
podAnnotations: {}
podDisruptionBudget:
enabled: false
maxUnavailable: 1
podSecurityPolicy:
annotations: {}
priorityClassName: ""
resources: {}
securityContext:
fsGroup: 65534
runAsGroup: 65534
runAsNonRoot: true
runAsUser: 65534
service:
annotations:
prometheus.io/scrape: "true"
clusterIP: None
externalIPs: []
hostPort: 9100
labels: {}
loadBalancerIP: ""
loadBalancerSourceRanges: []
servicePort: 9100
type: ClusterIP
tolerations: []
updateStrategy:
type: RollingUpdate
podSecurityPolicy:
enabled: false
pushgateway:
deploymentAnnotations: {}
dnsConfig: {}
enabled: true
extraArgs: {}
extraInitContainers: []
image:
pullPolicy: IfNotPresent
repository: prom/pushgateway
tag: v1.3.1
ingress:
annotations: {}
enabled: false
extraPaths: []
hosts: []
path: /
pathType: Prefix
tls: []
name: pushgateway
nodeSelector: {}
persistentVolume:
accessModes:
- ReadWriteOnce
annotations: {}
enabled: false
existingClaim: ""
mountPath: /data
size: 2Gi
subPath: ""
podAnnotations: {}
podDisruptionBudget:
enabled: false
maxUnavailable: 1
podLabels: {}
podSecurityPolicy:
annotations: {}
priorityClassName: ""
replicaCount: 1
resources: {}
securityContext:
runAsNonRoot: true
runAsUser: 65534
service:
annotations:
prometheus.io/probe: pushgateway
clusterIP: ""
externalIPs: []
labels: {}
loadBalancerIP: ""
loadBalancerSourceRanges: []
servicePort: 9091
type: ClusterIP
tolerations: []
verticalAutoscaler:
enabled: false
rbac:
create: true
server:
affinity: {}
alertmanagers: []
baseURL: ""
configMapOverrideName: ""
configPath: /etc/config/prometheus.yml
deploymentAnnotations: {}
dnsConfig: {}
dnsPolicy: ClusterFirst
emptyDir:
sizeLimit: ""
enableServiceLinks: true
enabled: true
env: []
extraArgs: {}
extraConfigmapMounts: []
extraFlags:
- web.enable-lifecycle
extraHostPathMounts: []
extraInitContainers: []
extraSecretMounts: []
extraVolumeMounts: []
extraVolumes: []
global:
evaluation_interval: 1m
scrape_interval: 1m
scrape_timeout: 10s
hostAliases: []
hostNetwork: false
image:
pullPolicy: IfNotPresent
repository: quay.io/prometheus/prometheus
tag: v2.26.0
ingress:
annotations: {}
enabled: false
extraLabels: {}
extraPaths: []
hosts: []
path: /
pathType: Prefix
tls: []
livenessProbeFailureThreshold: 3
livenessProbeInitialDelay: 30
livenessProbePeriodSeconds: 15
livenessProbeSuccessThreshold: 1
livenessProbeTimeout: 10
name: server
nodeSelector: {}
persistentVolume:
accessModes:
- ReadWriteOnce
annotations: {}
enabled: true
existingClaim: ""
mountPath: /data
size: 8Gi
subPath: ""
podAnnotations: {}
podDisruptionBudget:
enabled: false
maxUnavailable: 1
podLabels: {}
podSecurityPolicy:
annotations: {}
prefixURL: ""
priorityClassName: ""
probeHeaders: []
probeScheme: HTTP
readinessProbeFailureThreshold: 3
readinessProbeInitialDelay: 30
readinessProbePeriodSeconds: 5
readinessProbeSuccessThreshold: 1
readinessProbeTimeout: 4
remoteRead: []
remoteWrite: []
replicaCount: 1
resources: {}
retention: 15d
securityContext:
fsGroup: 65534
runAsGroup: 65534
runAsNonRoot: true
runAsUser: 65534
service:
annotations: {}
clusterIP: ""
externalIPs: []
gRPC:
enabled: false
servicePort: 10901
labels: {}
loadBalancerIP: ""
loadBalancerSourceRanges: []
servicePort: 80
sessionAffinity: None
statefulsetReplica:
enabled: false
replica: 0
type: ClusterIP
sidecarContainers: []
sidecarTemplateValues: {}
startupProbe:
enabled: false
failureThreshold: 30
periodSeconds: 5
timeoutSeconds: 10
statefulSet:
annotations: {}
enabled: false
headless:
annotations: {}
gRPC:
enabled: false
servicePort: 10901
labels: {}
servicePort: 80
labels: {}
podManagementPolicy: OrderedReady
storagePath: ""
tcpSocketProbeEnabled: false
terminationGracePeriodSeconds: 300
tolerations: []
verticalAutoscaler:
enabled: false
serverFiles:
alerting_rules.yml: {}
alerts: {}
prometheus.yml:
rule_files:
- /etc/config/recording_rules.yml
- /etc/config/alerting_rules.yml
- /etc/config/rules
- /etc/config/alerts
scrape_configs:
- job_name: prometheus
static_configs:
- targets:
- localhost:9090
- bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
job_name: kubernetes-apiservers
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: keep
regex: default;kubernetes;https
source_labels:
- __meta_kubernetes_namespace
- __meta_kubernetes_service_name
- __meta_kubernetes_endpoint_port_name
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
- bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
job_name: kubernetes-nodes
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- replacement: kubernetes.default.svc:443
target_label: __address__
- regex: (.+)
replacement: /api/v1/nodes/$1/proxy/metrics
source_labels:
- __meta_kubernetes_node_name
target_label: __metrics_path__
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
- bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
job_name: kubernetes-nodes-cadvisor
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- replacement: kubernetes.default.svc:443
target_label: __address__
- regex: (.+)
replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
source_labels:
- __meta_kubernetes_node_name
target_label: __metrics_path__
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
- job_name: kubernetes-service-endpoints
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_scrape
- action: replace
regex: (https?)
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_scheme
target_label: __scheme__
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_service_annotation_prometheus_io_port
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_service_annotation_prometheus_io_param_(.+)
replacement: __param_$1
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- action: replace
source_labels:
- __meta_kubernetes_service_name
target_label: kubernetes_name
- action: replace
source_labels:
- __meta_kubernetes_pod_node_name
target_label: kubernetes_node
- job_name: kubernetes-service-endpoints-slow
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_scrape_slow
- action: replace
regex: (https?)
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_scheme
target_label: __scheme__
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_service_annotation_prometheus_io_port
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_service_annotation_prometheus_io_param_(.+)
replacement: __param_$1
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- action: replace
source_labels:
- __meta_kubernetes_service_name
target_label: kubernetes_name
- action: replace
source_labels:
- __meta_kubernetes_pod_node_name
target_label: kubernetes_node
scrape_interval: 5m
scrape_timeout: 30s
- honor_labels: true
job_name: prometheus-pushgateway
kubernetes_sd_configs:
- role: service
relabel_configs:
- action: keep
regex: pushgateway
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_probe
- job_name: kubernetes-services
kubernetes_sd_configs:
- role: service
metrics_path: /probe
params:
module:
- http_2xx
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_probe
- source_labels:
- __address__
target_label: __param_target
- replacement: blackbox
target_label: __address__
- source_labels:
- __param_target
target_label: instance
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- source_labels:
- __meta_kubernetes_service_name
target_label: kubernetes_name
- job_name: kubernetes-pods
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_scrape
- action: replace
regex: (https?)
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_scheme
target_label: __scheme__
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_pod_annotation_prometheus_io_port
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_annotation_prometheus_io_param_(.+)
replacement: __param_$1
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: kubernetes_pod_name
- action: drop
regex: Pending|Succeeded|Failed|Completed
source_labels:
- __meta_kubernetes_pod_phase
- job_name: kubernetes-pods-slow
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_scrape_slow
- action: replace
regex: (https?)
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_scheme
target_label: __scheme__
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_pod_annotation_prometheus_io_port
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_annotation_prometheus_io_param_(.+)
replacement: __param_$1
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: kubernetes_pod_name
- action: drop
regex: Pending|Succeeded|Failed|Completed
source_labels:
- __meta_kubernetes_pod_phase
scrape_interval: 5m
scrape_timeout: 30s
recording_rules.yml: {}
rules: {}
serviceAccounts:
alertmanager:
annotations: {}
create: true
name: null
nodeExporter:
annotations: {}
create: true
name: null
pushgateway:
annotations: {}
create: true
name: null
server:
annotations: {}
create: true
name: null
serviceEndpoint: ""
pruneInterval: 60
pullPolicy: IfNotPresent
repository: index.docker.io
router:
deployAsDaemonSet: false
displayAccessLog: false
priorityClassName: ""
resources: {}
roundTrip:
disableKeepAlive: false
keepAliveTime: 30s
maxRetries: 10
timeout: 50ms
timeoutExponent: 2
svcAddressMaxRetries: 5
svcAddressUpdateTimeout: 30s
terminationMessagePath: ""
terminationMessagePolicy: ""
unTapServiceTimeout: 3600s
useEncodedPath: false
routerPort: 31314
routerServiceType: ClusterIP
serviceType: ClusterIP
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
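# The computed values above can be re-inspected after the install; without --all,
# helm get values prints only the user-supplied overrides:
# $ helm -n fission get values fission --all
# $ helm -n fission get values fission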
HOOKS:
---
# Source: fission-all/templates/analytics/post-install-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: fission-fission-all-v1.15.0
labels:
# The "release" convention makes it easy to tie a release to all of the
# Kubernetes resources that were created as part of that release.
release: fission
# This makes it easy to audit chart usage.
chart: fission-all-v1.15.0
app: fission-all
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": post-install
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
name: fission-fission-all
labels:
release: fission
app: fission-all
annotations:
spec:
restartPolicy: Never
containers:
- name: post-install-job
image: fission/reporter:v1.15.0
imagePullPolicy: IfNotPresent
command: [ "/reporter" ]
args: ["event", "-c", "fission-use", "-a", "helm-post-install", "-l", "fission-all-v1.15.0"]
env:
- name: GA_TRACKING_ID
value: "UA-196546703-1"
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
serviceAccountName: fission-svc
---
# Source: fission-all/templates/analytics/post-upgrade-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: fission-fission-all-v1.15.0
labels:
# The "release" convention makes it easy to tie a release to all of the
# Kubernetes resources that were created as part of that release.
release: fission
# This makes it easy to audit chart usage.
chart: fission-all-v1.15.0
app: fission-all
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": post-upgrade
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
name: fission-fission-all
labels:
release: fission
app: fission-all
annotations:
spec:
restartPolicy: Never
containers:
- name: post-upgrade-job
image: fission/reporter:v1.15.0
imagePullPolicy: IfNotPresent
command: [ "/reporter" ]
args: ["event", "-c", "fission-use", "-a", "helm-post-upgrade", "-l", "fission-all-v1.15.0"]
env:
- name: GA_TRACKING_ID
value: "UA-196546703-1"
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
serviceAccountName: fission-svc
---
# Source: fission-all/templates/pre-upgrade-checks/pre-upgrade-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: fission-fission-all-v1.15.0-135
labels:
# The "release" convention makes it easy to tie a release to all of the
# Kubernetes resources that were created as part of that release.
release: "fission"
# This makes it easy to audit chart usage.
chart: fission-all-v1.15.0
app: fission-all
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-upgrade
"helm.sh/hook-delete-policy": hook-succeeded
spec:
backoffLimit: 0
template:
metadata:
name: fission-fission-all
labels:
release: "fission"
app: fission-all
spec:
restartPolicy: Never
containers:
- name: pre-upgrade-job
image: fission/pre-upgrade-checks:v1.15.0
imagePullPolicy: IfNotPresent
command: [ "/pre-upgrade-checks" ]
args: ["--fn-pod-namespace", "fission-function", "--envbuilder-namespace", "fission-builder"]
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
serviceAccountName: fission-svc
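# All three hook jobs above use "helm.sh/hook-delete-policy": hook-succeeded, which is why
# the stderr log at the top shows the post-install job being deleted. The two analytics jobs
# report usage via GA_TRACKING_ID; since the chart exposes analytics: true in the computed
# values, an install without them would presumably look like this (assumption, not verified here):
# $ helm -n fission install fission fission-charts/fission-all --version v1.15.0 --set analytics=false --set "routerServiceType=ClusterIP"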
MANIFEST:
---
# Source: fission-all/templates/misc-functions/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: fission-function
labels:
name: fission-function
chart: "fission-all-v1.15.0"
---
# Source: fission-all/templates/misc-functions/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: fission-builder
labels:
name: fission-builder
chart: "fission-all-v1.15.0"
---
# Source: fission-all/templates/common/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: fission-svc
namespace: fission
---
# Source: fission-all/templates/misc-functions/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: fission-fetcher
namespace: fission-function
---
# Source: fission-all/templates/misc-functions/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: fission-builder
namespace: fission-builder
---
# Source: fission-all/templates/controller/cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: feature-config
namespace: fission
data:
"config.yaml": Y2FuYXJ5OgogIGVuYWJsZWQ6IGZhbHNlCiAgcHJvbWV0aGV1c1N2YzogIiIK
---
# Source: fission-all/templates/storagesvc/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: fission-storage-pvc
labels:
app: fission-storage
chart: "fission-all-v1.15.0"
release: "fission"
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "8Gi"
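# This claim backs the storage service further below (persistence.enabled: true and
# size: 8Gi in the computed values); a quick way to confirm it binds (illustrative command):
# $ kubectl -n fission get pvc fission-storage-pvc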
---
# Source: fission-all/templates/common/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fission-cr-admin
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- services
- serviceaccounts
- replicationcontrollers
- namespaces
- events
verbs:
- create
- delete
- get
- list
- watch
- patch
- apiGroups:
- apps
resources:
- deployments
- deployments/scale
- replicasets
verbs:
- '*'
- apiGroups:
- batch
resources:
- jobs
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- '*'
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- get
- list
- watch
- apiGroups:
- fission.io
resources:
- canaryconfigs
- environments
- functions
- httptriggers
- kuberneteswatchtriggers
- messagequeuetriggers
- packages
- timetriggers
verbs:
- '*'
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- '*'
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- '*'
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterroles
verbs:
- bind
- apiGroups:
- keda.sh
resources:
- scaledjobs
- scaledobjects
- scaledjobs/finalizers
- scaledjobs/status
- triggerauthentications
- triggerauthentications/status
verbs:
- '*'
- apiGroups:
- keda.k8s.io
resources:
- scaledjobs
- scaledobjects
- scaledjobs/finalizers
- scaledjobs/status
- triggerauthentications
- triggerauthentications/status
verbs:
- '*'
- apiGroups:
- metrics.k8s.io
resources:
- pods
verbs:
- get
- list
---
# Source: fission-all/templates/misc-functions/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: secret-configmap-getter
rules:
- apiGroups:
- "*"
resources:
- secrets
- configmaps
verbs:
- get
- watch
- list
---
# Source: fission-all/templates/misc-functions/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: package-getter
rules:
- apiGroups:
- "*"
resources:
- packages
verbs:
- "*"
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- "*"
---
# Source: fission-all/templates/common/clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fission-cr-admin
subjects:
- kind: ServiceAccount
name: fission-svc
namespace: fission
roleRef:
kind: ClusterRole
name: fission-cr-admin
apiGroup: rbac.authorization.k8s.io
---
# Source: fission-all/templates/misc-functions/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: fission-fetcher
namespace: default
rules:
- apiGroups:
- ""
resources:
- configmaps
- secrets
- pods
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- "*"
- apiGroups:
- fission.io
resources:
- canaryconfigs
- environments
- functions
- httptriggers
- kuberneteswatchtriggers
- messagequeuetriggers
- packages
- timetriggers
verbs:
- "*"
---
# Source: fission-all/templates/misc-functions/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: fission-builder
namespace: default
rules:
- apiGroups:
- fission.io
resources:
- canaryconfigs
- environments
- functions
- httptriggers
- kuberneteswatchtriggers
- messagequeuetriggers
- packages
- timetriggers
verbs:
- "*"
---
# Source: fission-all/templates/misc-functions/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: fission-function
name: event-fetcher
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
- apiGroups: [""] # "" indicates the core API group
resources: ["events"]
verbs: ["*"]
---
# Source: fission-all/templates/misc-functions/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: fission-fetcher
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: fission-fetcher
subjects:
- kind: ServiceAccount
name: fission-fetcher
namespace: fission-function
---
# Source: fission-all/templates/misc-functions/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: fission-builder
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: fission-builder
subjects:
- kind: ServiceAccount
name: fission-builder
namespace: fission-builder
---
# Source: fission-all/templates/misc-functions/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: fission-fetcher-pod-reader
namespace: fission-function
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: event-fetcher
subjects:
- kind: ServiceAccount
name: fission-fetcher
namespace: fission-function
---
# Source: fission-all/templates/controller/svc.yaml
apiVersion: v1
kind: Service
metadata:
name: controller
labels:
svc: controller
application: fission-api
chart: "fission-all-v1.15.0"
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8888
selector:
svc: controller
---
# Source: fission-all/templates/executor/svc.yaml
apiVersion: v1
kind: Service
metadata:
name: executor
labels:
svc: executor
chart: "fission-all-v1.15.0"
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8888
selector:
svc: executor
---
# Source: fission-all/templates/router/svc.yaml
apiVersion: v1
kind: Service
metadata:
name: router
labels:
svc: router
application: fission-router
chart: "fission-all-v1.15.0"
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8888
selector:
svc: router
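# With routerServiceType=ClusterIP (the user-supplied override), this Service is not exposed
# outside the cluster; a port-forward is one way to reach it and invoke HTTP-triggered
# functions locally (local port 8080 and the trigger path are placeholders):
# $ kubectl -n fission port-forward svc/router 8080:80
# $ curl http://127.0.0.1:8080/<your-http-trigger-path>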
---
# Source: fission-all/templates/storagesvc/svc.yaml
apiVersion: v1
kind: Service
metadata:
name: storagesvc
labels:
svc: storagesvc
application: fission-storage
chart: "fission-all-v1.15.0"
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8000
selector:
svc: storagesvc
---
# Source: fission-all/templates/buildermgr/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: buildermgr
labels:
chart: "fission-all-v1.15.0"
svc: buildermgr
spec:
replicas: 1
selector:
matchLabels:
svc: buildermgr
template:
metadata:
labels:
svc: buildermgr
spec:
containers:
- name: buildermgr
image: "index.docker.io/fission/fission-bundle:v1.15.0"
imagePullPolicy: IfNotPresent
command: ["/fission-bundle"]
args: ["--builderMgr", "--storageSvcUrl", "http://storagesvc.fission", "--envbuilder-namespace", "fission-builder"]
env:
- name: FETCHER_IMAGE
value: "fission/fetcher:v1.15.0"
- name: FETCHER_IMAGE_PULL_POLICY
value: "IfNotPresent"
- name: BUILDER_IMAGE_PULL_POLICY
value: "IfNotPresent"
- name: ENABLE_ISTIO
value: "false"
- name: FETCHER_MINCPU
value: "10m"
- name: FETCHER_MINMEM
value: "16Mi"
- name: FETCHER_MAXCPU
value: ""
- name: FETCHER_MAXMEM
value: ""
- name: DEBUG_ENV
value: "false"
- name: PPROF_ENABLED
value: "false"
- name: OPENTRACING_ENABLED
value: "false"
- name: TRACE_JAEGER_COLLECTOR_ENDPOINT
value: ""
- name: TRACING_SAMPLING_RATE
value: "0.5"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: ""
- name: OTEL_EXPORTER_OTLP_INSECURE
value: "true"
- name: OTEL_TRACES_SAMPLER
value: "parentbased_traceidratio"
- name: OTEL_TRACES_SAMPLER_ARG
value: "0.1"
- name: OTEL_PROPAGATORS
value: "tracecontext,baggage"
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
serviceAccountName: fission-svc
---
# Source: fission-all/templates/controller/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller
labels:
chart: "fission-all-v1.15.0"
svc: controller
application: fission-api
spec:
replicas: 1
selector:
matchLabels:
svc: controller
application: fission-api
template:
metadata:
labels:
svc: controller
application: fission-api
spec:
containers:
- name: controller
image: "index.docker.io/fission/fission-bundle:v1.15.0"
imagePullPolicy: IfNotPresent
command: ["/fission-bundle"]
args: ["--controllerPort", "8888"]
env:
- name: FISSION_FUNCTION_NAMESPACE
value: "fission-function"
- name: DEBUG_ENV
value: "false"
- name: PPROF_ENABLED
value: "false"
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: OPENTRACING_ENABLED
value: "false"
- name: TRACE_JAEGER_COLLECTOR_ENDPOINT
value: ""
- name: TRACING_SAMPLING_RATE
value: "0.5"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: ""
- name: OTEL_EXPORTER_OTLP_INSECURE
value: "true"
- name: OTEL_TRACES_SAMPLER
value: "parentbased_traceidratio"
- name: OTEL_TRACES_SAMPLER_ARG
value: "0.1"
- name: OTEL_PROPAGATORS
value: "tracecontext,baggage"
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
readinessProbe:
httpGet:
path: "/healthz"
port: 8888
initialDelaySeconds: 1
periodSeconds: 1
failureThreshold: 30
livenessProbe:
httpGet:
path: "/healthz"
port: 8888
initialDelaySeconds: 35
periodSeconds: 5
volumeMounts:
- name: config-volume
mountPath: /etc/config/config.yaml
subPath: config.yaml
ports:
- containerPort: 8888
name: http
serviceAccountName: fission-svc
volumes:
- name: config-volume
configMap:
name: feature-config
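# The feature-config ConfigMap mounted above carries a base64-encoded config.yaml (see the
# ConfigMap earlier in this manifest); decoding it should show the canary feature settings:
# $ kubectl -n fission get configmap feature-config -o jsonpath='{.data.config\.yaml}' | base64 -d
# canary:
#   enabled: false
#   prometheusSvc: ""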
---
# Source: fission-all/templates/executor/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: executor
labels:
chart: "fission-all-v1.15.0"
svc: executor
spec:
replicas: 1
selector:
matchLabels:
svc: executor
template:
metadata:
labels:
svc: executor
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: "/metrics"
prometheus.io/port: "8080"
spec:
containers:
- name: executor
image: "index.docker.io/fission/fission-bundle:v1.15.0"
imagePullPolicy: IfNotPresent
command: ["/fission-bundle"]
args: ["--executorPort", "8888", "--namespace", "fission-function"]
env:
- name: FETCHER_IMAGE
value: "fission/fetcher:v1.15.0"
- name: FETCHER_IMAGE_PULL_POLICY
value: "IfNotPresent"
- name: RUNTIME_IMAGE_PULL_POLICY
value: "IfNotPresent"
- name: ADOPT_EXISTING_RESOURCES
value: "false"
- name: POD_READY_TIMEOUT
value: "300s"
- name: ENABLE_ISTIO
value: "false"
- name: FETCHER_MINCPU
value: "10m"
- name: FETCHER_MINMEM
value: "16Mi"
- name: FETCHER_MAXCPU
value: ""
- name: FETCHER_MAXMEM
value: ""
- name: DEBUG_ENV
value: "false"
- name: PPROF_ENABLED
value: "false"
- name: OPENTRACING_ENABLED
value: "false"
- name: TRACE_JAEGER_COLLECTOR_ENDPOINT
value: ""
- name: TRACING_SAMPLING_RATE
value: "0.5"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: ""
- name: OTEL_EXPORTER_OTLP_INSECURE
value: "true"
- name: OTEL_TRACES_SAMPLER
value: "parentbased_traceidratio"
- name: OTEL_TRACES_SAMPLER_ARG
value: "0.1"
- name: OTEL_PROPAGATORS
value: "tracecontext,baggage"
readinessProbe:
httpGet:
path: "/healthz"
port: 8888
initialDelaySeconds: 1
periodSeconds: 1
failureThreshold: 30
livenessProbe:
httpGet:
path: "/healthz"
port: 8888
initialDelaySeconds: 35
periodSeconds: 5
ports:
- containerPort: 8080
name: metrics
- containerPort: 8888
name: http
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
serviceAccountName: fission-svc
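# The executor (and router) pods are annotated for scraping and expose /metrics on the
# containerPort named metrics; since the bundled prometheus is disabled in the computed
# values (prometheus.enabled: false), an ad-hoc check could use a port-forward
# (illustrative commands):
# $ kubectl -n fission port-forward deploy/executor 8080:8080
# $ curl http://127.0.0.1:8080/metrics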
---
# Source: fission-all/templates/kubewatcher/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kubewatcher
labels:
chart: "fission-all-v1.15.0"
svc: kubewatcher
spec:
replicas: 1
selector:
matchLabels:
svc: kubewatcher
template:
metadata:
labels:
svc: kubewatcher
spec:
containers:
- name: kubewatcher
image: "index.docker.io/fission/fission-bundle:v1.15.0"
imagePullPolicy: IfNotPresent
command: ["/fission-bundle"]
args: ["--kubewatcher", "--routerUrl", "http://router.fission"]
env:
- name: DEBUG_ENV
value: "false"
- name: PPROF_ENABLED
value: "false"
- name: OPENTRACING_ENABLED
value: "false"
- name: TRACE_JAEGER_COLLECTOR_ENDPOINT
value: ""
- name: TRACING_SAMPLING_RATE
value: "0.5"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: ""
- name: OTEL_EXPORTER_OTLP_INSECURE
value: "true"
- name: OTEL_TRACES_SAMPLER
value: "parentbased_traceidratio"
- name: OTEL_TRACES_SAMPLER_ARG
value: "0.1"
- name: OTEL_PROPAGATORS
value: "tracecontext,baggage"
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
serviceAccountName: fission-svc
---
# Source: fission-all/templates/mqt-keda/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mqtrigger-keda
labels:
chart: "fission-all-v1.15.0"
svc: mqtrigger-keda
messagequeue: keda
spec:
replicas: 1
selector:
matchLabels:
svc: mqtrigger-keda
messagequeue: keda
template:
metadata:
labels:
svc: mqtrigger-keda
messagequeue: keda
spec:
containers:
- name: mqtrigger-keda
image: "index.docker.io/fission/fission-bundle:v1.15.0"
imagePullPolicy: IfNotPresent
command: ["/fission-bundle"]
args: ["--mqt_keda", "--routerUrl", "http://router.fission"]
env:
- name: DEBUG_ENV
value: "false"
- name: CONNECTOR_IMAGE_PULL_POLICY
value: "IfNotPresent"
- name: KAFKA_IMAGE
value: "fission/keda-kafka-http-connector:v0.9"
- name: RABBITMQ_IMAGE
value: "fission/keda-rabbitmq-http-connector:v0.8"
- name: AWS-KINESIS-STREAM_IMAGE
value: "fission/keda-aws-kinesis-http-connector:v0.8"
- name: AWS-SQS-QUEUE_IMAGE
value: "fission/keda-aws-sqs-http-connector:v0.8"
- name: STAN_IMAGE
value: "fission/keda-nats-streaming-http-connector:v0.9"
- name: GCP-PUB-SUB_IMAGE
value: "fission/keda-gcp-pubsub-http-connector:v0.3"
- name: REDIS_IMAGE
value: "fission/keda-redis-http-connector:v0.1"
- name: OPENTRACING_ENABLED
value: "false"
- name: TRACE_JAEGER_COLLECTOR_ENDPOINT
value: ""
- name: TRACING_SAMPLING_RATE
value: "0.5"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: ""
- name: OTEL_EXPORTER_OTLP_INSECURE
value: "true"
- name: OTEL_TRACES_SAMPLER
value: "parentbased_traceidratio"
- name: OTEL_TRACES_SAMPLER_ARG
value: "0.1"
- name: OTEL_PROPAGATORS
value: "tracecontext,baggage"
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
serviceAccountName: fission-svc
---
# Source: fission-all/templates/router/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: router
labels:
chart: "fission-all-v1.15.0"
svc: router
application: fission-router
spec:
replicas: 1
selector:
matchLabels:
application: fission-router
svc: router
template:
metadata:
labels:
application: fission-router
svc: router
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: "/metrics"
prometheus.io/port: "8080"
spec:
containers:
- name: router
image: "index.docker.io/fission/fission-bundle:v1.15.0"
imagePullPolicy: IfNotPresent
command: ["/fission-bundle"]
args: ["--routerPort", "8888", "--executorUrl", "http://executor.fission"]
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: ROUTER_ROUND_TRIP_TIMEOUT
value: "50ms"
- name: ROUTER_ROUNDTRIP_TIMEOUT_EXPONENT
value: "2"
- name: ROUTER_ROUND_TRIP_KEEP_ALIVE_TIME
value: "30s"
- name: ROUTER_ROUND_TRIP_DISABLE_KEEP_ALIVE
value: "true"
- name: ROUTER_ROUND_TRIP_MAX_RETRIES
value: "10"
- name: ROUTER_SVC_ADDRESS_MAX_RETRIES
value: "5"
- name: ROUTER_SVC_ADDRESS_UPDATE_TIMEOUT
value: "30s"
- name: ROUTER_UNTAP_SERVICE_TIMEOUT
value: "3600s"
- name: USE_ENCODED_PATH
value: "false"
- name: DEBUG_ENV
value: "false"
- name: PPROF_ENABLED
value: "false"
- name: DISPLAY_ACCESS_LOG
value: "false"
- name: OPENTRACING_ENABLED
value: "false"
- name: TRACE_JAEGER_COLLECTOR_ENDPOINT
value: ""
- name: TRACING_SAMPLING_RATE
value: "0.5"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: ""
- name: OTEL_EXPORTER_OTLP_INSECURE
value: "true"
- name: OTEL_TRACES_SAMPLER
value: "parentbased_traceidratio"
- name: OTEL_TRACES_SAMPLER_ARG
value: "0.1"
- name: OTEL_PROPAGATORS
value: "tracecontext,baggage"
resources: {}
readinessProbe:
httpGet:
path: "/router-healthz"
port: 8888
initialDelaySeconds: 1
periodSeconds: 1
failureThreshold: 30
livenessProbe:
httpGet:
path: "/router-healthz"
port: 8888
initialDelaySeconds: 35
periodSeconds: 5
ports:
- containerPort: 8080
name: metrics
- containerPort: 8888
name: http
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
serviceAccountName: fission-svc
---
# Source: fission-all/templates/storagesvc/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: storagesvc
labels:
chart: "fission-all-v1.15.0"
svc: storagesvc
application: fission-storage
spec:
replicas: 1
selector:
matchLabels:
svc: storagesvc
application: fission-storage
template:
metadata:
labels:
svc: storagesvc
application: fission-storage
spec:
containers:
- name: storagesvc
image: "index.docker.io/fission/fission-bundle:v1.15.0"
imagePullPolicy: IfNotPresent
command: ["/fission-bundle"]
args: ["--storageServicePort", "8000", "--storageType", "local"]
env:
- name: PRUNE_INTERVAL
value: "60"
- name: DEBUG_ENV
value: "false"
- name: PPROF_ENABLED
value: "false"
- name: OPENTRACING_ENABLED
value: "false"
- name: TRACE_JAEGER_COLLECTOR_ENDPOINT
value: ""
- name: TRACING_SAMPLING_RATE
value: "0.5"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: ""
- name: OTEL_EXPORTER_OTLP_INSECURE
value: "true"
- name: OTEL_TRACES_SAMPLER
value: "parentbased_traceidratio"
- name: OTEL_TRACES_SAMPLER_ARG
value: "0.1"
- name: OTEL_PROPAGATORS
value: "tracecontext,baggage"
volumeMounts:
- name: fission-storage
mountPath: /fission
readinessProbe:
httpGet:
path: "/healthz"
port: 8000
initialDelaySeconds: 1
periodSeconds: 1
failureThreshold: 30
livenessProbe:
httpGet:
path: "/healthz"
port: 8000
initialDelaySeconds: 35
periodSeconds: 5
ports:
- containerPort: 8000
name: http
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
serviceAccountName: fission-svc
volumes:
- name: fission-storage
persistentVolumeClaim:
claimName: fission-storage-pvc
---
# Source: fission-all/templates/timer/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: timer
labels:
chart: "fission-all-v1.15.0"
svc: timer
spec:
replicas: 1
selector:
matchLabels:
svc: timer
template:
metadata:
labels:
svc: timer
spec:
containers:
- name: timer
image: "index.docker.io/fission/fission-bundle:v1.15.0"
imagePullPolicy: IfNotPresent
command: ["/fission-bundle"]
args: ["--timer", "--routerUrl", "http://router.fission"]
env:
- name: DEBUG_ENV
value: "false"
- name: PPROF_ENABLED
value: "false"
- name: OPENTRACING_ENABLED
value: "false"
- name: TRACE_JAEGER_COLLECTOR_ENDPOINT
value: ""
- name: TRACING_SAMPLING_RATE
value: "0.5"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: ""
- name: OTEL_EXPORTER_OTLP_INSECURE
value: "true"
- name: OTEL_TRACES_SAMPLER
value: "parentbased_traceidratio"
- name: OTEL_TRACES_SAMPLER_ARG
value: "0.1"
- name: OTEL_PROPAGATORS
value: "tracecontext,baggage"
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
serviceAccountName: fission-svc
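# Once the manifests above are applied, the core deployments should all become ready
# (names taken from the Deployment manifests above; illustrative command):
# $ kubectl -n fission get deploy buildermgr controller executor kubewatcher mqtrigger-keda router storagesvc timer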
NOTES:
1. Install the client CLI.
Mac:
$ curl -Lo fission https://github.com/fission/fission/releases/download/v1.15.0/fission-v1.15.0-darwin-amd64 && chmod +x fission && sudo mv fission /usr/local/bin/
Linux:
$ curl -Lo fission https://github.com/fission/fission/releases/download/v1.15.0/fission-v1.15.0-linux-amd64 && chmod +x fission && sudo mv fission /usr/local/bin/
Windows:
For Windows, you can use the Linux binary on WSL, or download the Windows executable: https://github.com/fission/fission/releases/download/v1.15.0/fission-v1.15.0-windows-amd64.exe
2. You're ready to use Fission!
# Create an environment
$ fission env create --name nodejs --image fission/node-env
# Get a hello world
$ curl https://raw.githubusercontent.com/fission/examples/master/nodejs/hello.js > hello.js
# Register this function with Fission
$ fission function create --name hello --env nodejs --code hello.js
# Run this function
$ fission function test --name hello
Hello, world!
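A few optional sanity checks after the quick start above (standard fission CLI commands, not part of the chart NOTES):
$ fission version
$ fission env list
$ fission function list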