@v9n
Last active May 30, 2025 19:01
A gomplate (https://docs.gomplate.ca/syntax/) setup to generate Helm values for Canton.

Create an .env file with these values:

export SCAN_ADDRESS=your-sponsor-scan-address
export SV_ADDRESS=your-sponsor-sv-address

export PG_HOST=<pg-host>
export PG_PORT=<pg-port>

# Authentication
export AUTH_JWKS_URL="jwks url"
export OIDC_AUTHORITY_LEDGER_API_AUDIENCE="aud"
export VALIDATOR_AUTH_AUDIENCE="aud"

export PARTY_HINT=<party-hint>

export MEM_LIMIT=8

# Comma-separated list of wallet users; add more if needed. Each item becomes one
# entry under validatorWalletUsers, and the text after each # survives as a YAML comment.
export VALIDATOR_WALLET_USERS='"4" # akadmin,"7" # other user,"abc" # other user'

# Change these each deployment
export CHART_VERSION=<canton-version>
export MIGRATION_ID=<migration-id>

# Change to true when you deploy as part of migration
export MIGRATING=false
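Before rendering, it can help to fail fast if anything in the .env is missing. A small sketch (the variable list mirrors the .env above; extend it if you add variables):

```shell
#!/bin/sh
# check_env: report any required variable from .env that is unset or empty.
# The list mirrors the .env template above; adjust it to your setup.
check_env() {
  missing=""
  for var in SCAN_ADDRESS SV_ADDRESS PG_HOST PG_PORT AUTH_JWKS_URL \
             OIDC_AUTHORITY_LEDGER_API_AUDIENCE VALIDATOR_AUTH_AUDIENCE \
             PARTY_HINT CHART_VERSION MIGRATION_ID; do
    eval "val=\"\${$var:-}\""
    if [ -z "$val" ]; then
      missing="$missing $var"
    fi
  done
  if [ -n "$missing" ]; then
    echo "missing required variables:$missing" >&2
    return 1
  fi
}
```

Typical usage: `set -a; . ./.env; set +a; check_env || exit 1`.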

Put all of the template files below into helm-tmpl/.

Then run something like this:

gomplate --input-dir=./helm-tmpl --output-dir=helm-${MIGRATION_ID}
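A wrapper that loads the .env and renders into a per-migration directory might look like this (a sketch; assumes gomplate is on the PATH and the templates live in ./helm-tmpl):

```shell
#!/bin/sh
# render_values: export everything from .env, then render all templates in
# ./helm-tmpl into helm-<MIGRATION_ID>, so each migration keeps its own values.
render_values() {
  set -a          # auto-export every variable sourced from .env
  . ./.env
  set +a
  out="helm-${MIGRATION_ID}"
  mkdir -p "$out"
  gomplate --input-dir=./helm-tmpl --output-dir="$out"
}
```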

Then you can apply the Helm values:

helm upgrade --install participant oci://ghcr.io/digital-asset/decentralized-canton-sync/helm/splice-participant -n validator1 --version ${CHART_VERSION} -f helm-${MIGRATION_ID}/participant-values.yaml -f helm-${MIGRATION_ID}/standalone-participant-values.yaml --wait
helm upgrade --install validator oci://ghcr.io/digital-asset/decentralized-canton-sync/helm/splice-validator -n validator1 --version ${CHART_VERSION} -f helm-${MIGRATION_ID}/validator-values.yaml -f helm-${MIGRATION_ID}/standalone-validator-values.yaml --wait
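After the upgrades, a quick readiness check can confirm the rollout (a sketch; assumes kubectl points at the right cluster and the validator1 namespace used in the commands above):

```shell
#!/bin/sh
# wait_for_pods: block until every pod in the given namespace reports Ready.
wait_for_pods() {
  ns=${1:-validator1}
  kubectl wait --namespace "$ns" --for=condition=Ready pod --all --timeout=600s
}
```

For example: `wait_for_pods validator1 && kubectl get pods -n validator1`.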
# participant-values.yaml
persistence:
  host: {{ .Env.PG_HOST }}
  port: {{ .Env.PG_PORT }}
  secretName: "postgres-secrets"
  databaseName: participant_{{ .Env.MIGRATION_ID }}
  schema: participant
auth:
  jwksUrl: "{{ .Env.AUTH_JWKS_URL }}"
  targetAudience: "{{ .Env.OIDC_AUTHORITY_LEDGER_API_AUDIENCE }}"
enableHealthProbes: true
resources:
  limits:
    cpu: "{{ getenv "CPU_LIMIT" "4" }}"
    memory: {{ getenv "MEM_LIMIT" "8" }}Gi
  requests:
    cpu: "100m"
    memory: 1Gi
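Note that databaseName embeds the migration id, so every migration uses a fresh participant database. Depending on how your Postgres is provisioned, that database may need to exist before the chart comes up; a hypothetical sketch using psql (the "cnadmin" admin user is an assumption, adjust to your setup):

```shell
#!/bin/sh
# create_migration_db: create participant_<MIGRATION_ID> if it does not exist yet.
# Assumes PG_HOST/PG_PORT/MIGRATION_ID from .env, a hypothetical admin user
# "cnadmin", and password auth via PGPASSWORD.
create_migration_db() {
  db="participant_${MIGRATION_ID}"
  psql -h "$PG_HOST" -p "$PG_PORT" -U cnadmin -tc \
    "SELECT 1 FROM pg_database WHERE datname = '$db'" | grep -q 1 \
    || psql -h "$PG_HOST" -p "$PG_PORT" -U cnadmin -c "CREATE DATABASE $db"
}
```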
# standalone-participant-values.yaml
participantAdminUserNameFrom:
  secretKeyRef:
    key: ledger-api-user
    name: splice-app-validator-ledger-api-auth
    optional: false
persistence:
  secretName: postgres-secrets
  host: {{ .Env.PG_HOST }}
  port: {{ .Env.PG_PORT }}
  databaseName: participant_{{ .Env.MIGRATION_ID }}
  schema: participant
# standalone-validator-values.yaml
participantAddress: "participant"
# validator-values.yaml
#imageDigests:
#  validator_app: "@sha256:5c34fc8c0e001cfaebaf71fc6e50c18174bb8ae5c74d04d692483b456849a514"
#  wallet_web_ui: "@sha256:28d053d35c034eb22d4f069a352d0b0e11f0f50be5f286d3a396464abd6c1d94"
#  ans_web_ui: "@sha256:ffde0794e1fef7b0669ae029c305b0017599de533890156e4e11fbd5d2d92a72"
# URL of the sponsoring SV for onboarding your validator
svSponsorAddress: "{{ .Env.SV_ADDRESS }}"
onboardingSecretFrom:
  secretKeyRef:
    name: splice-app-validator-onboarding-validator
    key: secret
    optional: false
# Party ID hint for the validator operator party; should be of the format
# <organization>-<function>-<enumerator>, e.g. digitalAsset-finance-1
validatorPartyHint: "{{ .Env.PARTY_HINT }}"
# MIGRATION_START
migration:
  id: "{{ .Env.MIGRATION_ID }}"
  # Rendered only when MIGRATING=true, i.e. when redeploying as part of a migration:
  # MIGRATION_ID was incremented and a migration dump was exported to the attached PVC.
  {{ if eq (getenv "MIGRATING" "false") "true" }}migrating: true{{ end }}
# MIGRATION_END
persistence:
  secretName: postgres-secrets
  databaseName: validator
  host: {{ .Env.PG_HOST }}
  port: {{ .Env.PG_PORT }}
# Uncomment the following block if you want to restore from a participant dump
# and recover your balance.
# PARTICIPANT_BOOTSTRAP_MIGRATE_TO_NEW_PARTICIPANT_START
#participantIdentitiesDumpImport:
#  secretName: participant-bootstrap-dump
#  # Make sure to also adjust nodeIdentifier to the same value
#  newParticipantIdentifier: put-some-new-string-never-used-before
#  migrateValidatorParty: true
# PARTICIPANT_BOOTSTRAP_MIGRATE_TO_NEW_PARTICIPANT_END
# Used as the node identifier of your participant.
nodeIdentifier: "{{ .Env.PARTY_HINT }}"
# CONFIGURING_TOPUP_START
# Configures the validator's traffic top-up loop;
# see the documentation for more detailed information.
topup:
  # set to false to disable automatic traffic top-ups
  enabled: true
  # target throughput in bytes/second of sequenced traffic; targetThroughput=0 <=> enabled=false
  targetThroughput: 20000
  # minimum time interval that must elapse before the next top-up
  minTopupInterval: "1m"
# CONFIGURING_TOPUP_END
# Adjust the PVC template to your liking
pvc:
  volumeName: domain-migration-validator-pvc
  volumeStorageClass: my-local
spliceInstanceNames:
  networkName: "Canton Network"
  networkFaviconUrl: "https://www.canton.network/hubfs/cn-favicon-05%201-1.png"
  amuletName: "Canton Coin"
  amuletNameAcronym: "CC"
  nameServiceName: "Canton Name Service"
  nameServiceNameAcronym: "CNS"
scanAddress: "{{ .Env.SCAN_ADDRESS }}"
# TRUSTED_SINGLE_SCAN_START
# To configure the validator to use a single trusted scan, set nonSvValidatorTrustSingleScan
# to true; it will then only connect to the scan specified in scanAddress. This means you
# depend on that single SV: if it is broken or malicious you will be unable to use the
# network, so you usually want to leave this disabled.
#nonSvValidatorTrustSingleScan: true
# TRUSTED_SINGLE_SCAN_END
# TRUSTED_SINGLE_SEQUENCER_START
# To connect to a single trusted sequencer, set useSequencerConnectionsFromScan to false
# and replace TRUSTED_SYNCHRONIZER_SEQUENCER_URL with the publicly accessible URL of the
# trusted sequencer. The same caveat applies: you depend on that single SV.
#decentralizedSynchronizerUrl: "TRUSTED_SYNCHRONIZER_SEQUENCER_URL"
#useSequencerConnectionsFromScan: false
# TRUSTED_SINGLE_SEQUENCER_END
# User ids in your IAM that may log into the wallet as the validator operator party.
# Note that these should be full user ids, e.g. ``auth0|43b68e1e4978b000cefba352``,
# not only the suffix ``43b68e1e4978b000cefba352``.
# You can specify multiple user ids if you want multiple users to be able to log in.
validatorWalletUsers:
{{ range (.Env.VALIDATOR_WALLET_USERS | strings.Split ",") -}}
- {{ . }}
{{ end }}
auth:
  # the audience your IAM issues for the validator app
  audience: "{{ .Env.VALIDATOR_AUTH_AUDIENCE }}"
  # your OIDC provider's JWKS URL
  jwksUrl: "{{ .Env.AUTH_JWKS_URL }}"
# ENABLEWALLET_START
# Setting this to false disables the wallet HTTP server and wallet automations
enableWallet: true
# ENABLEWALLET_END
# SWEEP_START
# To sweep funds out of parties on this validator, uncomment and fill in the following:
#walletSweep:
#  "<senderPartyId>":
#    maxBalanceUSD: <maxBalanceUSD>
#    minBalanceUSD: <minBalanceUSD>
#    receiver: "<receiverPartyId>"
#    # sweep by transferring directly through the receiver's transfer preapproval;
#    # if false, sweeping creates transfer offers that need to be accepted on the receiver side
#    useTransferPreapproval: false
# SWEEP_END
# AUTO_ACCEPT_START
# To auto-accept transfer offers from specific parties, uncomment and fill in the following:
#autoAcceptTransfers:
#  "<receiverPartyId>":
#    fromParties:
#      - "<senderPartyId>"
# AUTO_ACCEPT_END
# Contact point for your validator node that can be used by other node operators
# to reach you if there are issues with your node.
# This can be a Slack username or an email address.
# If you do not wish to share this, set it to an empty string.
contactPoint: "nodeadmin@fivenorth.io"
# PARTICIPANT_PRUNING_SCHEDULE_START
# To configure participant pruning, uncomment the following section.
# Refer to the documentation for more details.
#participantPruningSchedule:
#  cron: "0 /10 * * * ?"  # run every 10 min
#  maxDuration: 5m        # run for a max of 5 min per iteration
#  retention: 48h         # retain history newer than 48 h
# PARTICIPANT_PRUNING_SCHEDULE_END