
@onpaws
Created May 12, 2019 13:59
Fixed manifests for parity 2.5.0
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: default
  name: pv-default-100g-disk01
  annotations:
    volume.beta.kubernetes.io/storage-class: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Service
metadata:
  name: parity-service
  namespace: default
spec:
  selector:
    app: parity
  ports:
    - name: eth-net
      port: 30303
      protocol: TCP
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: parity-config
  namespace: default
data:
  parity.toml: |
    [parity]
    mode = "dark"
    base_path = "/data"
    [footprint]
    db_compaction = "hdd"
    pruning_memory = 128
    tracing = "off"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: parity
  namespace: default
  labels:
    app: parity
spec:
  replicas: 1
  selector:
    matchLabels:
      app: parity
  template:
    metadata:
      labels:
        app: parity
      name: parity
    spec:
      terminationGracePeriodSeconds: 20
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - image: parity/parity:v2.5.0
          name: parity
          imagePullPolicy: IfNotPresent
          args: ["--config=/config/parity.toml"]
          volumeMounts:
            - name: parity-config
              mountPath: /config
            - name: pv
              mountPath: /data
      volumes:
        - name: parity-config
          configMap:
            name: parity-config
        - name: pv
          persistentVolumeClaim:
            claimName: pv-default-100g-disk01
@ArseniiPetrovich

Hey, @onpaws. Just found this gist.
Did you add autoscaling to this particular config? How do you deal with the fact that Parity has to be synced before it can be queried, and that syncing takes a looot of time on the main network?

@onpaws

onpaws commented Oct 3, 2019

Hey!
This manifest is around 1, maybe 1.5 years old. I originally got it from a Parity maintainer, and it should work great with whatever version of Parity was out at that time. I patched it when 2.x came out to support running as a hardcoded UID/GID, but that's basically it. It may also work with the latest version, but I'm not 100% sure, and I'm not actively working on it these days.
If you don't already know, this whole 'run your own Parity' thing easily ends up becoming a non-trivial effort, so unless you truly need to run your own node I'd very much suggest checking out Infura or similar.

Regarding the sync time issue you mentioned: we interact with Parity via JSON-RPC, and I was able to add a Kubernetes-native readiness check based on an existing Parity call. I don't remember what the call was named, but I did that and it worked fine.
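Off the top of my head, a probe along these lines should do it. This is an untested sketch rather than the exact probe we ran: it assumes Parity's JSON-RPC is reachable on the default localhost:8545 inside the container and that curl is available in the image, and it uses the standard eth_syncing call, which returns false once the node is fully synced.

readinessProbe:
  exec:
    command:
      - sh
      - -c
      # eth_syncing returns false when the node is fully synced; anything else
      # (a sync-status object) means "not ready yet".
      - |
        curl -s -X POST -H 'Content-Type: application/json' \
          --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
          http://localhost:8545 | grep -q '"result":false'
  initialDelaySeconds: 60
  periodSeconds: 30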

You might logically expect that, to scale Parity by running multiple instances in a cluster, you should be able to share one network drive of blockchain data across all of the Parity instances. Unfortunately, last time my team looked into it the consensus was that Parity's blockchain writes are stateful/'singleton oriented', if you will. So unless newer releases have changed things, I don't think you will be able to use a single network drive across all instances.

If you need to be better prepared than "let's start a new instance from scratch" every time, I speculate you might improve things by maintaining e.g. a rolling set of daily snapshots of the blockchain volume. When you need to scale, the next marginal Parity instance can start with a slightly stale copy of blockchain data and face a limited time to sync, vs starting from zero and taking ~days or whatever. A rough sketch of what I mean is below.
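Untested, and just to illustrate: this assumes your cluster has a CSI driver with snapshot support, "csi-snapclass" is a placeholder VolumeSnapshotClass name, and the exact apiVersion depends on your cluster version.

# Untested sketch: take a point-in-time snapshot of the chain-data PVC above.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: parity-chaindata-daily
  namespace: default
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: pv-default-100g-disk01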

@ArseniiPetrovich

Well, we have been running a set of our own Parity nodes for a long time, so we are a bit "experienced" with all the Parity problems. Now we are considering whether it would be handy to migrate Parity to K8s.
We also have a pretty simple custom serverless script that copies the blockchain state and makes a snapshot of it in AWS. The thing I don't know is how to tell Kubernetes not to bring up a node's disk from scratch when creating a new pod, but to copy it from the snapshot instead. Any tips here, maybe?

Thanks, Arsenii

@onpaws

onpaws commented Oct 15, 2019

Make sure you're on v1.13 or later so you get the CSI GA release. Then you can check out volume snapshots.

NB: I haven't tried this with Parity myself
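For example, something along these lines should restore a fresh PVC from an existing snapshot, so a new Parity pod starts from the snapshotted chain data instead of syncing from zero. Again an untested sketch; the names are placeholders matching the snapshot example I sketched above.

# Untested sketch: new PVC pre-populated from a VolumeSnapshot via dataSource.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-parity-from-snapshot
  namespace: default
spec:
  storageClassName: default
  dataSource:
    name: parity-chaindata-daily
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi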

@ArseniiPetrovich

Thanks, @onpaws, will check!
