This is a small write-up about how to migrate your pritunl install between servers. It's not especially detailed because I'm lazy and your migration story will most likely differ. All of this can be avoided by using a remote/hosted mongo instance (compose.io, mongolab, etc.) and simply pointing your pritunl instance at it. If you want more details, ask and I'll do my best to answer and update this write-up accordingly. Also, feel free to criticize my grammar and spelling.
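A rough sketch of what that migration might look like: dump the local pritunl database, restore it into the hosted mongo, then point the new server at it. The hostnames, credentials, and database name here are all placeholder assumptions, and the pritunl set-mongodb subcommand is assumed to be available in your pritunl version (if not, set mongodb_uri in /etc/pritunl.conf instead).

```shell
# On the old server: dump the pritunl database (db name assumed "pritunl").
mongodump --db pritunl --out /tmp/pritunl-dump

# Restore it into the hosted instance (host/user/pass are placeholders).
mongorestore --host hosted.example.com --username user --password pass \
  --db pritunl /tmp/pritunl-dump/pritunl

# On the new server: point pritunl at the hosted database and restart.
sudo pritunl set-mongodb "mongodb://user:pass@hosted.example.com:27017/pritunl"
sudo systemctl restart pritunl
```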
FX:
<Headset>
1. Master Power: ✔
2. Playback Gain Control: ✘
3. FIR Equalizer: ✘
4. Convolver: ✔
   (ii) Impulse Response: Sony Xperia Rev2 ClearAudio+
5. Field Surround: ✘
6. Headphone Surround +
7. Reverberation: ✘
Run this in order to back up all your k8s cluster data. It will be saved in a folder named bkp. To restore the cluster, run kubectl apply -f bkp
Please note: this recovers all resources correctly, including dynamically generated PVs. However, it will not recover ELB endpoints. You will need to update any DNS entries manually, and manually remove the old ELBs.
Please note: This has not been tested with all resource types. Supported resource types include:
- services
- replicationcontrollers
- secrets
- deployments
- horizontal pod autoscalers
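The backup loop described above can be sketched roughly as follows. The bkp folder name and resource list come from this write-up; the per-namespace loop and file naming are assumptions, not the exact original script.

```shell
# Sketch of the backup: export each supported resource type in every
# namespace as YAML, so the whole folder can later be re-applied with
# kubectl apply -f bkp
mkdir -p bkp

# PVs are cluster-scoped, so dump them once, outside the namespace loop.
kubectl get persistentvolumes -o yaml > bkp/persistentvolumes.yaml

for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  for type in services replicationcontrollers secrets deployments horizontalpodautoscalers; do
    kubectl get "$type" -n "$ns" -o yaml > "bkp/${ns}-${type}.yaml"
  done
done
```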
FROM python:2.7-alpine
COPY . /app/
WORKDIR /app
ENTRYPOINT ["python", "connectbug.py"]
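To try the Dockerfile above, build and run it from the directory containing connectbug.py; the image tag "connectbug" is just an assumed name.

```shell
# Build the image and run the container once, removing it on exit.
docker build -t connectbug .
docker run --rm connectbug
```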
1. dump source db instance
   pg_dumpall -g --no-role-passwords > global.sql
   pg_dump --schema-only --section=pre-data > pre-schema.sql
   pg_dump --schema-only --section=post-data > post-schema.sql
2. load schema to target db
   global and pre-schema
3. create publication on source db instance
   CREATE PUBLICATION pub FOR ALL TABLES;
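Step 2 above ("load schema to target db") can be sketched with psql; the target host, user, and database name are placeholders, and post-schema.sql is deliberately held back until after the initial data copy completes.

```shell
# Load role/tablespace globals into the target cluster (placeholder host/user).
psql -h target.example.com -U postgres -d postgres -f global.sql

# Load the pre-data schema (tables, types) into the target database.
psql -h target.example.com -U postgres -d mydb -f pre-schema.sql
```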