REFERENCE: HERE
Dump a remote PostgreSQL database to a .sql file
pg_dump -h <public dns> -U <my username> -f <name of the dump file (.sql)> <name of my database>
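For example, with hypothetical host, user, file, and database names:
pg_dump -h ec2-1-2-3-4.ap-southeast-1.compute.amazonaws.com -U appuser -f mydb_dump.sql mydb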
Restore a dump file into a database
psql -U <postgresql username> -d <database name> -f <dump file that you want to restore>
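For example, restoring that hypothetical dump into a local database:
psql -U appuser -d mydb -f mydb_dump.sql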
Copy a file out of a pod to the local machine
kubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar
Copy a local file into a pod
kubectl cp <local file location> <namespace the pod resides in>/<pod name>:<destination inside the pod>
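For example, with a hypothetical namespace, pod name, and file paths (copying out of, then into, the pod):
kubectl cp my-namespace/my-pod-abc123:/tmp/foo /tmp/bar
kubectl cp ./config.yaml my-namespace/my-pod-abc123:/app/config.yaml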
Download a file from a remote host over SSH
scp -i <pem file location> <username>@<ip>:<path of the file to copy from the remote host> <local path to download it to>
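For example, with a hypothetical key, user, IP, and paths:
scp -i ~/keys/my-key.pem ubuntu@203.0.113.10:/home/ubuntu/backup.tar.gz ~/Downloads/backup.tar.gz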
Create a ClickHouse backup directly on S3
REMOTE_STORAGE=s3 S3_ACCESS_KEY=<access key of your AWS account> S3_SECRET_KEY=<secret key of your AWS account> S3_BUCKET=<S3 bucket in which to create the backup> S3_REGION=<S3 bucket region> BACKUP_NAME=<S3 backup name> clickhouse-backup create_remote <S3 backup name; the BACKUP_NAME variable is sometimes ignored, so pass the name as an argument as well>
Restore a ClickHouse backup from S3
REMOTE_STORAGE=s3 S3_ACCESS_KEY=<access key of your AWS account> S3_SECRET_KEY=<secret key of your AWS account> S3_BUCKET=<S3 bucket from which the backup is to be restored> S3_REGION=<S3 bucket region> clickhouse-backup restore_remote <backup name on the S3 bucket>
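For example, with hypothetical credentials, bucket, and backup name (create, then restore):
REMOTE_STORAGE=s3 S3_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE S3_SECRET_KEY=wJalrXUtnFfEMIEXAMPLEKEY S3_BUCKET=my-clickhouse-backups S3_REGION=ap-southeast-1 BACKUP_NAME=backup-2024-01-01 clickhouse-backup create_remote backup-2024-01-01
REMOTE_STORAGE=s3 S3_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE S3_SECRET_KEY=wJalrXUtnFfEMIEXAMPLEKEY S3_BUCKET=my-clickhouse-backups S3_REGION=ap-southeast-1 clickhouse-backup restore_remote backup-2024-01-01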
-
Use your credentials to log in to your AWS account via the CLI
aws configure
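It prompts for the access key, secret key, default region, and output format; the values below are placeholders:
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFfEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: ap-southeast-1
Default output format [None]: json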
-
Update kubeconfig locally
aws eks update-kubeconfig --name <cluster-name> --region ap-southeast-1
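For example, with a hypothetical cluster name:
aws eks update-kubeconfig --name my-eks-cluster --region ap-southeast-1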
-
Write kubeconfig for the cluster using eksctl (e.g., on a remote machine)
eksctl utils write-kubeconfig --cluster=<cluster-name>
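For example, with the same hypothetical cluster name:
eksctl utils write-kubeconfig --cluster=my-eks-cluster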
Copying from local to S3 or vice versa (source)
The following cp command copies a single file to a specified bucket and key
aws s3 cp test.txt s3://mybucket/test2.txt
The following cp command copies a single S3 object to a specified bucket and key
aws s3 cp s3://mybucket/test.txt s3://mybucket/test2.txt
The following cp command copies a single object to a specified file locally
aws s3 cp s3://mybucket/test.txt test2.txt
The following cp command copies a single object to a specified bucket while retaining its original name
aws s3 cp s3://mybucket/test.txt s3://mybucket2/