ClusterControl v2 in Podman

Podman Container Configuration for ClusterControl with Custom User and SSH

This document outlines how to configure a Podman container for ClusterControl with a custom user and SSH access.

1. Pulling the Image:

podman pull docker.io/severalnines/clustercontrol:latest

This command downloads the latest ClusterControl image from Docker Hub.
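To confirm the image is available locally before continuing, you can list it with Podman:

podman images docker.io/severalnines/clustercontrol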

2. Creating Backup Directory:

mkdir -p /storage/clustercontrol/backups

This command creates the directory /storage/clustercontrol/backups on the host system for persistent backups.
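If the host runs SELinux in enforcing mode, the bind-mounted backup directory may need an SELinux relabel before the container can write to it. One way to handle this (adjust to your own policy) is to append the :Z option to the bind mount in the run command below:

-v /storage/clustercontrol/backups:/root/backups:Z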

3. Running the Container:

podman run -d --privileged \
  --name clustercontrol \
  --hostname clustercontrol \
  --env DOCKER_HOST_ADDRESS="192.168.0.148" \
  -p 5000:80 -p 5001:443 \
  -p 19501:19501 -p 9443:9443 -p 9999:9999 \
  -v clustercontrol-datadir:/var/lib/mysql \
  -v clustercontrol-cmond:/etc/cmon.d \
  -v clustercontrol-cmonlib:/var/lib/cmon \
  -v clustercontrol-promdata:/var/lib/prometheus \
  -v clustercontrol-promconf:/etc/prometheus \
  -v /storage/clustercontrol/backups:/root/backups \
  -v /root/.ssh:/root/.ssh \
  -v /etc/localtime:/etc/localtime:ro \
  --restart always \
  docker.io/severalnines/clustercontrol:latest

This command:

  • Runs the ClusterControl container in detached mode (-d).
  • Grants the container additional privileges (--privileged).
  • Names the container clustercontrol and sets its hostname.
  • Sets the DOCKER_HOST_ADDRESS environment variable to the Podman host's IP address (192.168.0.148 in this example).
  • Maps host ports to container ports for service access.
  • Mounts volumes for persistent data and configuration.
  • Configures the container to restart automatically if it exits, regardless of exit status (--restart always); a quick status check follows this list.
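Once the container is started, the following commands confirm it is running and show its startup logs:

podman ps --filter name=clustercontrol
podman logs -f clustercontrol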

4. Creating a New User and Setting Password:

  1. Access the container shell using podman exec -it clustercontrol /bin/bash.
  2. Create a new user with the desired username and group using s9s user --create --generate-key --controller="https://localhost:9501" --group=admins <username>.
  3. Set the user's password using s9s user --change-password --new-password=<password> <username>.
  4. Exit the container shell using exit.
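Put together, a session might look like the following sketch, using a placeholder username and password (substitute your own values; the optional s9s user --list check assumes a reasonably recent s9s client):

podman exec -it clustercontrol /bin/bash
s9s user --create --generate-key --controller="https://localhost:9501" --group=admins ccadmin
s9s user --change-password --new-password='S3cretPassw0rd' ccadmin
s9s user --list --long
exit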

5. Accessing ClusterControl:

  • Access the ClusterControl web interface in your browser at https://<ip_address>:5001, where <ip_address> is the Podman host address set in DOCKER_HOST_ADDRESS (192.168.0.148 in this example).
  • Use the created user credentials to log in.
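If the page does not load, a quick check from the host helps rule out firewall or port-mapping problems; -k is needed because the container typically serves a self-signed certificate:

curl -k -I https://192.168.0.148:5001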

6. SSH Access Path Customization:

  • The default SSH key path mounted into the container is /root/.ssh.
  • To modify it, update the volume mount for the SSH directory in the run command.
  • Example: -v /path/to/your/ssh/directory:/root/.ssh
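For example, to keep a dedicated key pair for ClusterControl instead of sharing the host's /root/.ssh, you could generate one in a directory of your choice and mount that (the path and key name here are only illustrative):

mkdir -p /storage/clustercontrol/ssh
ssh-keygen -t ed25519 -f /storage/clustercontrol/ssh/id_ed25519 -N ""

Then replace the SSH volume line in the run command with -v /storage/clustercontrol/ssh:/root/.ssh.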

Additional Notes:

This guide provides a basic configuration for running ClusterControl with a custom user and SSH access. You can further adapt and optimize the container setup based on your specific needs and environment.

If you need to rebuild ClusterControl from scratch, first delete all the Podman volumes related to it. List them with:

podman volume ls
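A full reset might then look like this, assuming the container and volume names used in the run command above. Note that this permanently deletes ClusterControl's configuration and data:

podman stop clustercontrol
podman rm clustercontrol
podman volume rm clustercontrol-datadir clustercontrol-cmond clustercontrol-cmonlib clustercontrol-promdata clustercontrol-promconf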
