- Pod
-- container 1: webapp
exposes port: 80
-- container 2: r-service
exposes port: 8070
For example, the webapp can call the r-service via localhost:8070, since containers in the same pod share a network namespace.
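The two-container layout above could be sketched as a pod spec like this (pod and image names are placeholders, not the real manifests):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp-pod            # hypothetical name
spec:
  containers:
    - name: webapp
      image: webapp:latest    # placeholder image
      ports:
        - containerPort: 80
    - name: r-service
      image: r-service:latest # placeholder image
      ports:
        - containerPort: 8070
```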
I could imagine something like this:
- Pod
-- container 1: webapp
exposes port: 80
-- container 2: r-service
exposes port: 8070
-- container 3: mongo-on-digital-ocean
Runs: ssh-tunnel or sshuttle or similar
exposes port: 27017
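A minimal sketch of that third container as an ssh-tunnel sidecar (the image name and droplet host are placeholders; because containers in the same pod share a network namespace, even a localhost-bound tunnel is reachable by the others):

```yaml
    - name: mongo-on-digital-ocean
      image: ssh-tunnel:latest         # hypothetical image with an ssh client
      command:
        - ssh
        - -N                           # no remote shell, just port forwarding
        - -L
        - 27017:localhost:27017        # pod-local 27017 -> mongo on the droplet
        - tunnel@droplet.example.com   # hypothetical droplet address
```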
- webapp and the r-service could theoretically access mongo via localhost:27017
- For sshuttle, we'd have to figure out how incoming traffic gets routed. I'm not sure if k8s would expose the Digital Ocean IP to other containers.
- A safer bet may be to expose only a port, similar to an ssh tunnel. Exposing a port is commonly done in k8s for somewhat similar services such as redis or SQL dbs.
- For example, here's redis as a container in a k8s pod spec. Note: preprocess-web can access redis via localhost:6379.

```yaml
containers:
  - name: redis
    image: redis:4.0
    ports:
      - containerPort: 6379
  - name: preprocess-web
    image: tworavens/raven-metadata-service:latest
    env:
      - name: REDIS_HOST
        # Note: this is within the same pod
        value: localhost
      - name: REDIS_PORT
        value: "6379"  # env var values must be strings, so quote the port
```
The containers get created/destroyed quite often.
Any credentials are stored in k8s Secret settings and are accessible by the running containers, e.g. we can store opaque keys, database usernames/passwords, etc.
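The tunnel's credentials could be stored the same way, e.g. (names, keys, and values here are hypothetical):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-tunnel-creds    # hypothetical name
type: Opaque
stringData:
  username: mongo-user        # placeholder values
  password: change-me
```

and injected into the tunnel container via `secretKeyRef` env entries or a volume mount.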
I think the goal is to create a separate docker container with any needed software (python, iptables, etc.) that can be deployed within a k8s config and:
- connect to Digital Ocean's mongo
- be available to the other containers
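One way to sketch that image, assuming ssh/sshuttle end up being the tools (package names are per Debian; this is an untested sketch, and the droplet host is a placeholder that would come from secrets at deploy time):

```dockerfile
FROM python:3-slim

# ssh client for the tunnel; iptables for sshuttle's redirection rules
RUN apt-get update && \
    apt-get install -y --no-install-recommends openssh-client iptables && \
    pip install --no-cache-dir sshuttle && \
    rm -rf /var/lib/apt/lists/*

# Forward the pod-local mongo port to the Digital Ocean droplet
CMD ["ssh", "-N", "-L", "27017:localhost:27017", "tunnel@droplet.example.com"]
```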