Kubernetes pod example for atmoz/sftp
apiVersion: v1
kind: Namespace
metadata:
  name: sftp
---
kind: Service
apiVersion: v1
metadata:
  name: sftp
  namespace: sftp
  labels:
    environment: production
spec:
  type: "LoadBalancer"
  ports:
  - name: "ssh"
    port: 22
    targetPort: 22
  selector:
    app: sftp
status:
  loadBalancer: {}
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: sftp
  namespace: sftp
  labels:
    environment: production
    app: sftp
spec:
  # how many pods and which strategy we want for rolling updates
  replicas: 1
  minReadySeconds: 10
  template:
    metadata:
      labels:
        environment: production
        app: sftp
      annotations:
        container.apparmor.security.beta.kubernetes.io/sftp: runtime/default
    spec:
      # secrets and config
      volumes:
      - name: sftp-public-keys
        configMap:
          name: sftp-public-keys
      containers:
      # the sftp server itself
      - name: sftp
        image: atmoz/sftp:latest
        imagePullPolicy: Always
        # env:
        # - name: PASSWORD
        #   valueFrom:
        #     secretKeyRef:
        #       name: sftp-server-sec
        #       key: password
        args: ["myUser::1001:100:incoming,outgoing"] # create the user and their dirs
        ports:
        - containerPort: 22
        volumeMounts:
        - mountPath: /home/myUser/.ssh/keys
          name: sftp-public-keys
          readOnly: true
        securityContext:
          capabilities:
            add: ["SYS_ADMIN"]
        resources: {}
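
A quick way to verify the result once the manifest is applied (a sketch; the external IP placeholder and key path are assumptions):

  # wait for the cloud provider to assign an external IP to the Service
  kubectl -n sftp get svc sftp --watch
  # connect as the user created by the args line; the private key must match
  # one of the public keys mounted under /home/myUser/.ssh/keys
  sftp -i ~/.ssh/id_rsa myUser@<EXTERNAL-IP>
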
@jujhars13 (Owner) commented Oct 24, 2017

Mount public keys by creating the K8s ConfigMap from bash:

  # public ssh keys
  kubectl delete configmap sftp-public-keys || true # if it doesn't exist, just carry on
  kubectl create configmap sftp-public-keys \
    --from-file=${PROJECT_DIR}/build/sftp || true
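
For completeness, a sketch of how the public keys might be staged before creating the ConfigMap (the key file name is hypothetical; the ${PROJECT_DIR}/build/sftp layout is carried over from the command above):

  # generate a client key pair and stage only the public half for the ConfigMap
  ssh-keygen -t ed25519 -f ./sftp_client_key -N ""
  mkdir -p "${PROJECT_DIR}/build/sftp"
  cp ./sftp_client_key.pub "${PROJECT_DIR}/build/sftp/"
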
@gfcannon12 commented Feb 4, 2018

@jujhars13 Thanks for sharing. I'm trying your example, and I can SSH to the pod through kubectl exec; however, I cannot connect to the pod using SFTP (connection timeout). I am using the worker node's public IP. Is there a different address I should use? Kubernetes is new to me, so I'm still learning the ropes.

@gfcannon12 commented Feb 4, 2018

@jujhars13 My issue was that my provider doesn't support LoadBalancer on free clusters, so I used a NodePort instead. I was able to receive SFTP traffic by specifying the NodePort. Unfortunately, I need to serve on port 22, which is below the NodePort range. It seems possible to change the NodePort range on the kube-apiserver, but I can't figure out how to do that on my cluster.
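
For reference, the NodePort range is a kube-apiserver flag rather than a separate tool, so it can only be changed on a self-managed control plane (a sketch; the manifest path is the kubeadm default):

  # on a kubeadm-style control plane node, add the flag to the static pod
  # manifest at /etc/kubernetes/manifests/kube-apiserver.yaml:
  #     - --service-node-port-range=22-32767
  # hosted offerings generally do not expose this flag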

@wireless00 commented Feb 7, 2018

There is an error in the YAML at line 34: the value environment: appears twice on that line.

@deepforu47 commented Jul 19, 2018

Hi, I tried this on AKS but I'm not getting an external IP. I would like to access SFTP using an external public IP, but it just shows a pending state.
I tested with a sample Java application, and there I was able to get an external IP. Is this issue specific to SFTP?
I tried specifying the loadBalancerIP as well, but no luck :(.

$ kubectl.exe get svc sftp -n sftp

NAME      TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
sftp      LoadBalancer   10.0.90.238   <pending>     22:32119/TCP   13m

$ kubectl.exe describe svc/sftp -n sftp

Name:                     sftp
Namespace:                sftp
Labels:                   environment=dev
Annotations:              <none>
Selector:                 app=sftp
Type:                     LoadBalancer
IP:                       10.0.90.238
IP:                       104.40.207.44
Port:                     ssh  22/TCP
TargetPort:               22/TCP
NodePort:                 ssh  32119/TCP
Endpoints:                10.244.0.28:22
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

$ kubectl.exe get svc sftp -n sftp -o yaml

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-07-19T07:47:53Z
  labels:
    environment: dev
  name: sftp
  namespace: sftp
  resourceVersion: "808194"
  selfLink: /api/v1/namespaces/sftp/services/sftp
  uid: 0ae31448-8b28-11e8-b6d4-468924ce4efc
spec:
  clusterIP: 10.0.90.238
  externalTrafficPolicy: Cluster
  loadBalancerIP: 104.40.207.44
  ports:
  - name: ssh
    nodePort: 32119
    port: 22
    protocol: TCP
    targetPort: 22
  selector:
    app: sftp
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
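
A frequent cause of a pending EXTERNAL-IP on AKS when loadBalancerIP is set: the address must be a pre-created static public IP in a resource group the cluster can use. A sketch with hypothetical resource-group names:

  # create a static public IP in the cluster's node resource group (MC_...)
  az network public-ip create \
    --resource-group MC_myResourceGroup_myAKSCluster_westeurope \
    --name sftp-ip \
    --allocation-method Static
  # if the IP lives in a different resource group, point the cloud provider at it
  kubectl annotate svc sftp -n sftp \
    service.beta.kubernetes.io/azure-load-balancer-resource-group=myResourceGroup
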
@nicoleannkim commented Aug 10, 2018

@gfcannon12

Below is what I used for NodePort:

kind: Service
apiVersion: v1
metadata:
  name: sftp
  labels:
    environment: production
spec:
  type: NodePort
  ports:
  - name: ssh
    port: 22
    targetPort: 22
    nodePort: 30022
  selector:
    app: sftp

Using an OpenSSH client, you can connect to a node's public IP address (for example, one listed by ibmcloud ks workers DSGPOC1) and use the node port, 30022.
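
A connection sketch (the node IP is a hypothetical placeholder; list real ones with kubectl get nodes -o wide):

  # -P selects the port for the OpenSSH sftp client
  sftp -P 30022 myUser@203.0.113.10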

@jdwrink commented Nov 5, 2018

First, thank you @jujhars13 for providing this YAML file; it is extremely helpful.

I am running into a problem: I can't get the host keys to persist. I have tried mounting host keys as secrets, and creating a persistent volume claim and then mounting /etc/ssh into the PVC, but nothing I try seems to work. Has anyone figured out how to persist host keys on Kubernetes? I have searched for example YAML files and no one has addressed this.

@Novex commented Feb 27, 2019

Not sure if you're still looking for a way to get host keys to persist @jdwrink, but mounting host-key secrets into their relevant /etc/ssh/ files seems to work for me, e.g.

kind: Deployment
...
spec:
  template:
    spec:
      #secrets and config
      volumes:
      ...
      - name: sftp-host-keys
        secret:
          secretName: sftp-host-keys
          defaultMode: 0600
      ...
      containers:
        #the sftp server itself
        - name: sftp
          image: atmoz/sftp:latest
          ...
          volumeMounts:
          - mountPath: /etc/ssh/ssh_host_ed25519_key
            name: sftp-host-keys
            subPath: ssh_host_ed25519_key
            readOnly: true
          - mountPath: /etc/ssh/ssh_host_ed25519_key.pub
            name: sftp-host-keys
            subPath: ssh_host_ed25519_key.pub
            readOnly: true
          - mountPath: /etc/ssh/ssh_host_rsa_key
            name: sftp-host-keys
            subPath: ssh_host_rsa_key
            readOnly: true
          - mountPath: /etc/ssh/ssh_host_rsa_key.pub
            name: sftp-host-keys
            subPath: ssh_host_rsa_key.pub
            readOnly: true
            ...
---
apiVersion: v1
kind: Secret
metadata:
  name: sftp-host-keys
  namespace: sftp
stringData:
  ssh_host_ed25519_key: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
  ssh_host_ed25519_key.pub: |
    ssh-ed25519 AAAA...
  ssh_host_rsa_key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
  ssh_host_rsa_key.pub: |
    ssh-rsa AAAA...
type: Opaque
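
An equivalent way to build the same Secret without inlining key material in the YAML (a sketch; the file names match the subPaths above):

  # generate fresh host keys locally (no passphrase)...
  ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N ""
  ssh-keygen -t rsa -b 4096 -f ssh_host_rsa_key -N ""
  # ...and load them into the Secret the Deployment mounts
  kubectl create secret generic sftp-host-keys -n sftp \
    --from-file=ssh_host_ed25519_key \
    --from-file=ssh_host_ed25519_key.pub \
    --from-file=ssh_host_rsa_key \
    --from-file=ssh_host_rsa_key.pub
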
@gijo-varghese commented Jul 14, 2019

I'm getting an error "No supported authentication methods available (server sent: publickey)"
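
That error is expected when the user has no password: the args value myUser::1001:100:... leaves the password field empty, so the server only offers publickey authentication. A sketch (the key path is hypothetical):

  # authenticate with the private key matching one of the public keys
  # mounted under /home/myUser/.ssh/keys
  sftp -i ./sftp_client_key myUser@<EXTERNAL-IP>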

@salesh commented Aug 30, 2019

@deepforu47 did you maybe manage to sort this out?

@deepforu47 commented Aug 30, 2019

@salesh sorry, I didn't get your question. Are you asking me whether I managed this?

@salesh commented Aug 30, 2019

@deepforu47 wow, thank you for the fast reply 😮 I'm asking whether you managed to fix the problem you described in your comment?

@deepforu47 commented Aug 30, 2019

@salesh - Yes, it worked for me; the issue was on our side with the Azure load balancer. It was a small POC I did at the time.

@salesh commented Sep 2, 2019

@deepforu47

I have some problems with this. The challenging part is figuring out how to mount the directory that contains our Python scripts.
Basically, I want to mount a directory of scripts and then start two Python scripts that watch an input directory and an output directory inside that mounted directory in the container.
In a local environment this is easy, because I start my Docker container like:

docker run -v //c/Users/..../model:/home/foo/upload -p 2222:22 -d testko:1.0.0 foo:pass:1001

The user is simple for now; I currently don't want to bother with password security, but I will certainly cover that after I get this working.
And this is all working...

#1.1 Do I need to create an Azure Files share on AKS and then a PersistentVolume? What would all of this look like?
I am quite new to Azure and Kubernetes; I've learned a lot in the last few days, but maybe someone here has worked on something like this?
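
One possible direction (a sketch, assuming AKS's built-in azurefile StorageClass; all names are hypothetical):

# claim a ReadWriteMany Azure Files volume for the upload directory
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sftp-upload
  namespace: sftp
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi
EOF
# then reference the claim under volumes and mount it at /home/foo/upload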

@mbieren commented Feb 17, 2020

This example works perfectly for me, but running under Azure I experience the following problem: each node in the cluster issues a TCP connect to the running pod, which results in the following log message spamming the ELK stack:

Did not receive identification string from 10.240.0.4 port 50255

10.240.0.4 is the IP of one of the cluster nodes. The message repeats once per minute from every node. Pretty annoying. A solution would be to reduce the log level of the SSH daemon. Any ideas how to accomplish this?
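
Two directions that may help (sketches; neither is verified against atmoz/sftp defaults): keep the probe traffic away from sshd, or lower sshd's log level.

  # with externalTrafficPolicy: Local, the Azure LB health-checks a dedicated
  # HTTP node port instead of opening raw TCP connections to sshd
  kubectl patch svc sftp -n sftp \
    -p '{"spec":{"externalTrafficPolicy":"Local"}}'
  # alternatively, mount a custom /etc/ssh/sshd_config with "LogLevel ERROR"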

@strus38 commented May 2, 2020

What should be done to allow anonymous PUT/GET?
Thanks

@ToMe25 commented Feb 16, 2021

@jujhars13 I made a slightly improved version of this here;
if you are interested, you can copy the changes back.
Changes I made:

  • Update some things to allow it to work with newer Kubernetes versions
  • Fix environment: environment: production
  • Only pull the image if it isn't already present
  • Rename the sftp-public-keys ConfigMap to sftp-client-public-keys and change it to a generic secret
  • Add a generic secret called sftp-host-keys for the server's host keys
  • Make the user directory a persistent volume
  • Change the SFTP port to 23 to allow connecting with SSH to the Kubernetes node it runs on
  • Disable AppArmor because I couldn't get it to work