@jujhars13
Last active March 7, 2024 00:16
kubernetes pod example for atmoz/sftp
apiVersion: v1
kind: Namespace
metadata:
  name: sftp
---
kind: Service
apiVersion: v1
metadata:
  name: sftp
  namespace: sftp
  labels:
    environment: production
spec:
  type: LoadBalancer
  ports:
  - name: ssh
    port: 22
    targetPort: 22
  selector:
    app: sftp
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: sftp
  namespace: sftp
  labels:
    environment: production
    app: sftp
spec:
  # how many pods and which strategy we want for rolling updates
  replicas: 1
  minReadySeconds: 10
  selector:
    matchLabels:
      app: sftp
  template:
    metadata:
      labels:
        environment: production
        app: sftp
      annotations:
        container.apparmor.security.beta.kubernetes.io/sftp: runtime/default
    spec:
      # secrets and config
      volumes:
      - name: sftp-public-keys
        configMap:
          name: sftp-public-keys
      containers:
      # the sftp server itself
      - name: sftp
        image: atmoz/sftp:latest
        imagePullPolicy: Always
        # env:
        # - name: PASSWORD
        #   valueFrom:
        #     secretKeyRef:
        #       name: sftp-server-sec
        #       key: password
        args: ["myUser::1001:100:incoming,outgoing"] # create user and dirs
        ports:
        - containerPort: 22
        volumeMounts:
        - mountPath: /home/myUser/.ssh/keys
          name: sftp-public-keys
          readOnly: true
        securityContext:
          capabilities:
            add: ["SYS_ADMIN"]
        resources: {}
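The Deployment mounts a ConfigMap named sftp-public-keys that isn't defined anywhere in the gist. A minimal sketch of what it could look like (the key filename and key material are placeholders; each file mounted into /home/myUser/.ssh/keys is treated as an authorized public key):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sftp-public-keys
  namespace: sftp
data:
  # one entry per client public key; the value is placeholder material
  myUser.pub: |
    ssh-ed25519 AAAA... myUser@example.com
```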
@gfcannon12

@jujhars13 My issue was that my provider doesn't support LoadBalancer on free clusters, so I used NodePorts instead. I was able to receive SFTP by specifying the NodePort. Unfortunately, I need to send through port 22, which is below the NodePort range. It seems possible to change the NodePort range with the kube-apiserver command line tool, but I can't figure out how to install it.
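For what it's worth, kube-apiserver is the API server binary itself, not a client tool you install. The NodePort range is one of its startup flags, so on a self-managed control plane (kubeadm layout assumed here) you could widen it in the static pod manifest; managed or free-tier clusters generally don't expose this:

```yaml
# sketch: excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml
# on a kubeadm control-plane node
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=22-32767  # default is 30000-32767
    # ...existing flags unchanged...
```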

@wireless00

There is an error in the YAML at line 34: the value environment: appears twice on that line.

@deepforu47 commented Jul 19, 2018

Hi, I tried this on AKS and I'm not getting an external IP. I would like to access sftp using an external public IP, but the service just stays in the pending state.
I tested with a sample Java application and there I do get an external IP. Is this issue specific to sftp?
I tried specifying the loadBalancerIP as well, but no luck :(.

$ kubectl.exe get svc sftp -n sftp

NAME      TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
sftp      LoadBalancer   10.0.90.238   <pending>     22:32119/TCP   13m

$kubectl.exe describe svc/sftp -n sftp

Name:                     sftp
Namespace:                sftp
Labels:                   environment=dev
Annotations:              <none>
Selector:                 app=sftp
Type:                     LoadBalancer
IP:                       10.0.90.238
IP:                       104.40.207.44
Port:                     ssh  22/TCP
TargetPort:               22/TCP
NodePort:                 ssh  32119/TCP
Endpoints:                10.244.0.28:22
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

$kubectl.exe get svc sftp -n sftp -o yaml

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-07-19T07:47:53Z
  labels:
    environment: dev
  name: sftp
  namespace: sftp
  resourceVersion: "808194"
  selfLink: /api/v1/namespaces/sftp/services/sftp
  uid: 0ae31448-8b28-11e8-b6d4-468924ce4efc
spec:
  clusterIP: 10.0.90.238
  externalTrafficPolicy: Cluster
  loadBalancerIP: 104.40.207.44
  ports:
  - name: ssh
    nodePort: 32119
    port: 22
    protocol: TCP
    targetPort: 22
  selector:
    app: sftp
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}

@nicoleannkim

@gfcannon12

Below is what I used for nodeport

kind: Service
apiVersion: v1
metadata:
  name: sftp
  labels:
    environment: production
spec:
  type: NodePort
  ports:
  - name: ssh
    port: 22
    targetPort: 22
    nodePort: 30022
  selector:
    app: sftp

Using an OpenSSH client, you can use the public IP address of the node (for example, from ibmcloud ks workers DSGPOC1) together with the node port, 30022.

@jdwrink commented Nov 5, 2018

First, thank you @jujhars13 for providing this yaml file, it is extremely helpful.

I am running into a problem. I can't get the host keys to persist. I have tried mounting host keys as secrets, or creating a persistent volume claim and then mounting /etc/ssh into the pvc. Nothing I try seems to work. Has anyone figured out how to persist host keys on Kubernetes? I have searched for example yaml files and no one has ever addressed this.

@Novex commented Feb 27, 2019

Not sure if you're still looking for a way to get host keys to persist @jdwrink, but mounting host key secrets into their relevant /etc/ssh/ files seems to work for me, eg.

kind: Deployment
...
spec:
  template:
    spec:
      #secrets and config
      volumes:
      ...
      - name: sftp-host-keys
        secret:
          secretName: sftp-host-keys
          defaultMode: 0600
      ...
      containers:
        #the sftp server itself
        - name: sftp
          image: atmoz/sftp:latest
          ...
          volumeMounts:
          - mountPath: /etc/ssh/ssh_host_ed25519_key
            name: sftp-host-keys
            subPath: ssh_host_ed25519_key
            readOnly: true
          - mountPath: /etc/ssh/ssh_host_ed25519_key.pub
            name: sftp-host-keys
            subPath: ssh_host_ed25519_key.pub
            readOnly: true
          - mountPath: /etc/ssh/ssh_host_rsa_key
            name: sftp-host-keys
            subPath: ssh_host_rsa_key
            readOnly: true
          - mountPath: /etc/ssh/ssh_host_rsa_key.pub
            name: sftp-host-keys
            subPath: ssh_host_rsa_key.pub
            readOnly: true
            ...
---
apiVersion: v1
kind: Secret
metadata:
  name: sftp-host-keys
  namespace: sftp
stringData:
  ssh_host_ed25519_key: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
  ssh_host_ed25519_key.pub: |
    ssh-ed25519 AAAA...
  ssh_host_rsa_key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
  ssh_host_rsa_key.pub: |
    ssh-rsa AAAA...
type: Opaque
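If you'd rather not paste private key material into YAML, the same secret can be built from files. A sketch (the filenames match the subPaths used in the mounts; the kubectl step is shown as a comment because it needs a live cluster):

```shell
# Generate the two host key pairs locally, with no passphrase
ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N '' -q
ssh-keygen -t rsa -b 4096 -f ssh_host_rsa_key -N '' -q

# Then create the secret directly from the files instead of inlining them:
# kubectl create secret generic sftp-host-keys -n sftp \
#   --from-file=ssh_host_ed25519_key --from-file=ssh_host_ed25519_key.pub \
#   --from-file=ssh_host_rsa_key --from-file=ssh_host_rsa_key.pub
```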

@gijo-varghese

I'm getting an error "No supported authentication methods available (server sent: publickey)"

@salesh commented Aug 30, 2019

@deepforu47 did you maybe manage this?

@deepforu47

@salesh Sorry, I didn't get your question. Are you asking me whether I managed this?

@salesh commented Aug 30, 2019

@deepforu47 wow, thank you for the fast reply 😮 I am asking whether you managed to solve the problem you described in your comment?

@deepforu47

@salesh Yes, it worked for me; the issue was on our side with the Azure ALB. It was a small POC I did at the time.

@salesh commented Sep 2, 2019

@deepforu47

I have some problems with this. The challenging part is that I need to figure out how to mount a directory that contains our Python scripts.
Basically, I want to mount a directory with scripts and then start two Python scripts that watch the input and output directories we create inside that mounted directory in the container.
In a local environment this is easy, because I start my docker container like

docker run -v //c/Users/..../model:/home/foo/upload -p 2222:22 -d testko:1.0.0 foo:pass:1001

The user is simple for now; currently I don't want to bother with password security, I will cover that for sure once I
get this working.
And this is all working...

#1.1 Do I need to create an Azure file share on AKS and then a Persistent Volume? How would all of this look?
I am quite new to Azure and Kubernetes; I have learned a lot in the last few days, but maybe someone here has worked on something
like this?
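For the AKS question above, one common pattern (a sketch I haven't tested against this gist; azurefile is the built-in AKS storage class, and the claim name is made up) is an Azure Files backed PersistentVolumeClaim mounted at the upload path:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sftp-upload
  namespace: sftp
spec:
  accessModes:
  - ReadWriteMany       # Azure Files supports shared read-write access
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi
# then, in the Deployment's pod spec:
# volumes:
# - name: sftp-upload
#   persistentVolumeClaim:
#     claimName: sftp-upload
# and in the container:
# volumeMounts:
# - mountPath: /home/foo/upload
#   name: sftp-upload
```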

@mbieren commented Feb 17, 2020

This example is working perfectly for me, but running under Azure I see the following problem: each node in the cluster issues a TCP connect to the running pod, which results in the following log message spamming the ELK stack:

Did not receive identification string from 10.240.0.4 port 50255

10.240.0.4 is the IP of one of the cluster nodes. The message repeats once per minute from every node. Pretty annoying. A solution would be to reduce the log level of the ssh daemon. Any ideas how to accomplish this?
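atmoz/sftp runs any scripts placed in /etc/sftp.d/ before sshd starts, so one way to quiet these probe messages (an untested sketch; the ConfigMap name is made up, and whether LogLevel ERROR suppresses this particular notice should be verified) is to lower sshd's log level from such a script:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sftp-sshd-quiet
  namespace: sftp
data:
  loglevel.sh: |
    #!/bin/sh
    # executed by atmoz/sftp before sshd starts
    echo "LogLevel ERROR" >> /etc/ssh/sshd_config
# then mount it executable into the container:
# volumes:
# - name: sftp-sshd-quiet
#   configMap:
#     name: sftp-sshd-quiet
#     defaultMode: 0755
# volumeMounts:
# - mountPath: /etc/sftp.d/loglevel.sh
#   name: sftp-sshd-quiet
#   subPath: loglevel.sh
```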

@strus38 commented May 2, 2020

what should be done to allow anonymous PUT/GET?
Thanks

@ToMe25 commented Feb 16, 2021

@jujhars13 I made a slightly improved version of this here;
if you are interested you can copy the changes back.
Changes I made:

  • Updated some things so it works with newer Kubernetes versions
  • Fixed the duplicated environment: environment: production label
  • Only pull the image if it isn't already present
  • Renamed the sftp-public-keys ConfigMap to sftp-client-public-keys and changed it to a generic secret
  • Added a generic secret called sftp-host-keys for the server's host keys
  • Made the user directory a persistent volume
  • Changed the sftp port to 23, to allow connecting with ssh to the Kubernetes node this runs on
  • Disabled AppArmor because I couldn't get it to work

@duxinxiao

@ToMe25 I don't think it's necessary to change sftp-client-public-keys and sftp-host-keys to secrets. It doesn't matter if someone else sees the public key.

@ToMe25 commented Apr 14, 2021 via email

@devops-abhishek commented Jul 30, 2021

How do I run this pod (sftp) as a non-root user?

@riprasad

I deployed this on OpenShift, and when I ssh I end up getting Permission denied (publickey,gssapi-keyex,gssapi-with-mic). Is anybody getting the same issue?

@ToMe25 commented Oct 14, 2021

Are you trying to connect to the node or to the pod using ssh?

If the node: are you trying to connect on the port specified for the pod?
This yaml file sets the port on which you can reach the pod to 22, which is the default ssh port.
That means, unless you changed it, you won't be able to ssh to the node while the pod is running,
because connecting to the node will instead connect you to the pod.

If you are trying to connect to the pod using ssh, that can't work:
the sftp pod is configured to only allow SFTP connections, not ssh sessions.
This might cause a different error message tho, I can't remember.

@ToMe25 commented Oct 17, 2021

@riprasad as I said in my last message, you shouldn't try to ssh into the sftp server.
The sftp server only allows SFTP connections.
If that isn't what you tried, I'm sorry, but I have no idea what you tried then.
I don't know much about OpenShift tho, so if that wasn't what you tried I probably can't help anyways.

@riprasad

@ToMe25 Sorry, I deleted my last comment since I was not doing a few things right. OpenShift by default runs images as user 1001 and doesn't allow root access. With a few tweaks here and there I was able to deploy the server. And you were right, I was trying to ssh into the sftp server. Now I am able to connect by doing sftp to port 23 and the CLUSTER-IP of the service.

But connecting to the sftp server using the CLUSTER-IP is possible only if I am logged in to the internal OpenShift network. How do I connect to it remotely from some other machine? I tried exposing the service using an OpenShift Route and connecting to it via sftp, but that isn't working (OpenShift Routes are equivalent to Ingress in Kubernetes). I am getting the same error: Permission denied (publickey,gssapi-keyex,gssapi-with-mic)

@ToMe25 commented Oct 18, 2021

@riprasad I have just tested it, and the error message for connecting to the sftp server using ssh is a different one.

Also, I have never used OpenShift, so this is just a guess, but I have an idea why it doesn't work through the Route.
If a Route really is similar to a Kubernetes Ingress, then it can't work for SSH connections, afaik.
The Ingress system uses the requested hostname to determine which pod to route the connection to,
but a plain TCP connection does not carry this information; only some higher-level protocols add it.
HTTP does (via the Host header), so it works with Ingress-like structures; SSH does not, afaik, so it cannot.

Simply put, SSH never tells the server which hostname on that IP it wants to reach, so there is nothing an Ingress-like router can use to pick the correct target.

What you have to do instead is reserve some port on the host exclusively for connections to the sftp server.
That is the only way I know of, at least.

@riprasad commented Nov 2, 2021

That makes sense. Thanks for the explanation @ToMe25

Also, these lines from the documentation pretty much confirm that:

An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.

@afshinyavari

@riprasad

Is it possible to get some help with the tweaks you made to get it working on OpenShift?

@riprasad commented Jan 13, 2022

@afshinyavari Sure. You'll basically have to create a service account and grant it the anyuid SCC to bypass the default security constraints in OpenShift. You can run the commands below as admin to achieve this:

$ oc create serviceaccount sftp-sa
$ oc adm policy add-scc-to-user anyuid -z sftp-sa

Use the created service account in your deployment. In addition, you will also need to configure the security context for the container. Here's the snippet:

spec:
  serviceAccountName: sftp-sa
  containers:
  - name: sftp
    securityContext:
      privileged: true

@riprasad

@afshinyavari Also, I found this project, which is compatible with OpenShift: https://github.com/drakkan/sftpgo

I haven't found time to deploy it, but please feel free to explore it, since it is OpenShift-compatible out of the box and offers better features too. Let me know if you're able to deploy it successfully, in case you decide to choose it over atmoz/sftp.

@ToMe25 commented Jan 13, 2022 via email

@riprasad

yea, sftpgo indeed is an interesting project! Do share the manifests if you decide to give it a shot :)

@marcinkubica

sftpgo is all fine, sadly until you actually need a debug - drakkan/sftpgo#1412
