apiVersion: v1
kind: Namespace
metadata:
  name: sftp
---
kind: Service
apiVersion: v1
metadata:
  name: sftp
  namespace: sftp
  labels:
    environment: production
spec:
  type: LoadBalancer
  ports:
  - name: ssh
    port: 22
    targetPort: 22
  selector:
    app: sftp
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: sftp
  namespace: sftp
  labels:
    environment: production
    app: sftp
spec:
  # how many pods we want, and which strategy to use for rolling updates
  replicas: 1
  minReadySeconds: 10
  selector:
    matchLabels:
      app: sftp
  template:
    metadata:
      labels:
        environment: production
        app: sftp
      annotations:
        container.apparmor.security.beta.kubernetes.io/sftp: runtime/default
    spec:
      # secrets and config
      volumes:
      - name: sftp-public-keys
        configMap:
          name: sftp-public-keys
      containers:
      # the sftp server itself
      - name: sftp
        image: atmoz/sftp:latest
        imagePullPolicy: Always
        # env:
        # - name: PASSWORD
        #   valueFrom:
        #     secretKeyRef:
        #       name: sftp-server-sec
        #       key: password
        args: ["myUser::1001:100:incoming,outgoing"] # create user myUser (no password, uid 1001, gid 100) with incoming/outgoing dirs
        ports:
        - containerPort: 22
        volumeMounts:
        - mountPath: /home/myUser/.ssh/keys
          name: sftp-public-keys
          readOnly: true
        securityContext:
          capabilities:
            add: ["SYS_ADMIN"]
        resources: {}
@jujhars13 Thanks for sharing. I'm trying your example, and I can SSH to the pod through kubectl exec. However, I cannot connect to the pod using SFTP (connection timeout). I am using the worker node public IP. Is there a different location I should use? Kubernetes is new to me, so I'm still learning the ropes.
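With the type: LoadBalancer Service above, the SFTP endpoint is the Service's external IP rather than a worker node's IP. A quick way to find it, assuming the manifest above:

$ kubectl get svc sftp -n sftp    # the EXTERNAL-IP column is the endpoint
$ sftp myUser@<EXTERNAL-IP>       # then connect as the user created above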
@jujhars13 My issue was that my provider doesn't support LoadBalancer on free clusters, so I used NodePorts instead. I was able to receive SFTP by specifying the NodePort. Unfortunately, I need to send through port 22, which is below the NodePort range. It seems possible to change the NodePort range via the kube-apiserver's --service-node-port-range flag, but I can't figure out how to set it on my cluster.
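For reference, that flag belongs to the API server itself, so on managed/free clusters it generally can't be changed; on a self-managed control plane it can. A minimal sketch, assuming a kubeadm-style layout where the static pod manifest is editable:

# /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm static pod)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=22-32767   # widen the range so 22 is allowed
    # ...leave the existing flags unchanged...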
There is an error in the YAML at line 34: the value environment: appears twice on that line.
Hi, I have tried this on AKS and I'm not getting an external IP. I would like to access SFTP using the external public IP, but somehow it just stays in the pending state:

$ kubectl.exe get svc sftp -n sftp
$ kubectl.exe describe svc/sftp -n sftp
$ kubectl.exe get svc sftp -n sftp -o yaml
Below is what I used for NodePort (see the Service sketch after this comment). Using an OpenSSH client you can use the public IP address of the node, for example from ibmcloud ks workers DSGPOC1.
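A minimal NodePort Service along those lines, assuming the same labels as the manifest above (the name sftp-nodeport and the nodePort value 30022 are illustrative; the value must fall inside the cluster's NodePort range, 30000-32767 by default):

kind: Service
apiVersion: v1
metadata:
  name: sftp-nodeport
  namespace: sftp
spec:
  type: NodePort
  ports:
  - name: ssh
    port: 22
    targetPort: 22
    nodePort: 30022   # connect with: sftp -P 30022 myUser@<node-public-ip>
  selector:
    app: sftp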
First, thank you @jujhars13 for providing this yaml file, it is extremely helpful. I am running into a problem: I can't get the host keys to persist. I have tried mounting host keys as secrets, and creating a persistent volume claim and mounting it over /etc/ssh. Nothing I try seems to work. Has anyone figured out how to persist host keys on Kubernetes? I have searched for example yaml files and no one has ever addressed this.
Not sure if you're still looking for a way to get host keys to persist @jdwrink, but mounting host key secrets into their relevant paths under /etc/ssh/ worked for me (a sketch follows below).
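A sketch of that approach, assuming the keys were first stored in a secret (the name sftp-host-keys is illustrative), e.g. kubectl create secret generic sftp-host-keys --from-file=ssh_host_ed25519_key --from-file=ssh_host_rsa_key -n sftp; atmoz/sftp picks up host keys found at these /etc/ssh paths:

# added alongside the existing sftp-public-keys volume in the pod spec:
volumes:
- name: sftp-host-keys
  secret:
    secretName: sftp-host-keys   # illustrative name
    defaultMode: 0400
# and in the sftp container:
volumeMounts:
- mountPath: /etc/ssh/ssh_host_ed25519_key
  name: sftp-host-keys
  subPath: ssh_host_ed25519_key
  readOnly: true
- mountPath: /etc/ssh/ssh_host_rsa_key
  name: sftp-host-keys
  subPath: ssh_host_rsa_key
  readOnly: true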
I'm getting an error "No supported authentication methods available (server sent: publickey)".
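That message usually means the client fell back to password authentication while the server only accepts public keys. With the manifest above, the public key has to be present in the sftp-public-keys ConfigMap and the matching private key passed to the client, e.g. (key file name is illustrative):

$ sftp -i ~/.ssh/id_ed25519 myUser@<EXTERNAL-IP>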
@deepforu47 did you manage to solve this?
@salesh sorry, I didn't get your question. Are you asking me if I managed this?
@deepforu47 wow, thank you for the fast reply.
@salesh - Yes, it worked for me; the issue on our side was with the Azure ALB. It was a small POC which I did at that time.
I have some problems with this. The challenging part is that I need to figure out how to mount a directory that contains our Python scripts. Locally I run: docker run -v //c/Users/..../model:/home/foo/upload -p 2222:22 -d testko:1.0.0 foo:pass:1001 — the user setup is simple; currently I don't want to bother with password security, I will cover that later. Do I need to create an Azure file share on AKS and then a PersistentVolume? How would all of this look (see the sketch below)?
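One way to do that on AKS (a sketch, assuming the cluster's built-in azurefile StorageClass; the claim name and size are illustrative) is to let a PersistentVolumeClaim provision the file share dynamically, then mount it at the upload path:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sftp-upload
  namespace: sftp
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: azurefile   # dynamically provisions an Azure file share
  resources:
    requests:
      storage: 5Gi

# then, in the Deployment's pod spec:
# volumes:
# - name: sftp-upload
#   persistentVolumeClaim:
#     claimName: sftp-upload
# and in the sftp container:
# volumeMounts:
# - mountPath: /home/myUser/upload
#   name: sftp-upload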
This example is working perfectly for me, but running under Azure I experience the following problem: each node in the cluster issues a TCP connect to the running pod, which results in sshd log messages spamming the ELK stack. 10.24.0.4 is the IP of one of the cluster nodes, and the message repeats once per minute from every node. Pretty annoying. A solution would be to reduce the log level of the SSH daemon. Any ideas how to accomplish this?
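One way (a sketch, assuming atmoz/sftp's documented /etc/sftp.d/ startup-script hook and sshd's LogLevel directive; the ConfigMap name is illustrative) is to patch sshd_config before sshd starts:

kind: ConfigMap
apiVersion: v1
metadata:
  name: sftp-loglevel
  namespace: sftp
data:
  loglevel.sh: |
    #!/bin/sh
    # quiet sshd so per-probe connects are not logged
    grep -q '^LogLevel' /etc/ssh/sshd_config \
      && sed -i 's/^LogLevel.*/LogLevel ERROR/' /etc/ssh/sshd_config \
      || echo 'LogLevel ERROR' >> /etc/ssh/sshd_config

# mounted executable in the pod spec:
# volumes:
# - name: sftp-loglevel
#   configMap:
#     name: sftp-loglevel
#     defaultMode: 0755
# and in the sftp container:
# volumeMounts:
# - mountPath: /etc/sftp.d
#   name: sftp-loglevel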
What should be done to allow anonymous PUT/GET?
@jujhars13 I made a slightly improved version of this here,
Mount public keys in bash as a K8s secret (a sketch follows):
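A sketch of that (file names are illustrative; note the Deployment above mounts a ConfigMap named sftp-public-keys, so either create that ConfigMap or switch the volume's configMap field to a secret):

# as a secret:
kubectl create secret generic sftp-public-keys \
  --from-file=alice.pub --from-file=bob.pub -n sftp

# or, to match the manifest above verbatim, as a ConfigMap:
kubectl create configmap sftp-public-keys \
  --from-file=alice.pub --from-file=bob.pub -n sftp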