On the server that will host the NFS share, install the NFS packages:
yum install nfs-utils nfs-utils-lib -y
Create a directory for the NFS share and set its permissions and ownership:
mkdir /nfs
chmod 755 /nfs
chown nfsnobody:nfsnobody /nfs
Edit the exports file for the NFS server:
vi /etc/exports
Add an export for the NFS share you created above, replacing AE5_MASTER_IP with the IP address of the AE5 master. Note there is no space between the client address and the option list:
/nfs AE5_MASTER_IP(rw,sync,no_root_squash)
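The format of that line matters: a space before the parentheses silently exports the share world-readable/writable and applies the options to an anonymous wildcard. A quick sanity check you can run before applying anything (the IP 10.0.0.5 below is a placeholder standing in for AE5_MASTER_IP):

```shell
# Write a candidate exports entry to a temp file and verify its shape.
# 10.0.0.5 is a placeholder for AE5_MASTER_IP; adjust for your network.
tmp=$(mktemp)
echo '/nfs 10.0.0.5(rw,sync,no_root_squash)' > "$tmp"
# Require exactly path<space>client(options); a space before the
# parentheses would detach the options from the client and fail this.
if grep -Eq '^/[^ ]+ [^ ]+\([^)]+\)$' "$tmp"; then
  echo "exports entry looks well-formed"
fi
rm -f "$tmp"
```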
Enable and start the services required for the NFS share:
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
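Before touching AE5 it is worth confirming the export is actually active. A quick check on the NFS server itself (these commands require root and a running NFS service, so they are a sketch to adapt rather than something to run blindly):

```shell
# Re-export everything listed in /etc/exports and show what is served
exportfs -ra
exportfs -v
# Confirm the share is visible to NFS clients
showmount -e localhost
```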
On the AE5 master, go to Ops Center and log in with your credentials. Click Configuration on the left-hand side and search for NFS. You will see the following uncommented section, which is empty by default:
volumes:
{}
Replace this with the NFS details you configured above, for example:
volumes:
  mynfs:                # Directory name seen inside the projects
    nfs:
      server: NFS_SERVER_IP
      path: "/nfs/"     # Export path on the NFS server; /nfs in this example
      readOnly: true
Click Apply at the bottom of the file to save the changes. Then log in to the AE5 master over SSH (you can also get a root shell on the master through Ops Center) and enter the Gravity container:
gravity enter
# Restart all of the pods so they pick up the volume change; --no-headers
# keeps the NAME header line out of the list passed to xargs
kubectl get pods --no-headers | awk '{print $1}' | xargs kubectl delete pods
Watch the pods come back up; when all are Running and Ready, proceed to the next steps.
watch kubectl get pods
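Instead of eyeballing the watch output, you can block until everything reports Ready. This is a sketch assuming kubectl inside the gravity environment and a 10-minute ceiling; adjust the timeout for your cluster:

```shell
# Wait for every pod in the current namespace to become Ready,
# failing if any pod is still not Ready after 10 minutes
kubectl wait --for=condition=Ready pods --all --timeout=600s
```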
Log in to the AE5 UI and create a new project. Once the project comes up you can start a session for it as well.
From the project you should be able to access the NFS share at the following location:
/data/mynfs/DATAFILE.csv
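From a terminal inside the session you can confirm the volume actually mounted before reaching for pandas (mynfs matches the volume name configured above):

```shell
# Confirm the NFS volume is mounted inside the project session
df -h /data/mynfs
ls -l /data/mynfs
```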
Below is a quick pandas example that reads a CSV from an NFS volume named bigdata:
import pandas as pd
data = pd.read_csv("/data/bigdata/test_data.csv")
# Check the first 10 rows of the data
data.head(10)