Installing an NFS Server inside an LXC Container on Proxmox 5.1

Host Setup:

Create LXC Container as usual, but do not start it yet.

# Install NFS-Kernel on Host
apt install nfs-kernel-server
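
# Since the container will rely on the host's kernel for NFS, it can be worth
# checking that the nfsd module is actually available on the host
# (just a sanity check, not part of the original steps):
modprobe nfsd
lsmod | grep nfsd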

# Create a new AppArmor file: 
touch /etc/apparmor.d/lxc/lxc-default-with-nfsd

# Write Profile:
cat > /etc/apparmor.d/lxc/lxc-default-with-nfsd << 'EOF'
# Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default-with-nfsd flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # the container may never be allowed to mount devpts.  If it does, it
  # will remount the host's devpts.  We could allow it to do it with
  # the newinstance option (but, right now, we don't).
  deny mount fstype=devpts,
  mount fstype=nfsd,
  mount fstype=rpc_pipefs,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
}
EOF

# Activate the new Profile:
apparmor_parser -r /etc/apparmor.d/lxc-containers
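
# To verify that the new profile was actually picked up, apparmor_status
# (or aa-status) lists all loaded profiles:
apparmor_status | grep lxc-container-default-with-nfsd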

# Add Profile to the container's Proxmox config:
# (in this case: container id = 200 on node sniebel; adjust both to your setup)
echo 'lxc.apparmor.profile = lxc-container-default-with-nfsd' \
  >> /etc/pve/nodes/sniebel/lxc/200.conf

# As well as to its LXC config:
echo 'lxc.apparmor.profile = lxc-container-default-with-nfsd' \
  >> /var/lib/lxc/200/config
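
# A quick way to double-check both entries (paths as in the example above,
# with node sniebel and container id 200):
grep lxc.apparmor.profile /etc/pve/nodes/sniebel/lxc/200.conf /var/lib/lxc/200/config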
  
# Also add your mountpoint to the container:
# If you have a cluster setup:
echo 'mp0: /mnt/host_storage,mp=/mnt/container_storage' \
  >> /etc/pve/nodes/cluster_node/lxc/200.conf

# If you have a single node setup:
echo 'mp0: /mnt/host_storage,mp=/mnt/container_storage' \
  >> /etc/pve/lxc/200.conf
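
# Whichever file you edit, the host-side directory should exist before the
# container is started; /mnt/host_storage is just the example path used above:
mkdir -p /mnt/host_storage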

# Finally, start the container:
lxc-start -n 200
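
# On Proxmox you can also use the pct tool instead of lxc-start,
# with the same container id:
pct start 200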

Container Setup:

SSH into the container, or simply run lxc-attach -n 200 on your host (where 200 is the container id).

# Install nfs
apt update
apt install nfs-kernel-server

# Edit Exports
nano /etc/exports

# or append like so (example):
echo '/mnt/container_storage 192.168.0.0/16(rw,async,insecure,no_subtree_check,all_squash,anonuid=501,anongid=100,fsid=1)' \
  >> /etc/exports
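
# If the NFS server inside the container is already running when you change
# /etc/exports, you can apply the changes without a restart
# (exportfs ships with nfs-kernel-server):
exportfs -ra
exportfs -v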

# disconnect from the container

Host again:

Back on the host, restart the container:

lxc-stop -n 200
lxc-start -n 200
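
# To test the export from another machine, something like the following should
# work. The IP address and mount point below are placeholders for your
# container's address and a local directory; showmount and mount -t nfs require
# the nfs-common package on the client.
showmount -e 192.168.0.50
mkdir -p /mnt/test
mount -t nfs 192.168.0.50:/mnt/container_storage /mnt/test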

Because the NFS kernel server runs in the host's kernel, the container cannot access its status. service nfsd status therefore shows 'not running' inside the container; this seems to be normal.


Further useful commands:

nfsstat # list NFS statistics
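
# Two more that can help when debugging exports
# (exportfs comes with nfs-kernel-server, rpcinfo with rpcbind):
exportfs -v  # show active exports and their options
rpcinfo -p   # list registered RPC services (mountd, nfs, ...)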
hvisage commented Feb 19, 2019

For Proxmox 5.3, there is an Options -> Features dialog where you have to select "Nesting" and "NFS". I still need to check which of the steps above are needed, since I used the big-hammer "turn off AppArmor" approach, but I still had to make the Nesting & NFS selections.

rebootit commented Apr 30, 2019

Hi,

I confirm that in 5.4.4 this is not necessary anymore. Ticking Options -> Features, "Nesting" and "NFS" is enough (for privileged CTs). And you can keep AppArmor activated ;-)

rwenz3l commented Jun 4, 2019

Oh nice, I never really checked back on this, and GitHub did not notify me about gist comments. Thanks for the input, @hvisage and @rebootit!
