Basic guide to deploying a simple three-node Lustre cluster on CentOS 7.
An updated fork of joshuar/Three-Node-LustreFS-Cluster-Quickstart.md, which was in turn based on Intel's *How to Create and Mount a Lustre Filesystem*.
(Note: I'm a newbie to Lustre, so this overview is extremely light; hopefully what depth there is isn't full of mistakes!)
The original document:
- was based on CentOS 6; this one is for CentOS 7
- assumed the use of ZFS; this one uses the ext4-based ldiskfs backend
The commands are presented in shell-script form so that they can be easily copied and pasted into a terminal window. You should never blindly paste strings of commands into a terminal, especially as a privileged user; they are presented this way because many of the commands must be repeated on multiple nodes.
The following is needed only for ZFS, which is not required for this cluster; this guide uses the ext4-based ldiskfs filesystem as the basis for Lustre.
- Downgrade kernel if needed (an upgrade may be required instead):
yum remove kernel-$(uname -r)
- Enable EPEL repo:
yum install epel-release
- Enable ZFS on Linux repo (not sure why this is necessary):
yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7_3.noarch.rpm
- Enable lustre repos:
# (quoted heredoc delimiter so $releasever is written literally for yum to expand)
sudo sh -c 'cat >/etc/yum.repos.d/lustre.repo' <<\EOF
[lustre-server]
name=CentOS-$releasever - Lustre
baseurl=https://downloads.hpdd.intel.com/public/lustre/latest-feature-release/el7/server/
gpgcheck=0
[e2fsprogs]
name=CentOS-$releasever - Ldiskfs
baseurl=https://downloads.hpdd.intel.com/public/e2fsprogs/latest/el7/
gpgcheck=0
[lustre-client]
name=CentOS-$releasever - Lustre
baseurl=https://downloads.hpdd.intel.com/public/lustre/latest-feature-release/el7/client/
gpgcheck=0
EOF
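# optional sanity check (not part of the original guide): the lustre-server,
# e2fsprogs and lustre-client repos should now be visible
yum repolist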
# upgrade e2fsprogs
sudo yum upgrade -y e2fsprogs
# install lustre-tests
sudo yum install -y lustre-tests
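# optional check (not part of the original guide): the install should have
# pulled in a Lustre-patched kernel alongside the userspace tools
rpm -qa | grep -i lustre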
# create the lnet module configuration (use the appropriate interconnect in
# place of "tcp0" and the appropriate interface in place of "eth0")
sudo sh -c 'cat > /etc/modprobe.d/lnet.conf' <<EOF
options lnet networks=tcp0(eth0)
EOF
- There is now a Lustre kernel installed; reboot to activate:
sudo reboot
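After the reboot, LNet can be checked against the configuration above (an optional sanity check, not part of the original guide; `lctl` ships with the Lustre packages):
# confirm the Lustre-patched kernel is now running
uname -r
# bring up LNet and list this node's NIDs; they should match lnet.conf
sudo modprobe lnet
sudo lctl network up
sudo lctl list_nids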
- On the MGS and OSS only, have the `lnet` module auto-load on boot:
sudo sh -c 'cat > /etc/sysconfig/modules/lnet.modules' <<EOF
#!/bin/sh
# load lnet if its character device is not present yet
if [ ! -c /dev/lnet ] ; then
    exec /sbin/modprobe lnet >/dev/null 2>&1
fi
EOF
sudo chmod 744 /etc/sysconfig/modules/lnet.modules
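The script can be exercised immediately rather than waiting for the next boot (an optional check, not part of the original guide):
sudo sh /etc/sysconfig/modules/lnet.modules
# the lnet character device should now exist
ls -l /dev/lnet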
- On the MGS/MDS (this node hosts the MDT):
- Initialise a disk or partition to use for lustre (see the sketch after this list).
- Create a lustre MDT:
mkfs.lustre --fsname=whatevs --mgs --mdt --index=0 /dev/sdX
- Create a mount point and mount the lustre FS:
mkdir /mnt/mdt && mount -t lustre /dev/sdX /mnt/mdt
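A minimal sketch (not from the original guide) of the initialisation step, assuming /dev/sdX is a dedicated, empty device (wipefs is destructive, so adjust this to your own disk layout), followed by a quick post-mount check:
# clear any old filesystem or RAID signatures so mkfs.lustre starts from a blank device
sudo wipefs -a /dev/sdX
# after mounting, the MGS/MDT should appear in the local Lustre device list
df -h /mnt/mdt
sudo lctl dl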
- On the OSS (this node hosts the OST):
- Initialise a disk or partition to use for lustre (as sketched above).
- Create a lustre OST:
mkfs.lustre --ost --fsname=whatevs --mgsnode=192.168.N.N@tcp0 --index=0 /dev/sdX
- Adjust the `--mgsnode` parameter for the address and protocol used for the MGS (see the checks after this list).
- Create a mount point and mount the lustre FS:
mkdir /ostoss_mount && mount -t lustre /dev/sdX /ostoss_mount
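Before formatting the OST, it is worth confirming that the OSS can actually reach the MGS over LNet, and afterwards that the OST is up (optional checks, not part of the original guide; substitute your MGS NID):
# ping the MGS NID from the OSS
sudo lctl ping 192.168.N.N@tcp0
# after mounting, the OST should appear in the local device list
sudo lctl dl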
- On the client:
# load the Lustre kernel module
sudo modprobe lustre
# create script to load Lustre module on boot
sudo sh -c 'cat > /etc/sysconfig/modules/lustre.modules' <<EOF
#!/bin/sh
# load lustre only if it is not already loaded
if ! /sbin/lsmod | /bin/grep -q lustre ; then
    /sbin/modprobe lustre >/dev/null 2>&1
fi
EOF
sudo chmod 744 /etc/sysconfig/modules/lustre.modules
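As on the servers, the boot script can be exercised immediately instead of waiting for a reboot (an optional check, not part of the original guide):
sudo sh /etc/sysconfig/modules/lustre.modules
# lustre and its dependency modules should now be listed
lsmod | grep lustre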
- Create a mount point:
mkdir /mnt/lustre
- Mount the lustre FS:
mount -t lustre 192.168.N.N@tcp0:/whatevs /mnt/lustre
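Once mounted, the client can be sanity-checked with the lfs utility that ships with the Lustre client packages (the test file name here is arbitrary):
# show per-target usage; both the MDT and the OST should be listed
lfs df -h
# write a small test file and show how it was striped
dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=10
lfs getstripe /mnt/lustre/testfile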