Cinder maintains an iSCSI connection between the cinder-volume node and the compute node when using the LVM driver (cinder.volume.drivers.lvm.LVMISCSIDriver).
If one uses the NFS driver for a backend (cinder.volume.drivers.nfs.NfsDriver), the volume is instead served to the compute node via an NFS share.
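For reference, the driver is selected per backend with the volume_driver option in cinder.conf; the two drivers discussed here are:
# iSCSI via LVM
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
# NFS
volume_driver = cinder.volume.drivers.nfs.NfsDriver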
- Controller (192.168.2.1)
apt-get install cinder-api
apt-get install cinder-scheduler
cat /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
sql_connection = mysql://cinder:notcinder@192.168.2.1/cinder
my_ip = 192.168.2.1
enabled_backends = cinder-volumes-1-driver
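After writing the config, restart the API and scheduler services so the changes take effect (assuming Ubuntu-style service names, to match the apt-get installs above):
service cinder-api restart
service cinder-scheduler restart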
- Compute (192.168.2.2)
apt-get install nova-compute
apt-get install nfs-common
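Once nfs-common is in place, a quick sanity check is that the compute node can see the exports (showmount ships with nfs-common; the two NFS servers are set up below):
showmount -e 192.168.2.50
showmount -e 192.168.2.60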
- Cinder-Volume (192.168.2.3)
apt-get install cinder-volume
apt-get install nfs-common
cat /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
sql_connection = mysql://cinder:notcinder@192.168.2.1/cinder
my_ip = 192.168.2.3
rabbit_host = 192.168.2.1
enabled_backends = cinder-volumes-1-driver
[cinder-volumes-1-driver]
nfs_shares_config = /etc/cinder/shares.txt
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = NFS
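Restart the volume service so the new backend is picked up, then confirm it registered; with multi-backend enabled, cinder-volume reports a host of the form <hostname>@cinder-volumes-1-driver (again assuming Ubuntu service names):
service cinder-volume restart
cinder-manage service list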
- NFS-Server 1 (192.168.2.50)
apt-get install nfs-kernel-server
cat /etc/exports
/share1 192.168.2.0/24(rw,fsid=0,insecure,no_subtree_check,async)
- NFS-Server 2 (192.168.2.60)
apt-get install nfs-kernel-server
cat /etc/exports
/share2 192.168.2.0/24(rw,fsid=0,insecure,no_subtree_check,async)
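After editing /etc/exports on each server, apply the exports with exportfs -ra. The shares then go into /etc/cinder/shares.txt on the Cinder-Volume node (the file nfs_shares_config points at), one share per line:
cat /etc/cinder/shares.txt
192.168.2.50:/share1
192.168.2.60:/share2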
When one creates a Cinder volume using an NFS backend, a file that backs the volume is created on one of the NFS shares listed in /etc/cinder/shares.txt.
The driver determines which share the file should be created on based on the available capacity for the specified volume size.
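For example, one can tie a volume type to the backend via volume_backend_name and create a volume against it (a sketch; "NFS" matches the volume_backend_name set above, and the name/size are arbitrary):
cinder type-create NFS
cinder type-key NFS set volume_backend_name=NFS
cinder create --volume-type NFS --display-name vol1 10
The backing file then appears on whichever share was chosen, named per volume_name_template, e.g. /share1/volume-<uuid> on 192.168.2.50.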
When one attaches a volume to an instance, the compute node mounts the NFS share. The volume file that resides in the share is served to the instance via libvirt.
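A sketch of the attach flow (the mount point shown assumes an unmodified nova.conf, where nfs_mount_point_base defaults to $state_path/mnt and the directory name is a hash of the export):
nova volume-attach <instance-uuid> <volume-uuid> /dev/vdb
# on the compute node:
mount | grep nfs
192.168.2.50:/share1 on /var/lib/nova/mnt/<hash> type nfs (rw)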
- Attaching one volume to multiple VMs is not supported.
- Snapshots are not supported.