Ceph Cluster on One Node

Server Configurations

Host Specific Config

Ensure that "ceph" hostname's resolves to your IP

cat /etc/hosts
127.0.0.1	localhost
127.0.1.1	ceph

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

192.168.2.23 ceph
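
To check which address the name actually resolves to (it should be the 192.168.2.23 entry rather than the 127.0.1.1 loopback one), a quick test is:

getent hosts ceph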

Add the Ceph repository and install ceph-deploy

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

echo deb http://download.ceph.com/debian-jewel/ xenial main | sudo tee /etc/apt/sources.list.d/ceph.list

sudo apt-get update && sudo apt-get install ceph-deploy
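
A quick sanity check that the tool is installed:

ceph-deploy --version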

Prepare a user for managing the cluster

sudo useradd -m -s /bin/bash ceph-deploy

echo "ceph-deploy:changeme"|sudo chpasswd

echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-deploy

sudo chmod 0440 /etc/sudoers.d/ceph-deploy
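
Optionally, verify the sudo rule took effect (it should list the NOPASSWD:ALL entry):

sudo -l -U ceph-deploy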

Prepare the ceph-deploy user's SSH keys

su - ceph-deploy

ssh-keygen  -t rsa -P "" -f ~ceph-deploy/.ssh/id_rsa

ssh-copy-id ceph-deploy@ceph

Or, since everything runs on the same host, simply append the public key to authorized_keys and fix ownership and permissions:

cat ~ceph-deploy/.ssh/id_rsa.pub >> ~ceph-deploy/.ssh/authorized_keys

sudo chown -R ceph-deploy:ceph-deploy ~ceph-deploy/.ssh/ && chmod 600 ~ceph-deploy/.ssh/*

Enable access to the node via the ceph-deploy user's ~/.ssh/config

Host ceph
  Hostname ceph
  User ceph-deploy
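
As the ceph-deploy user, a passwordless login should now work; a quick test:

ssh ceph hostname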

Deploy the Ceph cluster as user "ceph-deploy" (our test cluster is called "ceph" in this demo)

su - ceph-deploy

cd ~

mkdir my-cluster

cd my-cluster

Create an initial cluster config in this directory (my-cluster).

ceph-deploy new ceph

Change the configuration for a single-node cluster

The "osd pool default size" setting is how many replicas of our data we want; here we set it to 2 instead of the default 3 because there is only one node. The "chooseleaf" setting tells Ceph that we are a single node and that it is OK to store copies of the same data on the same physical node. Normally, for safety, Ceph distributes the copies across hosts and won't leave all your eggs in the same basket (server).

echo "osd pool default size = 2" >> ceph.conf
echo "osd crush chooseleaf type = 0" >> ceph.conf

This installs the ceph binaries and copies our initial config file

ceph-deploy install ceph

Before we can create storage OSDs we need to create a monitor.

ceph-deploy mon create-initial

Zap the disks to remove any pre-existing data and partition tables.
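
Double-check the device names before zapping anything; this walkthrough assumes /dev/sdb, /dev/sdc and /dev/sdd are spare, unused disks:

lsblk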

sudo /usr/sbin/ceph-disk zap /dev/sdb

sudo /usr/sbin/ceph-disk zap /dev/sdc

sudo /usr/sbin/ceph-disk zap /dev/sdd

Create the OSDs that will hold our data

ceph-deploy osd prepare ceph:sdb

ceph-deploy osd prepare ceph:sdc

ceph-deploy osd prepare ceph:sdd

And activate them.

ceph-deploy osd activate ceph:/dev/sdb1

ceph-deploy osd activate ceph:/dev/sdc1

ceph-deploy osd activate ceph:/dev/sdd1

Redistribute our config and keys and fix permissions.

ceph-deploy admin ceph

sudo chmod +r /etc/ceph/ceph.client.admin.keyring

The cluster is now up and running, and we can check its health

ceph -s
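
A couple of other standard status commands are also handy at this point:

ceph health

ceph osd tree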

Install the object storage gateway (RGW):

ceph-deploy rgw create ceph
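
The Jewel gateway normally listens on port 7480 (the civetweb default), so a quick check that it is answering, assuming the service started, is:

curl http://ceph:7480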

Install the metadata server (MDS) needed for CephFS:

ceph-deploy mds create ceph

Before we can create a filesystem we need to create the OSD pools that will hold its data and metadata.

ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128

Now create the filesystem.

ceph fs new cephfs cephfs_metadata cephfs_data
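
To confirm the filesystem and its metadata server are up:

ceph fs ls

ceph mds stat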

By default all access operations require authentication. The Ceph install has created some default credentials for us; we will use them later during client configuration.

cat ~/my-cluster/ceph.client.admin.keyring

[client.admin]
    key = AQCv2yRXOVlUMxAAK+e6gehnirXTV0O8PrJYQQ==

Client Configurations

In order to mount CephFS on Linux we need to install the Ceph filesystem client tools.

sudo apt-get install ceph-fs-common

Next we need to create a mountpoint for the filesystem.

sudo mkdir /mnt/mycephfs

Now, on the client node, use the admin key to mount the newly created filesystem.

sudo mount -t ceph ceph:6789:/ /mnt/mycephfs -o name=admin,secret=AQCv2yRXOVlUMxAAK+e6gehnirXTV0O8PrJYQQ==
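
As an alternative, the key can be kept out of the command line (and shell history) by storing it in a file and using the secretfile mount option; /etc/ceph/admin.secret below is just an assumed path:

sudo mkdir -p /etc/ceph

echo "AQCv2yRXOVlUMxAAK+e6gehnirXTV0O8PrJYQQ==" | sudo tee /etc/ceph/admin.secret

sudo mount -t ceph ceph:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret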

Testing object storage with one file

  1. Create a test file
echo "This is a TEST" > testfile.txt
  2. Create a pool (in this case "mytest")
ceph osd pool create mytest 8
  3. Put the test file into the pool
rados put testfile $(pwd)/testfile.txt --pool=mytest
  4. Check that the object now exists in the "mytest" pool
rados -p mytest ls
  5. Check the object mapping
ceph osd map mytest testfile
  6. Delete the object from the "mytest" pool
rados rm testfile --pool=mytest
  7. Finally, delete the pool
ceph osd pool delete mytest mytest --yes-i-really-really-mean-it

REFERENCE: http://prashplus.blogspot.com/2018/01/ceph-single-node-setup-ubuntu.html
