Setting up a Ceph storage platform
Ceph is a free-software storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block- and file-level storage.
Wikipedia article: https://en.wikipedia.org/wiki/Ceph_(software)
The Ceph homepage.
The ceph-users mailing list.
RedHat Ceph page.
Ceph-salt Salt states for Ceph cluster deployment.
ceph-ansible Ansible playbooks for Ceph.
RedHat Storage Cluster Installation manual.
Ceph implements distributed object storage. Ceph’s software libraries provide client applications with direct access to the reliable autonomic distributed object store (RADOS) object-based storage system, and also provide a foundation for some of Ceph’s features, including RADOS Block Device (RBD), RADOS Gateway, and the Ceph File System.
Get an overview of current stable and development versions:
The Ceph_releases page.
First follow the preflight instructions for RHEL/CentOS.
Enable the EPEL repository:
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Then enable the Ceph Yum repository for the current mimic or luminous release:
cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
# baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
baseurl=https://download.ceph.com/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM
The baseurl determines which release you will get.
Install this package:
yum install ceph-deploy
Make sure that NTP is installed and configured:
yum install ntp ntpdate ntp-doc
Install SSH server:
yum install openssh-server
Following the tutorial How to build a Ceph Distributed Storage Cluster on CentOS 7, we first create a Ceph user:
export CephUSER=984
groupadd -g $CephUSER cephuser
useradd -m -c "Ceph storage user" -d /var/lib/cephuser -u $CephUSER -g cephuser -s /bin/bash cephuser
passwd cephuser
Please do NOT use ceph as the user name (it is reserved for the Ceph daemons).
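The Ceph preflight also recommends giving the deploy user passwordless sudo on all nodes; a minimal sketch:
echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
sudo chmod 0440 /etc/sudoers.d/cephuser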
The Ceph preflight instructions say to add the ceph-mon and ceph services to firewalld.
On Monitor nodes:
firewall-cmd --zone=public --add-service=ceph-mon --permanent
On OSD and MDS nodes:
firewall-cmd --zone=public --add-service=ceph --permanent
Then reload firewalld on all nodes:
firewall-cmd --reload
Quickstart instructions for RHEL/CentOS
Follow the quickstart instructions for RHEL/CentOS.
You need an admin node which is not one of the Ceph nodes. Log in to the admin node and run these commands as user cephuser (not as root or via sudo)!
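The ceph-deploy tool requires passwordless SSH from the admin node to all Ceph nodes. A minimal sketch, assuming the node names mon1, osd1, osd2 and osd3 used on this page:
ssh-keygen
ssh-copy-id cephuser@mon1
ssh-copy-id cephuser@osd1
ssh-copy-id cephuser@osd2
ssh-copy-id cephuser@osd3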
Create a Ceph Storage Cluster with one Ceph Monitor and three Ceph OSD Daemons:
mkdir my-cluster cd my-cluster
Create the cluster on node mon1:
ceph-deploy new mon1
This will create ceph.conf and other configuration files in the current directory.
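If your nodes have several network interfaces, it may help to add the public network setting to the generated ceph.conf before installing (the subnet below is a placeholder for your own):
public network = 192.168.1.0/24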
The ceph-deploy tool will install the old jewel v.10 release by default! You must specify the current stable mimic v.13 (or the older luminous v.12) release explicitly; see this thread.
Install Ceph on the monitor and OSD nodes:
ceph-deploy install --release mimic mon1 osd1 osd2 osd3
After the installation has been completed, you may verify the Ceph version on all nodes:
cephuser# sudo ceph --version
Deploy the initial monitor(s) and gather the keys:
ceph-deploy mon create-initial
Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes:
ceph-deploy admin mon1 mds1 osd1 osd2 osd3
Deploy a manager daemon on the monitor node (required only for luminous+ builds):
ceph-deploy mgr create mon1
Create data devices (here assuming /dev/sdXX - change this to an unused disk device) on all the OSD nodes:
ceph-deploy osd create --data /dev/sdXX osd1
ceph-deploy osd create --data /dev/sdXX osd2
ceph-deploy osd create --data /dev/sdXX osd3
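If you are unsure which disk devices are unused, you can first list the disks on a node, for example:
ceph-deploy disk list osd1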
Check the health details:
ceph health detail
The correct result would be HEALTH_OK.
Test the Ceph cluster
Do the Exercise: Locate an Object section at the end of the quickstart page.
Remember that all commands must be preceded by sudo.
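A sketch of the exercise, assuming a scratch pool named mytest (any unused pool name will do):
sudo ceph osd pool create mytest 8
echo "Test data" > testfile.txt
sudo rados put test-object-1 testfile.txt --pool=mytest
sudo rados -p mytest ls
sudo ceph osd map mytest test-object-1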
At the end of the exercise the storage pool is removed; however, this is not permitted by default with the mimic release. The following error is printed:
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
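If you really do want to delete the pool, one workaround is to enable the option temporarily on the monitors and disable it again afterwards (mytest is the scratch pool from the sketch above):
ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
ceph osd pool delete mytest mytest --yes-i-really-really-mean-it
ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'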
Configure Ceph with Ansible
Instructions are in the ceph-ansible page; consult the Ceph_releases page for the current releases. Check out the master branch:
git clone https://github.com/ceph/ceph-ansible.git
cd ceph-ansible
git checkout master
Install Ansible:
yum install ansible
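A minimal sketch of a deployment run, assuming you have written an inventory file hosts for your nodes and filled in the required group_vars (see the ceph-ansible documentation):
cd ceph-ansible
cp site.yml.sample site.yml
ansible-playbook -i hosts site.yml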
See also our Wiki page Ansible configuration of Linux servers and desktops.
Ceph Filesystem (CephFS)
The Ceph filesystem (CephFS) is a POSIX-compliant filesystem that uses a Ceph Storage Cluster to store its data. CephFS uses the same Ceph Storage Cluster system as Ceph Block Devices, Ceph Object Storage with its S3 and Swift APIs, or native bindings (librados).
See the CephFS_best_practices for recommendations for best results when deploying CephFS. For RHEL/CentOS 7 with kernel 3.10 the following recommendation of the FUSE client applies:
As a rough guide, as of Ceph 10.x (Jewel), you should be using at least a 4.x kernel. If you absolutely have to use an older kernel, you should use the FUSE client instead of the kernel client.
For a configuration guide for CephFS, please see the CephFS instructions.
Create a Ceph filesystem
See the createfs page.
First create two RADOS pools:
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
The number of placement-groups (PG) is 128 in this example, as appropriate for <5 OSDs, see the placement-groups page.
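As a worked example of the rule of thumb from that page (total PGs ≈ OSDs × 100 / replica count, rounded up to the next power of two): with 3 OSDs and a replica count of 3 this gives 3 × 100 / 3 = 100, which rounds up to 128.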
An erasure-coded pool may alternatively be created on 3 or more OSD hosts; in this case one also needs (see createfs):
ceph osd pool set my_ec_pool allow_ec_overwrites true
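For reference, such an erasure-coded data pool could be created like this (a sketch using the default erasure-code profile; my_ec_pool matches the command above):
ceph osd pool create my_ec_pool 128 erasure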
List the RADOS pools by:
ceph osd lspools
Create a filesystem by:
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs ls
Show the metadata server mds1 status:
ceph mds stat
To check a cluster's data usage and data distribution among pools, you can use the df option on the monitor node:
ceph df
Mount CephFS using FUSE
Installation of the ceph-fuse package seems to be undocumented. The CephFS client host must first install some prerequisites:
Copy the file /etc/yum.repos.d/ceph.repo to the client host to enable the Ceph repository.
Then install the FUSE package:
yum clean all
yum install ceph-fuse
FUSE documentation is at http://docs.ceph.com/docs/mimic/cephfs/fuse/
Copy the Ceph config and client keyring files from the monitor node (mon1):
client# mkdir /etc/ceph
mon1# cd /etc/ceph; scp ceph.conf ceph.client.admin.keyring client:/etc/ceph/
Do not give extra permissions to the secret keyring file!
Mount on the client host in /u/cephfs:
mkdir /u/cephfs
ceph-fuse /u/cephfs
The monitor address is read from ceph.conf, or may be specified explicitly with the option -m monaddress[:port].
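For example, with the monitor node mon1 from this page and the default monitor port:
ceph-fuse -m mon1:6789 /u/cephfs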
List all mounted FUSE filesystems by:
findmnt -t fuse.ceph-fuse
Umount the filesystem by:
fusermount -u /u/cephfs
Mount by fstab
The FUSE mount can be added to /etc/fstab like this example:
none /u/cephfs fuse.ceph ceph.conf=/etc/ceph/ceph.conf,_netdev,defaults 0 0
Now you can mount the filesystem manually:
mount /u/cephfs
Start and enable the Systemd services for the /u/cephfs mount point:
systemctl start ceph-fuse@/u/cephfs.service
systemctl enable ceph-fuse@/u/cephfs.service