Setting up a Ceph storage platform
Ceph is a free-software storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block-, and file-level storage.
- Wikipedia article: https://en.wikipedia.org/wiki/Ceph_(software)
- The Ceph homepage.
- The ceph-users mailing list.
- Ceph Github repository.
- RedHat Ceph page.
- SUSE Ceph Solution.
- Ceph-salt Salt states for Ceph cluster deployment.
- ceph-ansible Ansible playbooks for Ceph
- RedHat Storage Cluster Installation manual.
- How to build a Ceph Distributed Storage Cluster on CentOS 7
- Using Ceph as Block Device on CentOS 7
- How to Mount CephFS on CentOS 7
- Ceph implements distributed object storage. Ceph’s software libraries provide client applications with direct access to the reliable autonomic distributed object store (RADOS) object-based storage system, and also provide a foundation for some of Ceph’s features, including RADOS Block Device (RBD), RADOS Gateway, and the Ceph File System.
Get an overview of current stable and development versions:
- The Ceph_releases page.
First follow the preflight instructions for RHEL/CentOS.
Enable the EPEL repository:
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
# baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
baseurl=https://download.ceph.com/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM
The baseurl determines which release you will get.
Install this package:
yum install ceph-deploy
Make sure that NTP is installed and configured:
yum install ntp ntpdate ntp-doc
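Installing the package alone is not enough; the NTP daemon should also be enabled and started so the cluster nodes keep their clocks in sync. A sketch, assuming the standard ntpd service shipped with the ntp package on CentOS 7:

```shell
# Enable the NTP daemon at boot and start it now (CentOS 7, ntp package)
systemctl enable ntpd
systemctl start ntpd
# Sanity check: list NTP peers and their synchronisation state
ntpq -p
```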
Install SSH server:
yum install openssh-server
Following the tutorial How to build a Ceph Distributed Storage Cluster on CentOS 7 we first create a Ceph user:
export CephUSER=984
groupadd -g $CephUSER cephuser
useradd -m -c "Ceph storage user" -d /var/lib/cephuser -u $CephUSER -g cephuser -s /bin/bash cephuser
passwd cephuser
Please do NOT use ceph as the user name.
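Since ceph-deploy runs commands on the nodes via sudo, the cephuser account normally needs passwordless sudo on every node, as described in the Ceph preflight guide. A sketch (run as root; adjust the user name if you chose a different one):

```shell
# Grant cephuser passwordless sudo on this node
echo "cephuser ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/cephuser
# sudoers drop-in files must not be group/world writable
chmod 0440 /etc/sudoers.d/cephuser
```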
On Monitor nodes:
firewall-cmd --zone=public --add-service=ceph-mon --permanent
On OSD and MDS nodes:
firewall-cmd --zone=public --add-service=ceph --permanent
On all nodes then reload firewalld:
firewall-cmd --reload
Follow the quickstart instructions for RHEL/CentOS.
You need an admin node which is not one of the Ceph nodes. Log in to the admin node and run these instructions as user cephuser (not as root or by sudo)!
mkdir my-cluster
cd my-cluster
Create the cluster on node mon1:
ceph-deploy new mon1
This will create ceph.conf and other configuration files in the current directory.
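It is common at this point to add the cluster's public network to the generated ceph.conf so the daemons bind to the right interface. A sketch; the subnet 10.0.0.0/24 is a placeholder that must be replaced by your own network:

```ini
[global]
public network = 10.0.0.0/24
```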
The ceph-deploy tool will install the old jewel v.10 release by default!
Install Ceph on the monitor and OSD nodes:
ceph-deploy install --release mimic mon1 osd1 osd2 osd3
After the installation has been completed, you may verify the Ceph version on all nodes:
cephuser# sudo ceph --version
Deploy the initial monitor(s) and gather the keys:
ceph-deploy mon create-initial
Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes:
ceph-deploy admin mon1 mds1 osd1 osd2 osd3
Deploy a manager daemon on the monitor node (required only for luminous+ builds):
ceph-deploy mgr create mon1
Create data devices (here assuming /dev/sdXX - change this to an unused disk device) on all the OSD nodes:
ceph-deploy osd create --data /dev/sdXX osd1
ceph-deploy osd create --data /dev/sdXX osd2
ceph-deploy osd create --data /dev/sdXX osd3
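If a device has been used before, ceph-deploy may refuse to create an OSD on it. The disk can be inspected and wiped first; a hedged sketch, keeping the /dev/sdXX placeholder from above:

```shell
# Show the disks ceph-deploy can see on a node
ceph-deploy disk list osd1
# Destroy any existing partition table and data on the device (irreversible!)
ceph-deploy disk zap osd1 /dev/sdXX
```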
Check the health details:
ceph health detail
The correct result would be:
HEALTH_OK
Do the Exercise: Locate an Object section at the end of the quickstart page. Remember that all commands must be preceded by sudo.
At the end of the exercise the storage pool is removed; however, this is not permitted by default in the mimic release. The following error is printed:
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
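If you really do want to remove the pool, the option can be enabled at runtime on the monitors first. A sketch; the pool name mytest is the hypothetical pool created in the quickstart exercise:

```shell
# Temporarily allow pool deletion on all monitors
ceph tell mon.* injectargs '--mon-allow-pool-delete=true'
# Delete the pool; the name must be given twice plus the confirmation flag
ceph osd pool delete mytest mytest --yes-i-really-really-mean-it
# Optionally disable pool deletion again afterwards
ceph tell mon.* injectargs '--mon-allow-pool-delete=false'
```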
git clone https://github.com/ceph/ceph-ansible.git
git checkout master
Consult the Ceph_releases page for the branch matching your Ceph release.
yum install ansible
See also our Wiki page Ansible_configuration.
The CephFS filesystem (Ceph FS) is a POSIX-compliant filesystem that uses a Ceph Storage Cluster to store its data. The Ceph filesystem uses the same Ceph Storage Cluster system as Ceph Block Devices, Ceph Object Storage with its S3 and Swift APIs, or native bindings (librados).
As a rough guide, as of Ceph 10.x (Jewel), you should be using at least a 4.x kernel. If you absolutely have to use an older kernel, you should use the fuse client instead of the kernel client.
For a configuration guide for CephFS, please see the CephFS instructions.
See the createfs page.
First create two RADOS pools:
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
If the data pool is erasure-coded, overwrites must be enabled on it:
ceph osd pool set my_ec_pool allow_ec_overwrites true
List the RADOS pools by:
ceph osd lspools
Create a filesystem by:
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs ls
Show the metadata server mds1 status:
ceph mds stat
To check a cluster’s data usage and data distribution among pools, you can use the df option on the monitoring node:
ceph df
Installation of the ceph-fuse package seems to be undocumented. The CephFS client host must first install some prerequisites:
- Enable the EPEL repository as shown above for preflight.
- Copy the file /etc/yum.repos.d/ceph.repo to the client host to enable the Ceph repository.
Then install the FUSE package:
yum clean all
yum install ceph-fuse
Copy the Ceph config and client keyring files from the monitor node (mon1):
client# mkdir /etc/ceph
mon1# cd /etc/ceph; scp ceph.conf ceph.client.admin.keyring client:/etc/ceph/
Do not give extra permissions to the secret ceph.client.admin.keyring file!
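A hedged example of restricting the copied keyring on the client so only root can read it:

```shell
# Restrict the admin keyring to root-only access on the client
chown root:root /etc/ceph/ceph.client.admin.keyring
chmod 600 /etc/ceph/ceph.client.admin.keyring
```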
Mounting on the client host in /u/cephfs:
mkdir /u/cephfs
ceph-fuse /u/cephfs
List all mounted FUSE filesystems by:
findmnt -t fuse.ceph-fuse
Umount the filesystem by:
fusermount -u /u/cephfs
The FUSE mount can be added to /etc/fstab as follows:
none /u/cephfs fuse.ceph ceph.conf=/etc/ceph/ceph.conf,_netdev,defaults 0 0
Now you can mount the filesystem manually:
mount /u/cephfs
Start and enable Systemd services for the /u/cephfs mount point:
systemctl start ceph-fuse@/u/cephfs.service
systemctl enable ceph-fuse@/u/cephfs.service