Ceph single node deployment

Lars Jönsson 2020-01-14

Instructions for installing Ceph on a single node.

To be added:

  • Background info
  • General guides
  • Official Kubernetes documents
  • Official Ceph documents

Preparations

Even though we’re doing a single node deployment, the ceph-deploy tool expects to be able to ssh into the local host as root without being prompted for a password. So before starting, make sure to install ssh keys for root, edit /etc/ssh/sshd_config to set PermitRootLogin to yes, and ensure that SELinux is set to Permissive or disabled.
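
A minimal sketch of these preparations, assuming root has no ssh key yet and that temporarily switching SELinux to permissive with setenforce (which does not survive a reboot) is acceptable:

# emacs /etc/ssh/sshd_config
# systemctl restart sshd
# ssh-keygen
# ssh-copy-id root@`hostname -f`
# setenforce 0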

First, we need the ceph-deploy tool installed:

$ pip install ceph-deploy
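
To confirm that the tool ended up on the PATH, its version can be printed:

$ ceph-deploy --version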

Installation of Ceph

Everything that follows should be run as root.

ceph-deploy will create some config files in the local directory, so it is best to create a directory to hold them and run it from there.

# mkdir pv-cluster
# cd pv-cluster

Make sure that the hostname of the local machine is resolvable, both fully qualified and unqualified. If it is not, add an entry to /etc/hosts to make it resolve, as in the sketch below. The first step then simply creates the basic config file for ceph-deploy.
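
As a sketch, with a placeholder address and hostname (replace both with your own values):

# echo "192.0.2.10  node1.example.com node1" >> /etc/hosts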

# export CEPH_HOST=`hostname -f`
# ceph-deploy new $CEPH_HOST

Since this will be a single node deployment, there are two critical additions that must be made to the ceph.conf that was just created in the current directory:

# echo "osd crush chooseleaf type = 0" >> ceph.conf
# echo "osd pool default size = 1" >> ceph.conf

Without these two settings the storage will never reach a healthy status, since the defaults expect replicas to be spread across several hosts.
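
For orientation, the [global] section of ceph.conf should now look roughly like this sketch; the fsid, hostname and address are placeholders generated by ceph-deploy new, and the exact keys may vary slightly between releases:

[global]
fsid = 00000000-0000-0000-0000-000000000000
mon_initial_members = node1
mon_host = 192.0.2.10
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd crush chooseleaf type = 0
osd pool default size = 1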

Now tell ceph-deploy to actually install the main Ceph software. By default it will try to activate YUM repos hosted on ceph.com, but Fedora has everything needed, so the --no-adjust-repos argument tells it not to add custom repos.

# ceph-deploy install --no-adjust-repos $CEPH_HOST

With the software installed, the monitor service can be created and started.

# ceph-deploy mon create-initial

Executing ceph-deploy admin pushes the Ceph configuration file and the ceph.client.admin.keyring to the /etc/ceph directory of the nodes, so the ceph CLI can be used without having to provide the keyring for every command.

# ceph-deploy admin $CEPH_HOST
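
As a quick check, the ceph CLI should now respond without extra keyring arguments (the cluster will not report a healthy status yet, since there are no OSDs):

# ceph -s
# ceph health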

Install the manager daemon. It is responsible for monitoring the state of the cluster and also manages modules/plugins.

# ceph-deploy mgr create $CEPH_HOST 
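
To confirm that the manager is running, it shows up under services in ceph -s, and its modules can be listed:

# ceph mgr module ls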

Storage

Create the object storage device (OSD). Here /dev/common/share is an existing LVM logical volume (volume group common, logical volume share); replace it with your own device or logical volume.

# ceph-deploy osd create --data /dev/common/share $CEPH_HOST
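
The new OSD should now show up as up and in:

# ceph osd tree
# ceph -s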

Create pools. The two numbers given to each pool are its pg_num and pgp_num (placement group counts).

NOTE: This is just an example!

# ceph osd pool create k8s-pool-1 128 128 replicated 
# ceph osd pool create kube 100 100
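
Depending on the Ceph release, each pool may also need to be tagged with the application that will use it; otherwise ceph health reports a POOL_APP_NOT_ENABLED warning. For a pool used for RBD, that would be something like the following (reusing the pool name above):

# ceph osd pool application enable k8s-pool-1 rbd
# ceph osd lspools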

Dashboard

Start the dashboard. The package for the dashboard needs to be installed before it can be enabled.

# ceph-deploy pkg --install ceph-mgr-dashboard $CEPH_HOST 
# ceph mgr module enable dashboard
# ceph config set mgr mgr/dashboard/ssl false

Create the admin user. Replace secret with your own password.

# ceph dashboard ac-user-create admin secret administrator

Since SSL was disabled above, the dashboard is available over plain HTTP on the host running the manager, on port 8080 by default. Log in with the user and password specified when creating the user.
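
The exact URL (host and port) can be read from the active manager:

# ceph mgr services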

Update configuration

The configuration is updated by editing the ceph.conf file and pushing the updated file to the nodes. All commands should be run as root.

Start with some preparation.

# export CEPH_HOST=`hostname -f`
# cd pv-cluster

Update the configuration file using your favorite editor.

# emacs ceph.conf

Push the updated configuration to the nodes.

# ceph-deploy --overwrite-conf config push $CEPH_HOST

Restart the monitor on the nodes.

# ssh $CEPH_HOST systemctl restart ceph-mon.target
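
Depending on which settings were changed, other daemons may need a restart as well, for example the OSDs or the manager:

# ssh $CEPH_HOST systemctl restart ceph-osd.target
# ssh $CEPH_HOST systemctl restart ceph-mgr.target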

Cleanup

Remove volumes

Commands for removing an OSD. Run them as root on the host that holds the volume.

# ceph-volume lvm zap /dev/common/share
# ceph-volume lvm list
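
If the underlying logical volume should be removed entirely, rather than just wiped, ceph-volume also has a --destroy flag:

# ceph-volume lvm zap --destroy /dev/common/share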

Remove Ceph

Commands for removing Ceph. Run the commands as root.

# export CEPH_HOST=`hostname -f`
# ceph-deploy purge $CEPH_HOST
# ceph-deploy purgedata $CEPH_HOST
# ceph-deploy forgetkeys
# rm ceph.*
# rm ceph-deploy-ceph.log

Removing the ceph-deploy tool itself. Run the pip command as a regular user.

$ pip uninstall ceph-deploy

Adding Ceph to Kubernetes

Followed the guide An Innovator’s Guide to Kubernetes Storage Using Ceph, but stopped at the end of the section Ceph-RBD and Kubernetes. Did not get any PV (PersistentVolume) provisioned.

The following adaptations were done:

  • Cloned the git repo at https://github.com/ajaynemade/K8s-Ceph
  • Updated Ceph-RBD-Provisioner.yaml
    • Changed extensions/v1beta1 to apps/v1 in the deployment
    • Added a selector to the deployment
  • Reused the already created pool k8s-pool-1

Next steps:

  • Check why only one pool can be created
    • Redo with a new pool
  • Follow another guide?