Instructions for installing Ceph on a single node.
To be added:
- Info about Rook, see Rook: Automating Ceph for Kubernetes
- Info about K8s usage of Ceph, see An Innovator’s Guide to Kubernetes Storage Using Ceph
- Why Is Storage On Kubernetes So Hard? (2019-01-11)
- An Innovator’s Guide to Kubernetes Storage Using Ceph (2019-07-10)
- Running ownCloud in Kubernetes With Rook Ceph Storage – Step by Step (2019-08-08)
- Rook: Automating Ceph for Kubernetes (2018-08-02)
- Kubernetes 1.14: Local Persistent Volumes GA (2019-04-04)
- Dynamic Provisioning and Storage Classes in Kubernetes (2017-03-29)
Official Kubernetes documentation.
Official Ceph documentation.
Even though we’re doing a single node deployment, the ceph-deploy tool expects to be able to ssh into the local host as root, without password prompts. So before starting, make sure to install ssh keys for root and edit /etc/ssh/sshd_config to set PermitRootLogin to yes. Also ensure that SELinux is set to Permissive or less.
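A minimal way to set this up could look like the following; it assumes root does not already have an ssh key, and setenforce 0 only keeps SELinux in Permissive mode until the next reboot:
# ssh-keygen -t ed25519 -N "" -f /root/.ssh/id_ed25519
# ssh-copy-id root@`hostname -f`
# setenforce 0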
First, we need the ceph-deploy tool installed:
$ pip install ceph-deploy
Everything that follows should be run as root.
ceph-deploy will create some config files in the local directory, so it is best to create a directory to hold them and run it from there.
# mkdir pv-cluster
# cd pv-cluster
The first step simply creates the basic config file for ceph-deploy. Before running it, make sure that the hostname for the local machine is resolvable, both with the domain name and unqualified. If it is not, add entries to /etc/hosts to make it resolve.
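If an entry is needed, it could look something like this (the address and names below are placeholders):
192.168.1.10 ceph1.example.com ceph1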
# export CEPH_HOST=`hostname -f`
# ceph-deploy new $CEPH_HOST
Since this will be a single node deployment, there are two critical additions that must be made to the ceph.conf that was just created in the current directory:
# echo "osd crush chooseleaf type = 0" >> ceph.conf # echo "osd pool default size = 1" >> ceph.conf
Without these two settings, the storage will never achieve a healthy status.
Now tell ceph-deploy to actually install the main ceph software. By default it will try to activate YUM repos hosted on ceph.com, but Fedora has everything needed, so the '--no-adjust-repos' option tells it not to add custom repos.
# ceph-deploy install --no-adjust-repos $CEPH_HOST
With the software installed, the monitor service can be created and started.
# ceph-deploy mon create-initial
Executing ceph-deploy admin will push a Ceph configuration file and the ceph.client.admin.keyring to the /etc/ceph directory of the nodes, so we can use the ceph CLI without having to provide the ceph.client.admin.keyring each time we execute a command.
# ceph-deploy admin $CEPH_HOST
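With the keyring in place, the ceph CLI should now work without extra arguments; a quick check is to ask for the cluster status:
# ceph -s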
Install the manager daemon. It is responsible for monitoring the state of the cluster and also manages modules/plugins.
# ceph-deploy mgr create $CEPH_HOST
Create the Object Storage Device (OSD).
# ceph-deploy osd create --data /dev/common/share $CEPH_HOST
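Here --data points at an LVM logical volume (volume group common, logical volume share). If no such volume exists yet, it could be created roughly like this; the backing device /dev/sdb, the names, and the size are placeholders:
# pvcreate /dev/sdb
# vgcreate common /dev/sdb
# lvcreate -n share -L 100G common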
Next, create storage pools. NOTE: The pool names and placement group counts are just an example!
# ceph osd pool create k8s-pool-1 128 128 replicated
# ceph osd pool create kube 100 100
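The pools can be listed afterwards to confirm they were created:
# ceph osd lspools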
Start the dashboard. The package for the dashboard needs to be installed before it can be enabled.
# ceph-deploy pkg --install ceph-mgr-dashboard $CEPH_HOST
# ceph mgr module enable dashboard
# ceph config set mgr mgr/dashboard/ssl false
Create the admin user. Replace secret with your own password.
# ceph dashboard ac-user-create admin secret administrator
By default, the dashboard is available on the host running the manager on port 8080. Use the password specified when creating the user.
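The exact URL the dashboard is listening on can be read from the manager:
# ceph mgr services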
The configuration is updated by editing the ceph.conf file and pushing the updated file to the nodes. All commands should be run as root.
Start with some preparation.
# export CEPH_HOST=`hostname -f`
# cd pv-cluster
Update the configuration file using your favorite editor.
# emacs ceph.conf
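As a hypothetical example of a change, the following addition would allow pools to be deleted, which can be handy on a throwaway test cluster:
[mon]
mon allow pool delete = true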
Push the updated configuration to the nodes.
# ceph-deploy --overwrite-conf config push $CEPH_HOST
Restart the monitor on the nodes.
# ssh $CEPH_HOST systemctl restart ceph-mon.target
Commands for removing an OSD. Run the commands as root on the host with the volume.
# ceph-volume lvm zap /dev/common/share
# ceph-volume lvm list
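Zapping only wipes the volume. To also remove the OSD from the cluster, a sequence along these lines can be used, assuming the OSD to remove has id 0:
# ceph osd out 0
# systemctl stop ceph-osd@0
# ceph osd purge 0 --yes-i-really-mean-it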
Commands for removing Ceph. Run the commands as root.
# export CEPH_HOST=`hostname -f`
# ceph-deploy purge $CEPH_HOST
# ceph-deploy purgedata $CEPH_HOST
# ceph-deploy forgetkeys
# rm ceph.*
# rm ceph-deploy-ceph.log
Removing the Ceph deployment command. Run the pip command as a regular user.
$ pip uninstall ceph-deploy
Followed the guide An Innovator’s Guide to Kubernetes Storage Using Ceph, but stopped at the end of the Ceph-RBD and Kubernetes section. Did not get any PV.
The following adaptations were done:
- Cloned the git repo at https://github.com/ajaynemade/K8s-Ceph
- Updated Ceph-RBD-Provisioner.yaml to use apps/v1 in the deployment
- Added a selector to the deployment (see the sketch after this list)
- Reused the already created pool
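With apps/v1, a Deployment must declare a spec.selector whose matchLabels match the pod template labels. A minimal sketch of how the relevant part of Ceph-RBD-Provisioner.yaml could look after the change; the name, labels, and image below are assumptions for illustration, not copied from the repo:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  # apps/v1 requires a selector that matches the template labels
  selector:
    matchLabels:
      app: rbd-provisioner
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        # image name is an assumption (the commonly used external-storage provisioner)
        image: quay.io/external_storage/rbd-provisioner:latest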
Things left to try:
- Check why only one pool can be created
- Redo with a new pool
- Follow another instruction?