For anybody who enjoyed Latin pop in the 2000s, RBD was a popular Mexican pop group from Mexico City signed to EMI Virgin. The group achieved international success from 2004 until their separation in 2009 and sold over 15 million records worldwide, making them one of the best-selling Latin music artists of all time. The group was composed of Anahí, Alfonso Herrera, Dulce María, Christopher von Uckermann, Maite Perroni, and Christian Chávez.
For everybody else who is working with open source technologies, the RADOS Block Device, or RBD, is software that provides block storage on top of the open source Ceph distributed storage system. RBD breaks block-based application data up into small chunks, which are stored as objects across the cluster.
The synopsis of rbd is:

rbd [ -c ceph.conf ] [ -m monaddr ] [ --cluster cluster-name ] [ -p | --pool pool ] [ command ... ]
Specifically, rbd is a utility for manipulating RADOS Block Device images, which are used by the Linux RBD kernel driver and the RBD storage driver for QEMU/KVM. RBD images are simple block devices that are striped over objects and stored in a RADOS object store. The size of the objects the image is striped over must be a power of two.
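To make this concrete, here is a small example using the rbd utility; the pool and image names are purely illustrative, and it assumes a pool called mypool already exists:

rbd create --size 1024 --object-size 4M mypool/myimage   # 1 GiB image, striped over 4 MiB objects
rbd ls mypool                                            # list the images in the pool
rbd info mypool/myimage                                  # shows size, object size and striping layout

Note that 4M, the default object size, is a power of two, as required.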
[Figure: architectural overview of the Ceph stack]
RBD integrates with Kubernetes as well. You can dynamically provision RBD images to back Kubernetes volumes, mapping the RBD images as block devices. Because Ceph stores block devices as objects striped across the whole cluster, reads and writes are spread over many OSDs instead of bottlenecking on a single machine, which generally gives better performance than a standalone storage server. Moreover, OKD clusters can also be provisioned with persistent storage using Ceph RBD. Persistent volumes (PVs) and persistent volume claims (PVCs) can share volumes across a single project. While the Ceph RBD-specific information contained in a PV definition could also be defined directly in a pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts. A sketch of such a PV definition follows below.
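As a rough sketch of what that PV definition looks like with the in-tree Kubernetes rbd volume plugin (the monitor address, pool, image, and secret names below are placeholders, not values from a real cluster):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-rbd-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.1.1:6789        # Ceph monitor address (placeholder)
    pool: mypool                # RADOS pool that holds the image
    image: k8s-image            # RBD image backing this volume
    user: admin                 # Ceph user the kubelet authenticates as
    secretRef:
      name: ceph-secret         # Kubernetes secret holding the Ceph keyring
    fsType: ext4
  persistentVolumeReclaimPolicy: Retain
EOF

Newer clusters typically provision this dynamically through the ceph-csi driver instead, but the idea is the same.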
RBD can also be used to create cluster snapshots and to mirror images between clusters. Another option is using it in combination with an iSCSI Gateway, which presents an iSCSI target that exports RBD images as iSCSI disks, although this last option is not always recommended.
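For instance, snapshots and mirroring are driven through the same rbd utility (again with illustrative pool and image names; snapshot-based mirroring requires a fairly recent Ceph release):

rbd snap create mypool/myimage@snap1              # take a point-in-time snapshot
rbd snap ls mypool/myimage                        # list the snapshots of an image
rbd snap rollback mypool/myimage@snap1            # roll the image back to the snapshot
rbd mirror pool enable mypool image               # enable per-image mirroring on the pool
rbd mirror image enable mypool/myimage snapshot   # mirror this image in snapshot mode

Mirroring additionally requires an rbd-mirror daemon running against each of the peered clusters.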
Furthermore, Ceph provides a kernel module for RBD as well as the librbd library (built on top of librados), which QEMU/KVM and libvirt can link against. An RBD image is essentially a virtual disk device that distributes its “blocks” across the OSDs in the Ceph cluster. It can also be used as a virtual machine’s drive store in KVM. Because it spans the OSD server pool, a guest can be moved between cluster hosts simply by shutting it down on one host and booting it on another, or even live-migrated without a shutdown, since every host can reach the same disk. Libvirt and Virt-Manager have provided this support for some time now.
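For instance, the two integration paths look roughly like this; the names are again illustrative, and /dev/rbd0 is simply what the kernel typically assigns to the first mapped image:

sudo rbd map mypool/myimage      # the kernel module exposes the image, typically as /dev/rbd0
sudo mkfs.ext4 /dev/rbd0         # from here it behaves like any other block device
sudo rbd unmap /dev/rbd0         # detach the image again

qemu-img create -f raw rbd:mypool/vmdisk 10G                          # create an image through librbd
qemu-system-x86_64 -m 1024 -drive format=raw,file=rbd:mypool/vmdisk   # boot a guest straight from RBD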
So, there you have it: a little more information about RBD. Were you listening to RBD in the 2000s, or are you using RBD in one of the use cases above? Let us know in the comment section.
Would you like to read something more in-depth about RBD? You can find it on our blog via the following link: https://42on.com/rbd-latency-with-qd1-bs4k/.