Ceph Quincy, a look into the future


Every year, a new Ceph version is released. In 2019, version 14 (Nautilus) was released; in 2020, version 15 (Octopus); and in 2021, version 16 (Pacific). These versions have an end-of-life date, so make sure you are up to date and run the same version throughout your Ceph clusters.

The end-of-life date for Octopus is June 2022, and for Pacific it is June 2023. I wrote a little more about Ceph version management in my blog: ‘Why upgrading and updating Ceph consistently is so important’.

At this moment we are looking at the first release candidate of the new Ceph release: Ceph Quincy (version 17). As with every release, major updates and improvements have been made. In this article I will share ten of these new updates and improvements. The final release of Quincy is slated for the end of March 2022, so I thought it would be nice to share some planned features with you. Keep in mind that Quincy is still in development, so the features discussed in this article could deviate from the final release.

Quality of service: One major improvement is in quality of service: the mClock scheduler, which provides quality of service for Ceph clients relative to background operations, is now the default.
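
To see which scheduler your OSDs are running, and to switch to one of the built-in mClock profiles, commands along these lines should work (the profile name shown is just one of the documented options; verify against the documentation for your release):

```
# Show the op queue scheduler in use (mclock_scheduler is the Quincy default)
ceph config get osd osd_op_queue

# Optionally favour client I/O over recovery and backfill with an mClock profile
ceph config set osd osd_mclock_profile high_client_ops
```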

Objectstore: Filestore has been deprecated in Quincy, considering that BlueStore has been the default objectstore for quite some time. So, if you are still using Filestore, this year or next would be a good time to migrate to BlueStore.
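
Checking whether any OSDs still run Filestore is straightforward; the command below simply counts OSDs per objectstore backend:

```
# Count OSDs per objectstore backend; any 'filestore' entries are migration candidates
ceph osd count-metadata osd_objectstore
```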

New library: A new library is available, libcephsqlite. It provides an SQLite Virtual File System (VFS) on top of RADOS. The database and journals are striped over RADOS across multiple objects for virtually unlimited scaling and throughput only limited by the SQLite client. Applications using SQLite may change to the Ceph VFS with minimal changes, usually just by specifying the alternate VFS. We expect the library to be most impactful and useful for applications that were storing state in RADOS omap, especially without striping which limits scalability.
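
As a rough sketch of what this looks like from the SQLite shell (the pool name `mypool` and database name `mydb.db` are placeholders, the client needs a working ceph.conf and keyring, and the exact URI format is described in the libcephsqlite documentation):

```
$ sqlite3
sqlite> .load libcephsqlite.so
sqlite> -- open a database stored in the RADOS pool "mypool" via the ceph VFS
sqlite> .open file:///mypool:/mydb.db?vfs=ceph
sqlite> CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT);
```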

Filesystem feature: A file system can be created with a specific ID (“fscid”). This is useful in certain recovery scenarios, for example when the monitor database is lost and rebuilt and the restored file system is expected to have the same ID as before.
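
The release notes describe this as an option to `ceph fs new`; a hedged sketch could look like the following, where the file system name, pool names and ID are placeholders and additional flags (such as `--force`) may be required depending on your exact build:

```
# Hypothetical recovery scenario: recreate a file system reusing its previous ID
ceph fs new myfs cephfs_metadata cephfs_data --fscid 1
```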

MDS upgrades: MDS upgrades no longer require stopping all standby MDS daemons before upgrading the sole active MDS for a file system.
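
In practice the upgrade flow still reduces the file system to a single active MDS first; what drops away is the step of stopping every standby. A minimal sketch, with `myfs` as a placeholder name:

```
# Reduce to a single active MDS before upgrading the daemons
ceph fs set myfs max_mds 1

# Wait until only one MDS is active, then upgrade it; in Quincy the standby
# daemons no longer have to be stopped beforehand
ceph fs status myfs
```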

RGW rate limiting: RGW now supports rate limiting by user and/or by bucket. The limits cap the total number of operations and/or bytes per minute for a user or bucket, and the admin can restrict them to READ operations, WRITE operations, or both. The rate limiting configuration can also be applied to all users and all buckets through a global configuration.
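
A hedged sketch of what a per-user limit could look like (the uid and values are placeholders; check `radosgw-admin ratelimit --help` in your release for the exact flags):

```
# Limit a single user to 1024 read and 1024 write operations per minute
radosgw-admin ratelimit set --ratelimit-scope=user --uid=testuser \
    --max-read-ops=1024 --max-write-ops=1024

# Limits have to be enabled explicitly after being set
radosgw-admin ratelimit enable --ratelimit-scope=user --uid=testuser
```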

MGR: The pg_autoscaler has a new ‘scale-down’ profile which provides more performance from the start for new pools. However, the module keeps its old behavior by default, now called the ‘scale-up’ profile.
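
Whichever profile a pool ends up with, the autoscaler's current intentions can always be inspected:

```
# Show the autoscaler's PG targets and status per pool
ceph osd pool autoscale-status
```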

OSD Release warning: A health warning will now be reported if the ‘require-osd-release’ flag is not set to the appropriate release after a cluster upgrade.
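
After finishing an upgrade, setting the flag to the new release clears the warning:

```
# Acknowledge the upgrade so features tied to the new release can be enabled
ceph osd require-osd-release quincy
```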

MON/MGR: Pools can now be created with the `--bulk` flag. Any pool created with `--bulk` will use a profile of the `pg_autoscaler` that provides more performance from the start, while pools created without the `--bulk` flag keep the old behavior by default.
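
A minimal sketch, with `mypool` as a placeholder name:

```
# Create a pool that the autoscaler treats as bulk (starts with more PGs)
ceph osd pool create mypool --bulk

# The flag can also be toggled on an existing pool
ceph osd pool set mypool bulk true
```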

Telemetry: The opt-in flow is improved so that users can keep sharing the same data, even when new data collections are available. A new ‘perf’ channel that collects various performance metrics is now available to opt in to with:

- `ceph telemetry on`
- `ceph telemetry enable channel perf`
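
Before opting in, you can inspect exactly what would be reported; `ceph telemetry show` prints the report without sending anything:

```
# Print the telemetry report that would be submitted, without sending it
ceph telemetry show
```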

There you have it; ten scheduled improvements and updates in Ceph Quincy. If you want to read about all of the improvements, you can find them on the Ceph GitHub page: https://github.com/ceph. We do not advise upgrading right away though; we recommend running the second-latest release of Ceph, because you want software that is as stable as possible and has been proven in the field as widely as possible.

Do you want to know why it is important to update and upgrade consistently? Read about it in our blog: https://42on.com/the-importancy-of-upgrading-and-updating-ceph-consistently/
