
Philip Williams
on 22 November 2021

Dell EMC PowerEdge and Canonical Charmed Ceph, a proven solution


Here at Canonical, we have many industry partnerships where we work hand-in-hand to produce the best possible outcomes for the open source community. From getting early access to next-generation hardware to ensure Ubuntu is fully compatible when it’s released, to creating solution-orientated reference architectures for products built on top of Ubuntu such as Charmed Ceph, Canonical is committed to engineering the best possible computing experience.

Recently, our product management and hardware alliances teams came together with Dell Technologies to collaboratively define, test, and validate a Dell EMC PowerEdge-based Charmed Ceph reference architecture.

Reference architecture

The goal of this exercise was to produce a guide to building a capacity-orientated Ceph cluster that could be used for block (RBD), file (CephFS), or object (Swift or S3) workloads, and to demonstrate the performance that can be achieved with similar hardware.
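
To make the “object” part of that concrete, the sketch below shows how an application might exercise such a cluster through its S3-compatible interface (served by the Ceph RADOS Gateway), using Python and boto3. The endpoint URL, credentials, and bucket name are placeholders rather than values from the reference architecture.

# Minimal sketch: talking to a Ceph cluster's S3-compatible endpoint (RADOS Gateway)
# with boto3. The endpoint URL and credentials below are placeholders -- substitute
# the values for your own gateway and user.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.internal",  # assumed RGW address
    aws_access_key_id="REPLACE_WITH_ACCESS_KEY",
    aws_secret_access_key="REPLACE_WITH_SECRET_KEY",
)

# Create a bucket and store a small object, exactly as you would against AWS S3.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello from charmed ceph")

# Read it back to confirm the round trip.
obj = s3.get_object(Bucket="demo-bucket", Key="hello.txt")
print(obj["Body"].read())

Because the gateway speaks the S3 protocol, existing S3 tooling and SDKs work unchanged; only the endpoint and credentials differ from a public-cloud deployment.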

We took relatively standard components (four Dell EMC R740xd2 servers with Intel Xeon processors and NICs, a few SSDs, and lots of high-capacity NL-SAS disks) and connected them all together with 25GbE networking.

The R740xd2 provides an ideal building block for Ceph clusters due to its highly configurable nature, which allows users to make performance, capacity, and price adjustments as needed. For example, to create a higher performance cluster, the CPUs could be swapped for another model that has more cores and cache, and the disks could be changed to NVMe and/or SSD if required.

Learn more

During this exercise, we tested the performance of the cluster with a variety of workloads, such as small-block and large-block I/O, with and without bcache.
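
For readers less familiar with the terminology, “small block” and “large block” simply refer to the size of each individual I/O: small blocks stress latency and IOPS, while large blocks stress throughput. The sketch below, written against the librados Python bindings, only illustrates the two patterns; the pool name and configuration path are assumptions, and it does not reproduce the benchmark methodology used for the whitepaper.

# Minimal illustration of small-block vs large-block writes using the librados
# Python bindings (python3-rados). The pool name 'bench' and the ceph.conf path
# are assumptions -- adjust them for your own cluster.
import os
import time
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("bench")  # assumed pre-created pool

def timed_writes(block_size, count):
    """Write `count` objects of `block_size` bytes and return elapsed seconds."""
    payload = os.urandom(block_size)
    start = time.perf_counter()
    for i in range(count):
        ioctx.write_full(f"obj-{block_size}-{i}", payload)
    return time.perf_counter() - start

small = timed_writes(4 * 1024, 256)        # 4 KiB writes: latency/IOPS bound
large = timed_writes(4 * 1024 * 1024, 64)  # 4 MiB writes: throughput bound
print(f"4 KiB x 256: {small:.2f}s, 4 MiB x 64: {large:.2f}s")

ioctx.close()
cluster.shutdown()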

We also demonstrated the scalability of Ceph by adding an extra storage node and re-running the performance tests to show the improvement in cluster performance. We were able to achieve over 75,000 random read IOPS and over 6 GB/s of sequential read from a four-node, capacity-orientated cluster, as well as demonstrating how our unique OSD deployment approach using bcache can provide up to a 2.5x improvement in performance for small-block workloads.

All of the test results and detailed hardware architecture information can be found in the whitepaper on Dell Technologies InfoHub. We also discussed our findings in a webinar, which is available to watch on demand.
