Erin Conley
on 10 July 2025

Canonical’s Platform Engineering team has been hard at work crafting documentation in Rockcraft and Charmcraft around native support for web app frameworks like Flask and Django. It’s all part of Canonical’s aim to write high-quality documentation and continuously improve it over time through design and development processes. One way we improve our documentation is by engaging with our team members and the external community. Their perspectives and feedback provide valuable insight into our product design, clarify confusing explanations, and enhance the user experience (UX) of the tooling.
We’ve focused on making this documentation user-friendly – but how do we ensure that our documentation truly benefits our readers?
Since last November, we’ve been testing tutorials for the various frameworks we support, conducting a total of 24 UX sessions (so far!). The participants spent their valuable time and energy working through our tutorials, allowing us to observe their attempts and collect their feedback on the instructions and explanations.
How we chose participants
We created the web app framework support as an approachable introduction to Canonical products through a familiar entry point for most users: web app development. Our goal was to attract a wide variety of users, from seasoned engineers to newcomers. To do so, we collaborated with our internal teams, like Web, who use Canonical products every day, and reached out to external developers through online communities and conferences. To make sure our documentation met real-world needs, we actively sought feedback from those who were unfamiliar with Canonical. We even tested the experience with university students, to confirm it would be accessible across all skill levels.
The sessions
After recruiting each participant, we began the most important phase: the sessions themselves. We carefully crafted these sessions to provide a consistent, comfortable experience for the participant, encouraging their honest feedback about anything – and everything! – in the tutorial.
A typical session begins with a few quick questions to understand each participant’s background, so we can contextualize their experiences. Then, we begin the tutorial. We observe what the participant notices, how they interpret the instructions, and what obstacles they run into. After they complete the tutorial, we ask a set of post-session questions to collect their overall feedback and explore whether the tooling meets their expectations of the upstream framework.
What we learned about documentation UX
I’ve felt the full spectrum of human emotions over the course of the 24 sessions. First, there’s a great deal of helplessness that comes from writing and publishing documentation – as soon as the documentation is out in the world, I’m powerless to help my readers! I found it surprisingly difficult to watch users run into problems that I couldn’t help them solve. Thankfully, the engineers were there to provide some aid, although even that wasn’t enough at some points. The sessions have been a learning opportunity for me to accept the helplessness that comes with the author role.
Along with helplessness, there were also plenty of moments where I felt panic. There’s an element of risk associated with documentation: sometimes, I would argue for documentation changes, thinking that they would provide better UX or mitigate confusion, only for those changes to blow up in my face in real time. I’ve learned to keep a straight face, and I accept any criticism or feedback directed at the changes I pushed for. New ideas (at least in documentation) are definitely worth trying, but they only become quality ideas once proven through UX.
Most of the time, the sessions were silent, and I struggled to keep my attention on the participants and their actions. There are many points in the tutorials where the user has to wait – for software to download, for rocks and charms to pack, for their apps to deploy, and so on. It’s very tempting to look away in those moments and focus on other activities, but as I learned, important observations and details can emerge at any time and stage. Paying attention, even in the most innocuous moments, is a vital part of understanding the participant’s experience and their feedback.
The participants provided insightful feedback about both the tooling and the documentation. Here are some of the most common themes we noticed:
- When testing with university students, we found that these participants became stuck when they were asked to create a new text file from the terminal. This session marked their very first time using a terminal text editor, and we hadn’t accounted for this momentous occasion in our instructions. (I felt quite a bit of panic in these moments, too!)
- Participants working on ARM64 machines commented on the incomplete experience, as later parts of the tutorial were only compatible with AMD64 machines.
- We found some common places where participants would miss an instruction, causing them to experience issues down the line. The participants noted that the instructions felt “buried” in the text and wished the tutorial better highlighted their significance and impact.
- External participants asked for more explanations of Canonical products and how the tooling works – they were curious and interested in digging into the “why” behind the tutorial.
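One lesson from the text-editor pitfall above is that a tutorial can sidestep the editor entirely. As a minimal sketch (the file name `app.py` and its contents are illustrative, not the actual tutorial’s), a heredoc lets a first-time terminal user create a file without learning `nano` or `vim` mid-tutorial:

```shell
# Hypothetical example: write a small Flask app file directly from the
# shell prompt using a heredoc, instead of opening a terminal text editor.
cat > app.py << 'EOF'
import flask

app = flask.Flask(__name__)

@app.route("/")
def index():
    return "Hello, world!\n"
EOF
```

Whether to teach the editor or route around it is a design trade-off of its own – the heredoc gets users unblocked, at the cost of skipping a skill they may need later in the tutorial.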
Prioritizing and acting on feedback
For each session, we compiled our observations into an individual document. Then we collected all the direct feedback and suggestions into a main document; for the Flask tutorial, the main feedback document spans 16 pages. From there, the project lead, UX designer, technical author (myself!), and the engineers discuss the feedback to determine how we will incorporate it. While prioritizing feedback, we account for the following considerations:
- Blocking issues: Prioritize feedback pointing out major issues.
- Isolated incidents: Identify feedback where more research is needed.
- Design trade-offs: Respond to feedback based on specific design choices made.
We incorporate feedback in small batches over time, prioritizing major blockers and typos. This way, we can resolve issues more quickly, meaning our readers reap the benefits right away!
We’ve found that the changes proposed by earlier UX sessions have improved the quality and outcome of later sessions. Common pitfalls in the first couple of sessions are no longer an issue. Questions about how the tooling works come up less. And – some of you will be glad to hear – users with ARM64 machines can go through the entire tutorial.
Get involved: help us improve
There are always improvements to make in our documentation, and these UX sessions are a great way for us to include our community members and make our documentation more accessible. If you’re interested in getting involved, please reach out to us on our public Matrix channel!