#ExpertiseEditions: Valuable burndown charts and streamlining productivity at Birdie
September 4, 2020
We're immensely proud of our flock 🐦
Every employee is a master of their skills, and we thought it was high time we introduced you to the team - so you can meet the brains behind the business and see what really goes into building Birdie.
We're calling these articles our #ExpertiseEditions, and they'll give you a behind-the-scenes look at the day-to-day lives of our team.
So without further ado, meet Leo. Leo's a full-stack software developer here at Birdie, and we invited him to tell us a bit about burndown charts, and how his team uses them to streamline success. 📉
Not sure what a burndown chart is? You're in the right place. 👇
Now you know who's behind the words in this post, you might be wondering...
What is a burndown chart?
We'll let Leo explain that one...
Many Scrum teams work with story points to estimate the complexity of work items and to gauge their own velocity. Here at Birdie, we're trying to radically improve the lives of older adults.
One part of that involves the work of a few "squads" (we base ourselves on the Spotify model), who focus their time on various aspects of delivering person-centred care.
I work in one of these squads - the Delivery squad. Our main focus is to provide a stable mobile application for carers who are visiting elderly adults: to help them to deliver and record their care as smoothly as possible.
Whenever we plan a sprint, we estimate each ticket so that we can try to understand how well we are performing and what our velocity is. One of the key instruments for us to measure this is a burndown chart.
For a while now, we've used a chart built by a previous member of our squad. It took into account our availability each day as developers and calculated how much we could expect to have 'done' each day. However, although it succeeded in giving us a clear idea of the quantity of story points that were done versus those that weren't, we wanted something that could provide us with more actionable insights. These charts are also very useful for people outside our immediate squad who want some insight into how we're doing.
You see, in an ideal world, knowing what's done and what isn't done might be enough for a team to just plough on to the next work item and keep churning out code like the proverbial hamster on the wheel.
An ideal burndown chart
However, we do in fact exist in a world full of unexpected delays and surprise hindrances. 'Variety is the spice of life', as someone once said, so I won't lament that fact - but perhaps we can do better to help out our friend the hamster whose wheel has come undone?
Perhaps we could try to explain where these delays might be coming from?
For example, in the picture above we can see a situation - a very real one, I might add - where there is a huge discrepancy between the expected and actual number of points that are considered 'done'. I honestly can't remember what happened that week, but needless to say there was clearly some sort of spanner in the works.
The problem with this chart, though, is that it is only helpful insofar as it informs you that you do have a delay. Somewhere.
Understanding why delays happen
What would be really useful is having an immediate understanding of where the delay might be.
In most workflows, the state of a work item isn't described by a simple binary done/not-done status; there are usually several states in between. So, to build a valuable burndown chart, it's a good idea to represent those intermediate states in the chart itself.
That's exactly what we tried to do and happily we managed to have some success!
Firstly, to provide you with some context: our Jira workflow consisted of 5 different states:
If there was indeed a delay, like the one above, then it could be in any of the intermediate states: there could be many tickets in code review at the same time, for example. Being able to visualise this immediately tells us, before our morning stand-up, that we should start the day with a round of code review in order to move work items along the pipeline efficiently.
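As a sketch of the idea, here's how you might turn a list of tickets into per-state story-point totals - one total per workflow state, each of which can then become its own series in the chart. The ticket shape and state names below are illustrative, not our exact Jira configuration:

```javascript
// Sum story points per workflow state, so each intermediate state
// can become its own series in the burndown chart.
// Ticket shape and state names are illustrative assumptions.
function pointsByState(tickets) {
  return tickets.reduce((totals, ticket) => {
    totals[ticket.state] = (totals[ticket.state] || 0) + ticket.points;
    return totals;
  }, {});
}

// A made-up sprint snapshot:
const sprint = [
  { key: "BIRD-1", points: 3, state: "In progress" },
  { key: "BIRD-2", points: 5, state: "Code review" },
  { key: "BIRD-3", points: 2, state: "QA" },
  { key: "BIRD-4", points: 3, state: "Done" },
];

console.log(pointsByState(sprint));
// e.g. { "In progress": 3, "Code review": 5, "QA": 2, "Done": 3 }
```

With totals like these, a glance at the chart tells you which state the points are piling up in.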
The OG chart
So, how did it work to begin with?
Our original chart worked by taking several parameters to calculate how much could be done each day. These were:
An estimate of each developer's availability each day
The number of story points that we commit to
These were then thrown into a Google Sheet where we constructed the chart from the relevant columns.
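A minimal sketch of that "expected burn" calculation, assuming the burn is spread in proportion to each day's availability (the figures below are made up; the real sheet used our actual availability estimates):

```javascript
// Expected burn: given the committed points and each day's estimated
// team availability (in developer-days), spread the burn-down
// proportionally across the sprint. All figures are illustrative.
function expectedRemaining(committedPoints, dailyAvailability) {
  const totalAvailability = dailyAvailability.reduce((a, b) => a + b, 0);
  let remaining = committedPoints;
  return dailyAvailability.map((day) => {
    remaining -= committedPoints * (day / totalAvailability);
    return Math.round(remaining * 10) / 10; // round for display
  });
}

// A ten-day sprint, 30 committed points, with dips for meetings and leave.
console.log(expectedRemaining(30, [4, 4, 3, 4, 2, 4, 4, 3, 4, 4]));
```

The resulting series is the sloping "expected" line of the chart: it starts at the committed total and reaches (roughly) zero on the last day.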
Once this was set up, each day someone would then figure out how many story points were moved to done over the last day in order to show our progress. This means manually combing through Jira to tot up all the points and make sure everything's where it should be. It sounds pretty easy but it becomes a logistical nightmare when priority tickets end up in the sprint and all of a sudden you have to make it seem like they were never there.
I never wanted to be an accountant anyway... but this manual process was a big pain point. More on this later 🙂
Adding more data points
In order to understand the delays in our workflow, we did what any database engineer would do and just added some more columns. Our trusty sheet could now represent work items that were in QA, code review, and so on:
Getting a clearer picture
We now have a few new features that differentiate this chart from its predecessor. Here's a quick summary of each line:
A "stretch goal": this line represents the velocity we would need in order to move every single ticket across to done. We felt we needed this because we built the chart at a time when we were quite stretched and had to commit to lots of work items.
Expected burn: seen in yellow, this line accounts for our availability and calculates how much we "could do" based on it.
Actual remaining: this red line represents what we currently have left to do in the sprint (any points that are not "done").
QA delay: this represents where we could be if all the tickets in QA were moved to done.
Code review delay: this represents where we could be if all the tickets in code review and QA were moved to done.
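Under the hood, the two "delay" lines are just the actual-remaining line with the relevant states subtracted. A sketch of that arithmetic (state names and figures are illustrative):

```javascript
// Given per-state point totals for one day, derive the chart lines.
// "QA delay" = where we'd be if everything in QA were done;
// "Code review delay" = if QA *and* code review were done too.
// State names here are assumptions, not an exact Jira workflow.
function chartLines(totalsByState, committedPoints) {
  const done = totalsByState["Done"] || 0;
  const qa = totalsByState["QA"] || 0;
  const codeReview = totalsByState["Code review"] || 0;
  const actualRemaining = committedPoints - done;
  return {
    actualRemaining,
    qaDelay: actualRemaining - qa,
    codeReviewDelay: actualRemaining - qa - codeReview,
  };
}

const lines = chartLines(
  { Done: 12, QA: 4, "Code review": 6, "In progress": 8 },
  30
);
console.log(lines);
// { actualRemaining: 18, qaDelay: 14, codeReviewDelay: 8 }
```

A big gap between `actualRemaining` and `codeReviewDelay`, as in this example, is the chart's way of saying "go and review some code".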
This was a fantastic improvement for us in terms of clarity. You could see, for example, that on 10/3/2020, although we were fractionally behind where we ought to be, a great deal of this was due to a huge number of points in code review! We immediately had an actionable insight into the status of our sprint.
There was a large downside however: it was a real pain to have to calculate all of this data each morning in Jira.
The final step we made was to add an element of automation to these calculations. As I mentioned above, it's a rather mundane task having to count all these variables up yourself, so if we can get the machines to do it, we should. It's bound to be more accurate too.
The first step here was to make use of Jira Cloud for Sheets. It's a very simple tool which pulls data from Jira into a Google Sheet - perfect for our use case! We set up a filter corresponding to all the issues in our current sprint and then pulled out all the data we needed.
The magic of integrations
From here, it's just a case of setting up a separate sheet to calculate your various states so that you can easily copy and paste them into the master burndown chart sheet.
The second big improvement was to automate the calculation of each developer's availability for any given sprint. Up until this point, we basically took a brief look at our calendars at the start of each sprint and gave a rough estimate of the fraction of time we could allocate to work items.
This was OK, I suppose. But we could do better.
We wrote a Google Apps Script which integrated with our sheet to look up each developer's calendar and calculate how much time they had that wasn't spent in meetings. Not only is this a reasonable proxy for their availability, but it's also much more accurate (thanks, computers).
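The core of that script is simple: subtract meeting time from working hours. A self-contained sketch of the logic - in Apps Script the meetings would come from `CalendarApp.getDefaultCalendar().getEvents(dayStart, dayEnd)`, and the 8-hour working day is an assumption, not necessarily what our script used:

```javascript
// Estimate a developer's availability for one day as the fraction of
// an 8-hour working day (an assumed figure) not spent in meetings.
// Meetings are plain { start, end } Date pairs, assumed non-overlapping;
// in Google Apps Script you would fetch them from CalendarApp instead.
const WORKING_HOURS = 8;

function dailyAvailability(meetings) {
  const meetingHours = meetings.reduce(
    (total, m) => total + (m.end - m.start) / (1000 * 60 * 60),
    0
  );
  return Math.max(0, (WORKING_HOURS - meetingHours) / WORKING_HOURS);
}

// A day with a one-hour planning block and a two-hour workshop.
const meetings = [
  { start: new Date("2020-03-10T09:00:00Z"), end: new Date("2020-03-10T10:00:00Z") },
  { start: new Date("2020-03-10T14:00:00Z"), end: new Date("2020-03-10T16:00:00Z") },
];

console.log(dailyAvailability(meetings)); // 0.625, i.e. 5 free hours out of 8
```

Summing these fractions across the squad gives the per-day availability numbers that feed the expected-burn line.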
Putting this all together, we ended up with something that looks like this:
The green cells on the right hand side represent values that need updating each morning - everything else is calculated for you and results in the beautiful burndown chart which you see at the bottom. For us, it has been:
Far more useful for quickly spotting and actioning bottlenecks
Very easy to set up and maintain
Much clearer for people outside our immediate team to picture our progress
These reasons - combined with our passionate (and I mean passionate) distaste for Jira's out-of-the-box solution - mean that we feel this chart has been pretty successful for us, and maybe it could be for you too. I'd love to hear your thoughts.