Lab Notes

Building in plain sight: behind the scenes in our AI team

A look inside one week of building AI for homecare — from dignity standards to phone call detection.

We talk a lot at Birdie about building AI in plain sight. It's the principle that runs through everything we do with AI: if a client can't understand how AI was used in their care, we shouldn't be using it.

But "in plain sight" isn't just about the product. It's about how we build it, too.

So here's what our Kites squad (the team behind SmartPlans) actually shipped last week.

What’s in the works?

First, a quick introduction for anyone new here.

SmartPlans is Birdie's AI-powered care assessment tool, a product we’re currently building and looking to launch very soon.

Here’s how it works: a care professional records the client conversation on their phone during a first assessment visit. SmartPlans transcribes it and suggests answers directly into the Birdie assessment fields - cited back to what was actually said, ready for the assessor to review and confirm before anything is saved.

To see it in action, watch this short video:

Right now, SmartPlans is in alpha. That means a small group of care agencies are using it in their real workflows, feeding back on what works and what doesn't, while we iterate week by week. It's not a finished product yet; it's being built alongside the people who'll use it (for more on how we co-develop our products at Birdie, have a read of this).

Teaching the AI to speak with dignity

Care plans are clinical documents, but they're also deeply personal ones. They describe how someone likes their tea, what name they prefer, what frightens them at night. Language matters.

This week, we introduced CQC dignity language standards directly into our AI layer. That means the AI now enforces rules about how it talks about people - no ageist phrasing, no clinical shorthand that strips away personhood, no undignified language. Each assessment type has its own list of prohibited terms, because dignity isn't one-size-fits-all.
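For the technically curious, a check like this can be sketched very simply. The term lists and function name below are illustrative examples, not Birdie's actual standards:

```python
# Illustrative sketch of a per-assessment-type prohibited-terms check.
# These example terms are made up for illustration, not Birdie's real lists.
PROHIBITED_TERMS = {
    "mobility": ["wheelchair-bound", "bedridden"],
    "nutrition": ["feeder"],
}

def flag_undignified_language(assessment_type: str, text: str) -> list[str]:
    """Return any prohibited terms found in the AI's draft text."""
    terms = PROHIBITED_TERMS.get(assessment_type, [])
    lowered = text.lower()
    return [term for term in terms if term in lowered]
```

In practice a check like this would sit between the model and the form, so undignified phrasing never reaches the assessor's screen.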

We also changed how the AI writes about clients' lives. Fields like occupations, hobbies, routines and life history now come through in a warm, narrative style - written in the client's own words, not in clinical bullet points. Because "enjoys gardening and watching Countdown with a cup of tea" is a better foundation for a care relationship than "hobbies: gardening, TV."

Getting the details right

A few of the changes this week were about precision - the kind of thing that doesn't make a headline but makes the difference between a care plan you can trust and one you have to second-guess.

AI suggestions for ethnicity, gender, pronouns and religion now correctly map to the exact dropdown values in our system. Previously, the AI might suggest a value that was technically correct but didn't match the form field, meaning the assessor couldn't apply it without manually finding the right option. Fixed.

One voice doesn't fit all

Different agencies have different voices. Some write care plans in first person ("I like to have my curtains open in the morning"), others in third person ("Mrs Davies prefers her curtains open in the morning"). This week, we shipped per-agency voice configuration, so SmartPlans matches the style each agency already uses. It's a small thing that makes the output feel like theirs, not ours, and supports a person-centred approach to care plans.
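Under the hood, a per-agency setting like this can be as simple as a lookup that steers the instruction given to the model. The config keys and wording below are assumptions for illustration:

```python
# Illustrative sketch: a per-agency voice setting that shapes the
# style instruction passed to the model. Keys and wording are assumed.
AGENCY_VOICE = {
    "agency-a": "first_person",
    "agency-b": "third_person",
}

def voice_instruction(agency_id: str) -> str:
    """Build the writing-style instruction for this agency's care plans."""
    voice = AGENCY_VOICE.get(agency_id, "third_person")  # sensible default
    if voice == "first_person":
        return "Write in the client's first person, e.g. 'I like to...'."
    return "Write in the third person, e.g. 'Mrs Davies prefers...'."
```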

Under the bonnet

Not everything we ship is visible to the person using SmartPlans, but it keeps the product working.

We fixed two production issues this week. One was caused by assessments whose religion field sent too many options to the AI model. The other hit when three of our largest assessment templates exceeded the AI's processing limits; we split each into two smaller templates that produce identical output. The assessor never sees the difference. That's the point.
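The template split can be sketched as halving the fields so each half fits the limit, then merging the per-half suggestions back into one result. The function names and data shapes here are illustrative, not our actual pipeline:

```python
# Illustrative sketch: split an oversized template into two halves so
# each stays within the model's limit, then recombine the suggestions
# so the assessor sees one seamless set of answers.
def split_template(fields: list[str], limit: int) -> list[list[str]]:
    """Split a template's fields into at most two chunks within `limit`."""
    if len(fields) <= limit:
        return [fields]  # small enough: no split needed
    mid = (len(fields) + 1) // 2
    return [fields[:mid], fields[mid:]]

def merge_suggestions(parts: list[dict]) -> dict:
    """Recombine per-chunk suggestion dicts into one result."""
    merged: dict = {}
    for part in parts:
        merged.update(part)
    return merged
```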

We also sorted British English spellings across all transcripts. Our speech-to-text model was returning American spellings despite being configured for British English. "Colour" not "color." "Recognise" not "recognize." A small fix, but one any Brit will appreciate.
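A post-processing pass like this can be sketched as a whole-word substitution over the transcript. The word list below is a tiny illustrative sample, not the full mapping:

```python
import re

# Illustrative sketch: restore British spellings when the speech-to-text
# model returns American ones. This word list is a tiny example.
US_TO_UK = {
    "color": "colour",
    "recognize": "recognise",
}

def britishise(text: str) -> str:
    """Swap known American spellings for British ones, whole words only."""
    for us, uk in US_TO_UK.items():
        text = re.sub(rf"\b{us}\b", uk, text)
    return text
```

A real mapping would also need to handle capitalisation and suffixes ("colors", "recognized"), which is where most of the actual work lives.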

Why we share this

It’s a week of careful, unglamorous work: dignity standards, spelling fixes, phone call handling, billing plumbing. But this is what building AI for care actually looks like. Not a single breakthrough moment - a hundred small decisions, each one made with the person on the other end of the care plan in mind.

We'll keep sharing these. Building in plain sight means you get to see the scaffolding, not just the finished building.

— The Kites, Birdie's SmartPlans squad

Published date:

April 2, 2026

Author:

Johanna Barlow

