At Birdie, we love a good survey - feedback is crucial to what we do, so we tend to ask a lot of questions. And recently, we asked a brand new one - and the answer might surprise you.
We asked newly onboarded partners which AI features they'd be interested in us developing, and we gave them seven options. We expected cautious interest, and maybe some scepticism about trusting technology with something as important as client care.
Instead, every single one of the 50+ respondents chose all seven.
The signal is clear: care providers aren't afraid of AI. They're hungry for it.
The reason is straightforward. Care professionals are drowning in admin that keeps them away from clients. AI could reclaim hours each week. That's not a nice-to-have - it's a lifeline when you're struggling with staffing pressures, regulatory demands, and an overwhelming volume of work.
But hunger and adoption aren't the same thing. There's a huge difference between wanting AI and knowing which AI to trust.
The trust gap
We're hearing this in so many conversations: care providers are stuck on the question of which AI to trust.
For starters, 'normal' LLMs like ChatGPT or Gemini aren't built to meet care's strict regulatory constraints. Care plans don't belong in these systems, and even some AI-driven grammar tools that are fine for everyday use could risk client data being shared with third parties. It's a potential minefield.
‘But there are some care-centric AI tools on the market now’, I hear you cry. That's true, but there's another problem: most AI in care operates in the shadows, behind the scenes and hidden from clients and families. It works as a black box, delivering answers without explanation, so care professionals don't trust it - they override it constantly. Worse, it sometimes makes suggestions and decisions in areas where AI should never venture, undermining professional judgment.
So there's a big problem to solve here - and solving it is exactly what we're hoping to do at Birdie.
AI that earns trust through transparency
We believe AI in care should work the way care professionals work: with transparency, collaboration, and accountability to the people it serves.
We call this ‘AI in plain sight’, and it means that every AI feature we build follows three core principles:
Visible to the whole circle of care. Not just visible to managers in the background - visible to carers and clients too. If a client can't understand the role of AI in their own care, we shouldn't be using it.
Explainable. AI should surface its reasoning. Care professionals need to understand why it's flagging something or suggesting an action. That transparency builds the trust that actually matters.
Controllable. Decisions stay in human hands, where they always should be. AI is a partner in the process, never a replacement for professional judgment.
This isn't just design philosophy. It's rooted in how care actually works - built on human judgment, accountability, and genuine relationships. It also comes directly from our values as a company: we listen hard to care professionals' real concerns, and we talk straight about how our systems work.
What this looks like in practice
Take SmartPlans, the AI-powered assessment tool we'll be launching soon. Care managers can show the transcription of a client conversation to the client and their family, and everyone involved can see the clinical reasoning behind suggested assessment content. Crucially, care managers maintain complete control - deciding what stays, what changes, and what needs further investigation. The AI becomes a visible partner in the assessment process, not a decision-maker operating in the shadows.
The same principle applies to every AI feature we develop. Care professionals understand what the system is doing and why. Organisations can explain decisions to regulators and families. The AI works with care teams, not instead of them.
Working smarter together
The care sector has always known something that technology companies are only now learning: the best outcomes come from combining human skill with smart tools. Care professionals don't need AI to replace their judgment. They need AI that respects it.
That's the difference. And it's why transparency isn't just good ethics - it's good practice.
Dr Jo Barlow is our AI Product Lead. She leads the development of AI features designed specifically for homecare, with a focus on transparency, explainability, and keeping care professionals in control of decisions that affect their clients.
You can watch her explaining our approach to AI - accompanied by a glamorous dog - right here:
To stay up-to-date on the latest developments in Birdie products - including our upcoming AI launches - head to the Smarter Care Lab.
Published date: February 19, 2026
Author: Jo Barlow