We spoke with Dr Green to understand the real-world impact of AI on care workers, the ethical complexities we need to navigate, and how we can build a more human-centred approach to technology in care.
The Institute for Ethics in AI hosted a roundtable with the Care Workers’ Charity. What was the goal of that session?
The idea came from our monthly steering group meetings. The Care Workers' Charity, together with us, said it would be really good to get care workers who are actually doing this work to come together and talk about AI – to make sense of it, share their experiences and say what they need in order to feel well supported.
After we'd had a conversation about what generative AI is, the care workers arranged themselves into working groups and spoke amongst themselves about what they need. We took notes and created a text, which was sent round to the entire group, and they signed it off. That was the statement that came out of the day.
What was the most insightful thing that came from those conversations?
First, the passion that care workers expressed for their job, and their concern about how AI is going to influence the relationships they have with the people they care for. Specifically, they were worried about having to look at their phones the whole time for note-taking, and that the people they care for might think they're not paying attention.
Another point is the need for training. People want a feeling of safety – to know what they're doing and how they're supposed to be using these systems. They also want to know who's liable when something goes wrong. There was a call for clear policies and transparency from their employers.
One thing I was surprised about was how concerned people were that their jobs will be taken over by AI. The reason it surprised me was that we have such staff shortages, and caring is such a relational job. But now, seeing some of the effects of AI on staffing levels, I do understand where people are coming from.
How has the landscape of AI in social care changed since then?
On one hand, there's been rapid development in AI, and it's being rolled out in social care as well. The range of use cases is enormous, from purely administrative tasks like writing letters all the way to some of the most intimate tasks of care.
But on the other hand, there's still a gap in understanding what AI means for the regulation of high-quality care. We're seeing a lot of need for guidance, governance frameworks and clarity from the Care Quality Commission (CQC) and the Department of Health and Social Care. This hasn't changed, but it has become even more pressing. It really is urgent now.
What work is the Oxford project doing now to address this?
We've kept working on the tech pledge created by a group of tech providers, and we've also been working on creating an alliance on the responsible use of AI in social care. We want to create a space where people can exchange knowledge about good practice and talk about new developments in a protected way. We are also working with Care England on a new blueprint for an AI governance framework for care providers.
You mentioned some "frightening and problematic" cases of AI use. Could you share an example?
In an interview, a care worker told us they work with people with learning disabilities and mental health conditions. One person they support hears voices and had a voice-activated AI, which became one of the main voices they were talking to. The AI was pretending to be an alien, giving them what they were looking for.
The caregivers were excited because it had a positive effect on the person's wellbeing. But from an ethical point of view, you wouldn't allow a real person to pretend to be an alien talking to someone with a mental health condition like that. Now, I'm not a clinician, and I would want to hear from a clinician what they would think, but it’s likely not best practice. There are ethical issues here we need to pick apart. My message is that it's complicated.
How can care workers get more involved in this work?
The alliance that we are building now will have a website where people can get involved in specific projects or advocacy efforts. My role as head of engagement is to break down barriers and build bridges with people who think that institutions like Oxford aren't for them. It's only with them that we can forge a better, more positive future for AI in adult social care.
One care leader, a woman of colour, raised a concern that AI models are "built by Silicon Valley white men" and questioned their trustworthiness. How should we respond to the risk of bias in AI?
It is a huge concern. Not only have these systems been built by a particular group of people from a particular background, but when it comes to generative AI, they are also trained on the internet, which contains a lot of discrimination and bias.
Sam Rickman did a great study on various generative AI systems in adult social care, in which he was able to show that some systems do have a gender bias – they highlighted men's care needs more than women's. We are starting to get lots of evidence around racial bias as well.
It is so important to ensure that we have a grassroots-up approach, so that people's voices and experiences really matter – not only in how we decide to develop these AI systems, but also in how we roll them out and monitor them. And of course, when we co-produce or involve people, we mustn't just take from them; we must also make sure that we actually give back.
It's really important to find mechanisms to do that. There are brilliant AI systems created for public deliberation and citizens' assemblies. We need to build much more on these experiences, and also on how different world regions are going about AI. There is fantastic work happening in Kenya, for example, shedding light on how systems are currently impacting various groups. Raising awareness of that work is important.
This conversation has been lightly edited for length and clarity. To learn more about the Oxford Institute for Ethics in AI, visit their website: https://www.oxford-aiethics.ox.ac.uk/
Published date: March 13, 2026
Author: The Birdie team