
AI in Government: How can we avoid ‘the empathy gap’?

Ian Huckvale
Principal UX Consultant at Valtech

January 17, 2024

The challenges and opportunities of AI are many and varied. Ian Huckvale, Principal UX Consultant at Valtech, examines whether it can help us deliver public services that feel more human. Will AI make things worse, or can it help us close ‘the empathy gap’?

“The whole process was just inhumane...”

“It was like talking to a robot...”

“The computer says ‘No’...”

When people have a bad experience with poorly designed or administered public services, their reaction is usually very strong. Some services leave citizens feeling that they’ve been treated unfairly and without compassion. There is still some way to go to make digital public services accommodating and transparent, so that the people who need them can access them easily and without frustration.

The irony is that, even when users regard a service as robotic and understand that they’re using digital technology, they’re almost certainly interacting with systems where humans still make the ultimate decisions. Why do some services feel inflexible and opaque, and what is it that stops them from displaying empathy?

Where is AI today?

Let’s start with a simple fact: most of us use AI-based systems every day, in many cases without realising it – when we use Google to search the web, when Netflix recommends a TV show, or when we scroll through our social media feeds. We’re all used to, and accepting of, AI in at least some circumstances.

We can all benefit from the technology if it is used well, and I’ll discuss how I think that applies to public services as much as it does to the commercial world. But it is not risk-free, and there are well-documented and widely reported cases where things have gone badly wrong. With the great power that AI offers comes the need for great responsibility from the people deploying it.

There are, of course, some barriers to the deployment of AI by public sector organisations: issues such as trust, accountability, fairness, transparency and explainability. Some of these barriers are not simple to overcome. In most cases, we still require a human ‘in the loop’, so, for anyone worried, that should mean the robots will not be entirely replacing civil servants any time soon. AI is better positioned to help civil servants make important decisions than to replace them. If used in the right way, it could even help to create services that feel more, well, human.

Real-world use of AI

In the media and marketing world, generative AI tools like ChatGPT and Bard are bringing rapid change. The tools can produce a reasonably accurate product summary or consumer blog in seconds, translate material into different languages, adapt it for different audiences and even help identify the features consumers care about most. On the more creative side, DALL·E and Midjourney can create fantastic images of strange new worlds, including stills from films that were never made. AI is also used in planning, with film studios like Warner Brothers and 20th Century Fox using it to predict audiences for their movies and inform their release strategies. The potential is clearly huge.

Turning to the public sector, the UK Government sees AI as a game-changing opportunity to revolutionise public services. It is hoped that it can help to speed up processes, improve efficiency and free civil servants to apply their skills and knowledge to more complex tasks. There are already well-established uses of AI in the public sector, such as facial recognition systems deployed by our Border Force at passport control and by the Metropolitan Police to identify people known to them in crowds. Machine learning is being applied to help regulators like the Food Standards Agency to prioritise inspections. Decision support systems are used in healthcare with classical ‘rules-based’ AI and neural networks helping diagnose cases and advising patients and clinicians of their options.

The buzz around generative AI has led several government departments to trial services that use large language models and AI-based chat interfaces to respond to questions from citizens. Most of the examples so far keep a human in the loop: the chatbot suggests responses, and a contact centre advisor decides whether to use the AI-generated answer as is, edit it before sending, or replace it with their own response. The Government Digital Service (GDS) took this further with a limited pilot of an AI chatbot for GOV.UK that responded directly to queries, with human reviewers behind the scenes grading the accuracy of its responses to see how well the system performed. Most experts agree that, right now, there is too much risk of inaccuracy to allow a chatbot to respond with no human involvement at all.
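To make that ‘human in the loop’ pattern concrete, here is a minimal sketch in Python. It assumes nothing about any real departmental system – the names, actions and example messages are purely illustrative – and simply shows how an AI-suggested reply could be gated behind an advisor’s decision to accept, edit or replace it, while recording whether AI assisted the final answer so its accuracy can be reviewed later.

```python
from dataclasses import dataclass
from enum import Enum


class AdvisorAction(Enum):
    ACCEPT = "accept"    # send the AI-suggested answer as is
    EDIT = "edit"        # send an advisor-edited version of the suggestion
    REPLACE = "replace"  # discard the suggestion and write a fresh reply


@dataclass
class Reply:
    text: str
    ai_assisted: bool    # recorded so response accuracy can be graded later


def resolve_reply(ai_suggestion: str, action: AdvisorAction,
                  advisor_text: str | None = None) -> Reply:
    """The human advisor always makes the final call on what the citizen receives."""
    if action is AdvisorAction.ACCEPT:
        return Reply(text=ai_suggestion, ai_assisted=True)
    if action is AdvisorAction.EDIT:
        return Reply(text=advisor_text or ai_suggestion, ai_assisted=True)
    return Reply(text=advisor_text or "", ai_assisted=False)


# Example: the advisor tweaks the AI draft before it goes out.
draft = "You can renew your passport online at GOV.UK."
print(resolve_reply(draft, AdvisorAction.EDIT,
                    draft + " Allow extra time if you are travelling soon."))
```

The point of the design is that the system never sends anything on its own; it only ever proposes, and the record of whether a reply was AI-assisted is what makes the behind-the-scenes accuracy grading possible.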

We’re already seeing some obvious applications for AI, some more novel uses, and the potential to use it in ways we may not even recognise yet. There are plenty of ‘unknown unknowns’ around the use of AI in the public sector.

Where can AI go wrong?

There are, of course, growing concerns about the impact AI will have on society. For example, Hollywood screenwriters and actors went on strike for six months, worried studios would use AI to replace them. In the public sector, there have been widely reported issues of AIs displaying racial bias. A computer vision system used by HM Passport Office was shown to exhibit bias against people with darker skin.

There have been issues with generative AI chatbots like ChatGPT ‘making up’ plausible facts or distorting the truth, a phenomenon known as ‘hallucination’. In America, the legal team acting for someone suing an airline over an injury sustained on one of its flights was reprimanded after being found to have used ChatGPT to research legal precedents for the case. The AI had literally made up six non-existent but believable cases that were presented to the court. When the opposing legal team tried to verify the details, they found no record of these cases ever occurring. That resulted in the collapse of the case, and an order by the judge that the principal lawyer involved explain why he should not be sanctioned for professional misconduct.

In the Netherlands, the “Toeslagenaffaire” (Child Care Benefits Affair) led to tens of thousands of families being accused of fraud and ordered to repay thousands of euros. This shocking situation went undetected for six years before the problems were officially acknowledged. Families were driven into poverty, and sadly there were some suicides, as a result of an AI system that used a profiling model proven to be biased against low-income families and ethnic minorities and sent out automated fraud accusations that were never checked. This highlights the importance of keeping a ‘human in the loop’ when using AI for case processing, something that is a legal requirement in the EU and the UK for decisions that have ‘legal effect’ on individuals under Article 22 of the GDPR. Clearly, that wasn’t applied in this case.

Closer to home, there was a scandal in the UK around the grades given to school pupils in 2020, the year GCSE and A-level exams had to be cancelled due to COVID restrictions. Ofqual, the exams regulator, devised an algorithm to convert teacher-predicted grades into actual grades. The trouble was that their algorithm was flawed. It led to many pupils in large, state-run schools receiving results one or two grades lower than expected. Many students in small classes or studying minority interest subjects received grades higher than predicted, predominantly those at private schools. 36% of pupils received lower grades than expected, meaning that around 15,000 would have been rejected by their first-choice university. Less than a week later, Ofqual was forced to withdraw the algorithm and award teacher-predicted grades. The reputational damage to Ofqual was significant, but nothing compared to the emotional distress caused to pupils, parents and teachers affected.

What do we want from public services?

Increasingly, we want rapid turn-around and resolution, especially since most public services are now accessible online. However, even when we apply online, we like to know who we’re dealing with, why decisions were made and that we have been treated fairly. These can be significant challenges when AI is applied to public services. Who is accountable if things go wrong? Can an AI explain its reasoning? Can we challenge decisions that we consider erroneous or unfair, and how do we seek redress when things go wrong?

Fairness, repeatability, transparency: these are all qualities most people would say they want from public services. Most of us want everyone to be treated equally if their individual circumstances are the same, without biases or stereotypes. But do we really want systems that are based entirely on rules and automated decisions? I’d argue that we don’t. We’re back to ‘the computer says no’, which leaves us cold, and, at the extreme end, we’re back to the Dutch system that applied rigid but mistaken profiling with no human checks.

I’d argue we want public services that allow for subjective judgement around edge cases and that demonstrate empathy and understanding when people make honest mistakes or have mitigating circumstances that should be considered. Whether that’s a traffic warden deciding not to issue a parking ticket when they see you returning to your vehicle with a young family and a load of heavy shopping just a few minutes after the ticket expired, or a local authority team deciding to reach out to an elderly citizen whose application is genuine but incorrectly completed, offering additional assistance rather than simply rejecting it.

AI can bring empathy to digital services

The use of technology is not all negative, of course. Humans are not infallible at processing cases, and AI systems can review decisions, help spot incorrect ones, offer a second opinion and detect bias. AI can also see patterns that humans may overlook: for example, it can spot that someone has called repeatedly over several days and may therefore need extra support. Perhaps sentiment analysis of emails, or even voice messages, could build a picture of the service user’s emotional state – from frustration to anger or even desperation – that can be flagged to the staff handling their case. Virtual Reality has even been used to teach staff working in mental healthcare to have more empathy with their service users: immersive experiences allow them to see and, to some degree, experience the world through the eyes of people with particular conditions. There are clear opportunities for teams working with vulnerable people to use such approaches and make services more compassionate.
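As a rough illustration of the sentiment idea above, the sketch below shows how incoming messages could be triaged for signs of distress. It assumes the open-source Hugging Face transformers library and its default pre-trained sentiment model; the threshold and example messages are invented for illustration, not taken from any real service.

```python
# A minimal sketch: flag messages whose tone suggests a citizen may need extra
# support, using an off-the-shelf sentiment model (Hugging Face transformers).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pre-trained model

messages = [
    "Thanks, the renewal went through fine.",
    "This is the third time I have asked and I still have no answer. I'm desperate.",
]

for text in messages:
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        print(f"Flag for a caseworker: {text!r}")
```

In a real service, a flag like this would only ever prompt a human to take a closer look; it should never trigger an automated decision on its own.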

If we’re going to use AI positively, the issues I’ve highlighted need holistic thought and planning. The AI tools themselves won’t do this out of the box. Good public services will continue to involve people, processes and technology working in harmony.

Plan and interrogate your use of AI rigorously

With any new technology, it can be tempting to try to apply it in every situation. The adage of “a solution looking for a problem” is a trap we must avoid. The Cabinet Office’s Central Digital and Data Office (CDDO) and the Centre for Data Ethics and Innovation (CDEI) have issued guidance for organisations using AI in the public sector which is very clear on this topic: we need to identify use cases for AI where it can offer tangible and financially viable benefits over more traditional approaches.

We also need to be rigorous about research. Even where there are clear user needs, the end users of our services may have legitimate concerns about AI-based processing; the GDPR gives them a legal right to opt out and to have decisions reviewed by a human. We also need to research with the civil servants who process cases. They are the experts: they will recognise the aspects of case handling that can be fully automated because objective principles and rules apply, and where subjective judgement should be allowed in determining cases. They will also know from experience which types of routine case can safely be approved through automation, and where human review is beneficial or even essential.

If you are using AI, someone senior in your organisation needs to own it, and there should also be someone independent of the project teams, with oversight of ethics, who can help hold those teams to account. There should be transparency around what you’re doing and, again, the CDDO and CDEI have been leading the way with the Algorithmic Transparency Standard. The standard is currently advisory, but I would be surprised if it isn’t treated as best practice in CDDO service assessments where AI has been used. Ultimately, it may become mandated.

We must also consider data early in the planning process. Who owns the data and content we’re feeding into the AI systems, where did it come from, how was it obtained, and where does it reside? Data protection experts should be engaged early too, so we don’t hit any showstoppers later.

A bright future or another false dawn?

There have been huge claims about how AI will change society, as well as expressions of concern going back to the 1860s (when programmable computers were purely theoretical rather than a functioning reality). So, are we finally at the advent of the brave new world, or does it remain further away than some people think?

Whatever the zealots may tell you, the truth is that artificial intelligence is not ready to fully replace human operators when it comes to decisions that have legal consequences, especially when dealing with vulnerable people. Some argue that the lack of empathy in AI-based systems will be a long-term barrier to achieving Artificial General Intelligence. Creating systems that can deal with any arbitrary task could take decades. It may even prove impossible.

That will not stop AI from making a massive difference to how we deliver public services. To those who remain sceptical, I would remind you that most of us already use AI every day without even realising it. If you work in the public sector, you can be sure that there are people in your organisation using AI right now. Some of that will be planned and sanctioned; but even if your organisation hasn’t officially started experimenting yet, there are likely to be under-the-counter uses by individuals that your management should be more aware of. I recommend that we accept that AI is here and being used, and focus on getting ahead of the situation.

Using AI to help society move forward together

AI will, without doubt, revolutionise public service delivery, supporting quicker, more reliable and fairer decisions. It will draw attention to unusual patterns, help identify sentiment in communications with citizens, and simulate scenarios that help us plan future policy interventions. It will, in turn, free up time for civil servants to focus on complex needs.

If you’re looking to use AI in public sector projects and you want to build a service that feels more human, here are my six tips to bear in mind:

  • Think holistically: you need to think about the service as a whole and everyone who might be impacted by the use of AI, including service users, administrative staff, senior managers, and stakeholders both within and outside your organisation.

  • Research behavioural needs, but also feelings and attitudes: there needs to be a clearly defined user need, as with any service, and you need to be sure that AI adds value over more traditional means of meeting those needs. You also need to understand people’s attitudes, concerns and fears if you are going to build a service that gains wide adoption. Conduct research throughout the whole project lifecycle and involve people in co-designing the service.

  • Keep a human in the loop: AI-based systems are not mature enough to be left to make decisions that have legal effects on people, or where mistakes in the information provided could cause harm. You need to plan carefully for human safeguards around your use of AI, but also remember that humans are not infallible either; AI could come full circle and help humans avoid biases and clerical errors.

  • Consider data and data quality early: you need to review your data pipelines going into and out of the AI-based system and do this early. You may need to seek special permission to use data in your service, and you may need to make new declarations under GDPR and revise terms and conditions and consent processes. Involve your Data Protection Officer early to reduce the risk of showstoppers later in the project.

  • Consider the risks and the impact of problems: You need to take time to identify all possible risks that your use of AI could introduce – to your service users, your staff, and the public reputation of your organisation. Decide which you can monitor and manage, and which need extra mitigations and guardrails. There may even be some risks that mean you need to change plans on your use of AI more fundamentally. Decide who owns these risks and who is accountable.

  • Be transparent and open about your use of AI to maintain trust in your service: the public has a right to know when AI is used in processing their personal data and a legal right to opt out. Being open and transparent is vital in maintaining trust. You should extend this openness to your staff, too, especially if they are worried about AI-based systems replacing them. Be open with them about your motivations for using AI, how you hope it will benefit them, and who they can speak to if they have concerns.

Public services must be universal, including those that use AI-based systems. The commercial sector can dismiss niche needs as not in its business interest and take a more experimental approach to AI, but the public sector cannot: people can’t go elsewhere for their public services. Adopting AI needs to be a journey that society goes on together.


If you’d like to explore the possibilities for AI in your digital services, contact us to arrange a free-of-charge Discovery Session with our public sector team.
