Does the Service Standard need a reboot?

Director of Public Sector Practice
Valtech

March 20, 2024

Everything requires a reboot once in a while to keep it updated and valuable. The Government Service Standard is no exception. When it launched in 2013, alongside its accompanying phases and ways of working, it powered the user-centred change in approach that catapulted the UK to the top of the digital government league. A decade and a bit further on, is it still up to date and valuable or does it need a reboot to navigate the challenges and opportunities of the next ten years?

I love the Government Service Standard. People will have heard me say that if I were to go on Mastermind, I would choose it as my specialist subject. When it appeared, it pivoted my career for the better and I was able to play a big part in the revolution that GDS sparked and reinforced with it. The Service Standard wrenched attitudes away from subjectivity and drove fundamental changes in behaviour and thinking that meant we were able to build services that were actually useful. The discovery phase that went with the Standard meant we got to spend time exploring user needs and context, and alpha (such a lovely thing) is all about de-risking, which should mean we waste less taxpayers’ money on things that don’t work. It wasn’t always practical, but it was a real game changer.

But more recently, people have started questioning the Standard’s relevance, not because they want to avoid it but because they want to move beyond it. My protectiveness means I instinctively want to preserve it because it was such a great thing, but by doing so am I stopping it from evolving? Is it time for an evidence-based change of focus? Is the Service Standard still fit for purpose? Do we need to adapt it, or do we need another revolution?

Why exactly was it good in the first place? 

I’d been working at the Home Office for 7 years before GDS turned up and the Standard appeared. In that time I had managed to lead my team in creating an award-winning, user-centred Home Office website that people (honestly) described as warm and friendly! It wasn’t easy and there were daily difficult conversations with colleagues who just wanted to put content live, no matter how poorly written it might be, and I normally lost those battles based on time and cost (and “we know what our users want”).

Initial reports about the Standard were scary but publication of the draft version coincided with me setting up the Home Office's first agile team, and I decided to experiment with what it might be able to offer me by trying to use it on our services. We did well (very well indeed) and from there, things snowballed. Within days, I changed from a nervous onlooker to a passionate advocate. To be proven to be on the right side of the user-centred argument, and to have a powerful big brother in GDS to back me up, was a real vindication after all those years and it certainly improved the quality of our services.

I started helping people all over the Home Office (and beyond) to understand these new ways of working to build better services (and pass the assessments).  I worked with a Home Office colleague to build a pioneering coaching and peer assessment function within the Department. The initial version of the Standard had 25 points and it wasn’t perfect. In assessments, we tended to spend a couple of hours talking about the five user-focused points, cramming the other 20 or so into the remaining time. It’s always needed iteration!

The Service Standard has evolved, but maybe not enough

GDS owned the Standard until 2021, when they handed it to the Central Digital and Data Office (CDDO). GDS put it through two iterations during their time, taking it from 25 points to 18 and then to 14. They were trying to iterate it in an evidence-based, user-focused way, but the last iteration was five years ago and CDDO have been a bit quiet about further iterations. They’ve also devolved responsibility for its application to departments. Unfortunately, this means that the mandate has been devolved too, which can’t help but water it down, making it harder to enforce. Some people or teams see the process as unnecessary bureaucracy and seek out loopholes and ways to avoid complying with it. In other areas, it’s feeling irrelevant as they have moved on to more mature ways of thinking.

Here are a few of the tensions that I’ve observed in how the Standard and its phases are being used.

  • Diversity of digital scale and maturity across government means some teams and services have evolved to a product-centred, continuous delivery approach. For them, Discovery-Alpha-Beta feels waterfall-like because they are already applying those concepts at a micro level all the time.

  • Commercial processes around contract structure, and the governance requirements for fitting an agile project into a programme, both often refer to Discovery-Alpha-Beta but actually use the phases as a non-agile, gated process that can stifle progress and change.

  • Tom Loosemore, one of the people who helped to enact the changes in the early 2010s, took to Twitter in 2021 to comment that "what was useful scaffolding is now often a straightjacket (sic). Beware any process ossifying into dogma."

On one level, a service should be fluid, evolving to meet existing needs better and address new ones. Big, well-funded programmes can often achieve this nowadays and can aim to be ‘fully agile’. But for many others, delivery needs to be budgeted, delivered and measured because it is spending taxpayers’ money. I worked within Government for long enough to understand the need to work within a budget, timing or phasing framework, where the reality is having to accurately predict delivery and scope for politically influential stakeholders who don’t care about methodology.

Do we need more flexibility in the assessments?

Valtech has an excellent track record of taking more than 40 services successfully through assessment, but we’ve been able to do that because we have learned how to tackle them. I’ve personally spent a lot of time with the MoD and the Planning Inspectorate helping teams to structure their narrative so that it meets the Standard’s points. Having to learn how to pass an assessment isn’t an ideal use of an expensive team’s time. No matter how much assessors tell teams that these are 'friendly chats', a lot is riding on the assessment session. Civil Servants can go through the assessment feeling nervous, and I've known them to completely forget superb pieces of work, leading to a 'not met' for an exemplary service.

These big assessments at the end of phases may not be the best way of assessing a service anyway. Some would argue that ongoing assessments throughout a project are a better option. When I was a Civil Servant at the Valuation Office Agency (VOA), one of my teams worked with GDS to pilot ongoing assessment throughout the delivery process. It was a great idea, influenced what we delivered, and the discussions helped. The project was challenging from a stakeholder and political perspective, and users were not always considered properly, but the ongoing process really helped. But… the end of the story wasn’t that great. GDS still conducted a formal assessment ahead of the Public Beta launch and the service didn’t ‘meet’ three points (due to things we couldn’t control). A ministerial commitment meant that not going live wasn’t an option, so I was forced to negotiate away the discrepancies with the people who had been so helpful during the ongoing process up until that point. That slightly undermined the trust that we’d built up.

People have refined the process since that pilot. DfE conduct peer reviews, which are light-touch regular check-ins, and DLUHC (where many of the people from that pioneering VOA team now work) run a gently effective ongoing assessment process. Ongoing assessment has been proven to work, but it relies on having well-informed and motivated assessment teams within departments, with enough ring-fenced time to keep up with the process and not cut corners. There’s a lot of responsibility resting on their shoulders and a big time commitment.

Are we losing sight of our users in all this process?

Government assessors are marvellous people. They really care and there is an amazing community of assessors across government, who give their time to ensure that the Standard is adhered to. They are as responsible as anyone for the step change in the quality of government services over the past eleven years. But is it possible that the process has become so industrialised for them, as with everyone else, that we’ve all forgotten what is truly important?

Being able to consider users in government digital is a big deal for me; I mentioned earlier the years of unsuccessful battles and how great it felt to have the Service Standard to back me up. Research and testing with users is vital, but I do wonder if sometimes we go too far. Are we pushing for perfection no matter the expense? I've seen services that have done months of good user research fail to meet points 1 and 4 because they haven't been tested with a sufficiently diverse set of users, often missing very specific user groups. Given that user testing isn't cheap (typically over £500 per user if you're using contractor user researchers and sourcing users through recruitment), I wonder whether things are getting a bit too purist. Is it genuinely value for money, or is there a more pragmatic approach we can take with taxpayers' money?

Plus, in the same way that the phases have been appropriated for governance and reporting purposes, in some cases users have also become commoditised. The need to test with sufficient numbers or to tick off all possible user groups means they become quotas to be achieved, and teams can lose sight of the human behind the 'user'. This is something that we focus on a lot at Valtech, not just in public sector work but in the private sector too. Our mission is to innovate user experience so that we can consider our users as human beings. The Service Standard aimed to do the same thing in focusing teams on creating services that consider the people who use them. Research should be based on real-life stories that focus on experiences and emotions, and that can prove our services are making things better, not just on checking off all the user groups on a prescribed list.

How can the Service Standard be rebooted to ensure it stays fit for purpose?

If we are to ensure the Standard continues to have a positive role in shaping government digital services, it needs to evolve to acknowledge the tensions that I’ve talked about above. We need a new version and revised ways of applying it.

The reboot should also recognise how much the world has evolved in the last five years. People are at home more of the time, AI has become mainstream, and we should be considering sustainability in our services. The Standard hasn’t kept up.

I think there are some key areas to adapt and revisit in a reboot, which we might all want to consider as we approach our assessments right now.

  • Build more flexibility into assessments: I don’t think every service or initiative can be judged in the same way. Whether it’s the investment in user testing or how complete the problem-solving needs to be, there must be scope for adopting a methodology that’s appropriate to the requirements’ scope and scale. What’s good for one service may not be good for another. Peer assessment seems like a good approach wherever possible, perhaps leading to something more formal before public beta. It enables problems to be caught early, and by the time the formal assessment comes round, teams should be well-practised in discussing their work, so the preparation required should be minimal.

  • Be pragmatic and aim for value for money: Can we balance how we spend taxpayers’ money with high standards? More mature teams should be able to deviate from or discard the phases, backed up by commercial teams that are mature enough to support them in that approach. Similarly, less mature teams should be able to use their limited funding to deliver the best possible value for money for them, the compromise being that they can’t always accommodate edge-case user groups as fully as best practice requires. Pragmatism needs to be part of the process.

  • Reconnect with humans: Let’s not lose sight of the human beings in the process, whether it’s the citizens or Civil Servants using the services that we build, or the in-house assessment teams who will need to shoulder the burden of the bespoke advice that the approaches above demand. Users should never be reduced purely to numbers; they should always be described through experiences and emotions, and we should always be striving to make their experiences of everyday life that bit better. In-house teams are utterly critical, as they need to be able to advise on approaches and pragmatism, as well as upholding the Standard throughout a process. They need to be trained on things like cost-benefit considerations and have enough time to devote to the process; it’s a full-time job, and if people are stretched beyond this then the process will break.

We need to reboot more frequently. The Service Standard hasn’t been changed for five years, close to half the time it has existed. We need another iteration, and it probably needs to be reviewed against real user data every couple of years to keep up with developments like AI. It’s an overhead for CDDO, but something as powerful as the Service Standard needs continual maintenance and updating, in the same way that we continuously improve our government digital services.

We need the Service Standard, but it’s becoming obsolete, and the more it is watered down or avoided, the less effective it becomes. We need better and clearer ways of applying it that don’t forget what it’s there for. The Service Standard isn’t dead, but it does need a reboot. If not, excessive governance or increasing circumvention may finish it off.

I’d be interested to know your thoughts and experiences. Message me on LinkedIn to continue the discussion:
Emma Charles
Director of Public Sector Practice, Valtech
