Lessons from developing a Consequences tool for local authorities
Rachel Coldicutt, Careful Industries and Prof. Albert Sanchez-Graells, University of Bristol
The first of three blog posts outlining the research and processes that went into creating the Careful Consequences Check, developed with seedcorn funding from the Bristol Digital Futures Institute.
Image: Jamillah Knowles & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
The original aim of this project was to create a tool to support AI procurement in local authorities. Over the course of our research, we realised that what was needed was broader: more people would benefit from better support for AI adoption and adaptation.
New AI features are increasingly surfacing as add-ons or enhancements to workplace software. As a result, cross-functional tools such as Microsoft Copilot or Google Gemini are being used by many more people, as are new “enhanced” features added to existing software packages. For local authorities, adopting AI in this way often doesn’t require a procurement process, and the AI add-ons that appear in frequently used software packages are designed to be adapted as the user sees fit. Through our research, we heard that AI adoption can be organic and tactical, occurring as a response to short-term challenges or opportunities rather than as part of a bigger, more strategic procurement initiative.
Factors we heard were driving adoption included:
the introduction of significant cost-saving measures,
pressure for greater efficiency — to “do more with less”, and
serendipity, for instance the availability of a free tool.
As such, there was often very little accompanying process: a tool’s availability and affordability might be the primary reason for its adoption, rather than its selection as “best in class” against a range of requirements.
Findings from our research
We conducted interviews with a number of people responsible for implementing and purchasing technology in local authorities in England.
For local authorities, AI adoption is as important as procurement
Procurement is an ongoing issue for the use of AI, but the prevalence of general-purpose tools and platforms created by Microsoft and Google means that supporting the adoption of existing software in new contexts is just as critical as supporting the purchase of new tools.
Non-procurement routes to adoption are also becoming more widely available, both through “AI injection” into existing contracts and through the development of new tools by central government’s Incubator for Artificial Intelligence (I.AI).
Many of the AI tools and products on the market are general purpose and might be adopted to solve a wide range of problems rather than to meet a single specific use case. For instance, I.AI’s Minute tool is for transcribing and summarising speech, but transcription is an extremely broad class of activity. Even if such a tool were only used to transcribe meetings, informal workshops with colleagues are very different to line-management meetings, which are different again to team meetings and convenings with external stakeholders. As such, a broad approach to understanding benefits and risks was required.
AI governance is not yet a standard practice
We didn’t find a universal standard for AI governance in the local authorities we spoke to – while information governance is an established practice, it is not yet routine to explore the potential wider risks and benefits of adopting or using an AI tool or service in a systematic, measurable way. For some of the people we interviewed, particularly those in larger local authorities, AI governance was an emerging capability; for others, it was not yet established as a routine practice and was definitely secondary to responding to urgent financial pressures.
It was also notable that often no additional governance was perceived to be needed for tools that had been procured for one context and rolled out in another. Making good governance easier and more accessible, particularly to people with limited technology experience, would be a useful new norm.
Single suppliers can become ubiquitous
In some circumstances, once Microsoft or, less commonly, Google was adopted as the default supplier, it could be difficult to justify the additional cost of purchasing a specific tool when an existing application could be repurposed or rolled out. As such, support to stress test an existing tool more comprehensively in a different context, and to build a robust business case for an alternative if needed, would be useful.
It’s difficult to try before you buy
The opportunity to try things out for a short period to understand whether they will work can be limited by internal processes, vendor terms and conditions, and team capacity, so being able to use alternative tools to work through potential future outcomes is useful.
Knowledge and expertise are varied
Capacity, knowledge, and experience of implementing AI tools and products varied widely, both between the individuals we spoke with and across the contexts they were working in – as such, whatever we created would need to be useful for people with different levels of understanding.
Informal approaches to quality assurance
Consequence Scanning is a methodology that Rachel co-developed while at responsible tech think tank Doteveryone. Published in 2018, it was originally designed to help digital teams understand the wide range of impacts technologies create, and the format was developed to be used by product and development teams as part of an Agile process. When Doteveryone closed in 2020, stewardship of Consequence Scanning passed to the Open Data Institute, but the original tool was shared under a Creative Commons licence, and it has been frequently adopted and reshared over the years by many organisations, including the Government Digital Service, Salesforce, NCVO, and Thoughtworks. Revisiting Consequence Scanning seemed like a good starting point for exploring an informal, day-to-day approach to governance.
However, seven years is a very long time in digital development; the growth of AI in that time meant we needed to rethink some of the context and content of the tool. It also became clear that we would need something more lightweight in approach – something that could, if needed, almost be done on the back of an envelope by people with differing levels of technical expertise. This meant reworking the methodology and, essentially, starting again from scratch.
So, our brief in rethinking Consequence Scanning was to create something that:
Is useful for assessing the adoption and roll-out of technology products, not just their acquisition
Helps teams assess the impacts of changing or expanding the use of an existing technology
Works for both broad, general-purpose tools and more specific ones
Makes the first steps to AI governance easy for busy people and for cross-functional teams
Helps to generate business cases as well as risk assessments
Takes some of the guesswork out of deployment
Works for people with a range of knowledge and expertise about technology and AI
After playtesting a couple of iterations of the updated Consequences workshop format, we learnt the following:
It was important to keep the process simple and straightforward, rather than comprehensive and precise; this gives teams the opportunity to explore important issues as they arise.
Prioritisation is useful for singling out the most important issues, but having space for everyone to brainstorm and think freely is also really important for generating a range of responses.
However, we also found that:
For people to get the best out of the tool, it is useful to have some existing AI literacy and an understanding of the most frequent AI impacts. In our testing, people who had been through a short “AI Impacts 101” session were able to foresee potential outcomes more easily than those with no background in AI.
Concrete examples are also more useful than hypotheticals, as people can bring previous experience to help discern likely outcomes.
Developing a “self-study” kit for understanding AI impacts was out of scope for this project, so the challenge was how to make the tool useful both for day-to-day use and as a jumping-off point for further investigation.
The second post in this series outlines our experiments in developing the Consequences framework and a granular workshop format; the third shares an outline of the Careful Consequences Check.

