Introducing the Careful Consequence Check

This is a guide to delivering a self-directed Careful Consequence Check.

A screengrab of the Careful Consequence Check mural.

The Careful Consequence Check can be used to make a rapid assessment of the potential risks and issues involved in using or adopting an AI-powered product, feature, or service. It’s not a substitute for a full risk analysis – it’s meant to answer the question, “Is this okay?”, and to be the start of a process of enquiry, learning and monitoring.

Time and resources: You can either use the Mural template or complete the process on a piece of paper or a flipchart. If the product you’re checking is straightforward, a quick version of the check will take about 30 minutes. If it’s more complex, you may find it takes longer and that you need to do some additional research to understand the potential impacts.

The following prompts and questions are also shown on the Mural template.

1. Description

In one sentence, describe what you’re making and explain the problem it solves. 

Is this a single-purpose tool or will it be used for other things? If yes, describe each additional use. 

NOTE: You may want to come back and run the process again for each additional purpose.

2. Who benefits and how?

Identify each person or group that will benefit and briefly explain the difference or improvement they will experience. 

Include people who use the tool, people who will be affected by any outcomes, and any institutional benefits such as cost savings. 

3. Success 

List up to five things this project will achieve if it’s a success.

4. Context 

What does reliability look like? 

Are errors acceptable? If not, why not?

How quickly does this need to be stable and operating at scale? 

5. Specific Impacts 

This section introduces the PRET framework: People, Resilience, Environment, and Trust (including fairness and democracy). This is the place to do a deeper dive into the specific impacts of your product or service. You may not know all the answers instantly; flag what you don’t know and return to it later.

PEOPLE

  • Will this change people’s jobs? 

  • Will people who use it need training? 

  • Will it replace a skilled human task? 

  • Will it affect relationships? Will it change workplace culture or lessen human interaction? 

  • Will this change the experience that customers or service users have? What will be different? 

  • Will anyone know if this goes wrong? 

  • What redress or repair mechanisms are in place?

  • Who are the most vulnerable potential users of this service? Will they need additional support or safeguards to ensure they have a good experience?

  • Will it change people’s behaviour? If so, how?

  • Will people need their own equipment or any specific skills to benefit from this project?

RESILIENCE 

  • What kind of data do you need? Is it available and easy to access?

  • Does your idea rely on other suppliers or third-party services? What happens if one of those services goes down or closes?

  • Maintenance: Will your product need to be updated to keep using those third-party services? Will it be easy to fix if things go wrong? 

  • What happens if your product breaks or produces unreliable outcomes? 

  • Will new hardware be required?

  • How will benefits be measured?

  • Will new costs be introduced? 

  • What safeguards are in place to protect against bad actors? 

ENVIRONMENTAL IMPACT

  • Does your idea require new hardware? 

  • Is accurate data available about the environmental footprint of this project? Are renewable energy sources available? 

  • Do the benefits accrued justify the use of water and electricity? 

TRUST, FAIRNESS AND DEMOCRACY

  • Are you using any data about people? This could include their location, their likeness, or information about their friends and family. If so, does this meet your internal data governance requirements?

  • If your project uses a Large Language Model (LLM), do you know how it was trained or where the data was sourced? 

  • What does your idea mean for people’s privacy? Could your idea lead to people being – or feeling as if they are being – tracked or surveilled?

  • Will your project require or encourage additional surveillance? 

  • Does anything about it feel weird or creepy? Do you have any underlying reservations? 

  • Will your product make decisions about people? How will you make sure those decisions are fair, and that people can seek redress?  

  • Will your proposed idea make recommendations about content or further actions? How will those recommendations be formulated, and what would happen if someone acted on each one? 

  • Can you explain how your idea works to someone who isn’t technical? 

6. Most significant benefits 

Read back through your answers to sections 2–5: what are the most significant benefits of this project?

7. Biggest risks and greatest harms 

Read back through your answers to sections 2–5: what are the biggest risks and greatest harms likely to be caused by this project?

8. Mitigations

What mitigations can be put in place to address the risks and harms? 

9. Trade-offs 

What are the manageable risks and harms? What do you have to trade off and accept will be imperfect?

10. Is this viable? 

Taking into account all of the inputs above, is this project viable? Explain why or why not.

Happy Consequence Checking. Let us know how you get on at hello@careful.industries.

This project was made possible by seedcorn funding from the Bristol Digital Futures Institute.
