Freelance Researchers: Safe Adoption of AI
Careful Industries has received funding from Lloyd’s Register Foundation to conduct a foresight review on the safe adoption of AI. We are seeking freelance researchers to conduct three literature searches in support of this work.
We welcome expressions of interest for one of the individual topics or for all three.
DEADLINE FOR EXPRESSIONS OF INTEREST: 12pm on Friday 10 October
Project timing: all three literature review databases need to be completed by Friday 21 November
Review 1: AI and Infrastructure Resilience - 4 days @ £400/day: £1600
Review 2: AI and Environmental Safety - 4 days @ £400/day: £1600
Review 3: AI and Worker Safety - 4 days @ £400/day: £1600
About the Foresight Review
The aim of the foresight review is to surface some of the current and future safety impacts of AI in engineered systems, and to enable AI developers, purchasers, implementers, and policymakers to better understand the potential harms to people, infrastructure, and the environment that arise through AI adoption.
The project methodology combines horizon scanning techniques with participatory foresight to build a view of current and emerging issues in the fields of worker safety, environmental safety, and critical infrastructure resilience. Final outputs will include a taxonomy of harms and a framework for mapping the impacts of AI. Working with international partners, the project will bring together people with lived, learnt, and practical experience of AI implementation to develop a rounded perspective on the impacts of AI on three domains: infrastructure resilience, environmental safety, and worker safety.
This project focuses on the safety considerations that arise when AI systems are adopted. Rather than taking a single industry or application as the starting point, it addresses three applied issues that, separately and in combination, pose a wide range of risks to engineering safety.
Project Brief
We are looking for freelance researchers to support the delivery of the literature review phase of the project, specifically:
- conducting a grey literature search, building a database of both grey and established literature sources, and creating detailed summaries and an initial thematic analysis.
The infrastructure resilience track will examine the future of critical systems. The rapid pace of technological development and roll-out is already placing pressure on both the physical and digital infrastructure needed to create, develop, and deliver AI systems, while the global dominance of a few corporate suppliers creates governance and resilience risks and is forcing new conceptions of “public infrastructure”. This creates new vulnerabilities for information systems, smart infrastructure, and global supply chains, with direct relevance to a wide range of contexts including shipping and transport logistics, defence, food security, and healthcare.
Initial focus territories: India, EU.
The environmental safety track will explore the paradox of AI and environmental safety. On the one hand, the physical impacts of AI infrastructure, such as data centres and supercomputers and the mining of rare-earth elements, will continue to have considerable effects on the health, welfare, and quality of life of many global communities; on the other, applied uses of AI and machine learning in environmental monitoring and renewable energy are making considerable contributions towards net zero. This investigation will begin in territories with high existing levels of AI adoption and advanced infrastructure, and will extrapolate potential future scenarios for other, more emergent territories.
Initial focus territories: China, US.
The worker safety track will examine the impacts on workers of developing and deploying AI systems, from data labelling and moderation labour to the risk factors involved in working alongside robotics and cyber-physical systems.
Initial focus territories: China, Kenya, UK.
Expressions of Interest
Send a short email (no more than 3 or 4 paragraphs) to hello@careful.industries with the subject line “AI Safety Literature”. Emails should explain:
- which grey literature search you are interested in (expressions of interest in all three are welcome)
- your relevant experience, giving examples of similar projects you have undertaken
- details of your availability to conduct this work in October and November 2025
Include a CV and up-to-date contact details. Deadline: 12pm Friday 10 October.
Please don’t apply via social media as it will be difficult to keep track of messages.