Bridging the AI Valley of Doubt

I recently attended the thought-provoking AI Ethics, Risks and Safety Conference 2025. It featured presentations from the UK government’s Department for Science, Innovation and Technology (DSIT), the Alan Turing Institute, Mind Foundry and Google. The talks focused on stimulating economic growth by building competence and confidence among UK businesses looking to adopt AI, on striking the right balance in human-AI collaboration, and on the wider societal and environmental impacts. The event provided considerable food for thought about the challenges businesses face and the chasm between where they stand today and a successful AI implementation. Here are my thoughts and key takeaways.

The AI Adoption Paradox: Potential vs. Hesitation

A striking paradox emerged from Amy Dickinson’s opening talk at the conference: despite the UK having the world’s third-largest AI industry, bigger than that of any other European country, only one in six UK firms is actively using AI technology. This hesitation stems not from a lack of opportunity but from a complex web of challenges, including:

  • financial costs
  • skills gaps
  • data issues
  • risk management concerns
  • reputational risk, and
  • business integration challenges

UK firms are risk-averse, and understandably so given the current geopolitical climate. But the biggest barrier at play is that firms avoid what they simply don’t understand. These challenges are compounded by the absence of clear, proven standards and frameworks and by a rapidly changing technological environment. All this turbulence creates uncertainty that paralyses organisations. And yet there is an appetite to advance and benefit from this technology, driven in part by the fear of being left behind as competitors make these leaps of faith and successfully bridge the gap. But it’s a game of who jumps first.

Building Bridges: From Hesitation to Confident Implementation

The UK government’s Department for Science, Innovation and Technology (DSIT) and the Alan Turing Institute are actively working to bridge this confidence gap. DSIT is focused on:

  • building competence and confidence
  • improving quality and competitiveness, and
  • advancing innovation.

Encouraging AI adoption is a multi-pronged effort, starting with AI Management Essentials (AIME), “a self-assessment tool designed to help businesses establish robust management practices for the development and use of AI systems”. The UK government is actively encouraging businesses to adopt AI (responsibly) because it recognises the huge potential for economic growth and aims to establish the UK as a major powerhouse in the AI industry: an “AI maker” rather than an “AI taker”. The AI Opportunities Action Plan, set in motion in January 2025, aims to give the UK a clear path to economic growth, provide jobs and improve people’s lives. This guidance is a clear signal that support is available and that businesses can move forward and adopt AI technology with confidence wherever appropriate.

Bridge AI, a £100m Innovate UK-funded programme with a mission to drive AI adoption, is taking a practical approach. Partnered with the Alan Turing Institute, the programme aims to strengthen the ecosystem, enable growth and advance digital innovation across industry sectors. Its first phase has produced a practical framework for stakeholders to identify risks, challenges and risk mitigation strategies. In its later phases, a database of use cases across industry sectors will provide an invaluable tool for businesses looking for guidance: it will map the risks associated with each sector and, alongside them, the corresponding mitigation strategies. As businesses explore this dataset and framework, they gain insight into and visibility of the challenges ahead, enabling them to prepare and put the necessary mitigations in place. Arcangelo Leone de Castris from the Alan Turing Institute provided detailed examples of how businesses could use the framework to drive and guide AI adoption.

This framework will become an invaluable lens through which organisations can start to uncover and understand these risks and challenges and, most importantly, plan proactively and shape an AI adoption strategy. It should enable organisations to navigate a path forward and simply step across the gap of uncertainty.

Adopting AI is still not without risk; doing so responsibly, with the right strategy, is key to success.

Human-AI Collaboration: Finding the Right Balance

Owen Parsons’ (Mind Foundry) discussion of “AI in the loop” versus “human in the loop” systems addressed a fundamental question of agency and control. A “human in the loop” system is one where AI is left to complete tasks autonomously, with occasional human input or verification. An “AI in the loop” system is the opposite: a human-driven task in which an AI system occasionally handles a complex or a mundane, repetitive step.

The distinction between AI augmenting human work (with humans maintaining control) and AI taking the lead has significant implications for risk management and effectiveness. Implementing guardrails in AI systems is key to maintaining human control and keeping AI within well-controlled, human-defined boundaries. Systems should leverage AI to handle tasks beyond human capacity, supporting human processes and helping users make well-informed decisions.
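
To make the distinction concrete, here is a minimal Python sketch of an “AI in the loop” workflow. It is illustrative only: the `flag_document` helper stands in for a real model call, and the confidence threshold is an assumed guardrail rather than a prescribed value. The human drives the review; the AI assists with one bounded sub-task; low-confidence output never reaches the decision.

```python
from dataclasses import dataclass

@dataclass
class AIFlag:
    """A single concern raised by the AI assistant."""
    excerpt: str
    concern: str
    confidence: float  # 0.0-1.0, as reported by the model

CONFIDENCE_FLOOR = 0.7  # guardrail: discard low-confidence AI output

def flag_document(text: str) -> list[AIFlag]:
    """Stand-in for a real model call that scans a document for concerns.
    Returns a hypothetical, hard-coded flag for illustration."""
    return [AIFlag(excerpt=text[:40], concern="unsubstantiated claim", confidence=0.82)]

def review(document: str) -> str:
    """Human-driven review: the AI only pre-screens; the analyst decides."""
    flags = [f for f in flag_document(document) if f.confidence >= CONFIDENCE_FLOOR]
    for f in flags:
        print(f"AI flag ({f.confidence:.0%}): {f.concern} -> '{f.excerpt}'")
    # The final judgement always comes from the human, never the model.
    return input("Analyst decision (accept/reject): ")

if __name__ == "__main__":
    review("We are committed to net zero across all operations by 2030 ...")
```

A “human in the loop” variant would invert this: the model would act on the document itself and pause for the analyst only at checkpoints, which is precisely the extra autonomy that guardrails exist to constrain.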

The conclusion that “AI in the loop” approaches are safer and more effective points to the importance of maintaining human oversight and intentionality in AI implementation. This augmentation and human-AI collaboration is a key step in AI adoption: it builds trust and confidence and minimises the risks organisations face. It is similar to the approach we took during the InferESG project, a recent ESG data challenge, where the system was developed to augment the ESG analyst, using AI to evaluate large documents and flag potential greenwashing concerns.

This kind of responsible adoption will help prevent some of the wider societal dangers that could impact people’s everyday lives.

Broader Societal Implications

Geoff Keeling’s (Google) talk expanded beyond organisational risks to societal concerns, highlighting how advanced and unregulated AI assistants will impact both individuals and communities. We face a critical juncture where we must grapple with more than just technological advancement and opportunity. The rapid expansion and growing complexity of AI systems raise profound questions about agency and control. Without regulation, cause-and-effect relationships will multiply beyond our ability to trace them as AI systems continue to expand and overreach. There is a growing risk that unregulated AI systems will trigger unintended and unexpected consequences, while also enabling deliberate misuse by bad actors seeking personal gain or intent on inflicting damage. But who should determine this regulation? Who decides right from wrong? Geoff’s call for “coordination, cooperation, and public input” to shape a collectively desired future emphasises that AI governance isn’t just a technical challenge but a democratic one.

My colleague Oliver Cronk poses further thought-provoking questions in his article “Are we sleepwalking into AI-driven societal challenges”. The longer-term societal impacts are raising ever more questions, but we are yet to really understand them as the technology reaches further into our lives. The danger is a matter of when, not if, the consequences are realised.

Beyond the potential societal impact, there is also the existing and growing environmental impact of AI.

Environmental Considerations: The Hidden Cost of AI

Sam Young’s (Catapult Energy Systems) presentation introduced an often-overlooked dimension of AI ethics: environmental impact. Data centres currently demand 1-1.5% of global electricity, with usage growing 20-40% annually, and the projection that AI-related energy use in data centres could grow from 15% to 50% by 2030 (IEA, Energy and AI report) adds urgency to sustainable AI development.

The drive to adopt AI must therefore come with environmental considerations. Whilst AI could provide economic benefits and boost productivity, an important question remains about when it is appropriate to use. Public awareness is gradually growing that generating an action figure of yourself comes at an environmental cost, and as this concern grows, business AI adoption will meet increasing resistance from employees and end-users.

The question of when AI is acceptable, responsible and practical to use is increasingly complex. Another common question is: can AI be used to provide climate-change solutions that outweigh the environmental impact of using it? The short answer is that it could, but there needs to be a practical and genuine business need behind any AI system. The problem is that most AI applications are not being used for environmental or social benefit; they are being used for economic and productivity gains, and at the other extreme, even in the extraction of fossil fuels.

If implementing an AI system, the next simplest recommendation is right-sizing models for the task (Small Language Models, SLMs, versus Large Language Models, LLMs). Reaching for the latest, largest and most powerful LLM isn’t necessarily going to yield the best results, and it certainly won’t be the most efficient option. A smaller open-source model may be perfectly suited to an application where the latest large proprietary model is overkill.
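
As a sketch of what right-sizing can look like in practice (the model names and the complexity heuristic below are purely illustrative assumptions, not recommendations), a simple router can default to a small model and escalate to a larger one only when the task demands it:

```python
# Illustrative model tiers; substitute whatever models your stack actually offers.
SMALL_MODEL = "small-open-model-3b"    # hypothetical SLM: cheap, low energy
LARGE_MODEL = "large-frontier-model"   # hypothetical LLM: reserve for hard tasks

def estimate_complexity(task: str) -> float:
    """Crude stand-in for a real complexity heuristic: long, multi-part
    requests score higher. Replace with something domain-specific."""
    score = min(len(task) / 2000, 1.0)
    if any(kw in task.lower() for kw in ("analyse", "compare", "multi-step")):
        score += 0.3
    return min(score, 1.0)

def choose_model(task: str, escalation_threshold: float = 0.6) -> str:
    """Default to the small model; escalate only when clearly needed."""
    return LARGE_MODEL if estimate_complexity(task) > escalation_threshold else SMALL_MODEL

print(choose_model("Summarise this paragraph."))  # -> small-open-model-3b
long_task = "Analyse and compare these ten reports in a multi-step review. " * 50
print(choose_model(long_task))                    # -> large-frontier-model
```

The design choice matters: defaulting to the small model and escalating on evidence keeps the energy and cost baseline low, whereas defaulting to the largest model and downsizing later rarely happens in practice.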

Oliver has also given his take on the environmental and financial sustainability of generative AI in his piece from last year: Will Generative AI Implode and Become More Sustainable?

Conclusion

Finding the simplest, most appropriate tool that can effectively complete the job, rather than defaulting to the most powerful option available, is crucial to responsible use of technology and to minimising impact and risk to business, society and the environment. If the right solution is an AI solution, then an adoption strategy is crucial for identifying the risks and minimising business impacts. The wider implications of AI should be considered thoughtfully and with good intention; that is the key to adopting AI responsibly.

For UK organisations looking to adopt AI for a genuine business case, there is government support and guidance, along with emerging frameworks and tools for laying down an adoption strategy. Start small: take steps towards “AI in the loop” rather than leaping into AI automation and a “human in the loop” system. Use guardrails to keep the system within safe contextual boundaries, with access and control limitations. These safeguards help your business reduce and mitigate the risks.

Right-size the models to your needs for effectiveness and efficiency, minimising environmental impact. By embracing this measured, responsible approach to AI adoption, UK businesses can confidently cross the valley of doubt and harness AI’s transformative potential whilst safeguarding our collective future.