Over 80 percent of AI projects fail. Not because the technology lacks potential, but because businesses prioritize minor use cases over real transformation. Automated insights and meeting summaries may be impressive, but AI only drives impact when it is seamlessly integrated into workflows, turning insight into action.
Deploying AI successfully isn’t simple: organizations are complex, and effective deployment requires a clear framework for human oversight. AI should usually enhance human decision-making, providing targeted, explainable, and interactive insights. But in some cases, especially when decisions are time-sensitive or involve vast amounts of data, humans cannot oversee every output in real time. This raises a key question: when should humans be ‘in the loop,’ actively making decisions, and when should they be ‘over the loop,’ overseeing AI without direct intervention? Getting this balance right is crucial for both AI’s effectiveness and its responsible use.
In the Loop: AI as Decision Support
When decisions carry high stakes, whether due to ethical considerations, regulatory scrutiny, or the need for contextual judgment, AI should augment human decision-making, not replace it.
Take the NHS during COVID-19. AI helped predict hospital capacity needs, providing structured, explainable insights that informed human-led decisions on resource allocation and patient transfers. AI technologies like this have been safely used for decades and fall within the ‘green zone’ — fully understood, explainable, and under human control. This transparency allows outputs to be scrutinized, fostering trust in the system.
In high-risk domains with subjective use cases and abstract data, AI moves into the ‘red zone’ — a space where technology operates beyond our current understanding and control. This raises obvious concerns about unintended consequences, including opaque decision-making, lack of accountability, and ethical breaches. To mitigate these risks, transparency and strong governance are essential, particularly as nation-states and bad actors seek to exploit AI for their own gain.
Over the Loop: AI Operating Autonomously
For well-defined use cases with large, structured data sets, AI can operate with minimal human intervention, working ‘over the loop’ rather than ‘in the loop.’ This enables AI to handle tasks that are too large-scale or time-sensitive for human oversight.
One example is tackling online terrorist propaganda. When extremist groups flood social media with recruitment content, human moderation alone is not enough. AI, however, can rapidly detect and remove harmful material at scale. Faculty developed a system that achieved 94 percent accuracy with just a 0.005 percent false-positive rate.
In these cases, AI runs autonomously within strict parameters. Humans define rules and audit performance but do not intervene in real-time decisions. The key to safe ‘over the loop’ AI is rigorous safeguards, ensuring alignment with ethical and operational standards.
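To make the ‘over the loop’ pattern concrete, here is a minimal sketch of how such a pipeline might be structured. It assumes a hypothetical score_content classifier; the thresholds, names, and routing logic are illustrative and do not describe Faculty’s actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative thresholds; real values would be set by policy and validation.
REMOVE_THRESHOLD = 0.98   # act autonomously only on very high-confidence scores
REVIEW_THRESHOLD = 0.70   # borderline content is routed to human reviewers

@dataclass
class Decision:
    content_id: str
    score: float
    action: str      # "remove", "human_review", or "allow"
    timestamp: str

def score_content(text: str) -> float:
    """Placeholder for a trained classifier returning P(extremist propaganda)."""
    return 0.0  # stub; a real system would call a model here

def moderate(content_id: str, text: str, audit_log: list) -> Decision:
    """Apply pre-agreed rules autonomously; record every decision for audit."""
    score = score_content(text)
    if score >= REMOVE_THRESHOLD:
        action = "remove"        # autonomous action within strict parameters
    elif score >= REVIEW_THRESHOLD:
        action = "human_review"  # ambiguous cases keep humans in the loop
    else:
        action = "allow"
    decision = Decision(content_id, score, action,
                        datetime.now(timezone.utc).isoformat())
    audit_log.append(decision)   # humans audit performance from this log
    return decision
```

The design point is that humans set the thresholds in advance and review the audit log after the fact; they do not sit between the model and each individual removal decision.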
The Future of AI Deployment
The future of AI deployment is not a binary choice between automation and human oversight; it is about designing systems that align with organizational needs and ethical principles. AI should enhance, not replace, human expertise.
Success hinges on clearly defined use cases and operating largely within the ‘green zone,’ where AI functions are well understood, controlled, and easily validated. This allows for efficient refinement and optimization, and in such cases humans can sometimes be ‘over the loop.’ Conversely, the ‘red zone’ includes technologies that we neither fully understand nor control. Here, AI must remain one component of a broader system involving human oversight, because its outputs cannot easily be validated or corrected if it acts autonomously.
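As a rough illustration only, this zoning framework can be written as a simple triage rule. The predicates below are hypothetical stand-ins for a real governance assessment, not a prescribed checklist.

```python
from enum import Enum

class Oversight(Enum):
    OVER_THE_LOOP = "autonomous within audited guardrails"
    IN_THE_LOOP = "AI advises; humans make the decisions"

def choose_oversight(well_understood: bool, validated: bool,
                     high_stakes: bool) -> Oversight:
    """Toy triage rule: only well-understood, validated 'green zone' use cases
    with manageable stakes qualify for autonomous ('over the loop') operation."""
    if well_understood and validated and not high_stakes:
        return Oversight.OVER_THE_LOOP
    return Oversight.IN_THE_LOOP

# Example: a well-validated but high-stakes clinical planning aid stays in the loop.
print(choose_oversight(well_understood=True, validated=True, high_stakes=True))
# Oversight.IN_THE_LOOP
```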
By defining AI’s role more carefully, leaders can maximize its potential while ensuring safety, accountability, and trust. AI is not just a tool for analysis; it is a tool for action. But its power must be applied responsibly.

Dr Marc Warner is CEO of Faculty. He founded the company to help organizations make better decisions using human-led AI. For over 10 years he has worked with government agencies and leading brands to implement impactful AI solutions. Before Faculty, Marc was a Marie Curie Fellow in Physics at Harvard University, specializing in quantum sensing. His work has been published in prominent academic journals and he is regularly featured as an expert in top-tier media.