Most AI projects fail. Not because the technology isn’t ready, but because the thinking isn’t. In operations where margins are tight and decisions matter, success depends on three things: a defined goal, verifiable proof and repeatable outcomes.
XpertRule’s Aden Hopkins and Darren Falconer cut through the hype and show what it really takes to bring AI to the factory floor.
You know, ChatGPT has a lot to answer for. On one hand, it’s sparked a wave of curiosity and made AI feel more accessible than ever. Millions now use it daily to draft emails, explore ideas, and create images or video. But that accessibility has come at a cost.
Ever since ChatGPT catapulted large language models (LLMs) and AI chatbots into the spotlight, public perception of AI has been skewed towards content generation. In reality, AI spans a wide range of techniques, many of which are far better suited to manufacturing.
This distorted view has led to companies either underusing AI due to mistrust or overusing it without understanding risks – both harmful. No surprise then that 80% of AI projects fail to meet their goals* – twice the rate of non-AI IT initiatives.
Why gen AI is a poor fit for factories
“We tried AI but it didn’t work for us.” We hear this all the time. And in most cases, the issue isn’t the technology. It’s the fit. Generative AI models like ChatGPT are trained on vast amounts of unstructured text data to produce human-like content. That’s great for language-based and creative tasks like drafting emails, telesales scripts and marketing plans. But manufacturing isn’t powered by words; it’s powered by numeric measurements, cause and effect, tolerances and thresholds, constraints and consequences.
Factories run on structured data, like sensor readings, cycle times and throughput. That’s why spreadsheets still dominate. They offer rules, control and repeatability. Most generative tools, by contrast, offer speed and the illusion of transparency at the expense of consistency and explainability. The same input won’t always give the same result. There’s no reasoning path. No way to trace embedded logic.
That might be fine for low-stakes creativity. But not on the factory floor, where safety, compliance and reputation are on the line. In these environments, decisions need to be backed by systems that are explainable, transparent and auditable.
That’s why design time matters, working with subject matter experts to define the rules, constraints and logic that underpin decisions. This goes beyond training a model; it’s embedding operational expertise into systems that can be trusted, tested and improved. When expert input stays in the loop – not just at the point of use but at the point of creation – AI becomes a powerful, human-guided tool that is controlled and verifiable.
That’s where Decision Intelligence stands apart. It combines AI techniques – like symbolic reasoning, optimisation, decision flows and machine learning – to deliver outcomes that are explained, controllable and aligned to real-world operations. Done right, that creates what we call ‘glass box’ AI – systems that show how decisions are made, not just what they are.
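To make the ‘glass box’ idea concrete, here is a minimal, purely illustrative sketch (not XpertRule’s actual implementation – the thresholds, names and rules are hypothetical): each decision is produced by explicit rules, and the verdict carries the reasoning path that produced it, so the same input always yields the same, inspectable result.

```python
# Hypothetical 'glass box' decision check: explicit rules, deterministic
# outcome, and a human-readable trace of every rule evaluated.
# Thresholds and parameter names are illustrative, not from any real system.

def check_batch(temp_c: float, pressure_bar: float) -> dict:
    trace = []   # reasoning path: one entry per rule evaluated
    approved = True

    # Rule 1: process temperature must stay within the assumed 85 °C limit
    if temp_c > 85.0:
        trace.append(f"FAIL: temperature {temp_c} C exceeds 85 C limit")
        approved = False
    else:
        trace.append(f"PASS: temperature {temp_c} C within 85 C limit")

    # Rule 2: line pressure must stay above the assumed 1.2 bar minimum
    if pressure_bar < 1.2:
        trace.append(f"FAIL: pressure {pressure_bar} bar below 1.2 bar minimum")
        approved = False
    else:
        trace.append(f"PASS: pressure {pressure_bar} bar above 1.2 bar minimum")

    return {"approved": approved, "trace": trace}

result = check_batch(temp_c=90.0, pressure_bar=1.5)
print(result["approved"])      # False
for step in result["trace"]:   # the auditable 'why' behind the verdict
    print(step)
```

The point of the sketch is auditability: an engineer can read the trace, see exactly which rule rejected the batch, and challenge or refine that rule – something a black-box prediction does not allow.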
Black box vs glass box
In manufacturing it’s not enough for AI to deliver an answer. People need to understand why that answer was given. Without that, trust breaks down. Many AI tools operate as black boxes. They provide predictions or recommendations but no insight into how they arrived there. This lack of clarity into AI’s decision-making breeds uncertainty. People distrust what they can’t explain or understand, making AI adoption an uphill struggle.
We recently helped an organisation that had spent 18 months trying to deploy a model. The accuracy was there but they couldn’t see how it made decisions, making it impossible to validate or approve. The project had ground to a halt and faced being scrapped.
Within 24 hours of applying our explainable modelling approach, we were able to deliver the same accuracy with full transparency. That single shift turned a stalled pilot into a trusted system, now embedded directly in their production environment.
Losing those 18 months cost them time, resources and competitive advantage. That project proved just how valuable explainability can be when modelling high-stakes processes – which is why it is a central feature of our platform.
Case study: From gut feel to full control
Libra Speciality Chemicals historically relied on manual updates, siloed data and plant-floor checks to manage production. In recent years the company has made incremental improvements to on-site data capture and monitoring, but it was looking for a step-change technology to reach the next level of manufacturing excellence – one that would let it track issues over time, understand root causes and stop problems from recurring.
Using XpertFactory, XpertRule’s Decision Intelligence-powered manufacturing platform, Libra now has full visibility across its operations. Tank levels, batch status, agitator speeds, temperature profiles and energy use are all tracked live. Instead of relying on verbal updates or gut feel, every team works from the same real-time data, accessible in seconds and automatically flagged when conditions change.
That connected view has removed delays, improved product consistency and enabled proactive, evidence-backed decision-making. Libra can now track trends across batches and identify where small changes can unlock measurable gains. Instead of reacting to problems, teams can now anticipate them and act before they escalate.
“We’ve gone from gut feel to data-driven,” says Sahd Hussain, Engineering and Process Safety Manager. “That changes everything. How you operate. How you plan. And how you grow. It means fewer delays, better planning, stronger compliance and, ultimately, better outcomes for our customers.”
Click here to read the full story of how Libra turned data into decisions
Decision intelligence is the missing link
If glass box AI is about seeing how decisions are made, decision intelligence is about knowing what to do next. It turns insight into action – not just flagging problems, but explaining root causes, consequences and resolutions.
Take one manufacturer using our AI-powered monitoring system to oversee airflow in their grinding operations. When the system flagged an anomaly, engineers didn’t receive a vague alert. They were able to trace it to a drop in airflow, validate the logic behind the warning and use their own knowledge to diagnose the root cause: a blocked air nozzle. That combination of explainable AI and expert oversight led to a fast fix, avoiding the cost and inconvenience of unnecessary downtime.
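That pattern – an alert that carries its own evidence – can be sketched in a few lines. This is an illustrative toy, not the monitoring system from the example: the acceptable airflow band and the sensor values are assumptions.

```python
# Illustrative threshold-based anomaly flag with an evidence trail.
# The airflow band and readings below are hypothetical examples.

AIRFLOW_MIN, AIRFLOW_MAX = 40.0, 60.0  # assumed acceptable band, m3/min

def flag_airflow(readings):
    """Return an alert (with its reason) for each out-of-band reading."""
    alerts = []
    for i, value in enumerate(readings):
        if not (AIRFLOW_MIN <= value <= AIRFLOW_MAX):
            alerts.append({
                "index": i,
                "value": value,
                "reason": (
                    f"airflow {value} m3/min outside "
                    f"{AIRFLOW_MIN}-{AIRFLOW_MAX} m3/min band"
                ),
            })
    return alerts

alerts = flag_airflow([52.1, 51.8, 33.4, 50.9])
print(len(alerts))           # 1
print(alerts[0]["reason"])   # names the reading and the band it breached
```

Because every alert names the reading and the limit it breached, an engineer can validate the logic at a glance and then apply domain knowledge – as in the blocked-nozzle example – rather than chasing a vague warning.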
The power of decision intelligence is bridging the gap between what a system recommends and what the business needs to do. It turns AI from a prediction engine into a decision partner – one that frontline teams can rely on.
If you can’t explain it, you can’t trust it – and you shouldn’t use it
Too many AI projects are implemented without a clear problem, neglect human expertise and chase novelty over need. That’s why they fail.
What separates AI that delivers from AI that doesn’t? Start with a clear, high-value problem; build solutions that explain their logic and prove their results at every step; and design systems to scale reliably. That’s how AI moves from hype to real-world impact – by being problem-led, proven and scalable.
So, ask yourself:
- Can you see how your AI makes decisions?
- Can you trace its logic?
- Can you audit its outcomes?
- Can your team?
If not, your AI isn’t a solution. It’s a risk.
To find out how XpertRule can deliver effective and responsible AI solutions for your manufacturing business, go to www.xpertrule.com/xpertfactory or email info@xpertrule.com.
For more articles like this, visit our Industrial Data & AI channel.
