AI at scale

Why automotive AI fails between pilot and plant

Four remote panellists on a livestream discussing automotive manufacturing and AI.
Machines that learn demand organisations willing to change

The automotive industry has mastered the AI pilot. It has not yet mastered scaling one. Across OEMs and suppliers, a structural gap persists between promising controlled-environment results and sustainable, plant-wide deployment.

There is a phrase that has become quietly familiar to anyone working at the intersection of manufacturing and artificial intelligence, and it captures, with uncomfortable precision, the central frustration of the age. A promising pilot. Strong results in a controlled environment. And then, as the AMS editorial team noted while framing a recent livestream on digitalisation and smarter automation, somewhere between that controlled environment and production, it stalls.

That observation does not feature in many press releases. It is, nonetheless, a consequential sentence for modern automotive manufacturing.

The automotive industry has been automating its factories for more than six decades. Robotic arms performing spot welding, paint application and body assembly have long since ceased to be remarkable. But something material has shifted in recent years. The technology has not merely become more capable. It has become more complex, more data-hungry and more dependent on organisational conditions that many manufacturers have yet to create.

Automation once required capital and process discipline. The new kind requires something considerably harder to requisition: coherent data governance, distributed skill, and the willingness to treat AI as an enterprise-wide transformation rather than a sequence of engineering experiments.

The seduction of the pilot

There is a reason pilots succeed. They are, by design, set up to. They operate in controlled conditions, with clean data, dedicated expertise and clear metrics. A computer vision system trained to detect weld spatter defects will perform admirably in a pilot where the data has been curated, the lighting is consistent and a data scientist is on hand to iterate the model. None of those conditions are guaranteed to persist when the same system is asked to run across multiple production plants with different machine configurations, incompatible data structures and no dedicated data science resource.

This is the gap that separates proof of concept from production, and it is wider than it appears from the outside.

Andreas Kühne, Program Manager for Artificial Intelligence in Production and Logistics at Audi, is one of the people most directly responsible for closing it. Audi currently manages more than 100 AI initiatives across production and logistics, spanning applications from automated label verification at Ingolstadt and weld spatter detection developed with Siemens, to a generative AI tool called Tender Toucan that cuts procurement analysis time by up to 30 per cent. The portfolio spans body shop, press shop, paintshop, assembly, logistics and infrastructure management. Managing that range coherently is, in Kühne’s own assessment, “maybe one of the most important tasks” he has to do.

But the insight that matters most is not about the breadth of the portfolio. It is about what makes the whole thing structurally fragile.

Kühne is direct on the point: “You have to have this data in a certain quality, you have to have this data integrated, and you have to have this data available - and then you have to have the data, in an ideal world, with a certain standard, with certain semantics, that is working in every production line and also in every production plant. Because if you have this all individually, then you have to - let’s say, build some translator for the AI solution for each of the systems. And this takes quite a lot of time, and is one of the key challenges and opportunities to make AI work in production.”

You have to have this data in a certain quality, you have to have this data integrated, and you have to have this data available — in every production line and also in every production plant

Andreas Kühne, Program Manager for Artificial Intelligence in Production and Logistics, Audi

The challenge Kühne is describing is not a technology failure. The algorithms work. The challenge is structural: data that exists in abundance on the shop floor but arrives in formats that are inconsistent, siloed and incompatible across plants and production lines. Building a bespoke translator for every data source is not a scalable approach. It is a tax on every AI project, levied in time, cost and specialist attention.

A.J. Camber, VP, Head of Software Business Group, Solidigm

The tools are built for the wrong people

If data fragmentation is the first bottleneck, the second is expertise, or rather, the dangerous concentration of it.

A.J. Camber, VP and Head of the AI Software Business Group at livestream sponsors Solidigm, approaches this problem from the supplier side. Solidigm’s platform Lucetta, which launched in March 2026, is built on a pointed argument: the technology is not the problem. The models for visual inspection and defect detection are proven. The obstacle is the requirement for specialist AI expertise to deploy and maintain them, a requirement that most manufacturing organisations cannot sustainably meet.

Camber puts it plainly: “The ease of use for an industrial engineer or mechanical engineer on the line is just not there. A lot of the tools are actually built for developers. And then, there’s a lot of complexity even just getting started setting up a quality data set, especially for a data set that accommodates, and has the type of variation that’s required to really be successful in production.”

A lot of the tools are actually built for developers. The ease of use for an industrial engineer or mechanical engineer is just not there

A.J. Camber VP, Head of Software Business Group, Solidigm

The consequence is a dependency structure that is not only inefficient but brittle. “If you’re lucky enough to have one of these data scientists, you still also have this side-problem where you have a dependency on just a few individuals, which can be challenging as well,” Camber observed.

This is a structural fragility that scales badly. An organisation with ten AI applications, each dependent on a specific individual’s expertise, has not built a manufacturing AI capability. It has built ten single points of failure. When that individual leaves, changes role or is stretched across too many simultaneous projects, the applications stall. When a new production line comes online and the AI system needs retraining on new data, the queue for specialist time grows longer.

Mike Wilson, Chief Automation Officer, Manufacturing Technology Centre

Camber’s proposed response is explicit: broader organisational involvement, tools designed for domain experts rather than data scientists, and the kind of transparency that enables an engineer on the line to understand and iterate a visual model without knowing what an epoch is. “We believe that it would be easier to build trust in the technology across the organisation if we can get more people involved,” he said.

The problem-first imperative

The third dimension of the scaling challenge is perhaps the most behavioural, and it is the one that Mike Wilson, Chief Automation Officer at the Manufacturing Technology Centre, is best placed to describe.

Wilson brings more than 40 years of experience to this question. He is a Visiting Professor of Robotics and Automation at Loughborough University, Chair of the UK Automation Forum and a General Assembly member of the International Federation of Robotics. His vantage point is not that of the vendor or the OEM. It is that of an organisation sitting between all of those layers, working with manufacturers to understand precisely what they are trying to achieve before any solution is prescribed.

Wilson’s framing of how the MTC approaches automation problems is worth quoting at length: “It’s not so much about the technology. It is more about how to solve some fairly fundamental kinds of problems... Our approach is not necessarily driven by the technology, it’s about finding the answer to that particular problem.”

This is a deceptively simple statement with significant implications. The failure mode Wilson is identifying is not technological. It is methodological. Organisations that start with a technology and then look for a problem to apply it to tend to generate pilots. Organisations that start with a clearly defined operational problem and then ask what technology could address it tend to generate solutions.

Our approach is not necessarily driven by the technology. It’s about finding the answer to that particular problem [of automation]

Mike Wilson, Chief Automation Officer, Manufacturing Technology Centre

The distinction is not merely philosophical. It affects how a use case is funded, how its success is measured and, critically, who owns it once the pilot phase ends. A pilot conceived as a demonstration of what AI can do has no natural owner in the operational business. A solution conceived to reduce scrap rates in the paintshop by a specified amount has a stakeholder, a metric and a reason to survive.

Governance as the missing architecture

None of this means that technology choices are irrelevant. But the evidence from this discussion suggests that the most consequential decisions in automotive AI are not about algorithms. They are about governance.

Engineer standing between orange robotic arms in an automotive factory.
Andreas Kühne, Program Manager for Artificial Intelligence in Production and Logistics, Audi

Audi’s approach to this is instructive. With over 100 active initiatives across production and logistics, the risk of fragmentation is not theoretical. Kühne’s team manages it through a centralised portfolio model combined with what Audi calls an “agile use case journey”, a maturity framework that grades each initiative against its readiness for serious production. The framework distinguishes between rough ideas, technically validated concepts, pilots with demonstrated business benefit and cases ready for plant-wide rollout.

But the governance mechanism Kühne identifies as most critical is not the maturity framework. It is the commitment structure. “When we bring it to serious production, it is a commitment from, for example, the press shop or the paint shop, over all the production plants that are saying, ‘okay, this is the use case that we want, and we want to bring it into - not only one plant - but all of the plants,’” he explained. “Because this is one of the prerequisites we are saying has to be achieved. Otherwise, we generally find the use cases are not scaling.”

Otherwise, we generally find the use cases are not scaling

Andreas Kühne, Program Manager for Artificial Intelligence in Production and Logistics, Audi

That is a governance insight of considerable practical importance. An AI use case piloted in one plant and approved for rollout by the central portfolio team is still not guaranteed to scale unless the relevant business owner has explicitly committed to deploying it across all relevant sites. Without that cross-plant commitment, the default is always fragmentation.

The same logic extends to architecture. Kühne’s team applies governance at the platform level as well as the use case level, ensuring that new AI solutions are built on existing patterns and core products rather than creating bespoke infrastructure that duplicates capability already available elsewhere. The cost of not doing this compounds over time. An organisation managing fifty individually architected AI solutions faces a vastly more difficult maintenance challenge than one running fifty use cases on a common platform.

On the question of who should own AI in the factory, Kühne’s position is unambiguous. “In particular for digitalisation and AI, it’s a team sport,” he said. “You have to bring the people together, you also have to have common ownership and move it forward from that point.” Responsibility for platforms and enablers sits with IT; responsibility for individual use cases sits with the business function that owns the problem. Neither can operate in isolation.

What the leaders actually do differently

The picture that emerges from this discussion is not one of a technology industry struggling to demonstrate value. It is one of a manufacturing sector that has demonstrated value at the margins and is now confronting the organisational challenge of making that value systematic.

The leaders are not distinguished primarily by the sophistication of their algorithms. They are distinguished by the maturity of the organisational structures surrounding those algorithms. Centralised portfolio governance. Cross-plant business ownership. A consistent data and architecture layer that prevents each new use case from becoming its own bespoke integration project. And a deliberate investment in broadening the base of people who can work with AI tools, reducing the concentration of expertise that makes scaling so fragile.

Audi’s participation in the Innovation Park Artificial Intelligence in Heilbronn, one of Europe’s largest AI networks, reflects the same logic applied externally. “In particular for digitalisation and AI, we need expertise inside of Audi, but also we need expertise and impulses from the outside - because the world is moving so fast,” Kühne explained. The ecosystem provides early access to research, collaboration with peers facing analogous challenges and, critically, an independent read on what generative AI is ready for in production and what it is not.

For smaller manufacturers, the picture is different. Wilson notes that the governance challenges Kühne describes are largely a feature of organisations above a certain scale. Many smaller players are not yet past the starting line. The industry does not have a single scaling problem. It has at least two: the large OEM challenge of governing complexity across hundreds of initiatives and dozens of plants, and the smaller manufacturer challenge of knowing where to begin at all.

They’re not businesses there to develop AI, they’re there to use it as a tool

Mike Wilson, Chief Automation Officer, Manufacturing Technology Centre

Wilson frames the underlying principle with characteristic directness: “All the businesses are there. They’re not businesses there to develop AI, they’re there to use it as a tool. And therefore, it’s about ensuring that whatever you do is producing the right kind of outputs or actions that actually are improving the business.”

That is, in its way, the most clarifying statement in this entire conversation. AI is not a strategy. It is an instrument for executing one. Organisations that treat it as the former tend to accumulate pilots. Organisations that treat it as the latter tend to build plants.

The transition from one mode to the other is not a technology upgrade. It is, in the fullest sense, an organisational one. And it is, by any honest assessment, one that the majority of automotive manufacturers have yet to complete. The question for 2026 is not whether the technology is ready. The question is whether the organisations are.