AI in Truck Production

Key insights from Daimler Truck’s Twin4Trucks project
Daimler Truck’s Twin4Trucks project has provided significant insights into AI-driven data consistency in truck production. Discover how its successes and challenges in AI, 5G, and local solutions are shaping the company’s roadmap for future manufacturing advancements.

Mr. Brümmer, the Twin4Trucks research project ends this year. Looking back on the three years of research and practical application, what was the most important finding for you?
A project of this magnitude with many partners is anything but trivial. It takes time for the consortium to find its footing and get into a productive working mode. At the same time, our use cases developed very dynamically, which ensured that we made excellent use of the project time. Where adjustments were needed, it wasn't a question of more content or more budget, but of a sensible extension of the timeline. What has been clearly confirmed for me is that cooperation of this kind and with this setup is extremely valuable.
And which expectations were not met?
In terms of content, there were certainly differences in the depth of development. Some topics, such as the use of AI in visual data processing, developed very well and quickly. Others, however, such as 5G in industrial settings or positioning systems, proved to be significantly slower. It took time to find suitable partners here, and implementation also revealed that we had achieved significantly less maturity and speed than originally hoped.
Another topic where expectations and reality still diverge is Gaia-X. Within the consortium, we quickly agreed that politically high expectations exist, but that these cannot yet be fulfilled in practical implementation. The maturity of the systems and the degree of standardisation suggest that a considerable amount of groundwork is still necessary here – more than we originally assumed.
In an earlier conversation with AMS' sister publication, Automobil Produktion, a good year and a half ago, you emphasised that you really want to use the data you've collected. How has the Digital Foundation Layer specifically contributed to process improvements in Wörth – and how measurable are these effects?
The effects are always most clearly measurable in the specific use cases. We have several demonstrators in real-world applications, such as data monitoring via camera, which has never existed in this form before. The data collected can be used directly for process optimisation. Less tangible, but equally important, is the question of how we link and make data available across multiple use cases. This is a key enabler for more efficient work, but more difficult to evaluate quantitatively. The benefits become most apparent when considering the concrete effects on production: reduced manual interventions, improved production times, and less workload for employees.
For example, in the "Q-Tor" use case, automated monitoring eliminates the previous manual screw testing. This saves effort and increases process reliability. The basis for this has been laid. What is still missing are stronger, cross-cutting data connections; there is a lot of potential there that we still need to tap.
You've already mentioned a milestone in the project's progress: the Q-Tor. How mature is this system today?
The Q-Tor is a complex system that analyses high-resolution video data. After approximately three years of development, it has now reached a high level of maturity. Depending on the selected sensitivity, false alarms still occur, but the accuracy for detecting actual errors is now very high. User confidence in the technology has grown accordingly. While it is still a demonstrator, meaning it is not yet in widespread use, the technical capabilities are there.
I would also like to particularly highlight the front end: The user interface is very user-friendly and practical, which facilitates integration into everyday operations. Our impression is that the system is now absolutely suitable for reliably and efficiently solving problems related to automated screw inspection.
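
Brümmer's point about sensitivity and false alarms is the classic detection trade-off. As a purely illustrative sketch (Twin4Trucks has not published Q-Tor code; the data structure and thresholds below are assumptions), a higher sensitivity flags more joints for review, missing fewer real defects at the cost of more false alarms:

```python
# Illustrative only: not the Q-Tor implementation. A screw joint is
# flagged for manual acknowledgement when the model's confidence that
# the joint is correctly fastened falls below a chosen threshold.

def flag_suspect_screws(detections: dict[str, float], threshold: float = 0.7) -> list[str]:
    """Return IDs of screw joints whose 'ok' confidence is below the threshold."""
    return [screw_id for screw_id, ok_conf in detections.items() if ok_conf < threshold]

# Example: a stricter threshold flags more joints (fewer misses, more false alarms).
detections = {"A1": 0.98, "B7": 0.42, "C3": 0.91}
print(flag_suspect_screws(detections, threshold=0.7))   # ['B7']
print(flag_suspect_screws(detections, threshold=0.95))  # ['B7', 'C3']
```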
How do staff react to “algorithmic quality control”?
We're seeing our employees increasingly work with digital information, such as visualised assembly processes. This creates a certain familiarity with systems like the Q-Tor. It's by no means an isolated solution, but rather integrates seamlessly into existing processes. Acceptance is therefore very high. One reason for this is its ease of use. The information is displayed on a screen, and employees can immediately understand, acknowledge, and, if necessary, rework errors. Only relevant data is displayed, which improves clarity.
The high hit rate also contributes to acceptance. We haven't received any negative feedback; no one perceives the system as an alien element. On the contrary: it replaces a traditional, non-value-adding, and potentially tiring activity – manual inspection – with a significantly safer, automated solution.
Do you need to feed the AI models with synthetic images due to the low error rate? How robust are such models in reality?
The diversity of real data is sufficient to train our AI models without synthetic data. The topic of synthetic data is nevertheless extremely exciting. Together with the DFKI, we investigated an alternative approach and published a paper on it. In a simulated demonstrator, real images and images generated from CAD models were combined. The results clearly show: Combining synthetic and real data increases both learning speed and accuracy.
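
The mechanics of such a combination are straightforward. A minimal sketch, assuming PyTorch/torchvision with illustrative folder paths (this is not the DFKI/Daimler Truck pipeline): real photographs and CAD-rendered images that share the same class layout can simply be concatenated into one training set.

```python
# Minimal sketch: mixing real and CAD-rendered images into one training
# set. Folder paths, image size, and batch size are illustrative.
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Both folders use the ImageFolder convention: one subfolder per class.
real = datasets.ImageFolder("data/real", transform=transform)
synthetic = datasets.ImageFolder("data/synthetic", transform=transform)

# The combined dataset feeds a standard training loop unchanged.
train_loader = DataLoader(ConcatDataset([real, synthetic]),
                          batch_size=32, shuffle=True)
```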
So will synthetic data also be used at Daimler Truck in the future?
For us, the classic approach using real images was successful, even if stabilising the system may have taken a little longer. Nevertheless, one thing is clear: In future use cases, especially for rare error patterns, the use of synthetic data is highly relevant. And this isn't just important for us. The results are also interesting for small and medium-sized enterprises because they enable the implementation of projects that might previously have seemed unrealistic. This was one of the surprising discoveries of the project for us, especially in the area of visual detection.
This brings us to the topic of scaling. Which use cases do you consider suitable for series production? And what time horizon do you see for this?
Scaling is proceeding in two directions. Firstly, we are discovering new, similar use cases based on existing ones – a nice side effect of the project. For example, we now also check the completeness of components using camera tunnels. A particularly interesting case is the automated control of tire rolling direction. Incorrectly mounted tires can also become a problem in the passenger car sector, but the situation is even more complex for trucks.
The profiles differ considerably depending on the type, making manual inspection difficult. Here, too, we now have a functioning demonstrator with camera monitoring. Secondly, the existing systems – such as the Q-Tor – have reached a very high level of technical maturity. It is now basically just a question of resources to realise the rollout. After the project is completed, we want to implement the Q-Tor on all three assembly lines at the Wörth plant as soon as possible.
Do you feel pressure from management to quickly bring such innovations into series production given the current particularly high cost and efficiency pressures?
I would describe it more as a great deal of interest. The impetus for rapid implementation stems primarily from the maturity of the technologies. If we can demonstrate that a system like the Q-Tor is technically mature and delivers measurable efficiency gains, then there is naturally also a business obligation to put it into practice.
A key insight from the project is that it doesn't always make sense to collect all data and store it on a large scale in the cloud. Some issues can be efficiently resolved locally at the point of origin.
Ekkehard Brümmer, Daimler Truck
The primary goal of Twin4Trucks is interoperability and universal data availability. The project involves many partners from various industries with varying innovation speeds. What lessons have you learned from this collaboration?
The key challenge lies in striking a balance: On the one hand, the partners must act independently and assume responsibility in their respective areas, while on the other, bridges between individual interests must be carefully built. This was not trivial at the beginning, especially with new players and in unfamiliar working constellations. But it was to be expected that a common approach would have to develop first.
Once this approach is found, the advantages become apparent: The partners can work on specific topics in great depth – such as data architecture or the topic of Gaia-X together with Eviden – and at the same time contribute their expertise to the practical implementation of concrete use cases. Each consortium partner is responsible for its project scope, receives independent funding, and must provide its own evidence. This structure creates clear responsibilities, but still allows for the necessary networking and collaboration at the interfaces. I find this to be a very functional and helpful logic.
Speaking of Industrial Edge Cloud: Another important aspect of the project was the ability to process data largely locally with the help of partners like Pfalzkom. Do you see such an approach as a model for the entire industry, for example, to break away from dependence on US hyperscalers?
That's difficult to answer in general terms. One key finding from the project is that it doesn't always make sense to collect all the data and store it on a large scale in the cloud. Some questions, such as the correct tire rolling direction, can be answered efficiently and locally at the point of origin. If such a use case already creates concrete added value locally, central data storage is not necessarily required. Of course, this data can also be stored centrally. But if no immediate benefit is apparent, the need for it may not even arise. In such cases, it depends on the company's strategy: how much do I want to keep locally in-house? And what do I really want to transfer to larger cloud systems?
What we definitely learned in the project: There are extremely competent local partners like Pfalzkom who already serve large customers and offer suitable infrastructures. For example, we visited one of the partner's local data centers and were impressed by its performance. This builds trust and opens up real alternatives to using global hyperscalers.
Large edge cloud solutions, on the other hand, are usually needed for particularly data-intensive or cross-company applications. This is where Gaia-X comes into play again – for example, when we want to exchange data with other players. It is therefore crucial to carefully consider where external data storage is actually required and which partner is the right one for the job.
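
The local-first pattern Brümmer describes can be summarised in a few lines. The sketch below is a simplified assumption, not the Wörth implementation: the image is analysed at the edge, raw data stays on-site, and only a compact verdict is forwarded upstream when a central consumer actually needs it.

```python
# Simplified local-first sketch (illustrative, not the plant system):
# inference runs at the edge; only small result records ever leave.
import json
from datetime import datetime, timezone

def classify_rolling_direction(image_bytes: bytes) -> str:
    """Placeholder for the on-site vision model ('ok' or 'reversed')."""
    return "ok"  # a real deployment would run camera-tunnel inference here

def send_upstream(record: dict) -> None:
    """Hypothetical uplink, e.g. to a central analytics store."""
    print("forwarding:", json.dumps(record))

def inspect_tire(image_bytes: bytes, station: str, forward_exceptions: bool) -> dict:
    result = {
        "station": station,
        "verdict": classify_rolling_direction(image_bytes),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Raw images never leave the plant; only non-ok verdicts are forwarded.
    if forward_exceptions and result["verdict"] != "ok":
        send_upstream(result)
    return result

print(inspect_tire(b"", station="assembly-line-1", forward_exceptions=True)["verdict"])
```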
Twin4Trucks would certainly also be a good use case for the Catena-X ecosystem.
Definitely. We're monitoring developments there very closely. In the project, we've worked intensively on data exchange, data spaces, and European standards – especially Gaia-X. Partners like Pfalzkom and Eviden have been particularly active here and have contributed in-depth expertise. We've already tested initial use cases in which data was exchanged between us and a supplier. This was implemented exemplarily using SmartFactory KL as a virtual partner. We built an asset administration shell for simple data and exchanged it bidirectionally.
In my view, this is a highly relevant future topic that will become significantly more important. However, it is also clear that there is still a lot to do before practical standards are established and data spaces are truly used on a large scale by OEMs and suppliers. Therefore, we have decided to further pursue the Catena-X topic in detail in the post-project phase. For now, the focus is on further testing our existing use cases and bringing them to technical maturity.
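
To make the administration-shell exchange concrete, here is a deliberately simplified sketch. The endpoint, idShort names, and property set are hypothetical, not the Twin4Trucks/SmartFactory KL artefacts; it only shows that the same HTTP interface serves both directions of the exchange.

```python
# Hypothetical sketch of a simple submodel exchange; endpoint and
# payload are illustrative, not the actual Twin4Trucks implementation.
import requests

submodel = {
    "idShort": "TireAssemblyData",
    "submodelElements": [
        {"idShort": "RollingDirection", "valueType": "string", "value": "ok"},
        {"idShort": "TorqueNm", "valueType": "double", "value": "410.0"},
    ],
}

BASE = "https://aas.example.com/api"  # hypothetical partner endpoint

# Push our submodel to the partner's repository ...
requests.put(f"{BASE}/submodels/TireAssemblyData", json=submodel, timeout=5)

# ... and read theirs back over the same interface: this symmetry is
# what "bidirectional exchange" means in practice.
theirs = requests.get(f"{BASE}/submodels/SupplierQualityData", timeout=5).json()
print(theirs.get("idShort"))
```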
Twin4Trucks is ending – but the topic of data consistency remains. What follow-up initiatives, transfer projects, or internal roadmaps are currently being developed at Daimler Truck based on the findings from this project?
Several topics will continue to occupy us beyond the end of the project. The question of how quickly and practically AI-based solutions can be implemented – with manageable use of resources – is a key insight gained. The topic of 5G, which we addressed in the project, will also be pursued further – particularly with regard to positioning quality. We have only reached an interim result here so far. We will closely monitor how the provider landscape develops and who is a potential partner for the next phase. One focus in the future will be on accelerating implementation and transferring concrete use cases into everyday production. The project has opened new doors in many areas, for example in the interaction of AI and data availability.

In Person:
Dr. Ekkehard Brümmer is currently Senior Manager Manufacturing Engineering at Daimler Truck. Before the spin-off of the truck subsidiary, Brümmer spent 13 years within the Daimler Group as an expert in production and quality management. Between 2009 and 2015, he held the position of Senior Manager Quality Management at various German locations and in São Bernardo do Campo, Brazil.
In 2015, he moved to the truck plant in Wörth am Rhein as Senior Manager Manufacturing Engineering. Brümmer studied mechanical engineering at the Technical University of Darmstadt and economics at the University of St. Gallen.