Nissan's AI vehicle testing
Nissan is using AI to support engineering teams in accelerating vehicle testing
Nissan’s collaboration with Monolith AI is speeding up vehicle validation while keeping engineers firmly in the loop, reshaping how the OEM approaches testing in an era of rising vehicle complexity and electrification.
The Nissan–Monolith collaboration set out to accelerate and de‑risk vehicle testing by combining Nissan’s deep institutional knowledge and extensive historical test data with Monolith’s machine‑learning platform and engineering‑facing deployment approach. Emma Deutsch, Director of Customer Orientated Engineering and Test Operations at Nissan Technical Centre Europe, framed the initial problem as a convergence of rising vehicle complexity, increasing test costs and the shift to EVs: Nissan needed “a solution of how we were going to achieve that” while ensuring the organisation did not lose core engineering capability. That concern about preserving tacit knowledge and judgement informed the project’s selection, its scope and how success would be judged.
From the outset Nissan deliberately chose an entry point with strong, well understood data and an engineering team who could act as domain experts. The first project, a bolt‑tightening study, was selected because the data and subject‑matter expertise were robust and accessible. Nissan’s approach was not to hand over design or decision‑making to an algorithm but to use AI as an assistive tool embedded within existing workflows. As Deutsch explained, Nissan sought to introduce AI “in a way that I didn’t lose what is our biggest asset here and that that’s our engineering community.” Monolith’s team worked closely onsite with engineers and data scientists to curate the input data, iterate models and reconcile outputs with engineering intuition.
Monolith’s framing of the technology reinforced that collaborative stance. Sam Emeny‑Smith, Head of Automotive at Monolith, emphasised that the aim is to help engineers prioritise higher‑value work rather than replace them. His analogy was deliberately conservative: “AI is not really your Google Maps on your phone, but actually it’s more of a compass.” He also stressed that while model architectures and algorithms matter, preparing and curating the right data and embedding expert knowledge are where most of the project effort and value lie: “the model is actually the easy bit” once the appropriate data and domain expertise are in place.
The combination of Nissan’s long test archives and Monolith’s explainable outputs produced tangible early success. Deutsch cited having decades of institutional records – “We have 90 years of data” – which allowed Monolith to train and validate models in ways newer OEMs cannot easily replicate. Crucially, Monolith delivered explainable features and contribution weightings that engineers could inspect without needing to interpret model internals. Deutsch highlighted that the platform shows “what the biggest influence is” and the “weight behind it,” enabling fast expert validation and rapid organisational buy‑in. That explainability and the close collaboration led to a validated outcome the team reported as “at least 96% true,” a result that materially reduced scepticism and spurred further projects.
Beyond that initial win, the program prompted Nissan to rethink instrumentation and data practices across test programs. Teams have combined virtual and physical sensor strategies and begun cross‑functional standardisation, so tests produce datasets useful in multiple contexts. Deutsch described adopting a Formula‑1 mentality – collecting comprehensive data but being disciplined about what to retain and what to discard – asking “what data do you keep? What do you throw away?” That question is now central to how Nissan scales the approach across durability, ADR and chassis programmes.
Cross‑functional collaboration
Nissan’s work with Monolith illustrates how cross‑functional collaboration and a disciplined data strategy can unlock faster, more reliable engineering decisions without sacrificing institutional knowledge.
To meet that constraint Nissan reorganised testing and engineering responsibilities to encourage cross‑functional working. Deutsch explained that her remit brought together powertrain, electrical, body and business functions and that the organisation ran events across sites so teams could “connect” and see how different projects could be reused or adapted across disciplines. Those events and resulting conversations drove a clear pattern: investments in tools and data collection were intentionally shared rather than siloed. A practical example was the deployment of scanners originally purchased for crash‑test workflows; once introduced, they were used across seats, body and styling teams, multiplying the return on capital and spreading skills among technicians. Deutsch emphasised that technicians rotated across sections, carrying knowledge and practice with them, which made cross‑functional adoption more sustainable.
Adapting the data strategy
The data strategy evolved in tandem with organisational change. Nissan leveraged decades of historical records, giving the company a significant foundation for model training. That legacy prompted a dual focus: instrument more comprehensively where necessary and be more selective about what to retain.
Monolith’s role reinforced the human‑centred deployment of analytics. Emeny‑Smith emphasised that the technical work was embedded within engineering workflows and subject‑matter expertise. That framing guided decisions about explainability and verification: models needed to present contributions and weights in ways engineers could readily inspect and validate. Deutsch highlighted that capability, noting the platform allowed users to see “what the biggest influence is” and the “weight behind it,” enabling quick expert checks without requiring engineers to become data scientists.
Practical collaboration methods mattered. Monolith’s data scientists sat alongside Nissan experts to iterate model inputs and outputs; when initial models faltered, expert feedback directly informed adjustments until predictions aligned with engineering intuition.
Monolith’s approach and project implementation
Monolith’s work with Nissan combined this engineering‑first philosophy with pragmatic machine‑learning application. From the outset the engagement was framed as a collaborative, expert‑in‑the‑loop programme rather than a vendor drop‑in.
Project scoping favoured domains with dense, trustworthy data and strong domain experts. Nissan and Monolith chose an initial demonstrator that met those criteria: a bolt‑tightening study where historical measurements and engineering knowledge were plentiful. Monolith’s team embedded data scientists with Nissan engineers to curate inputs, iterate models and translate domain intuition into model features, reconciling model outputs with engineering experience until predictions aligned with expectations. This close co‑location and iterative loop allowed the team to spot missing signals, remove misleading records and refine feature engineering in ways remote or purely technical workflows could not.
Monolith’s technical posture emphasised explainability, robust deployment and targeted problem selection over flashy generative capabilities. Data curation was central to implementation. Monolith recognised that models are only useful when trained on representative, well‑managed data. Emeny‑Smith noted that much of the programme’s work was upstream: locating, cleaning and structuring decades of test records, combining CAN‑bus captures with physical sensor data and identifying where virtual sensors could replace intrusive instrumentation.
Validation and trust building were practical priorities. Early models struggled until engineers and Monolith data scientists iterated together. This human‑in‑the‑loop process built confidence and produced the validated outcome the team reported as “at least 96% true.”
Deployment considerations shaped the product roadmap. Beyond building accurate models, Monolith focused on delivering outputs that non‑data‑scientist engineers could use day‑to‑day: dashboards with clear contributors, integration into existing workflows, and guidance for ongoing data governance. Emeny‑Smith highlighted deployment as the toughest part of the process – translating mathematical models into resilient decision tools that handle messy, changing operational data and that engineers will trust and use.
Evolving data collection and future potential
Nissan’s partnership with Monolith has shifted the company from siloed, department‑level testing toward a deliberate, cross‑functional data strategy designed to accelerate validation while protecting institutional engineering knowledge.
A first practical change has been purposeful instrumentation and consolidation. Nissan moved from many teams independently capturing overlapping measurements to joining datasets so a single test can feed multiple validation objectives.
Nissan’s long archives provided a distinct advantage in early modelling work. That historical depth, however, also revealed typical legacy challenges: gaps, inconsistent labels, discarded “bad” runs and nonstandard formats. Addressing these issues required significant upstream curation – identifying which historical streams to retain, reformulating older logs into coherent features and deliberately preserving negative examples so models learn both good and bad system behaviours.
Looking ahead, both partners see broad potential tempered by governance needs. Technical avenues include richer CAN‑bus integration, hybrid physical/virtual sensor strategies, and agentic or natural‑language interfaces that let engineers query models in familiar terms. Practically, progress depends on codifying which data streams are maintained at scale, how to compress and archive representative samples, and how to version and monitor models against shifting vehicle software and hardware baselines.
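Monitoring models against shifting vehicle software and hardware baselines usually starts with a drift check on the input channels. The sketch below is a generic illustration with made‑up numbers – not a description of Nissan’s or Monolith’s tooling – flagging when a channel’s incoming mean moves too far from the statistics captured at training time:

```python
from statistics import mean

# Hypothetical baseline statistics captured when the model was trained.
baseline = {"mean": 50.0, "stdev": 5.0}

def drifted(new_samples: list[float], z_threshold: float = 3.0) -> bool:
    """Flag when the new mean sits more than z_threshold standard
    errors away from the training-time mean."""
    stderr = baseline["stdev"] / len(new_samples) ** 0.5
    return abs(mean(new_samples) - baseline["mean"]) > z_threshold * stderr

# A vehicle software update shifts the signal; the monitor should
# pass the first batch and flag the second.
print(drifted([50.3, 49.1, 51.2, 50.8, 49.5]))  # close to baseline
print(drifted([58.9, 60.2, 59.4, 61.0, 58.7]))  # shifted distribution
```

Production monitors compare full distributions per channel and tie alerts to model versions, but the principle is the one sketched here: detect when today’s vehicles no longer look like the data the model was trained on.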
Operational deployment remains a core focus: models must be resilient to changing operational conditions and produce outputs that integrate into daily engineering decision cycles. Emeny‑Smith noted that refining interfaces and deployment pathways is ongoing work; delivering robust, engineer‑facing workflows that tolerate imperfect data and evolving vehicle architectures will determine where the approach can be scaled most effectively.
Strategic outlook
This project has matured from a proof‑of‑concept into a strategic programme that balances speed, cost and the preservation of engineering expertise. Crucially, adoption must be consistent with organisational culture and capability; Deutsch insisted AI be introduced in a manner that retained the experience and expertise of the engineering teams at its core. That dual constraint – accelerate validation while safeguarding tacit knowledge – underpins the strategic outlook.
Near‑term priorities are pragmatic and operational. The company will scale projects that are high‑data, repeatable and representative across programmes so early wins can be translated into broader value. Explainability remains central, enabling rapid expert validation and executive buy‑in.
Data strategy is both an enabler and a governance challenge. Nissan’s deep institutional archives provide a competitive advantage for training robust models, but legacy collections also expose inconsistencies and omissions. The imperative now is to move from indiscriminate capture to deliberate curation.
Emeny‑Smith noted that Monolith’s strategic stance complements Nissan’s, with a focus on embedding analytics into engineering workflows rather than replacing judgement. Therefore, investments in data engineering, explainable interfaces and change management are higher‑priority than experimenting with AI model architectures alone.
Longer‑term potential depends on governance, tooling and cultural uptake. Governance must codify sensor standards, retention policies, model versioning and monitoring against evolving vehicle software and hardware baselines. Culturally, continued rotation of technicians across functions and visible executive sponsorship will be necessary to sustain cross‑functional practices and to prevent re‑siloing.
Risks to execution include data bloat, brittle models under distribution shifts, and slow adoption if explainability or integration into daily engineering decisions is inadequate. Mitigations are clear: prioritise representative data retention, keep engineers central to model iteration, and deliver transparent, easily interpretable outputs so the technology acts as a “compass” rather than an opaque oracle.
In summary, the strategic outlook is to scale AI where it demonstrably shortens validation cycles and reduces cost while institutionalising disciplined data curation and engineer‑facing deployment. By preserving engineering judgement, codifying data governance, and focusing on explainable, workflow‑integrated tools, Nissan and Monolith aim to realise sustainable, cross‑functional value across chassis, durability, ADR and beyond.