Rob Ferrone’s and Oleg Shilovitsky’s LinkedIn bromance 😉 has resulted in an interesting post by Oleg: How to Build PLM Applications We Love Using AI. His premise (my interpretation) is to build Minimum Loveable PLM applications as small applications supported by AI that fulfill a specific purpose using data from a shared data platform. No MVPs, but MLPs!
Around the same time, Michael Finocchario published The Agentic AI Revolution: Reimagining PLM as a Flexible Microservices Ecosystem, in which he explains how Agentic AI will become the orchestrators across existing monolithic systems. While I believe that this would be the first step, it will not end here. Once you have these agentic AI applications, the next step is to consolidate toward a single open data platform. But to do that, we need to expand the scope beyond PLM.
Not just PLM, all Enterprise Applications
Taking this beyond PLM and applying it to all enterprise applications would be a completely new way of looking at the enterprise application landscape. Instead of large siloed applications, each with its own database, for PLM, ERP, CRM, etc., you will have one large Open Data Platform that lets you connect small, intuitive, purpose-oriented applications supported by agentic AI.
Open Data Platform
In the past, we started out creating point-to-point interfaces, which resulted in a giant web of bespoke connections. With the introduction of the Enterprise Service Bus, applications could be hooked up to the bus, which reduced the number of interfaces significantly. However, in both cases, each application still had its own database and required a lot of bespoke data transformations. This resulted in delayed synchronization between applications; real-time data sharing was still not possible. The open data platform differs from the Enterprise Service Bus in that it replaces the application-specific databases, making the platform the single source. This makes the data instantly available to all applications, and because the data is much more standardized, it is easier to reuse in other applications. Note that while the total number of interfaces will not decrease, and may even increase, the interfaces will be more standardized and therefore easier to maintain.
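A rough way to see why the bus (and later the platform) reduces integration effort is simple counting: with point-to-point integration, every pair of applications needs its own bespoke interface, while a hub topology needs only one connection per application. The sketch below is purely illustrative arithmetic, not part of any product.

```python
def point_to_point_interfaces(n_apps: int) -> int:
    """Bespoke interfaces needed when every application syncs
    directly with every other one: one per application pair."""
    return n_apps * (n_apps - 1) // 2

def hub_interfaces(n_apps: int) -> int:
    """Standardized interfaces needed when every application
    connects only to a shared hub (service bus or data platform)."""
    return n_apps

# With 20 applications: 190 bespoke interfaces vs 20 standardized ones.
for n in (5, 10, 20):
    print(n, point_to_point_interfaces(n), hub_interfaces(n))
```

Note that the open data platform scenario replaces a few monoliths with many small applications, so `n_apps` itself grows, which is exactly why the total interface count can increase even though each interface is standardized and cheap to maintain.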

Such an open data platform must be very robust, flexible, and fast. To make this work, it must provide standard functions that all applications reuse, such as a security model for all the data, version management, workflow management, blockchain, data audit, the CM Baseline, etc. This ensures data integrity and guarantees that data can only be accessed by the right people and applications. Having a CM Baseline will be foundational.
What is the CM Baseline?
In traditional Configuration Management, a baseline is defined as an approved, stable configuration of a system or product at a specific point in time. It acts as a reference point for future development, maintenance, and audits.
CM2 (Institute for Process Excellence) elevates this concept, emphasizing continuous iteration, traceability, and adaptability, which are essential characteristics for guiding agentic AI.
The CM2 Baseline isn’t a static blueprint. It’s a dynamic, living reference that evolves in tandem with each change made. This adaptability ensures that AI systems can innovate within safe, controlled parameters. The CM Baseline is based on the CM2 Baseline:
“This baseline not only contains the latest Released and the Effective configuration, but it also contains all planned changes against the configuration (in context). And not just as a tag that the item/node or dataset will change, it will allow you to show how the delta will impact the item, its structure, and its related datasets.”
CM2 – Institute for Process Excellence
In other words, you do not only look at the current state; you also consider the future state.
The CM Baseline is an enterprise knowledge graph that connects all the data across the enterprise and product lifecycle.
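To make the knowledge-graph idea concrete, here is a minimal sketch of enterprise data expressed as subject-predicate-object triples. All entity names (parts, products, change requests) are invented for illustration and do not come from any real system or API.

```python
# An enterprise knowledge graph, at its simplest, is a set of triples
# that semantically connect items across applications and lifecycles.
triples = [
    ("Part-100", "is_used_in", "Product-A"),       # hypothetical PLM data
    ("Part-100", "has_document", "Spec-001"),      # hypothetical document link
    ("ChangeRequest-42", "modifies", "Part-100"),  # a planned change, in context
    ("ChangeRequest-42", "has_status", "planned"),
]

def related(entity, triples):
    """Everything directly connected to an entity, in either direction."""
    return [(s, p, o) for (s, p, o) in triples if entity in (s, o)]

# Impact of a planned change: from Part-100 we can reach both the product
# it is used in and the document it carries, plus the pending change.
for s, p, o in related("Part-100", triples):
    print(s, p, o)
```

This also shows why the CM Baseline can carry planned changes "in context": the change request is just another node in the graph, connected to the items, structures, and datasets it will affect.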

To read more about the CM Baseline and its components, please check out the previous posts from the CM Baseline series:
Agentic AI-supported Applications
Configuration Management has traditionally been the backbone of systematic control over an organization’s assets and their changes throughout the system lifecycle. As defined by the SAE-EIA-649 standard, CM encompasses planning, identification, change management, status accounting, and verification and audit disciplines that ensure system integrity and traceability.
However, the rise of artificial intelligence, particularly agentic AI that can perceive, decide, and act autonomously, introduces new complexities and opportunities. These autonomous systems require robust baseline configurations to operate effectively within defined parameters while adapting to changing environments.
So, let’s start with what agentic AI is:
“An AI that uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.”
Nvidia
In other words, agentic AI can automate a lot of tasks by analyzing data, finding patterns, and taking the appropriate action. Here are some examples of what an Agentic AI could do:
- Handle Customer Feedback/Complaints: An Agentic AI could receive and process customer feedback or complaints, identify the products and parts affected, raise an investigation request for a root cause analysis, and help facilitate the entire change process from start to finish.
- Support Impact Analysis: An Agentic AI could support the impact analysis of a change request. Read also: Applying AI/ML to Configuration Management
- Support Implementation Planning: An Agentic AI could support the implementation planning of a change by identifying the best possible cut-in date.
- Track progress: An Agentic AI could actively track progress and warn stakeholders of potential issues.
- Find alternative parts after end-of-life notifications: An Agentic AI could proactively search for alternative parts the moment an end-of-life notification is received, create the change request, and analyze the impact of the alternative part.
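The tasks above all follow the same perceive-decide-act pattern. The sketch below reduces the "reasoning" step to a simple rule table so the loop is visible; the event types, fields, and actions are illustrative assumptions, not a real agent framework.

```python
# Minimal perceive-decide-act loop for an agentic AI.
# Event and action names are hypothetical examples from the bullets above.

def decide(event):
    """Map a perceived event to the next action. A real agentic AI would
    use sophisticated reasoning and iterative planning here; this rule
    table only illustrates the shape of the loop."""
    if event["type"] == "eol_notification":
        return {"action": "find_alternatives", "part": event["part"]}
    if event["type"] == "customer_complaint":
        return {"action": "raise_investigation", "product": event["product"]}
    return {"action": "ignore"}

def run_agent(events):
    """Process a stream of perceived events and collect resulting actions."""
    return [decide(e) for e in events]

actions = run_agent([
    {"type": "eol_notification", "part": "Part-100"},
    {"type": "customer_complaint", "product": "Product-A"},
])
```

The interesting part is what `decide` consults: with an open data platform underneath, the agent can resolve "which products use Part-100?" from the same source every other application uses, instead of a stale copy.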
How does Agentic AI benefit from an Open Data Platform?
Because all the data across the enterprise is available in real time from a single source, and an enterprise knowledge graph semantically connects all this data, agentic AI-supported applications will be able to work for you. The knowledge graph helps prevent the AI from hallucinating and drifting. AI model drift happens when a model's performance degrades over time due to evolving data patterns. A CM Baseline ensures that organizations can provide a clear audit trail for AI models, showing exactly how and why decisions were made.
The CM Baseline becomes an enabler as it:
- defines the boundaries for safe operation with its semantic Enterprise Knowledge Graph;
- enables reasoning and therefore preserves traceability;
- supports the management of the AIs' learning environments;
- ensures version control that allows all changes made by the AI to be traced.
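The audit-trail and version-control points can be sketched very simply: every decision an agent makes is recorded against the baseline version it was made from, so you can later show exactly how and why a decision was made. The field names and baseline identifiers below are assumptions for illustration only.

```python
from datetime import datetime, timezone

# Append-only record of AI decisions, each tied to a baseline version.
audit_trail = []

def record_decision(baseline_version, agent, decision, rationale):
    """Log one agent decision against the CM Baseline version it used,
    so the decision can be audited and reproduced later."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "baseline_version": baseline_version,  # hypothetical identifier
        "agent": agent,
        "decision": decision,
        "rationale": rationale,
    }
    audit_trail.append(entry)
    return entry

record_decision(
    "BL-2025.06", "eol-agent",
    "propose Part-200 as alternative for Part-100",
    "Part-100 reached end of life; Part-200 matches form, fit, and function",
)
```

Because each entry names the baseline version, a later audit can replay the decision against that exact configuration, which is what distinguishes a controlled learning environment from an unaccountable one.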
The agentic AIs have all the data available to take the right actions at the right time and to keep learning new patterns, while you can focus on the really important work. While this is still a very futuristic outlook, it starts with data, the way we connect it, and how we ensure its integrity. That is why starting to build your CM Baseline or Enterprise Knowledge Graph is so crucial.
What do you think?
For more futuristic outlooks, also check out: