Next Disruption Please…

This blog post is based on the presentation that I gave during the CIMdata PLM Roadmap PDT Fall 2021

Are you implementing Model-Based Initiatives and wonder how this will impact your company from an end-to-end perspective? And how will these initiatives impact configuration Management? Let’s look at the disruptions Configuration Management has faced and will face, focusing on Model-Based Initiatives.

From a Configuration Management perspective, a digital file still behaves in many ways like a paper document; a model is something different.

  • What is the deliverable? How granular will a deliverable become?
  • How do you manage change in models?
  • How do you manage ownership?
  • How should CM adopt MBx?
  • And which requirements to support CM should be considered for a successful implementation of MBx?

It’s time to start unraveling these questions in search of answers.

A brief history of the evolution of CM

For a very long time, configuration information was controlled via a paper-based process, possibly on stone tablets before the invention of paper. Even when Configuration Management started to become more formally defined, from the 1950s all the way to the '80s, paper still ruled the configuration.

A shift occurred only when the PC and office applications like Lotus 1-2-3 and Microsoft Office were introduced. Instead of documenting the information directly on paper, a lot of the configuration information was recorded in digital files.

While the introduction of Office tools allowed people to easily copy, paste, and modify data, it did not really change the way configuration management was applied. Perhaps at the work-instruction level, but the overall process did not change. Signatures, for instance, were still captured on a printout, which kept the baseline a paper beast.

The introduction of tools that supported workflow management allowed us to move this paper-based process to the digital realm. Some companies were faster to adopt this than others. In some cases, it also had to do with regulations that did not immediately follow technological advancements: submitting a request to the FDA for approval of a new medical device, for example, required signed paper until the introduction of 21 CFR Part 11 in 1997.

Below you can see the results of the question about the state of automation within CM processes, for July 2020 and August 2021.

You might read that paper usage increased, but that has mainly to do with the low number of responses. It does indicate, however, that most companies still use paper, to varying degrees, as part of their CM processes. Companies ran into all kinds of issues due to the pandemic. Mostly, these issues were addressed by mitigation; companies did not have the luxury to spend time on a holistic, end-to-end solution. Rather than meeting face to face, change board meetings took place online. Or email replaced the envelope: instead of sending an approval sheet around in an envelope to everyone who needed to sign, people printed the sheet, signed it, scanned it, and forwarded it to the next person in line. It is slightly more digital, but the medium is still paper.

If not the pandemic, what will disrupt CM?

While the impact of the pandemic on CM might still take a few years to become fully visible, what are other triggers for CM to change?

There are various initiatives to extend Agile/DevOps to hardware development, where there is a disconnect with current CM practices.

If you want to use AR in the field to allow a less experienced/skilled workforce to execute the work, the configuration information needs to be near perfect. A few issues can make a digital twin worthless.

If you go from selling products to selling services, the way you look at configurations might change.

Products will be more connected, and with the introduction of AI/ML, it becomes more complex to understand the impact this will have on configuration management. See these posts: What is the configuration, if the product has an AI? and Export Control and Machine Learning.

Also, think about software over-the-air updates and how this impacts the way you manage your configuration. See also: Software Over-the-Air updates and CM.

And finally, the impact Model-Based Initiatives will have, whether it is MBD, MBSE, MBQ, MBWI, etc. See also Where does the deliverable begin and where does it end?

But to understand the impact of Model-Based Initiatives on CM, we have to go back to the purpose of CM.

Purpose of CM

An organization exists to try and fulfill its purpose. To get closer to the unobtainable purpose day by day. To do that, it needs to stay in business; it needs to keep playing the infinite game.

Staying in the game requires organizations to keep innovating. This is where CM comes in. The biggest enemies of innovation are delays, avoidable costs, and invisible risks. CM exists to help organizations stay in business and fulfill their purpose by avoiding cost, preventing delays, and making risks visible.

As described by CM2: This is achieved by managing all information that could impact safety, security, quality, schedule, cost, profit, the environment, corporate reputation or brand recognition and having processes in place that:

  • accommodate fast & efficient change
  • ensure documentation remains clear, concise, and valid
  • guard the integrity of the configuration
  • streamline communication and collaboration across functions & company borders
  • continually improve these practices to stay ahead in the game

If you look at the first part, it mentions: 'CM manages all information that…'. So it is about information, about data.

Data Universe

Every organization has a data universe containing structured and unstructured data. This data universe is ever-expanding, and the expansion rate is increasing: organizations create data, collect data via edge devices, or buy data from third parties, like weather data. Nowadays, we can also generate more structured data by mining structured and unstructured data. Model-Based Initiatives increase the granularity of the structured data.

The focus of CM is on this structured data, including the linkages made to unstructured data, like a Word document. CM provides a level of trust; without CM, there is no trust in the data.

When I asked on LinkedIn what the impact of MBD/MBSE on CM will be, I did not get many replies. But the majority indicated at least a moderate to major impact.


The following picture is based on pictures from Aras as well as from Paul Nelson.

It shows the different system breakdowns, from Requirements all the way to the As-Maintained. And while these are very useful to explain the various breakdowns and the traceability threads you can draw between the various nodes, the reality is not a neat breakdown with a simple thread. It is a network of nodes and relationships. A system can require all kinds of parameters to be set to function as per the sold configuration. It is not just parts that make up the configuration.

Therefore, it is not easy to implement Model-Based Initiatives, and going model-based does not always have a happy ending. I have seen examples where going model-based resulted in inefficiencies and more corrective actions, simply because the focus was too much on the deliverable that had to go model-based rather than on the processes and way of working around it. An often-heard complaint is that the change process is too slow. But is it? Is it the change process, or is it the way people have to document their deliverables, or how the data model behaves in the tools? Is it interoperability? Or is it the way organizations have organized themselves around this change process?

Pyramid of Configuration Management

Look at the pyramid of Configuration Management, where cross-functional collaboration interacts with the various expert domains. In the expert domains, it is about content, release management, and the execution of orders. At the cross-functional level, from a CM perspective, it is about the impact of a change recorded in an impact matrix, the business case to help make decisions about changes, and the implementation plan to support the end-to-end implementation of changes.


In the past, the granularity of information in the expert and cross-functional domains was more or less similar: you had items/parts and documents. Today, with the introduction of MBx, which mainly focuses on the expert domains, the granularity of information in the expert domains has increased. At the same time, there is, in many cases, a disconnect with the cross-functional domain. But does the cross-functional domain need the same level of granularity? Maybe it needs more granularity than today, but I do not think every detailed node and relationship needs to be exposed at this level.

For instance, suppose you have work instructions, and in your work-instruction management tool, activities make up a work instruction. These activities can be reused across multiple work instructions. The Manufacturing Engineer responsible for the instructions wants to find all impacted work instructions if one activity has an issue. But at the cross-functional level, you will only be interested in the impacted work instructions, because these are what is used when executing orders in the factory or the field. The common business object here is the work instruction, not the activity.
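The work-instruction example can be sketched in a few lines. This is a minimal illustration, not any real tool's data model; all identifiers (WI-100, ACT-7, and so on) are made up.

```python
# Hypothetical sketch: activities reused across work instructions, and an
# impact query that answers at the cross-functional level of granularity.

work_instructions = {
    "WI-100": ["ACT-1", "ACT-7", "ACT-9"],
    "WI-200": ["ACT-7", "ACT-3"],
    "WI-300": ["ACT-2", "ACT-4"],
}

def impacted_work_instructions(activity_id):
    """Return the work instructions that reuse the given activity.

    The Manufacturing Engineer works at activity level; the change
    process only sees the common business object: the work instruction.
    """
    return sorted(wi for wi, acts in work_instructions.items()
                  if activity_id in acts)

print(impacted_work_instructions("ACT-7"))  # ['WI-100', 'WI-200']
```

An issue on one shared activity thus surfaces at the cross-functional level only as the list of impacted work instructions, without exposing the activity nodes themselves.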

But how CM can handle the information also depends on the data model of a tool. If you can only revise an entire model and not the individual nodes, you can only manage snapshots of the entire baseline. But how do you manage change to these models, especially if the models have different owners depending on the type of node? This might be easier to solve if you can manage change on individual nodes or on sets of nodes. On the individual level, however, you get tiny datasets that need to be managed and planned; an implementation plan would contain a lot of tiny tasks and therefore become basically unusable. You need to be able to define what the deliverable is from a CM perspective, and tools need to support this for the change process to be efficient and effective. Ownership needs to be assigned on a name basis, at the right level.
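One way to picture "defining the deliverable" is to group fine-grained model nodes into a CM deliverable with a named owner, so the change process revises a handful of deliverables instead of hundreds of node-level snapshots. A minimal sketch, with an invented owner name and node identifiers and a deliberately simplistic revision scheme:

```python
# Hypothetical sketch: fine-grained model nodes aggregated into one CM
# deliverable, owned by a named person (not a department). The deliverable,
# not the individual node, is what the implementation plan tracks.

from dataclasses import dataclass, field

@dataclass
class Deliverable:
    name: str
    owner: str                               # a named person, not a function
    nodes: set = field(default_factory=set)  # model nodes it aggregates
    revision: str = "A"

    def change(self, changed_nodes):
        """Bump the deliverable revision only if one of its own nodes changed."""
        if self.nodes & set(changed_nodes):
            self.revision = chr(ord(self.revision) + 1)
        return self.revision

wiring = Deliverable("Wiring model", owner="J. Smith",
                     nodes={"node-12", "node-13", "node-14"})
print(wiring.change(["node-13", "node-99"]))  # B: an owned node changed
print(wiring.change(["node-99"]))             # B: no owned node changed, no bump
```

The point of the sketch is the mapping, not the mechanics: node-level edits roll up to one revisable, plannable object with one accountable name.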


RAPID is about the ability to ensure flow within your processes. In a rapid, obstacles just make the water go faster; the flow minimizes delay. That means you have to organize for change by managing ownership on a name basis, not by department or function. If you need support in the impact analysis of a change, you need a person, not a function or department. Data models need to facilitate change by allowing the definition of deliverables, to ensure there is flow in your change process, but also to reduce planning, release, and review dependencies. Even when things get in the way, the change must keep its flow.


ADAPTIVE is about the ability to deal with various circumstances without having to find a solution first. The solutions or paths are already available; you just need to pick the one that fits the circumstance. For instance, delegate authority based on risk in the various processes, especially the change process. Delegating approval authority for simple changes to their creators, with involvement of the documentation users, can significantly speed up the processing of changes. You should also be able to add an approval or consultation step to your process whenever needed, without having to go through IT.

Or ensure that potential alternatives are already defined in case a new part is delayed. In today's supply chain, you have to prepare for scenarios where not everything will be on time. So whenever things do not go according to plan, you can easily choose the scenario that fits and move on without much delay.
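The idea of pre-defined paths picked by risk can be sketched as a small routing table. The roles and thresholds below are illustrative assumptions, not from CM2 or any standard:

```python
# Hypothetical sketch: change-process approval routes are defined up front
# and selected by the risk of the change, so the path is picked, not invented,
# when a change comes in. Roles and thresholds are made-up examples.

ROUTES = {
    "simple":  ["creator"],                             # creator self-approves
    "medium":  ["creator", "doc-user"],                 # plus documentation user
    "complex": ["creator", "doc-user", "change-board"], # full review
}

def route_for(change_risk):
    """Pick the pre-defined approval route that fits the circumstance."""
    if change_risk <= 2:
        return ROUTES["simple"]
    if change_risk <= 5:
        return ROUTES["medium"]
    return ROUTES["complex"]

print(route_for(1))  # ['creator']
print(route_for(7))  # ['creator', 'doc-user', 'change-board']
```

Because the routes live in data rather than in hard-coded workflow logic, adding an approval or consultation step is a configuration change, not an IT project.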


DIGITAL speaks for itself: the ability to automate as much as you can. Wherever you can, go digital with your CM processes, based on the principles of RAPID and ADAPTIVE. But address RAPID and ADAPTIVE first; then you can focus on going digital to speed up your CM processes further. For instance, you can automate all kinds of checks to automatically release content, based on a risk analysis that compares the agreed impact with the delivered work and categorizes any differences by risk. If you stay within a specified range, approval is automatic. If not, somebody needs to check why there are deltas, and whether that is allowed, before you can release.
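Such an automated release check can be sketched in a few lines. The risk categories, scores, and threshold below are invented for illustration; a real implementation would take them from your own risk analysis.

```python
# Hypothetical sketch of a risk-based release check: compare the impact agreed
# in the impact matrix with the work actually delivered, score each delta by
# risk, and auto-release only when the total stays within an agreed threshold.

RISK_SCORES = {"low": 1, "medium": 3, "high": 10}  # assumed categorization

def release_decision(agreed_items, delivered_items, risk_of, threshold=3):
    """Return ('auto-release', deltas) or ('manual review', deltas)."""
    deltas = set(agreed_items) ^ set(delivered_items)  # items that differ
    score = sum(RISK_SCORES[risk_of(item)] for item in deltas)
    verdict = "auto-release" if score <= threshold else "manual review"
    return verdict, sorted(deltas)

risk = {"DOC-1": "low", "DOC-2": "high", "DOC-3": "low"}.get
print(release_decision({"DOC-1", "DOC-2"}, {"DOC-1", "DOC-2", "DOC-3"}, risk))
# ('auto-release', ['DOC-3'])  -- one extra low-risk item stays within range
```

A high-risk delta, by contrast, would push the score past the threshold and route the release to a human reviewer.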

I’m working with the IPX Congress to perform a risk-based analysis to determine the impact MBx will have on CM and define strategies and recommendations to deal with the identified impact. Similarly as what we did for Industry 4.0: The True Impact of Industry 4.0 Revealed. Join us in the conversation and leave a comment or reach out to me.

Thanks to Greg Russ, Mark Pinchak, Martin Haket, Max Gravel, Paul Nelson, Ray Wozny, and Susan Despotopulos for the inspiration.

Header photo by Yoav Hornung on Unsplash. Modified by adding the text "Next disruption please…".

6 thoughts on “Next Disruption Please…”

  1. Ramkumar Dhanasekaran

    Excellent article Martijn! I believe it provides decision-makers and readers a quick understanding of the current situation and the need to strategize for the future.

    1. Thanks for your comment Paritosh. I agree, but it needs to go further than that. You need a business object model that is shared across the organization, at least at the level that has cross-functional relevancy.


    Thanks for the expressive and visionary points that need to be discussed further. I would like to share some advantages/disadvantages of a model-based approach in a PLM environment from a CM2 perspective, at least based on my experience. Firstly, showing and specifying changes in models could ease the implementation of changes in 3D right after change approval, and it enables determining the impacts of changes at once in terms of clash, clearance, and interference, on the condition that the 3D matches the physical product. Secondly, if working individually on the same parts/products is necessary, secondary ownership could be provided in PLM tools, or physical product duplication could be done while keeping the duplication relations to the original product. I strongly agree that reading papers is sometimes a hard process due to tight schedules and extra workload. However, if linkages are not set properly, managing models and their extensions could be a nightmare in terms of the network you mentioned above. Those were my thoughts…

    1. Thanks for your comment Hatice. I fully recognize what you are saying. I think impact analysis should be supported by the CM2 baseline (the As-Planned/As-Released baseline); this allows you to document impact in the context of the models, supported by the power of the baseline to manage future dependencies. Watch out for one of my upcoming articles, where I want to address this.
      And indeed, linkages are everything. They hold the power of an organization's ability to change fast, when everything is set and controlled properly.
