Aras CTO Rob McAveney presented the Aras Vision on Wednesday 11 November 2020 at the Aras Digital 2020 – Europe virtual event. During the Q&A he explained that AI/ML (artificial intelligence and machine learning) would change the way we do Impact Analysis. We either had the same ideas or Aras really listens when good ideas are shared.
A little over a year ago I created the picture in the header as part of my work in the Industry 4.0 committee of the IPX Congress. Hearing the idea from Rob triggered me to write this post.
But let’s first go back to the basics and look at what Machine Learning (ML) and Artificial Intelligence (AI) are about:
Machine learning (ML):
“the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence.” (Wikipedia)
Artificial Intelligence (AI):
“intelligence demonstrated by machines, that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.” (Wikipedia)
Essentially, it is about using information to find patterns and reaching conclusions based on those patterns to achieve a goal. To put this in the context of Configuration Management (CM): if we can process the available information to find patterns that improve the quality of the work done in the CM domain, we can improve decision-making and/or reduce rework.
Context for the idea
Within the change process, one of the most important steps is the Impact Analysis part of the Change Request (CR). Its outcome is documented in an Impact Matrix, which records the change decisions for the impacted items and datasets.
Dataset: a set of information that must be released as a whole but can be released independently from other datasets, e.g. a BoM, Bill of Process, Work Instruction, Test Plan, etc. It can be structured data or just a document.
When the CR is approved, the Impact Matrix may still need to be enriched with additional insights or details for planning the implementation of the change: there can be a delay between approval and picking up the change for implementation, and not all details are required for the approval of a CR. After the changes have been executed, there can also be differences between what was documented in the Impact Matrix and the actuals.
The idea
By comparing the Impact Matrix of the CR with the Impact Matrix of the Change Notice (CN), and the Impact Matrix of the CN with the actual changes, Machine Learning can find patterns in the differences and use them to help improve Impact Analysis.
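As a rough sketch of what the input to such a model could look like (the set-based representation and all names are my own illustration, not an Aras feature): if an Impact Matrix is reduced to a set of (item, dataset) pairs, the differences between the CR stage, the CN stage and the actuals become simple set operations.

```python
# Illustrative only: an Impact Matrix reduced to a set of (item, dataset)
# pairs, so the difference between two stages of the change process can be
# computed with set operations.

def matrix_diff(earlier, later):
    """Return what was added and what was dropped between two stages."""
    return {
        "added": later - earlier,    # impacts discovered only later
        "dropped": earlier - later,  # impacts that turned out unnecessary
    }

cr_matrix = {("cable-123", "BoM"), ("cable-123", "Work Instruction")}
cn_matrix = {("cable-123", "BoM"), ("cable-123", "Work Instruction"),
             ("connector-456", "BoM")}

diff = matrix_diff(cr_matrix, cn_matrix)
print(diff["added"])  # → {('connector-456', 'BoM')}: missed at CR time
```

Collected over many historical changes, these "added" and "dropped" sets are exactly the differences in which a model could look for recurring patterns.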
For example: when a cable changes, in 90% of the cases the connector(s) need to be changed as well, yet in 50% of the cases the connector(s) are not added to the Impact Matrix. The user could then get a suggestion to also add the connector(s) as impacted items.
When you run a check on the Impact Matrix, the machine learning model can check whether anything is missing, or whether items or datasets have been added that are potentially not needed. The user makes the final decision and indicates whether the suggestion was useful in that particular case. This feedback can then be used as input for the machine learning model to improve the analytics.
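To make the cable/connector example concrete, here is a minimal sketch of the kind of co-change mining involved, assuming past changes are available as sets of changed item types. All names, thresholds and data are hypothetical; a real model would be far more sophisticated than these raw co-change frequencies.

```python
# Hypothetical sketch: learn "when A changes, B usually changes too" rules
# from historical changes, then flag likely omissions in a draft Impact Matrix.
from collections import Counter
from itertools import permutations

def learn_cochange(history):
    """history: list of sets of item types that changed together."""
    pair_counts, type_counts = Counter(), Counter()
    for change in history:
        type_counts.update(change)
        pair_counts.update(permutations(change, 2))
    # confidence that `b` changes when `a` changes
    return {(a, b): pair_counts[(a, b)] / type_counts[a]
            for (a, b) in pair_counts}

def suggest(rules, draft, threshold=0.7):
    """Suggest item types likely missing from the draft Impact Matrix."""
    return {b for (a, b), conf in rules.items()
            if a in draft and b not in draft and conf >= threshold}

history = [{"cable", "connector"}, {"cable", "connector"},
           {"cable", "connector"}, {"cable"}]  # connector co-changes 3/4
rules = learn_cochange(history)
print(suggest(rules, {"cable"}))  # → {'connector'}
```

The user's accept/reject feedback on each suggestion could simply be appended to `history`, so the confidence values keep improving over time.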
The goal is to improve the quality of the Impact Matrix early in the process, improving the quality of the business case and reducing rework or delays in approvals caused by actuals not matching the agreed scope. It can happen that someone forgot to indicate that a certain dataset needed to be changed and only added it during execution. When the release is done and does not match the agreed scope, this leads to discussion, potential rework and/or a delay of the release.
I’m curious to see which vendor will be first to bring this capability to market. But it is good to know that companies like Aras are already looking into it. Who will follow…?
All pictures by Martijn Dullaart. All rights reserved.
Hello Martijn, strong linkages between physical product objects and readable relationship results could provide better solutions. However, as you stated, comparison capabilities should also be improved by PLM vendors to get a high-quality analysis regardless of object type.
Thanks for the comment, Hatice. Linkages are indeed very important. As I describe in my other posts, the linkages (connections) tell the story. But support from PLM vendors is also needed, as not every company is able to develop all these tools themselves.