
The End of Binary Configuration Management

This article is part of the How Do YOU CM2? blog series in collaboration with the Institute for Process Excellence (IpX). Although I receive compensation for writing this series, I stand behind its content. I will continue to create and publish high-quality articles that I can fully endorse. Enjoy this new series, and please share your thoughts! 

Configuration management in aerospace assumes deterministic systems. A configuration item either matches its approved baseline or it doesn’t. A change is either authorized or it isn’t. This binary certainty enables governance frameworks that make CM valuable for safety-critical systems.

Machine learning operates probabilistically. It assigns confidence levels, not certainties. An algorithm identifies duplicate records with 87% confidence. A neural network predicts the impact of a change with 76% likelihood.

One global process manufacturer reported that AI surfaced more than 3,000 duplicate materials and highlighted 2,200 items at stockout risk. The result: $21 million in verified savings, outages reduced from more than four weeks to three days, and unified visibility across plants.

Yet this reveals the challenge: the AI system assigns confidence scores ranging from 62% to 99.8%. The manufacturer established an 85% threshold requiring manual review below that level, but this was organizational policy, not guidance from configuration management standards.
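In code, such a policy reduces to a single comparison. The sketch below is a minimal, hypothetical illustration of threshold-based routing; the names and data structure are mine, not the manufacturer's actual system.

```python
from dataclasses import dataclass

@dataclass
class MatchCandidate:
    item_id: str
    confidence: float  # model-reported confidence, 0.0 to 1.0

REVIEW_THRESHOLD = 0.85  # organizational policy, not drawn from any CM standard

def route(candidate: MatchCandidate) -> str:
    """Auto-accept above the threshold; queue for manual review below it."""
    return "auto-accept" if candidate.confidence >= REVIEW_THRESHOLD else "manual-review"

print(route(MatchCandidate("MAT-1042", 0.87)))  # auto-accept
print(route(MatchCandidate("MAT-2211", 0.73)))  # manual-review
```

The comparison itself is trivial; the hard question is where the 0.85 comes from, and today no configuration management standard answers it.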

SAE EIA-649, the configuration management standard adopted by the U.S. Department of Defense, defines five core functions without constraining implementation. This flexibility permits AI adoption but provides no framework for validating probabilistic outputs against deterministic compliance requirements.

When your PLM system’s AI-based data classification automatically categorizes product data, what confidence threshold triggers manual review? At what precision level does automatically identified traceability become trustworthy for audit purposes?

Traditional configuration audits check baselines through manual inspection or automated inventory comparison. When AI uses computer vision to verify physical assemblies against digital models, the output includes confidence intervals and edge-case ambiguity. A 97% confidence that an assembly matches its baseline might be excellent for screening, but is it sufficient for regulatory compliance?
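One way to make the screening-versus-compliance distinction concrete is to treat the threshold as a function of the decision's purpose. The sketch below is purely illustrative; the threshold values are assumptions of mine, not figures from any standard.

```python
def audit_disposition(confidence: float, purpose: str) -> str:
    """Map a vision system's match confidence to a disposition.
    Thresholds are illustrative assumptions, not standard values."""
    thresholds = {"screening": 0.90, "compliance": 0.999}
    if confidence >= thresholds[purpose]:
        return "accept"
    return "escalate to human verification"

print(audit_disposition(0.97, "screening"))   # accept
print(audit_disposition(0.97, "compliance"))  # escalate to human verification
```

The same 97% result passes one gate and fails the other, which is exactly the ambiguity current standards leave unresolved.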

Industry has pragmatically adopted AI where business value exceeds implementation risk, but current standards provide no framework for incorporating probabilistic confidence measures into configuration decisions. What’s missing isn’t better algorithms; it’s governance frameworks that specify acceptable error rates, validation requirements for probabilistic decision support, and audit approaches for systems that combine human and machine judgment.
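As a thought experiment, such a governance framework could be encoded as data rather than prose: per decision class, an autonomy threshold, an acceptable error rate, a validation requirement, and an audit record that names both the model and the human approver. Everything in this sketch is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionPolicy:
    """One entry in a hypothetical governance framework: for each class of
    configuration decision, the confidence above which the system may act
    alone, the acceptable error rate, and the validation evidence required."""
    decision_class: str
    auto_threshold: float   # confidence above which no human sign-off is needed
    max_error_rate: float   # acceptable false-positive rate set by policy
    validation: str         # how the model's calibration is demonstrated

POLICIES = [
    DecisionPolicy("duplicate-material", 0.95, 0.01,
                   "quarterly calibration check against a held-out set"),
    DecisionPolicy("change-impact", 1.01, 0.001,
                   "human approval always required"),  # threshold > 1.0: never autonomous
]

@dataclass
class AuditRecord:
    """Captures the combined human/machine judgment for later audit."""
    item_id: str
    decision_class: str
    model_confidence: float
    disposition: str
    approver: Optional[str]
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
```

Setting the change-impact threshold above 1.0 is one way to express "never autonomous" in the same vocabulary as every other decision class, which keeps the policy auditable as a single table.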

When your AI flags a change impact with 73% confidence in an avionics system, and your approval process demands binary yes/no decisions, whose judgment determines whether 73% is adequate, and what happens when that judgment is wrong?

What confidence thresholds has your organization established for AI-assisted configuration decisions?
