My previous article, Models, Models, and more Models…, generated quite a few reactions. The main discussion, triggered by Patrick Hillberg, was about the V(ee) Model: does it encourage the creation of silos, whereas Agile creates an environment for cross-cultural learning, in the context of well-known scandals like the GM Ignition Switch or the Boeing 737 Max 8? It is time to look at the role of Ethical fading in the decline of good practices, specifically Configuration Management principles, leading up to these scandals.
When I searched for ‘V Model and Agile’ on Google, I got all kinds of results claiming that the V(ee) Model is a waterfall approach, or that we cannot go to production without a complete system or product. This is very similar to the premise that the V(ee) Model encourages the creation of silos, and an indication that Patrick has a point: as long as many people view the V(ee) Model in that light, it can be wrongly interpreted and implemented.
However, saying that the V(ee) Model is a waterfall approach, or that it rules out iterative or incremental design, is incorrect. The fact is that when we develop a product, we start with requirements and, through various steps, get to a working product. The V(ee) Model is just a representation of these steps: it represents the system development life cycle, not the delivery system. The delivery system can be project-based, using PRINCE2 or PMP with a waterfall approach, or we can use Kanban, Scrum, or SAFe as our delivery system.
The V(ee) Model does imply some criteria for moving from one stage to the next, but, to be honest, in Agile we do the same: if a requirement, user story, or whatever you call it is not clear, it needs to be clarified first, so that everyone shares the same understanding of it.
When we want to implement the requirement, we need an idea of how to implement it. If we have one, great, we move on faster; if we don’t, it might require some analysis or a Spike. (Note: Spikes are an invention of Extreme Programming (XP): a special type of user story used to gain the knowledge necessary to reduce the risk of a technical approach, better understand a requirement, or increase the reliability of a story estimate.) You do not have to complete all requirements first to move down and up the legs of the V(ee), as Kevin Forsberg and Harold Mooz already indicated in their 1991 paper The Relationship of System Engineering to the Project Cycle.
The article states:
“If the User Requirements are too vague to permit final definition at PDR, one approach is to develop the project in predetermined incremental releases.”
That was more than 30 years ago, a decade before the Agile Manifesto was created, and the V(ee) Model and its implementation have been evolving ever since. Agile and the V(ee) Model are not mutually exclusive. On the contrary, the Agile Manifesto and many of its principles can easily be applied in the context of the V(ee) Model to get what we all want: faster innovation to market. In other words, when we can bring a sellable iteration of the product that conforms to its requirements to market sooner than, for instance, in a waterfall approach, we reduce the risk that toxic leadership creates an environment of failure.
The V(ee) Model emphasizes quality, with conformance as the standard, and makes sure that we think about how to achieve this from the beginning, just like in Agile, where we have to agree on the Definition of Done upfront.
Enter: Ethical fading
So if the V(ee) Model is not the cause of the problem, what is? I attribute it to Ethical fading, and to the mental models that lead to Ethical fading.
What is Ethical fading?
According to Ann Tenbrunsel (Notre Dame) and her co-author David Messick (Northwestern University, retired), Ethical fading is:
“the process by which the moral colors of an ethical decision fade into bleached hues that are void of moral implications.”
In other words:
“Ethical fading is a form of self-deception. It occurs when we subconsciously avoid or disguise the moral implications of a decision. It allows us to behave in immoral ways while maintaining the conviction that we are good, moral people.”
This video explains Ethical fading very well:
When Ethical fading occurs, the results can be mind-boggling. And most of the time, Configuration Management principles are the first to go out the window. From GM’s Ignition Switch with 124 deaths, to Boeing’s 737 Max 8 with 346 deaths, to BP’s Deepwater Horizon with 11 fatalities and unparalleled economic and ecological damage, to Mylan increasing the cost of the EpiPen by almost 600%: all situations where Ethical fading played an important part.
Cutting corners to ‘save time,’ not speaking up when it is the right thing to do because you fear for your job or assume somebody else will take the time to review, or simply hiking prices through the roof just because you can: toxic leadership is, in many cases, the root cause of these scandals.
And while in many companies the V(ee) Model is the system life cycle model used for new product development, that does not mean the V(ee) Model had anything to do with the scandals that occurred.
No amount of processes, models, or organizational structures can prevent failures like these when toxic leaders are at the helm. Ethical fading will continue with all associated risks for new scandals if people do not feel safe speaking up or interacting on things that matter.
A scandal in the making?
Tesla, known for being agile and applying agile methodologies, is just an inch shy of a scandal of its own making if all these signs are correct:
- This analysis by Randy Whitfield of Quality Control Systems indicates that with Autosteer (a function of Autopilot) enabled, you are 2.4 times more likely to crash, in sharp contrast to what Tesla claims.
- This test by the Dawn Project indicates that Tesla’s Full Self-Driving mode (FSD) could not identify a stationary child-size mannequin while driving at 24 to 27 miles per hour.
- Musk even proudly admits that Tesla lets customers do the beta testing.
- Ethical fading already starts with the terminology Tesla uses, which is confusing at best. “Autopilot” and “Full Self Driving” imply that the car is in complete control, while in reality, as a driver, you need to keep your hands on the wheel and be alert at all times. Honda approached this a bit better by just calling it Sensing.
- Out of the approximately 825,970 Teslas with Autopilot capability, 273 were reported to have crashed between July 20th, 2021, and May 21st, 2022, whereas for the roughly 5 million Hondas with Sensing technology, only 90 crashes were reported in the same timeframe. Relatively speaking, a Tesla with Autopilot technology was therefore about 18.36 times more likely to crash than a Honda with Sensing technology.
- Add to this the letter to Elon Musk written by the NTSB chair Jennifer L. Homendy and the many ongoing investigations by NHTSA into the crashes and related issues like phantom braking.
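The 18.36 figure above is simple per-vehicle arithmetic. A minimal sketch, using the crash counts and fleet sizes quoted in the list (which are rough estimates, and compare crashes per vehicle rather than per mile driven), reproduces it:

```python
# Back-of-the-envelope check of the crash-rate comparison above.
# Figures are the ones quoted in the text for the July 2021 - May 2022
# reporting window; fleet sizes are approximate.
tesla_crashes, tesla_fleet = 273, 825_970
honda_crashes, honda_fleet = 90, 5_000_000

tesla_rate = tesla_crashes / tesla_fleet   # reported crashes per vehicle
honda_rate = honda_crashes / honda_fleet

ratio = tesla_rate / honda_rate
print(f"Tesla rate: {tesla_rate:.6f} crashes per vehicle")
print(f"Honda rate: {honda_rate:.6f} crashes per vehicle")
print(f"Relative:   {ratio:.2f}x")         # ≈ 18.36
```

Note that this is a crude comparison: it says nothing about miles driven per vehicle or how often the driver-assist feature was actually engaged, which is exactly why analyses like Whitfield’s dig deeper into the underlying data.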
It seems that by cutting corners to beat the timeline, Tesla is maneuvering itself toward becoming another name on the list of scandals. I hope Tesla can use its agility to correct course and address the problems before it is too late.
How to prevent Ethical fading?
Ethical fading will not be solved by abandoning the V(ee) Model or Agile practices; we must ensure that the organization’s culture and individual behaviors match the company’s values at all times. The big question remains: how? Patrick Hillberg mentioned the use of mental models.
A mental model is an explanation of someone’s thought process about how something works in the real world. It is a representation of the surrounding world, the relationships between its various parts, and a person’s intuitive perception of their own acts and their consequences. Mental models can help shape behavior and set an approach to solving problems (similar to a personal algorithm) and doing tasks.
It is essential to identify suitable mental models, make them usable, train everyone in them, and make them part of your company’s DNA by building a culture where decisions are always made using these mental models, and where those decisions and mental models are regularly evaluated so they keep improving. Here you can find descriptions of more than 80 mental models.
Let me know what you think.
Header photo by Ryan Loughlin on Unsplash