Control is at the heart of professional risk management. Risk management refers to “coordinated activities to direct and control an organization with regard to risk,” according to the standard-setting international guidelines, ISO 31000:2018 (https://www.iso.org/obp/ui/#iso:std:iso:31000:ed-2:v1:en). The guidelines do admit, however, that “Controls may not always exert the intended or assumed modifying effect,” without feeling obligated to underscore that such aftermaths all too often increase risk.
The danger in stopping short, by organizing around sundered probabilities and consequences of failure (the defining topoi of “risk”), is that, stranded at your cognitive limits, you do not realize that what you have before you is little more than contingencies associated with aftermaths. At these limits, your risk management framework misleads you into thinking otherwise: you think controls are, or can be made, scalable to what is not causally understood.
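To make concrete what such a framework typically organizes around, here is a minimal sketch of the conventional risk-register arithmetic, in which risk is scored as likelihood times consequence and a control is assumed to scale the likelihood down by a fixed fraction. The function names and figures are illustrative assumptions, not drawn from ISO 31000:2018.

```python
# Illustrative sketch of the conventional risk-register arithmetic:
# risk scored as likelihood x consequence, with a control assumed to
# "modify" the risk simply by scaling the likelihood. All names and
# numbers are hypothetical, not taken from ISO 31000:2018.

def risk_score(likelihood: float, consequence: float) -> float:
    """Expected-loss style score: probability of failure times its cost."""
    return likelihood * consequence

def apply_control(likelihood: float, assumed_reduction: float) -> float:
    """Assume the control cuts likelihood by a fixed fraction.

    This is exactly the assumption in question: the 'modifying effect'
    may not materialize when the causal mechanism is not understood.
    """
    return likelihood * (1.0 - assumed_reduction)

inherent = risk_score(likelihood=0.30, consequence=1_000_000)
residual = risk_score(apply_control(0.30, assumed_reduction=0.5), 1_000_000)

print(f"inherent risk: {inherent:,.0f}")   # 300,000
print(f"residual risk: {residual:,.0f}")   # 150,000
# The residual figure is only as good as assumed_reduction, which is
# precisely where an unmodelled control can leave risk unchanged or higher.
```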
It is worth asking, then, what the results would be if we were pulled from cognitive limits we already recognize toward risks that can actually be managed, rather than thinking we are pushed into having to manage risks because the former certitudes are no longer tenable.
This means starting with and taking seriously our cognitive limits and biases.
For example, the Fundamental Attribution Error has been defined as the failure to recognize and explain human behavior by reference to the situation in which the person finds himself or herself. Do appeals, then, to the absolute priority of universal human rights over the irreducible particularities of being commit this error? Or is committing that error itself one human right?
One can well imagine that frameworks for risk management differ radically depending on whether it is the former or the latter. Or, to put the point better: both the former and the latter are part of the bigger mess to be managed when it comes to the deeply unknowable and unstudiable, at least when that too matters for decisionmaking, now.