[Ref Knowledge in Risk Assessment and Management (ISBN: 9781119317883)]
The ISO 31000 standard on risk management (ISO 2009) builds its definition of risk on uncertainty rather than probability. In the standard, risk is defined as the effect of uncertainty on objectives.
Uncertainties are related to knowledge, and not only to the knowledge itself but also to the quality of that knowledge.
Chapter 1 compares the traditional and new approaches to risk management.
The author emphasises that in certain cases the traditional methods (probability models – fault trees, Bayesian networks) are not really suitable for risk management due to their limitations. Also, the method in chapter 1 can be used for studying both positive and negative consequences.
The method demonstrated in chapter 1 uses simple analysis methods – identify threats/hazards, brainstorm, use subjective probabilities with judgements, and adopt interval probabilities (based on background knowledge, including assumptions) or a qualitative scale (unlikely, less likely, likely, very likely)
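The interval-probability-plus-scale step can be sketched in code. A minimal sketch follows; the cutoff values mapping intervals to labels are my own assumption for illustration, not the book's scale.

```python
# Minimal sketch (assumed cutoffs, not the book's own scale): mapping a
# subjective interval probability to a qualitative likelihood label.

def likelihood_label(p_low: float, p_high: float) -> str:
    """Classify an interval probability [p_low, p_high] on a simple scale."""
    if not (0.0 <= p_low <= p_high <= 1.0):
        raise ValueError("interval must satisfy 0 <= p_low <= p_high <= 1")
    midpoint = (p_low + p_high) / 2  # crude summary of the interval
    if midpoint < 0.05:
        return "unlikely"
    elif midpoint < 0.20:
        return "less likely"
    elif midpoint < 0.50:
        return "likely"
    else:
        return "very likely"

print(likelihood_label(0.01, 0.05))  # narrow, low interval
print(likelihood_label(0.30, 0.60))  # wider interval reflecting weaker knowledge
```

Note that the width of the interval itself carries information about the strength of the background knowledge, which a single label loses.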
Chapter 2 discusses the importance of knowledge in risk analysis.
Definitions of knowledge (Ref table 2.1)
Then, the author discusses how knowledge is generated.
Five (5) common approaches to generating knowledge:
- empiricism: objective facts that can be gained from the external world
- rationalism : through reasoning
- social constructionism : knowledge is never fixed, but is under a constant construction process – through debate and power aspects
- knowledge circumscribed by specific historical, economic and social conditions, and even time pressure
- pragmatism : validated by its consequences (e.g. the PDCA loop)
Empiricism concerns the amount of reliable and relevant data/information;
rationalism is covered by the justification of the assumptions made and the understanding of the phenomena involved;
social constructionism requires agreement among experts.
The remaining two approaches seem to lack strength-of-knowledge judgements.
When performing risk assessment, knowledge may be needed to form subjective probabilities and judgements. However, we should scrutinise the strength of that knowledge to guard against potential surprises.
Chapter 3 About (semi)quantitative Risk Assessments
In chapter 3, the author illustrates the importance and role of the knowledge dimension when dealing with uncertain assumptions.
From a risk assessment and management perspective, “assumptions” are “conditions/inputs that are fixed in the assessment but which are acknowledged or known to possibly deviate to a greater or lesser extent in reality” (Berner and Flage 2016a, 46)
The more important or critical an uncertain assumption is, the more justifiable it is to spend resources on characterising the uncertainty and assessing the effects of potential deviations from the base-case assumption.
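This prioritisation idea can be illustrated with a crude criticality score, here the judged probability of deviation times the judged effect of a deviation on the results. This is my own illustration, not Berner and Flage's actual method, and all assumption names and numbers below are hypothetical.

```python
# Minimal sketch (my own illustration): ranking uncertain assumptions by
# a crude criticality score = P(deviation) x judged effect of deviation.
# Names and scores are hypothetical examples.

assumptions = [
    # (name, judged P(deviation), judged effect of deviation on results, 1-10)
    ("pump failure rate taken from generic data", 0.4, 7),
    ("no simultaneous maintenance on both trains", 0.1, 9),
    ("ambient temperature within design range", 0.05, 3),
]

# The highest-scoring assumptions are the ones most worth the extra
# resources for characterising uncertainty and assessing deviations.
ranked = sorted(assumptions, key=lambda a: a[1] * a[2], reverse=True)
for name, p_dev, effect in ranked:
    print(f"{name}: criticality {p_dev * effect:.2f}")
```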
3.2 Connecting Risk and Related Concepts
There are 3 common definitions:
- Possibility of an unfortunate occurrence
- Potential for realisation of unwanted, negative consequences of an event
- Consequence of the activity, and associated uncertainties
Interestingly, they are not formulated in terms of probability; rather, they are expressed via “uncertainty”, “possibility” and “potential”.
Information is needed to identify, classify, analyse and mitigate the uncertain assumptions. (Section 3.4)
3.6 Communicating Uncertain Assumptions
In the NUSAP system, pedigree refers to a qualitative evaluation of the information conveyed by the numeral, unit, spread and assessment qualifiers
Sample of pedigree matrix:
|Score|Theoretical structure|Data input|Peer acceptance|Colleague consensus|
|---|---|---|---|---|
|4|Established theory|Experimental data|Total|All but cranks|
|3|Theoretically based model|Historic/field data|High|All but rebels|
|2|Computational model|Calculated data|Medium|Competing schools|
|1|Statistical processing|Educated guesses|Low|Embryonic field|
|0|Definitions|Uneducated guesses|None|No opinion|
Sample of pedigree matrix based on the strength of knowledge (SoK) criteria of Flage and Aven (2009):
|Score|SoK label|Phenomena/model|Data|Expert agreement|Realism of assumption|
|---|---|---|---|---|---|
|3|Strong|The phenomena involved are well understood; the models used are known to give predictions with the required accuracy|Much reliable data is available|There is broad agreement among experts|The assumption made is seen as very reasonable|
|2|Moderate|Conditions in between strong and weak: say, the phenomena involved are well understood, but the models used are considered simple/crude|Conditions in between strong and weak: say, some reliable data are available|Conditions in between strong and weak|Conditions in between strong and weak|
|1|Weak|The phenomena involved are not well understood; models are nonexistent or known/believed to give poor predictions|Data are not available or are unreliable|There is a lack of agreement/consensus among experts|The assumption made represents a strong simplification|
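A sketch of how such a matrix might be used in practice: score each criterion, then aggregate into an overall SoK label. The book does not prescribe an aggregation rule; letting the weakest criterion dominate is one conservative choice, shown here as my own illustration.

```python
# Minimal sketch (my own illustration): aggregating strength-of-knowledge
# scores on the Flage and Aven (2009) criteria. The minimum-score rule is
# an assumed conservative choice, not prescribed by the book.

CRITERIA = ("phenomena/model", "data", "expert agreement", "realism of assumption")
LABELS = {1: "weak", 2: "moderate", 3: "strong"}

def sok_label(scores: dict) -> str:
    """Aggregate per-criterion scores (1=weak, 2=moderate, 3=strong)."""
    missing = set(CRITERIA) - set(scores)
    if missing:
        raise ValueError(f"missing criteria: {missing}")
    # Conservative rule: overall strength is no better than the weakest criterion.
    return LABELS[min(scores[c] for c in CRITERIA)]

example = {"phenomena/model": 3, "data": 2,
           "expert agreement": 3, "realism of assumption": 3}
print(sok_label(example))  # the weakest criterion (data = 2) dominates
```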
3.7 Uncertain Assumptions in Risk Management: The Risk Manager Perspective
Assumptions constitute a key set of premises for a risk assessment. Deviations from, or “failures” of, these assumptions could invalidate the results of a risk assessment to a greater or lesser extent. Therefore, we may consider using the “assumption-based planning” framework.
“Assumption-based planning” is a framework developed by Dewar and Levin as a tool for strategic planning by the US Army (Dewar 2002). It uses three (3) devices for dealing with assumptions:
- Signposts
- “an event or threshold that indicates an important change in the validity or vulnerability of an assumption”
- Shaping actions
- “an organizational action to be taken in the current planning cycle and is intended to control the vulnerability of a loadbearing assumption” – to avoid significant (and unwanted) deviations from an original assumption
- Hedging actions
- “an organizational action to be taken in the current planning cycle and is intended to better prepare the organization for the potential failure of one of its loadbearing assumptions”
Contingency actions ≠ hedging actions:
hedging actions are performed before the plan is carried out, whereas contingency actions are performed if and when deviations occur during the execution of the plan.
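The signpost/action machinery can be sketched as follows. This is a hypothetical illustration of the idea, not Dewar's implementation; the assumption, threshold and action named below are invented.

```python
# Minimal sketch (hypothetical): monitoring a load-bearing assumption via
# a signpost threshold and returning the prepared contingency action when
# the signpost fires during plan execution.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Assumption:
    description: str
    signpost: Callable[[float], bool]  # fires when the assumption looks vulnerable
    contingency_action: str            # prepared in advance, executed if the signpost fires
    triggered: bool = False

    def monitor(self, observation: float) -> Optional[str]:
        """Check the signpost against a new observation during execution."""
        if self.signpost(observation):
            self.triggered = True
            return self.contingency_action
        return None

# Hypothetical example: the plan assumes supplier lead time stays under 30 days.
a = Assumption(
    description="Supplier lead time stays below 30 days",
    signpost=lambda days: days >= 30,
    contingency_action="switch to the pre-qualified second supplier",
)
print(a.monitor(21))  # assumption still holds, no action
print(a.monitor(35))  # signpost fires, contingency action returned
```

Shaping and hedging actions would sit outside this loop, taken in the current planning cycle before execution begins.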
Chapter 4 Critical slowing-down framework for monitoring early warning signs of surprise and unforeseen events
Chapter 4 introduces a framework to assess the validity of assumptions about a system’s future behavior, the aim being to provide early warnings.
How can we develop early-warning signs based on the scientific method? The answer is to detect anomalies in the signal (i.e. abductive anomalies). The presence of an anomaly in the system’s signals challenges our knowledge.
Abductive reasoning (also called abduction, abductive inference, or retroduction) is a form of logical inference which starts with an observation or set of observations and then seeks to find the simplest and most likely explanation for the observations.
In chapter 4, the author brings out a framework (function) called the “description of risk”:
description of risk : [A, C, Q, K(B, I), x(t)]
A represents the events (that is, the risks);
C is the consequences;
Q is a general measure of uncertainty;
K is the knowledge supporting Q, A and C;
B is a set of basic representations (i.e. rules);
I is a set of sanctioned inferences over B;
x(t) is the signal
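The components above can be collected into a plain data container to make the tuple concrete. The field types below are my own simplification for illustration; the book does not specify data structures.

```python
# Minimal sketch (assumed structure): the chapter's risk description
# [A, C, Q, K(B, I), x(t)] as a simple data container. Types are my own
# illustrative choices.

from dataclasses import dataclass

@dataclass
class RiskDescription:
    events: list                  # A: the events considered
    consequences: list            # C: their consequences
    uncertainty: dict             # Q: a measure of uncertainty, e.g. subjective probabilities
    basic_representations: list   # B: rules underlying the knowledge K
    inferences: list              # I: sanctioned inferences over B
    signal: list                  # x(t): the monitored signal over time

# Hypothetical rockslide example (invented values):
rd = RiskDescription(
    events=["rockslide"],
    consequences=["tidal wave", "fatalities"],
    uncertainty={"rockslide": 0.01},
    basic_representations=["slope stability model"],
    inferences=["accelerating displacement implies higher slide probability"],
    signal=[0.10, 0.10, 0.20],
)
print(rd.events)
```

The dynamic character of the description enters through x(t): as new signal data arrive, Q and K may need revision.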
It seems a quite abstract concept if we only look at the function above. In simple words, the so-called “description of risk” is dynamic, since the environment is ever changing. It is not unusual that early warning signs of failure are difficult to obtain.
Indeed, there are various challenges in applying the function:
- Timescale (lead time)
- “lead time” is the time from the first appearance of the signal of critical slowing down until the event itself occurs
- some early warning signs arrive with a short lead time (e.g. earthquakes, tsunamis)
- as a result, leaving no time to make decisions
- Systems on multiple scales
- monitoring critical transitions in the signals themselves is challenging – e.g. a weather department gives information about a storm
- therefore, the real information or related data should also be gathered by other groups (e.g. fishermen)
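Critical slowing down is commonly monitored by tracking rising lag-1 autocorrelation in a rolling window of the signal. The sketch below illustrates that standard indicator; it is not the book's own code, and the window size and threshold are arbitrary choices.

```python
# Minimal sketch (standard early-warning indicator, not the book's code):
# critical slowing down often shows up as high lag-1 autocorrelation in a
# recent window of the signal x(t). Window and threshold are arbitrary.

def lag1_autocorrelation(xs):
    """Lag-1 autocorrelation of a window of the signal."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    if var == 0:
        return 0.0
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var

def early_warning(signal, window=20, threshold=0.8):
    """Flag a possible upcoming transition when recent autocorrelation is high."""
    if len(signal) < window:
        return False  # not enough data for a judgement yet
    return lag1_autocorrelation(signal[-window:]) > threshold

# A slowly drifting (strongly correlated) signal triggers the flag;
# a rapidly alternating signal does not.
smooth = [i * 0.05 for i in range(40)]
print(early_warning(smooth))
print(early_warning([0.0, 1.0] * 20))
```

The lead-time challenge above applies directly: the flag may fire only shortly before the transition, or the relevant signal may not be the one being monitored.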
Anyway, the author of chapter 4 suggests that applying data collection and signal processing methods to risk analysis is quite dynamic and comes with limitations.
Chapter 5 Improving the Foundation and Practice of Uncertainty Analysis: Strengthening Links to Knowledge and Risk
In chapter 5, the author argues that uncertainty analysis can benefit from recent developments within the risk analysis field that underline the knowledge and SoK (strength of knowledge) judgements supporting the quantitative uncertainty judgements.
The author also treats subjective probabilities simply as “probabilities”. Indeed, there are also other ways of representing or expressing uncertainty than probability.
In short, chapter 5 aims at improving the foundation and practice of uncertainty analysis by presenting a framework that allows for all types of uncertainty representations and measures.
5.2 The Uncertainty Analysis Framework
Five (5) main pillars in the framework:
- What are we uncertain about? [i.e. the quantities of interest (X)]
- Who is uncertain, and who is interested in X? (i.e. the related actors – analysts, experts, decision-makers, other stakeholders)
- How the uncertainties are represented or expressed
- How the uncertainties are dealt with through modelling and analysis
- How the uncertainty characterizations are followed up and used by the relevant actors
What is the function of uncertainty analysis?
To gain insights that improve communication and support decision-making.
5.2.1 What are we Uncertain About? – Quantities of Interest (X)
When we are uncertain about the frequentist probability distribution of something, we can try to construct a probability model. Then we “know” the presumed true underlying value. However, note that limitations remain. For example, a company has defined performance measures covering production volumes and loss of lives and injuries due to accidents, but no measure is defined in relation to loss of reputation.
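A minimal sketch of constructing such a probability model: assume yearly incident counts are Poisson-distributed and estimate the rate from data. The data and the Poisson assumption are my own illustration; the point estimate plays the role of the "presumed true underlying value" while its own uncertainty remains.

```python
# Minimal sketch (my own illustration): a simple probability model for a
# quantity of interest. Incident counts are assumed Poisson; the data
# below are hypothetical.

import math

counts = [2, 0, 1, 3, 1]          # hypothetical incidents per year
lam = sum(counts) / len(counts)   # maximum-likelihood estimate of the Poisson rate

def poisson_pmf(k, lam):
    """P(K = k) under a Poisson model with rate lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Model-based prediction: probability of zero incidents next year.
print(round(poisson_pmf(0, lam), 3))
# Caveats mirroring the text: lam itself is uncertain (limited data), and
# the model says nothing about unmodelled quantities such as reputation loss.
```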
5.2.2 Who is uncertain and who are interested in X?
In general, there are two types of quantities – high-level quantities (of main interest for the study and for the decision-makers) and low-level quantities (for analysts and experts). The low-level quantities tend to be more technical and are often parameters of models.
5.2.3 How the uncertainties (U) are represented or expressed – the link to knowledge and risk
We use our knowledge to express “uncertainty (U)” in relation to the quantities of interest (X). If X is observable in the future, then X together with the uncertainties (U) constitutes the “risk”, which captures two dimensions:
- values at stake: the future consequences (e.g. lives, assets…)
- uncertainties: what will these consequences be?
For example, if the quantity of interest (X) is the number of fatalities, then X and the “uncertainty (U)” together express the risk:
- values at stake: the number of people who may die
- uncertainties: what will this number be? (judged using knowledge and information)
Subjective probability is one of the common tools for the subjective-judgement approach – see Lindley (2006) and Aven (2013).
However, there are limitations. The main problem is that background knowledge (K) is involved, and this knowledge can deviate to a greater or lesser extent from reality. Unfortunately, it is neither possible nor desirable to transfer all available knowledge into the probability figures. As a result, the quality of the knowledge also needs to be taken into account.
In short, the SoK (strength of knowledge) needs to be considered – for example, using the NUSAP system with a pedigree matrix.
In science, objectivity seems to be the ideal, but it is still essential to add the knowledge K and a judgement of its strength.
In short, the two approaches are not in conflict; rather, they supplement each other. For the subjective approach, specific probabilities may often be preferred, whereas in the objective case, interval probabilities could be the standard measure used.
5.2.4 How the uncertainties are dealt with through modelling and analysis
Modelling means putting different uncertainties (U) and even quantities of interest (X) together. However, one thing we should remember is that we should always prepare for “surprises”/“black swans” arising from lack of knowledge or incompleteness uncertainty.
5.2.5 How the uncertainty characterizations are followed up and used by the relevant actors
It is quite straightforward how the relevant actors use the uncertainty analysis:
- get the “big picture” – identify and structure the “risk” and identify alternatives
- assess pros and cons of alternatives
- assess uncertainties and risk of alternatives
- communicate with other stakeholders and decision-makers
Chapter 6 Completeness Uncertainty: Conceptual Clarification and Treatment
So far, the uncertainty analysis framework above still has many limitations. Indeed, the lack of a clear definition of completeness uncertainty hampers risk assessment and risk management, and completeness uncertainty is very difficult to quantify and analyze.
According to the NUREG-1855 document, to address unknown completeness uncertainty, safety margins, defense in depth and performance monitoring are recommended (NRC 2013). There is a need to see beyond probability to make judgments about significance!
So, what are “completeness uncertainty” and “unknown completeness uncertainty”?
An example: the quantity of interest (X) is the number of fatalities from rockslides, but we only put tidal-wave-caused rockslides into consideration as uncertainty (U). So, what if the tidal wave occurs at night (completeness uncertainty: a known but unmodelled contributor), or arises for some other, unanticipated reason (unknown completeness uncertainty: an unknown/unspecified contributor)? Is the model still valid and sensitive enough? Of course not! But it is hard to put “all” uncertainties into the model, because they are infinite.