RIS-505 – Foundations of Risk Analysis and Risk Science – Exam Summary

This blog post aims to provide an overview of the key concepts and methodologies covered in the course RIS505 – Foundations of Risk Analysis and Risk Science at the University of Stavanger (UiS). The summary is divided into six chapters.

Risk, probability and related concepts 

Definitions of risk 

Definition 1: The potential for undesirable consequences of the activity. (C) 

The term potential relates to the consequences but also points to the uncertainties: an accident with some injuries/fatalities may result, but we do not know before the activity is realized. This motivates a second definition: 

Definition 2: The consequences of the activity and associated uncertainties.  

  • This definition incorporates both consequences and uncertainties. (A, C, U) 
  • Here A is an event, and C the consequences given the occurrence of A. 

Definition 3: The deviation D from a reference value r, and associated uncertainties U. (D = C − r), or just (D, U). 

This definition brings in r, a common reference value that gives a shared understanding of the consequence. When considering an activity for a specific period, we refer to T as the length of this interval. Illustrations can be found in Aven & Thekdi (2022, p. 13). 

Risk = ‘event risk’ (A,U) & Vulnerability (C, U|A) 

The risk concept and the risk description 

The risk concept is a mental concept of what risk is. To make the risk concept more specific, we have to specify the events and consequences, so that risk can be described. This gives the general risk description below. 

The general risk description

The curriculum focuses on the most common way of expressing uncertainty, which is to use probability together with the strength of knowledge judgments, so Q = P and SoK. 

Using the (C, U) and (A, C, U) definitions of risk, we derive the general risk characterizations (C’, Q, K) and (A’, C’, Q, K), where A’ is a set of specified events, C’ some specified consequences, Q a measure or description of the uncertainties, and K the knowledge that Q and (A’, C’) are based on. 

  • The risk concept is (A, C, U) 
  • And the risk description is (A’, C’, P, SoK, K) 

Why do we differentiate them? 

  • Opens up the idea that risk exists as a mental concept – without having to measure or describe it.  
  • Allows for the acknowledgement that the description of risk (A’, C’, P, SoK, K) could be more or less good in capturing the actual events or consequences occurring, and their associated uncertainties (A, C, U). 

Vulnerability 

The vulnerability concept is closely linked to risk. Starting from the risk representation (A, C, U), we can split risk into two main contributions: 

  1. Event risk (A, U) 
  2. Vulnerability (C, U|A) 

Here ‘|’ reads as “given”. Thus vulnerability represents the potential for undesirable consequences given an event (risk source). 

So we have two definitions:  

  1. The consequences C of the activity and associated uncertainties U, given an event A (risk source). (C, U|A) 
  2. The deviation D from a ‘reference value’ r, and associated uncertainties U, given an event A (risk source). (D, U|A) 

We use the term vulnerability in relation to the activity considered and the values addressed. Often vulnerability is discussed concerning a system, like a person or a technical structure.  

Rehearsal  

Problem 1:

  • Use the boulder example from the curriculum to explain what vulnerability means. Use both symbols and daily language.
  • Discuss in general to what extent a system that is considered not vulnerable (robust) could have undesirable consequences for some possible events A. 

Solution: 

  • Vulnerability is described as (C, U|A). The consequences C are the boulder crushing the people beneath. The uncertainties U concern unknown elements that could cause the boulder to fall: in a safety setting, wind or some other natural event could dislodge it; in a security setting, someone could deliberately loosen it. The conditioning ‘|A’ means we consider the consequences and uncertainties given the event A: the boulder dislodging and falling. 
  • When describing a system as robust, we are not calling it indestructible or immune to all dangers. There are always undesirable consequences one cannot foresee. Since the vulnerability concept is conditional on the knowledge K, and knowledge always changes, unforeseen events remain possible. 

Problem 2:

  • Use the risk concept to explain vulnerability.  

Solution: 

  • Here one would have to show that the risk concept (A, C, U) can be divided into the ‘Event risk’ (A, U) and (C, U|A). Vulnerability is an aspect of risk. 

Resilience 

  • The ability of a system to sustain or restore its basic functionality. Resilience is often concerned with recovery given a major event. 
  • Resilience provides key input to the vulnerability concept, as resilience influences the degree to which something undesirable is likely to happen given the event or risk source. 
  • It’s tempting to say that a lack of resilience is the same as vulnerability; however, there is a difference. Vulnerability is a broader concept. Lack of resilience would mean, for example, that people struggle to return to a normal health state given the risk source. The vulnerability concept highlights what the actual consequences of this lack of resilience could be. Referring to (C, U|A), the vulnerability concept addresses all consequences defined by the values considered. Resilience is thus an input to and an aspect of vulnerability (Aven & Thekdi, 2022, p. 19). 
  • The model below shows the link between resilience and vulnerability. 

Reliability 

  • The reliability of a system – for example, a water pump or a power plant – is defined as the ability of the system to work as intended. The reliability – or rather the unreliability – concept can be viewed as a special case of risk by considering the activity generated by the operation of the system and limiting the consequences C to failures or reduced performance relative to the system functions.  
  • Unreliability captures the potential for undesirable consequences, namely systems failure or reduced performance, and is represented by (C, U) with this understanding of C. The (C, U) definition of risk covers both negative and positive consequences; hence, it is a matter of preference whether we refer to risk in this setting as unreliability or reliability.  

Safe and safety, secure and security 

  • Safety can be viewed as the antonym of risk (as a result of accidents).  
  • Consider a situation in which you walk on a frozen lake with a thick layer of ice. The risk is low, and the safety is high. We may conclude that walking on the lake is safe, but then we have made a judgment about the risk: we find it acceptable or tolerable. 

    – Therefore the term safe refers to acceptable or tolerable risk. We also often use the safety term in this way, for example, when saying that safety is achieved.  
  • Secure is understood in the same way: acceptable or tolerable risk, when restricting the concept of risk to intentional malicious acts by intelligent actors.  
  • Security is understood in the same way as secure and as the antonym of risk when restricting the concept of risk to intentional malicious acts by intelligent actors.  

Probability 

  • Is a way to describe uncertainty about an event and its consequences.  
  • A frequentist probability is uncertain because of estimation uncertainty: there is a gap between our estimate and the underlying true value. A subjective, knowledge-based probability is not uncertain in this sense, because there is no true underlying probability to compare with; it is simply an expression of our (expert) judgment.  

Frequentist probability  

  • Expresses variation. 
  • The fraction of times an event occurs if we repeat the situation an infinite number of times under the same conditions. 
  • A frequentist probability must be estimated and is therefore subject to estimation uncertainty.  
  • Typical example: 

    Consider a dice game: if the outcome is 1, we lose 100 kr; if the outcome is 2, 3, 4, 5 or 6, we win 50 kr. What is the probability that we lose money? With a fair die it is of course 1/6. But what if the die is weighted or has been tampered with in some other way? We then have to estimate the frequentist probability experimentally. If the die were fair, we would expect a relative frequency of about 10/60 for each outcome, but in any 60 rolls the observed frequency will deviate somewhat, because of randomness. As we roll the die more and more times, the relative frequency of 1s approaches the underlying value; if it settles near 1/6, that supports the die being fair. A frequentist probability is defined by repeating the experiment an infinite number of times, which we cannot do – it is a mind-constructed quantity. In general, the frequentist probability is unknown.  
  • What if we knew the die was not fair? What is then the frequentist probability of getting the outcome 1? 
  • The answer is that we do not know. It needs to be estimated. Yet this underlying frequentist probability exists. We therefore need to be careful in writing and distinguishing between the frequentist probability and estimates of this probability.  
  • We happen to know the true frequentist probability in this case 1/6, but in general, that is not the case. In writing, we refer in general to a frequentist probability as Pf and its estimate as (Pf)*. Thus we need to take into account that the estimates could deviate from the frequentist probabilities. There is estimation uncertainty. A frequentist probability represents the fraction of time the event considered occurs if the situation can be repeated under similar conditions infinitely. Whether such a population of similar situations can be meaningfully defined, is a judgment call. Thus, the frequentist probability is a model, which needs to be justified. It does not exist in all situations.  
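The convergence of relative frequencies can be illustrated with a short simulation (a sketch only; the roll counts and seed are arbitrary choices, not from the curriculum):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def relative_frequency(n_rolls: int) -> float:
    """Fraction of rolls of a fair die showing a 1, out of n_rolls."""
    ones = sum(1 for _ in range(n_rolls) if random.randint(1, 6) == 1)
    return ones / n_rolls

# With few rolls the frequency fluctuates; with many it approaches 1/6.
for n in (60, 600, 60_000):
    print(n, relative_frequency(n))
```

The finite-sample estimates are exactly the (Pf)* of the text; the limiting value 1/6 is the (here known) frequentist probability Pf.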

Subjective (Knowledge-based) probability 

  • Expresses the assessor’s uncertainty, or degree of belief, for an event to occur.  
  • Not uncertain (no true, underlying probability to compare with), but can be incorrect depending on the supporting knowledge. 
  • P(A|K). P(A) = 0.8: the assessor has the same uncertainty / the same degree of belief for A to occur as for randomly drawing a red ball out of an urn containing 10 balls, of which 8 are red.  
  • P(A) >= 0.8. The assessor has the same uncertainty / the same degree of belief for A to occur, as randomly drawing a red ball out of an urn containing 10 balls, of which 8 or more are red.  
  • Example: Expressing the knowledge-based probability. 

    Let A denote the event that Bernoulli will finish the book within one year following the letter to Leibniz. Suppose an assessor assigns a subjective probability of A occurring equal to 0.10, P(A) = 0.10. This means the assessor has the same uncertainty and the same degree of belief for A to occur as for randomly drawing a red ball out of an urn containing ten balls, of which one is red. Hence the probability is a judgment of the assessor, not a property of “the world”.  
  • A subjective probability always has to be seen in relation to the knowledge.  
  • The term knowledge-based probability is motivated by the construction P(A|K).  
  • Typical exam question: 

    Consider a production process of units of a specific type. Let p = Pf(A) be the frequentist probability that an arbitrary unit from the coming week’s production is defective, where A refers to the event that the unit is defective. This probability is interpreted as the fraction of units being defective in this production process in the coming week. To estimate p, a sample of 100 units is collected, showing 2 defects. From this, an estimate p* equal to 2/100 is derived. How could a subjective probability express this? 

    Solution:  

    P(A|K) = 2/100, seeing these observations as the background knowledge K. 

Expected values 

Centre of gravity of the probability distribution.  

Limitations: 

  • Does not reflect the potential for extreme outcomes.   
  • Does not reflect the strength of the supporting knowledge.  

Method: 

  • Multiply each outcome by its probability and sum the products.  

E[N] = 0*P(N=0) + 1*P(N=1) + 2*P(N=2) + 3*P(N=3) + 4*P(N=4) 

0 * 0.80 = 0 

1 * 0.10 = 0.10 

2 * 0.05 = 0.10 

3 * 0.03 = 0.09 

4 * 0.02 = 0.08 

Sum: E[N] = 0.37 (Aven & Thekdi, 2022, p. 54) 
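The calculation above can be sketched in a few lines of Python, using the distribution of N from the example:

```python
# Expected value as the probability-weighted sum of outcomes.
# Distribution of N taken from the worked example above.
distribution = {0: 0.80, 1: 0.10, 2: 0.05, 3: 0.03, 4: 0.02}

expected_n = sum(n * p for n, p in distribution.items())
print(round(expected_n, 2))  # 0.37
```

Note that the single number 0.37 hides both the potential for extreme outcomes and the strength of the knowledge behind the probabilities, which is exactly the limitation listed above.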

Risk Matrices 

  • Based on expected values, which do not reflect the potential for extreme outcomes. 
  • Does not reflect the strength of the supporting knowledge. 

Black Swans


  • Extreme events with extreme consequences relative to one’s knowledge.  

Three types of black swans:

Unknown Unknowns

  • Extreme cases, where no one has any knowledge. 
  • Swine flu vaccine in 2009. No one knew what would happen, and only later did we see the side effects. These were unknown to all of us.  

Unknown Knowns

  • A surprising event for us does not have to be a surprising event for someone else, because of available knowledge. This can also occur within organizations. 

9/11 was a surprise for us, but someone out there had information about it. 

Known, but not believed to occur

  • The assigned probability is so small that we do not expect the event to occur. This happens all the time.  
  • Fukushima. We knew the tsunami could happen, because it had happened many years before. But the probability was judged so small that we did not believe it would happen, and we did not plan for it.  

Knowledge is the best way to confront black swans. Gathering new and including already generated knowledge is crucial when dealing with black swans. Risk assessment here is of course an important process, as risk assessment is where knowledge is generated.  

Risk assessment 

  • Risk assessment = Risk analysis + Risk evaluation. 
  • The main features of a risk assessment are covering events, causes and consequences. It is a tool used to understand the risk related to an activity, evaluate the significance of the risk, and rank or rate relevant options based on established criteria.   
  • The bow-tie model is a good representation of what is included in the risk assessment process.  
A standard bowtie layout with preventive and corrective barriers made with Presight Bowtie Workbench™

Risk Assessment stages 

  1. Planning of the risk assessment. 
  • Problem/issue definition 
  • Clarify who the stakeholders are. 
  • Set study objectives. 
  • Establish relevant principles and approaches. 
  • Data and information gathering. 
  2. Risk analysis. 
  • Identifying events (hazards/threats/opportunities) 
  • Cause analysis 
  • Consequence analysis 
  • Risk characterization 
  • Studying and comparing alternatives and measures with respect to risk. 
  3. Risk evaluation 
  • Judging the significance of the risk. 
  • Ranking alternatives and measures with respect to risk. 
  4. Use of risk assessment 
  • Use of the risk assessment in cost-benefit analyses and other types of studies. 
  • Management review and judgement (MRJ). 
  • Decision. 

Exam question: 

  1. A person states that the main purpose of a risk assessment is to improve the understanding of risk and in this way reduce risk. Is the person right, according to the curriculum? Explain. 
  2. Discuss to what extent the purpose of risk assessment is to accurately determine risk. 

Solution

  1. Not necessarily. Risk assessment is usually invoked to reduce negative outcomes, but risk is not only about negatives; it also covers positive outcomes. Risk assessment is used to display the different outcomes of different actions and thereby give decision-makers a basis for choice. They might choose an option with higher risk because its outcome is preferable to them.  
  2. Not necessarily. The risk assessment process does include trying to determine the risk as accurately as possible, but it also covers the uncertainties, the characterization of risk, how to measure the risk, judging the significance of the risk, and ranking alternatives.  
  • Another answer could stress that accurately determining any risk is highly difficult, as that would require comparing with one true underlying probability, which we know is problematic.  

Event tree analysis 

  • Probability of scenario 1 happening, given a fire. 

    0.1*0.3*0.5= 0.015. 
  • Suppose the expected rate of fires is 2 per year. What is then the probability of a fire occurring which leads to scenario 2?  

    Follow the path to scenario 2, then multiply by the fire rate: 

    E[X] = 2, so 0.1*0.3*0.5*2 = 0.03 (strictly an expected frequency per year, which approximates the probability for rare events). 
  • Suppose for scenario 1, an expected number of fatalities of 5 is specified. And suppose no fatalities for the other scenarios. Compute an expected number of fatalities for this event tree. 

    E[S1] = 5 and S2 to S4 = 0. The expected number of fatalities for this event tree is the probability of each scenario multiplied by its expected number of fatalities, summed over all scenarios.   

Expected fatalities = P(Scenario 1)*E[Y1] + P(Scenario 2)*E[Y2] + P(Scenario 3)*E[Y3] + P(Scenario 4)*E[Y4] 

= 0.03*5 + 0.015*0 + 0.07*0 + 0.9*0 = 0.15 

The expected number of fatalities is 0.15. 
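The summation can be sketched in Python. The scenario probabilities and expected fatalities are the ones quoted above (they come from an event tree figure not reproduced here):

```python
# Expected number of fatalities across event-tree scenarios:
# sum of P(scenario) * E[fatalities | scenario].
scenarios = {
    "S1": (0.03, 5),    # (probability, expected fatalities)
    "S2": (0.015, 0),
    "S3": (0.07, 0),
    "S4": (0.9, 0),
}

expected_fatalities = sum(p * ey for p, ey in scenarios.values())
print(round(expected_fatalities, 2))  # 0.15
```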

PLL 

  • Is defined as the expected number of fatalities in a period of one year. 
  • This value is usually one you presume, one you calculate (like below) or one that is given, say 1/10,000.  
  • You find this value by computing the probability through the event tree and multiplying it by the event rate and the number of fatalities given in the assignment.  
  • Following the event tree above: 2*0.1*0.3*0.5*5 = PLL = 0.15.  
  • The PLL value for the first scenario of this event tree is 0.15.  

FAR 

  • Is defined as the expected number of fatalities per 100 million exposed hours. 
  • The 100 million exposed hours is a fixed reference value.  
  • FAR = PLL / (exposed hours in the period) * 100,000,000.  
  • Following the event tree above, with the PLL found and assuming 1 million exposed hours in a year: 0.15/1,000,000*100,000,000 = FAR = 15.  
  • The FAR value for the first scenario of this event tree is 15. 

IR 

  • Individual risk. 
  • Defined as the probability that a specific person is killed in a year.  
  • This can be calculated by dividing the PLL value by the number of exposed persons. Assuming 10 people are equally exposed in the example above: IR = PLL/10 = 0.15/10 = 0.015. 
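The PLL, FAR and IR computations can be collected in one sketch. The fire rate, branch probabilities, exposure figures and number of exposed persons follow the worked example; the variable names are my own. Note that with PLL = 0.15 and 1 million exposed hours, the formula gives FAR = 15:

```python
# Sketch: PLL, FAR and IR for the worked event-tree example.
fire_rate = 2.0                   # expected number of fires per year
branch_probs = [0.1, 0.3, 0.5]    # branch probabilities along the scenario-1 path
fatalities_s1 = 5                 # expected fatalities given scenario 1

p_path = 1.0
for p in branch_probs:
    p_path *= p                   # probability of scenario 1 given a fire

pll = fire_rate * p_path * fatalities_s1   # expected fatalities per year
far = pll / 1_000_000 * 100_000_000        # per 100 million exposed hours
ir = pll / 10                              # 10 equally exposed persons

print(round(pll, 3), round(far, 1), round(ir, 4))
```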

Fault tree analysis 

The PowerPoint below explains the different Boolean operators / logical gates, and how to transform a fault tree into a reliability block diagram.
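The slides themselves are not reproduced here, but the core gate logic can be sketched: assuming independent basic events, an AND gate multiplies the failure probabilities of its inputs, while an OR gate uses the complement rule. The event structure and probabilities below are illustrative, not from the course:

```python
# Sketch of fault-tree gate logic for independent basic events.

def and_gate(*probs: float) -> float:
    """Top event occurs only if ALL inputs fail: multiply probabilities."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs: float) -> float:
    """Top event occurs if ANY input fails: 1 minus product of survivals."""
    survive = 1.0
    for p in probs:
        survive *= (1 - p)
    return 1 - survive

# Illustrative tree: system fails if both redundant pumps fail (AND),
# or if power is lost (OR).
p_top = or_gate(and_gate(0.1, 0.1), 0.02)
print(round(p_top, 4))  # 0.0298
```

In a reliability block diagram the same logic appears dually: an AND gate corresponds to blocks in parallel, an OR gate to blocks in series.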

Risk Perception 

Psychometric paradigm 

The basic idea is to identify the main factors that influence the perception of hazards and risks. The following factors were considered: 

  • Voluntariness of risk 
  • Immediacy of effect 
  • Knowledge of risk of those who are exposed 
  • Scientific knowledge 
  • Control over risk 
  • Newness 
  • Number of people killed in an incident 
  • Dread potential 
  • High severity of an incident 

Dread-newness map

  • Research shows that for many types of hazards, the higher the perceived benefit, the lower the perceived risk, and vice versa.  

Trust 

  • For many situations and cases where risk perception is an issue, knowledge about the risks is weak. Then factors of trust and affect are of special importance in explaining the risk perception. 
  Model: Trust and Affect → Risk Perception and Benefit Perception → Acceptance 
  • The figure above shows a common model for understanding lay people’s acceptance of an activity. Trust and affect influence risk and benefit perceptions, which in turn influence the conclusion of acceptance or not. 
  • Trust and affect also interact. The term affect is an umbrella word to describe anything related to emotions.  
  • Research shows that trust is formed on the belief that the relevant actors share the same values. People rely on trust when making judgements about a hazard or risk when they have little knowledge about that hazard or risk. As trust is related to values, which are fundamental to people and do not vary much over time, it is to be expected that the link between trust and risk perception also remains relatively stable.  
  • In Risk research, there is a general understanding that trust affects how one understands and perceives risks and risk events and also risk response.  
  • Strategies like stakeholder involvement, public participation and communication of scientific uncertainties in risk governance processes are increasingly drawn upon in order to rebuild or increase levels of public trust.  
  • A perspective is to recognize distrust as a potential resource in risk assessment and risk management contexts. A certain amount of distrust is necessary for political accountability in a participatory democracy. Distrust serves important functions, for instance, ensuring social and political oversight, generating alternative control mechanisms and holding in check the power of elites and technical experts. 
  • The complexity of trust and distrust needs explaining. It is a continuum, ranging from uncritical emotional acceptance at one extreme to downright rejection at the other. Between these two extremes lies what is defined as a healthy type of distrust, reflecting that the public can rely on institutions and at the same time hold a critical attitude towards them.  
  • This is the typology of trust: different degrees of trust coexist with different degrees of scepticism.  
  • This model basically shows us that we should not aim for blind trust. 
  Typology of trust (level of general trust/reliance vs level of scepticism): 
  • High trust, low scepticism: Acceptance (trust) 
  • High trust, high scepticism: Critical trust 
  • Low trust, low scepticism: Distrust 
  • Low trust, high scepticism: Rejection (cynicism) 

System 1 and 2 thinking 

Experiential system (System 1): 
  • Holistic 
  • Affective: pleasure-pain oriented 
  • Associationistic connections 
  • Behaviour mediated by “vibes” from past experiences 
  • Encodes reality in concrete images 
  • More rapid processing: oriented toward immediate action 
  • Self-evidently valid: “experiencing is believing” 
  • Risk perception, biases and heuristics are based on this system 

Analytic system (System 2): 
  • Analytical 
  • Logical: reason oriented (what is sensible) 
  • Logical connections 
  • Behaviour mediated by conscious appraisal of events 
  • Encodes reality in abstract symbols, words and numbers 
  • Slower processing: oriented toward delayed action 
  • Requires justification via logic and evidence 
  • Risk assessment is based on this system 
  • In short, System 1 thinking operates automatically and quickly and is based on instinct, intuition and emotions. 
  • System 2 thinking operates more slowly and deliberately. Is logical and analytical.  
  • Biases and heuristics are examples of what is referred to in risk perception research as system 1 thinking. 

Biases and heuristics 

  • Cognitive processes and mental “shortcuts” that influence how we perceive information and make judgments. 
  • Confirmation bias:  

    The tendency to seek out and favour information that confirms one’s existing belief/opinion, while downplaying or ignoring contradictory evidence/information. 
  • Anchoring bias: 

    Giving disproportionately large weight to the first piece of information encountered. 
  • Representation: 

    Singular events experienced in person or associated with the properties of an event are regarded as more typical than information based on frequency of occurrence.  
  • Cognitive dissonance: 

    Information that challenges perceived probabilities that are already part of a belief system will either be ignored or downplayed. 

Affective heuristics are mental shortcuts in which people rely on the positive or negative valence associated with a hazard to judge its benefits and risks. If negative feelings overrule, then people will associate the hazard with low benefits and high risks, and vice versa.  

  • Availability heuristics: 

    Using the ease with which relevant examples come to mind as an indicator of how likely an event is to occur. 
  • Representativeness heuristic: 

    The tendency to assess the likelihood of an event based on how closely it resembles the prototype or stereotype.  

Difference between professional risk judgements and risk perception 

  • The psychometric paradigm showed that experts’ judgments of risk differed systematically and markedly from those of non-experts.  

    A common explanation for this difference is that when experts judge risks, they tend to relate the risks to the probability of harm or expected loss. The expert responses correlated strongly with estimates of annual fatalities. In contrast, the laypeople’s judgments were found to have a broader conception of risk – a risk perception – also capturing considerations of uncertainty and factors such as dread, catastrophic potential, equity and risk to future generations. 
  • Seemingly, the experts’ perspective – mainly based on System 2 thinking – is more rational than that of laypeople, which is to a large extent rooted in System 1 thinking. But the issue is not as straightforward. Experts seemed to believe that they possessed the truth about risk and that lay people were influenced strongly by fear and other perceptional factors.  

    – We should be careful stating that the risk is ‘misjudged’ because the risk is related to the future; there are uncertainties about the potential consequences, and there is no ‘true reference’ to compare with! 

    – People’s risk perception could cover aspects of risk and uncertainty that are not adequately captured by professional risk judgements.  

Therefore professional risk judgments should take into account the perceptions of lay people, especially those who are involved in or exposed to the risk in question.  

Social amplification of risk 

  • SARF describes how responses come about and explains in detail how risks or risk events, assessed by experts as low risks, still produce significant public concern that often has large societal impacts. This tendency reflects the complexity of risk judgment and the fact that risk is more than just quantitative expressions in the form of probabilities or risk estimates. The framework brings together technical analysis and social experience of risk and describes how risk amplification occurs at two levels:  
  1. It happens in the processing of risk-related information 
  2. And in societal responses. 
  • Risk amplification involves intensifying or increasing the importance or ‘volume’ of certain risk signals and symbols. It is generally associated with a heightened perception of risk and tends to trigger risk-reductive measures.  
  • It is also explained how this process is repeated, as the responses go through different rounds of interpretation, spurring secondary and even tertiary effects. These waves of effects spread the impacts of the original risk event far away from where it initially took place and are referred to in the framework as ripple effects.  

Exam question: 

  1. A person perceives the risk related to an event as high, noting that the event is of a new type, the knowledge is weak, there are delays in effect and there is a potential for severe consequences. An expert claims that this person overrates the risk. Is the expert right? Explain. 
  2. A person argues that trust in relation to public safety is not really what we aim at; rather, it should be ‘critical trust’. Discuss this view.  

Solution

  1. Here we can draw two conclusions. The first focuses on the social amplification of risk: the expert could be correct that lay people tend to amplify the risk event, and the amplification itself could become a problem, which would justify the expert’s statement. 

    On the other hand, we can never really say that anyone is overrating a risk, because that would require a true underlying risk to compare with, which is problematic. Risk perception also plays an important role here: for that person the risk can feel very real, and if there is any potential for severe consequences, we should not dismiss the person’s perception of the risk.  
  2. I agree with this person. According to the typology of trust, we distinguish a level of scepticism and a level of general trust. High trust combined with low scepticism gives blind trust, which is not something we aim for: we should not take everything the government says for granted, but remain critical of it while still having trust in it.  

Risk Communication 

  • Main aim: to improve the understanding of the risk to make appropriate judgements and decisions. 
  • Risk communication is very much about communicating the results of a risk assessment in a trustworthy manner. Without a solid scientific basis, the value of the information provided will be reduced, the results will be questioned, and the trustworthiness will be hampered. 

The functions and aims of risk communication:  

  1. Enlightenment function (to improve risk understanding among target groups) 
  2. Right-to-know function (to disclose risk-related information to potential victims) 
  3. Attitude change function (to legitimize risk-related decisions, to improve the acceptance of a specific risk source or to challenge such decisions and reject specific risk sources) 
  4. Legitimization function (to explain and justify risk management routines and to enhance the trust in the competence and fairness of the management process) 
  5. Risk reduction function (to enhance risk reduction through information about risk reduction measures) 
  6. Behavioural change function (to encourage behavioural change) 
  7. Emergency preparedness function (to provide guidelines for emergencies or behavioural advice during emergencies) 
  8. Involvement function (to educate decision-makers about concerns and perceptions) 
  9. Participation function (to assist in reconciling conflicts about risk-related disputes and controversies). 

Openness and transparency 

  • How then should risk be communicated?  
  • Should we strive for complete openness and transparency? 
  • Aven argues for critical trust, not blind acceptance. A point being made is that authorities face dilemmas: to avoid panic etc., they may not be fully open about certain issues, and overall goals and policies may also keep them from being fully open. 

Successful risk communication 

  1. Assessing and characterizing the risks according to state-of-the-art risk science. 
  2. Applying an open, transparent and timely risk communication policy, providing a clear overview of the uncertainties and risks involved. 
  3. Highlighting involvement, taking into account and addressing public concerns and issues.  

Basic stages of the risk information process 

Risk Management 

Refers to all activities used to address risk, such as:  

  • Avoiding risk 
  • Reducing risk 
  • Accepting risk 
  • Sharing risk 

The terms risk management and risk handling are interchangeable. 

  • How to determine the most appropriate risk-handling strategies and policies. 
  • Risk management is often referred to as a balancing act between development and protection.  

Main risk handling strategies and policies 

  1. Use of risk assessment (for short, being risk-informed). For simple risk problems: codes and standards, plus risk-informed approaches (accurate predictions using risk assessments). 
  2. Giving weight to the cautionary and precautionary principles. Vulnerability (robustness) and resilience management are key instruments. For uncertainty problems: risk-informed approaches (highlighting knowledge and lack-of-knowledge assessments) and cautionary/precautionary principles.  
  3. Discursive strategies, for problems driven by differences in values. 
  • The appropriate strategy in practice would typically be a mixture of the basic strategies 1-3. The strategy to give weight to depends on the risk issue or problem.  
  • Smoking and many health risk issues are examples of simple risk problems (1). Climate change risk is an example of both Uncertainty and Difference in values (2 & 3). The nuclear industry is subject to moderate levels of uncertainties, but rather high levels of differences in values (2 but even more weight to 3).  

IMPORTANT: In practice, the appropriate strategy would typically be a combination of these three. Which strategy to give weight to depends on the context – the type of risk problem(s) we are dealing with. 

Some tools give more weight to either protection or development. 

  • The cautionary and precautionary principles both give weight to protection.  
  • ALARP is designed to give weight to protection, but in practice the gross-disproportion criterion is assessed through cost-benefit analysis, which relies on expected values and therefore does not give weight to protection. In such cases ALARP leans more towards development.  

Risk-informed 

  • The risk-informed strategy makes use of formal risk assessments to support decision-making. A detailed description of the process covers the following steps: 
  • Establishing the context, which means, for example, to define the objectives of the risk assessment and to organize the work. Often a set of decision alternatives are identified. 
  • Perform the risk assessment. 
  • Perform a management review and judgement (MRJ). 
  • Handle the risk (decision-making). 
  • Risk acceptance criteria are introduced to ensure some maximum risk level for values, such as life and health, that need to be protected. 
  • There are many challenges with using risk acceptance criteria; the conclusion is that if one would like to apply such criteria, they should be used with care. SoK judgements should always accompany probabilistic risk metrics and thus need to be considered in the risk acceptance discussion.  

Weak criteria 

Weak risk acceptance criteria refer to standards that allow higher risk thresholds, typically prioritizing development, cost-effectiveness, and operational flexibility over stricter safety measures. These criteria are useful in contexts where companies aim to optimize resources, avoiding unnecessary constraints that might arise from stricter risk thresholds.

  • In practice, weak criteria allow for quicker decision-making and fewer resources dedicated to mitigating risks that are deemed acceptable within a less stringent framework.
  • Downside: However, these weaker criteria might expose the organization to greater vulnerabilities, particularly if underestimated risks materialize.

MRJ 

  • Management review and judgement.  
  • Is formally defined as the process of summarizing, interpreting and deliberating over the results of risk and other assessments, as well as other relevant issues to make decisions.  
  • The MRJ is based on the recognition that all assessments have limitations: there are aspects not fully captured by the assessments. The decision-maker needs to take into account all aspects of importance for the decision, not only those addressed by the assessments. Most assessments are based on specific assumptions and beliefs, and decision-makers must also consider the validity of these assumptions and beliefs.  
  • Risk communication is central in all features of this process for the sharing of data, information and knowledge between experts, analysts, decision-makers and other stakeholders.  
  • There is always a need for an MRJ process that takes into account the limitations of the analysis and adds concerns and issues not reflected in the formal analysis process. An MRJ process will involve a certain loss of transparency, but this loss must be balanced against the need to see the results of the analysis in a larger context that considers all aspects of the decision-making. The arguments for having an MRJ are strong – it is professionally difficult to justify decision-making processes that do not consider the limitations of the approach used (Aven & Thekdi, 2022, p. 244).  

The precautionary principle  

  • Definitions from SRA: 
  • An ethical principle expressing that if the consequences of an activity could be serious and subject to scientific uncertainties, then precautionary measures should be taken, or the activity should not be carried out. 
  • A principle expressing that regulatory actions may be taken in situations where potentially hazardous agents might induce harm to humans or the environment, even if conclusive evidence about the potential harmful effects is not (yet) available.  
  • The SRA does not, however, explain what "scientific uncertainty" and related statements like "conclusive evidence not yet available" mean in this setting.  
  • Aven argues that it is sensible to relate scientific uncertainties to the difficulty of establishing an accurate prediction model for the consequences considered (Aven & Thekdi, 2022, p. 217). 
  • This applies when we are faced with the possible serious consequences of an activity, and there is fundamental, “scientific” uncertainty associated with what these consequences will be -> measures must be taken to reduce the risk, meaning possibly not carrying out the activity.  
  • If a company would like to introduce a new product into the market, the basic idea of the precautionary principle is that the company has the burden of proof of showing that the product is safe and that the negative risks associated with its use are acceptable. Risk science provides knowledge about how to make judgements about what is safe and acceptable risk in such a context. 

Burden of proof 

  • The burden of proof differs between the risk-handling strategies. Strategies that focus on development place the burden of proof on showing that a mitigating measure is effective, whereas strategies that focus on protection place the burden of proof on showing that the measure is not effective. 
This distinction separates development-focused risk strategies, such as CBA, from safety-focused risk strategies, such as ALARP. 

Cautionary principle 

  • This is closely related to the precautionary principle, but is a broader principle and not so often referred to.  
  • This principle expresses that if the consequences of an activity could be serious and subject to uncertainties, then cautionary measures should be taken, or the activity should not be carried out.  
  • The key difference between the cautionary and precautionary principles is that the latter refers to “scientific uncertainties”, whereas the former just refers to “uncertainties”.  
  • As scientific uncertainties are a type of uncertainty, the precautionary principle is a special case of the cautionary principle.  
  • Illustrations of some policies supported by the cautionary principle related to industrial safety: 
  • Robust design solutions, such that deviations from normal conditions do not lead to hazardous situations and accidents. 
  • In relation to the balance between protection and development, the cautionary principle gives weight to protection. It has a role in notifying people and society about protecting against potential hazards and threats with serious consequences.  

Discursive strategies 

  • This strategy uses measures to build confidence and trustworthiness through the reduction of uncertainties and ambiguities, clarification of facts, involvement of affected people, deliberation and accountability.  

ALARP 

  • Risk reduction can be accomplished by many different means, depending on the type of activity considered. In a safety context, it is common to refer to the as low as reasonably practicable (ALARP) principle.  
  • According to this principle, an identified measure should be implemented unless it can be documented that there is an unreasonable disparity (gross disproportion) between costs/disadvantages and benefits (Aven & Thekdi, 2022, p. 206).  
  • The principle states that measures that can improve safety should in general be implemented; only in the case that it can be demonstrated that the costs are grossly disproportionate to the benefits gained can one refrain from implementing the measures.  
  • ALARP assessments require that appropriate measures are to be generated.  
  • ALARP recognizes the need for balancing the pros and cons of a measure, but it can be argued that protection is the primary consideration: measures that promote protection and safety should normally be implemented, and only when a gross disproportion can be documented should a measure not be implemented. 

Risk strategies and the different problems related to them. (This will come on the exam) 

Risk problem – Risk strategy/policy – Typical instruments used 

  • Simple risk problem: the phenomena and processes considered are well understood, and accurate predictions can be made; minor uncertainties. Examples: many health issues and transportation risks. 
    Strategy/policy: risk-informed, using risk assessment. 
    Typical instruments: codes and standards; statistical analysis and traditional risk assessment. 
  • High levels of uncertainty; potential for severe consequences. Examples: climate change risk, terrorism. 
    Strategy/policy: risk-informed, using risk assessment; cautionary/precautionary principles. 
    Typical instruments: risk assessments providing broad risk characterizations, highlighting knowledge aspects and uncertainties; reducing risk related to risk sources and events; weight on reducing vulnerabilities and strengthening resilience. 
  • Value differences; potential for severe consequences. Examples: climate change risk, nuclear industry. 
    Strategy/policy: discursive. 
    Typical instruments: political processes; participatory discourse; competing arguments; beliefs and values openly discussed. 

Cost-benefit analysis 

  • An approach for calculating the expected net present value (NPV) of a measure. 
  • It weighs the expected benefits against the expected costs:  
  • E[NPV] = – expected cost + expected number of lives saved * VSL.  
  • This is the same as the expected benefits (expected number of lives saved * VSL) minus the expected costs. 
  • The expected cost here is a monetary value, but the expected benefit (lives saved) is not, so we have to convert the expected benefit to a monetary value. This is where the VSL comes in: we use it to convert the expected benefit (if it is lives saved) into money. 
  • VSL = the maximum value one is willing to pay to reduce the expected numbers of fatalities by 1.  
  • E[NPV] > 0 = Implement measure 
  • E[NPV] < 0 = Measure is not justified  
  • Typical assignment and how it is solved: 
  • Define the concept of VSL. Say VSL = 30. Explain how this value is used in CBAs.  
  • The value of a statistical life is the maximum value one is willing to pay to reduce the expected number of fatalities by 1. If VSL = 30 and we expect 0.2 lives saved, one multiplies the VSL by this number (30 * 0.2 = 6) to obtain the expected benefit in monetary terms, which then enters the E[NPV] calculation. 
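The E[NPV] rule above can be sketched in a few lines of Python. The VSL of 30 and the expected 0.2 lives saved come from the example in the text; the expected cost of 4 is a hypothetical figure added purely for illustration.

```python
def expected_npv(expected_cost: float, expected_lives_saved: float, vsl: float) -> float:
    """E[NPV] = -expected cost + expected lives saved * VSL."""
    return -expected_cost + expected_lives_saved * vsl

VSL = 30            # value of a statistical life (from the text)
lives_saved = 0.2   # expected number of lives saved (from the text)
cost = 4            # hypothetical expected cost of the measure

npv = expected_npv(cost, lives_saved, VSL)
print(npv)                                          # 0.2 * 30 - 4 = 2.0
print("implement" if npv > 0 else "not justified")  # implement
```

With E[NPV] = 2.0 > 0, the measure would be implemented under this decision rule; a negative value would mean the measure is not justified.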

Cost-effectiveness analysis 

  • An approach used to calculate the effectiveness of a measure (typically risk-reducing measures).  
  • Cost-effectiveness ratio = E[Z] / E[B], where E[Z] is the expected cost of the measure and E[B] the expected benefits/rewards. 

Example: a measure costs 10 million euros and is expected to save 0.25 lives, giving ICAF = 10 million euros / 0.25 = 40 million euros per expected life saved. 

  • The ICAF is a result of a CEA, a value that is sometimes used to support decisions on whether or not to implement a measure.  
  • To make that decision, we need one more piece of information. The ICAF is the price tag; we also need the value representing the money we are willing to pay, the VSL. This value is typically specified by the company or institution.  
  • If the VSL is higher than the ICAF, we implement the measure. If the amount we are willing to pay to save a life is smaller than the price tag of the measure, we do not implement it. 
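The ICAF-versus-VSL decision rule can be sketched as below. The 10 million euro cost and the 40 million euro ICAF are the figures from the example (implying 0.25 expected lives saved); the VSL of 30 million euros is a hypothetical company value added for illustration.

```python
def icaf(expected_cost: float, expected_lives_saved: float) -> float:
    """Implied cost of averting a fatality: expected cost per expected life saved."""
    return expected_cost / expected_lives_saved

price_tag = icaf(10e6, 0.25)  # ICAF = 40 million euros per expected life saved
VSL = 30e6                    # hypothetical willingness to pay per statistical life

# Implement the measure only if what we are willing to pay (VSL)
# exceeds the price tag (ICAF).
decision = "implement" if VSL > price_tag else "do not implement"
print(price_tag, decision)    # 40000000.0 do not implement
```

Here the VSL (30 million) is below the price tag (40 million), so the measure would not be implemented.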

Typical assignment and how it is solved: 

  • You consider investing in one of two projects, a) or b). The costs of the projects are 10 and 20, respectively. The rewards are given by the probability distribution. Compute related cost-effectiveness indices. Should you then choose the project with the highest index? 
Profit value   100    200    1000 
Project a)     0.45   0.50   0.05 
Project b)     0.60   0.30   0.10 

Solution

  • Find the expected benefits for both projects and then the ratio. For project a): E[B] = 0.45*100 + 0.5*200 + 0.05*1000 = 195. Then divide the given cost of the project, 10, by this: 10/195 ≈ 0.05. 
  • The same for project b): E[B] = 0.6*100 + 0.3*200 + 0.1*1000 = 220, giving 20/220 ≈ 0.09.  
  • Should you then choose the project with the highest index? Not necessarily. Remember that these are expected values: project b) has a higher probability of the lowest profit, which might not be something we want. Also, no knowledge and SoK judgements are shown here, so those need to be considered as well.  
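The worked example above can be checked with a short script; the profits, probabilities and costs are those given in the assignment.

```python
def expected_benefit(profits, probs):
    """E[B]: sum of profit values weighted by their probabilities."""
    return sum(x * p for x, p in zip(profits, probs))

profits = [100, 200, 1000]
cost_a, probs_a = 10, [0.45, 0.50, 0.05]
cost_b, probs_b = 20, [0.60, 0.30, 0.10]

eb_a = expected_benefit(profits, probs_a)  # 195
eb_b = expected_benefit(profits, probs_b)  # 220

# Cost-effectiveness index: expected cost per unit of expected benefit.
print(round(cost_a / eb_a, 2), round(cost_b / eb_b, 2))  # 0.05 0.09
```

Project a) has the lower cost per unit of expected benefit (about 0.05 versus 0.09), but as noted above, the index alone does not settle the choice.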

ICAF 

  • The implied cost of averting a fatality: the expected cost per expected life saved. 
  • The price tag of saving one statistical life with a given measure. 

VSL 

  • The maximum value the decision-maker is willing to pay to reduce the expected number of fatalities by 1. 
  • The VSL is the money one is willing to pay, whereas the ICAF is the amount one has to pay. 

Multi-attribute analysis 

  • An approach where the goal is not to transform all the various concerns into one dimension (typically monetary values), but to provide judgements on each attribute separately, using a combination of quantitative and qualitative assessments.  
  • The quantification and related rules of the other two analyses may result in important aspects of the decision-making process being misrepresented or neglected. Many decision-making processes involving risk are not trivial – they cannot be replaced by formulas and numbers. Assessments and management of knowledge and uncertainty are demanding and contain elements that cannot be easily measured.  
  • The analysis is a decision support tool addressing the consequences of the various decision alternatives separately for the various attributes of interest. For each decision alternative, attention is given to attributes such as investment costs, operational costs, safety, environmental issues and so on.  
  • It is the decision-maker who has to weigh the different attributes. The trade-offs are made implicitly.  
  • MAA also acknowledges the importance of MRJ. 
  • MAA is generally the recommended approach within risk management.  
  • Cost-effectiveness analysis. Pros: simple calculations; easy to compare options. Cons: the use of expected values has limitations with respect to reflecting risks and uncertainties. 
  • Cost-benefit analysis. Pros: simple calculations; all attributes expressed on a single monetary scale. Cons: also relies on expected values, with the same limitations in reflecting risks and uncertainties. 
  • Multi-attribute analysis (the generally recommended approach). Pros: not forced to transform the different attributes into one scale, making it easier to reflect risks and uncertainties that cannot be properly captured using expected values. Cons: more resource-demanding to compare options; requires decision-makers to make conscious judgements on how to weight the different attributes. 

Risk Science 

Definitions:

  • Most updated and justified beliefs (knowledge) on risk fundamentals (concepts), risk assessment, risk perception and communication and risk management and governance. And the process that gives us this knowledge. 
  • Most updated and justified beliefs (knowledge) on risk analysis. 

Applied risk analysis (science) 

  • Puts the generic risk analysis science into real life – 'into practice'. 
  • The use of methods, models and approaches for risk understanding, characterization, assessment, perception, communication, management and governance for specific activities. 
  • Used in specific real-life situations. 

Generic risk analysis (science) 

  • Is what Terje Aven's book is about: the general and basic concepts, methods, models and approaches for risk understanding, characterization, assessment, perception, communication, management and governance.  

Knowledge 

  • Is knowledge static in time? No. The risk concept can be used to explain this: risk was at some point – and still is in some schools – defined as C*P (consequences and probability). This knowledge was contested, which is why today we have the (C, U) risk concept.  

Exam question 

  • Risk science is the most updated and justified knowledge on risk fundamentals, risk assessment, risk perception, risk communication and risk management and governance.  
  • Is this knowledge static in time? Explain. Can it be contested? Is science one voice? Discuss. 

Solution: 

  • The knowledge is not static in time, just like any knowledge ever generated. Knowledge is based on evidence, and what we know today might not hold tomorrow, and very likely not in ten years. The knowledge that the science rests on will therefore always be exposed to change, and it is never uncontested: as soon as a scientist presents new evidence, the standing knowledge can be contested. Nor is science one voice, since many different schools discuss and contest the different types of knowledge that emerge. 
