In this post, I will provide a summary of the RIS660 course. Each lecture is divided into a set of questions. First, the question is presented with a concise answer. After this, a more elaborate answer is given. All the questions and their concise answers are made into flashcards.
Flashcards
Go to the QUIZLET website to access the flashcards and select different learning options.
The Quizlet password is 066SIR.
Lecture 1: Risk, safety and security research; Developing ideas; Ethical considerations
Listen to Lecture 1
Question 1-1: What is risk, safety, and security research?
Generating new knowledge about risk, safety and security.
Risk, safety, and security research involve generating new knowledge about these broad, multifaceted topics. It encompasses various scientific disciplines and theoretical perspectives such as social sciences, psychology, engineering, medicine, and statistics. The field explores the meaning of these objects, the scope of research, philosophical underpinnings, and appropriate methodological approaches. There are diverse perspectives within risk, safety, and security sciences, including safety engineering, organizational safety, occupational safety, enterprise risk management, cyber security, societal resilience, and food safety management.
Question 1-2: Distinguish between different types of risk, safety, and security research.
Applied vs. fundamental, descriptive vs. analytical, quantitative vs. qualitative, conceptual vs. empirical.
Risk, safety, and security research can be categorized into applied vs. fundamental, descriptive vs. analytical, quantitative vs. qualitative, and conceptual vs. empirical. These categories are not mutually exclusive, and research often combines elements from different types.
Question 1-3: What are examples of questionable practices in research ethics?

Practices affecting participants: self-esteem, rights, physical/mental stress, privacy, benefits, and the question of justification.
Examples of questionable practices include inducing participants to commit acts diminishing their self-esteem, violating rights of self-determination, exposing participants to physical or mental stress, invading their privacy, withholding benefits from some participants, and not treating participants fairly or with respect. Justification for such practices might be considered if the research can provide substantial benefits and there is no alternative method.
Question 1-4 What are important ethical considerations in research involving people?

Informed consent, confidentiality, voluntary participation, withdrawal of consent, non-identifiability, handling of personal data.
Informed consent and confidentiality are crucial in research involving people. Informed consent entails voluntary participation, providing information about the project’s purposes and implications, and allowing participants to withdraw consent at any time. Confidentiality ensures that participants and informants remain non-identifiable in published research, and personal data is handled confidentially and with care.
Question 1-5 What is the norm regarding confidentiality and privacy in research involving people?
That participants are non-identifiable, and personal data is handled with care in conformity with the GDPR.
The norm is that participants and informants are not identifiable in published research, and any personal data is handled confidentially and with care. The textbook Real World Research, written prior to the GDPR, sets out guidelines for data protection. Relevant links for further information: sikt.no and forskningsetikk.no.
Lecture 2
Listen to Lecture 2
Question 2-1 Qualitative vs quantitative: What to select?
The most suitable method for answering the research question should be chosen. This depends on the research strategy, the form of the research question, whether control of behavioral events is required, and whether the research focuses on contemporary events.
| Method | Form of Research Question | Requires Control of Behavioral Events? | Focuses on Contemporary Events? |
|---|---|---|---|
| Experiment | How, why? | Yes | Yes |
| Survey | Who, what, where, how many, how much? | No | Yes |
| Archival Analysis | Who, what, where, how many, how much? | No | Yes/No |
| History | How, why? | No | No |
| Case Study | How, why? | No | Yes |
Question 2-2 What are case studies and when are they suitable?
- Definition: In-depth examination within a real-life context.
- Suitability: Explore complex social phenomena, contextual relationships, and unique situations.
- When Appropriate: To answer “how” and “why” questions, aiming for comprehensive understanding.
Question 2-3 Why research multiple cases and what is meant by ‘grounded theory’?
Researching Multiple Cases:
Explore patterns, commonalities, and variations across different instances. Useful for comparing two situations.

Grounded Theory:
- Methodology: Develops theories directly from data.
- Approach: Systematic analysis without preconceived theories.
- Goal: Generate concepts and theories grounded in the data for a deeper understanding.

Question 2-4 What are the four steps of a research design?
Research Process Steps:
- Design:
- Concepts/Theory, Causal Validity, Generalization, Repeatability.
- Preparation (Protocol Development):
- Qualitative vs. Quantitative, Screening of Respondents, Field Procedures, Pilot Studies.
- Collection:
- Collecting data preferably using triangulation (using multiple sources/methods).
- Analysis (Results + Analysis):
- Analysing all the evidence. Studying all the evidence is only possible with a narrow scope.
Lecture 3 Building the bridge between SAM535 and RIS660
Listen to Lecture 3
Question 3-1 What do we mean by ontology, methodology and epistemology?
Ontology is the philosophical study of being, as well as related concepts such as existence, becoming, and reality.
Methodology is the study of research methods. The term can also refer to the methods themselves or to the philosophical discussion of associated background assumptions.
Epistemology is the (philosophical) analysis of the nature of knowledge and the conditions required for a belief to constitute knowledge, such as truth and justification.
Question 3-2 The six steps of the scientific framework:
- Ontology: "What is reality?"
- Epistemology: "What can we know, and how can we come to know it?"
- Theoretical perspective: "What approach can we use to get knowledge?"
- Methodology: "What procedure can we apply to acquire knowledge?"
- Methods: "What tools can we use to acquire knowledge?"
- Sources: "What data can we collect?"
Question 3-3 Actions, actors, games, and structures.
Action: The fact or process of doing something (as agents), typically to achieve an aim.
Actors: Participants in an action or process.
Games: Confined arenas that challenge the interpretation and optimisation of rules and tactics, as well as of time and space. The rules of the game constrain the players but also enable them to pursue their ends.
Structure/system: A set of things working together as parts of a mechanism or an interconnecting network; a complex whole; a set of principles or procedures according to which something is done; an organized scheme or method.
Question 3-4 Explain Realism vs Constructivism in relation to risk.

Realism:
Risk is an objective hazard, threat, or danger that exists and can be measured independently of social and cultural processes. Risk perceptions may, however, be distorted or biased through a social and cultural framework.
Weak constructionism / critical realism:
Risk is an objective hazard, threat, or danger that is inevitably mediated through social and cultural processes and can never be known in isolation from these processes.
Strong constructionism:
Nothing is a risk in itself; what we understand to be a "risk" (or hazard, or danger) is a product of historically and culturally contingent ways of seeing.
Question 3-5 Account for differences between social identity and personal identity and how these differences relate to our definitions of agent and actor.

Social identity is derived from group memberships, influencing how one acts as an agent or performs as an actor according to group norms. Personal identity is based on unique traits and experiences, driving individual actions as an agent and personal expressions as an actor.
Social Identity:
- Definition: Social identity refers to the aspects of an individual’s self-concept that are derived from their membership in social groups. It involves how one perceives themselves in relation to broader social categories, such as nationality, religion, or organizational affiliations.
Personal Identity:
- Definition: Personal identity pertains to the individual characteristics that distinguish a person from others. It encompasses unique qualities, experiences, beliefs, and traits that define who a person is as an individual.
Relation to Agent and Actor:
- Agent:
- Social Identity: Acts based on group affiliation.
- Personal Identity: Acts based on unique personal characteristics.
- Actor:
- Social Identity: Performs roles aligned with group norms.
- Personal Identity: Expresses behaviors based on individual traits and experiences.
Question 3-6 What are the differences in perspective of Expert vs Laypeople
Experts approach risk with a more analytical and statistically driven perspective, while laypeople often rely on personal experiences, emotions, and intuitive judgments in their assessment.
Lecture 4 Interviews, Focus Groups, and Additional Methods of Data Collection

Listen to Lecture 4
Question 4-1 Name three different types of interviews and their advantages and disadvantages.
Fully structured, semi-structured, and unstructured. Structure offers more consistent results, improving analysis, but decreases flexibility and the ability to delve deeper.
Fully Structured Interview:
- Advantages:
- Consistency: Ensures consistency in data collection as all participants are asked the same set of predetermined questions.
- Quantitative Analysis: Facilitates quantitative analysis due to standardized responses.
- Disadvantages:
- Limited Flexibility: Offers limited flexibility for probing or exploring unexpected insights.
- Lack of Depth: May lack depth in understanding complex responses.
Semi-Structured Interview:
- Advantages:
- Flexibility: Allows the interviewer to follow up on interesting responses, providing flexibility in the conversation.
- In-Depth Exploration: Permits a more in-depth exploration of responses compared to fully structured interviews.
- Disadvantages:
- Potential Variability: Responses may vary between participants, making it challenging to standardize data.
Unstructured Interview:
- Advantages:
- Rich Data: Generates rich and detailed data, allowing for a deep understanding of the interviewee’s perspective.
- Exploratory Nature: Encourages exploration of diverse topics and unexpected insights.
- Disadvantages:
- Subjectivity: Results in subjective data collection, making analysis and comparison more challenging.
- Time-Consuming: Can be time-consuming due to the open-ended nature of the conversation.
Question 4-2 What are the advantages and disadvantages of focus groups?
An efficient technique, collecting much data from several participants in a short time. But focus groups can be difficult to moderate, analyse, and generalize from.
Advantages
- A highly efficient technique (data collected from several participants at the same time)
- Participants enjoy the interaction
- Participants are stimulated by each other to share opinions
- Relatively inexpensive and less time-consuming
Disadvantages
- The number of questions covered is limited
- The role of the moderator may be challenging
- Conflicts may arise
- Confidentiality can be a problem
- The results are difficult to generalize
Lecture 5 Fieldwork and Ethnography
Listen to Lecture 5
Question 5-1: What is the difference between participant observations and structured observations and what sort of biases can occur?
Participant observations involve active participation and immersion, while structured observations follow a predetermined plan.
Observational Biases:
- Selective Attention: Focusing on specific aspects of the environment while neglecting others.
- Selective Coding: Interpreting or categorizing observed behaviors based on preconceived notions or expectations.
- Selective Memory: Recalling or documenting certain events more vividly than others, influencing the overall representation.
- Interpersonal Factors: Biases introduced due to the relationship between the observer and the observed, impacting objectivity.
Question 5-2 What are the three quality criteria used in ethnography?

Veracity: Asks if the research truthfully describes what it set out to study.
Objectivity: Questions whether the research avoids personal biases in drawing conclusions.
Perspicacity: Assesses whether the research’s insights can be applied to understanding human behavior in other situations.
Lecture 6 The analysis and interpretation of qualitative data
Listen to Lecture 6
Question 6-1: Name the three approaches in Qualitative Analysis:
Quasi-Statistical, Thematic Coding, Grounded Theory.
Quasi-Statistical Approaches:
Analyzes qualitative data using word or phrase frequencies and inter-correlations to assess the importance of terms and concepts.
Example: Content analysis, where researchers quantify and analyze the occurrence of specific words or phrases to draw insights from the data.
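The word-frequency idea behind the quasi-statistical approach can be sketched in a few lines of Python. The interview excerpts below are invented for illustration, not course material:

```python
from collections import Counter
import re

# Hypothetical interview excerpts (made-up illustrative data).
documents = [
    "Workers reported that safety training reduced perceived risk.",
    "Management believed risk was low because safety procedures existed.",
    "Training and procedures were mentioned as key to safety culture.",
]

def term_frequencies(texts):
    """Count lowercase word occurrences across all texts."""
    words = []
    for text in texts:
        words.extend(re.findall(r"[a-z']+", text.lower()))
    return Counter(words)

freqs = term_frequencies(documents)
print(freqs.most_common(3))  # the most frequent terms across the corpus
```

A real content analysis would also filter out stop words and work with a coding frame, but the core operation is exactly this kind of counting.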
Thematic Coding Approach:
Involves coding parts of qualitative data, grouping codes with the same label into themes.
Example: Researchers identify and label portions of data, grouping similar codes under broader themes. These themes then guide further analysis.
Grounded Theory Approach:
Generates codes directly from qualitative data, emphasizing the researcher’s interpretation of meanings or patterns.
Example: Researchers develop a theory based on emergent patterns and meanings observed in the data, creating codes grounded in their interaction with the information.
Question 6-2: How to Conduct Thematic Coding Analysis?
Familiarizing yourself with your data; generating initial codes; identifying themes; constructing thematic networks; integration and interpretation.
Familiarizing Yourself with Your Data:
- Process: Transcribe data (if necessary), read and re-read the data, and note down initial ideas.
- Importance: Immerse yourself in the data to gain a deep understanding, particularly after initial data collection.
Generating Initial Codes:
- Methods: Devise a framework or template, or generate codes inductively through interaction with the data.
- Coding: Systematically assign codes to extracts across the entire data set, grouping similar extracts under the same code.
Identifying Themes:
- Process: Collate codes into potential themes, gathering all relevant data for each theme.
- Validation: Check if themes align with coded extracts and the entire data set; revise codes or themes if necessary.
Constructing Thematic Networks:
- Visualization: Develop a thematic ‘map’ or network of the analysis.
- Representation: Create a visual representation of how themes connect and interact within the data.
Integration and Interpretation:
- Comparison: Make comparisons between different aspects of the data using techniques like tables and networks.
- Exploration: Explore, describe, summarize, and interpret patterns within the data.
- Quality Assurance: Demonstrate the quality of the analysis through thorough exploration and interpretation.
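The collating step (gathering coded extracts under themes) can be sketched in Python. The extracts, codes, and theme definitions below are hypothetical; the actual assignment of codes is an interpretive act, not a mechanical one:

```python
# Hypothetical coded extracts: each data extract tagged with one or more codes.
coded_extracts = [
    ("I never skip the pre-job briefing", ["procedures"]),
    ("My supervisor checks our protective gear", ["supervision", "equipment"]),
    ("We improvise when the checklist doesn't fit", ["procedures", "adaptation"]),
]

# Hypothetical theme definitions: codes grouped under broader themes.
themes = {
    "Formal safety practices": {"procedures", "supervision"},
    "Informal adaptation": {"adaptation"},
}

def collate(coded_extracts, themes):
    """Gather all extracts whose codes fall under each theme."""
    out = {theme: [] for theme in themes}
    for extract, codes in coded_extracts:
        for theme, theme_codes in themes.items():
            if theme_codes & set(codes):  # any shared code puts the extract in the theme
                out[theme].append(extract)
    return out

by_theme = collate(coded_extracts, themes)
for theme, extracts in by_theme.items():
    print(theme, "->", len(extracts), "extracts")
```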
Question 6-3 Thematic Analysis – Deduction, Induction, or Abduction?
Deduction:
- Process: Starts with a theory, makes an observation, and infers a result.
- Application: Begins with a predefined framework or theoretical perspective.
Induction:
- Process: Starts with a case, makes an observation, and then generalizes to a wider population.
- Application: Involves deriving general principles or patterns from specific instances.
Abduction:
- Process: Can start with either a theory or an observation, relate and interpret them.
- Characteristics:
- Theory and Observation: Involves using theory in conjunction with observation.
- Interpretation: Focuses on producing an interpretation rather than generalization.
All forms can be part of a thematic analysis.
Lecture 7 The analysis and interpretation of qualitative data part II
Listen to Lecture 7
Question 7-1 What is a Flexible Research Design and what are its Characteristics?
Flexible Research Design: A flexible research design is adaptable and evolves as the research progresses, allowing for adjustments in response to emerging insights and data.
Characteristics of a Good Flexible Design:
- Multiple Data Collection Techniques: Utilizes various qualitative data collection techniques and can include quantitative data. The data are summarized, and information is given on how they were collected.
- Framed within Flexible Design Assumptions: Aligns with the assumptions and characteristics of flexible design, including an evolving nature, the presentation of multiple realities, the researcher as a data collection instrument, and a focus on participants’ views.
- Informed by Existing Research Traditions: Acknowledges and incorporates one or more traditions of research.
- Starts with a Single Idea or Problem: Begins with a singular idea or problem that the researcher aims to understand.
- Rigorous Approach to Data Collection and Analysis: The study shows a rigorous approach to data collection, data analysis, and report writing.
- Analyses Using Multiple Levels of Abstraction: Analyzes data using multiple levels of abstraction, often presenting studies in stages or layering analyses from specific themes to larger perspectives.
- Clear and Engaging Writing: Ensures clear, engaging writing that immerses the reader in the research context.
Question 7-2 How to determine an appropriate sample in flexible designs?
In flexible research designs, the focus shifts from random sampling to purposive or theoretical sampling. The emphasis is on selecting participants based on their direct relevance to research questions or theoretical constructs, rather than seeking representativeness in a population. Generalizations in flexible designs differ from statistical ones, aiming for theoretical insights, evidence of mechanisms in specific contexts, and evaluating the potential impact for similar cases across different contexts.
Question 7-3 How to establish trustworthiness in flexible design research?
- Accuracy and Completeness: Regularly assess whether accounts of observations and findings are accurate and complete.
- Transparent Interpretation: Clarify the process of reaching interpretations.
- Consideration of Alternative Explanations: Actively explore alternative explanations and understandings of the phenomenon under study.
- Prolonged Involvement: Establish trust through prolonged involvement in the research context. Invest significant time in the field to build rapport and deepen understanding, fostering a more genuine connection with participants.
- Triangulation: Combine diverse data sources or methods to cross-verify findings and increase the reliability of the study.
- Peer Debriefing and Support: Engage in discussions with peers to gain valuable insights, perspectives, and feedback, ensuring a more robust research process.
- Member Checking: Share findings with participants for their feedback and validation, ensuring that their perspectives align with the researcher’s interpretations.
- Negative Case Analysis: Systematically examine instances that contradict or challenge the emerging patterns, contributing to a more comprehensive understanding.
- Audit Trail: Document every step of the research journey, from data collection to analysis, allowing for transparency and enabling others to follow and assess the research path.
Lecture 8 Fixed designs
Listen to Lecture 8
Question 8-1: What are fixed research designs and why use them?
Fixed research designs involve a rigid, unchangeable data collection process to minimize the risk of unusable data. These designs are theory-driven, with constant research questions, predetermined methods, results, and analysis. They are used for their structured, controlled approach, ensuring minimal flexibility and precise adherence to the study plan.
Fixed research designs are employed when the data collection process cannot be altered once initiated, minimizing the risk of accumulating large amounts of unusable data. This approach requires a well-defined conceptual framework, indicating a theory-driven orientation. It is suitable when research questions remain constant throughout the study, methods and expected results are predetermined, a clear sampling strategy is in place, and the analysis approach is predetermined. The fixed design ensures a structured and controlled research process with minimal flexibility during the study.
Question 8-2: How do we establish trustworthiness in fixed research designs?
In fixed research designs, trustworthiness is achieved by thorough, unbiased, and objective execution along with ensuring validity and reliability of the study.
To establish trustworthiness in fixed research designs, the process begins with common-sense actions such as conducting a thorough and honest job, and maintaining an unbiased and objective approach throughout the study. Here’s a detailed breakdown of how trustworthiness is supported by two key concepts: validity and reliability.
Validity involves several components:
- Justification of Conclusions: Ensuring the conclusions drawn from the research are well-supported by evidence.
- Measurement Alignment: Confirming that the metrics used in the study accurately reflect what they are supposed to measure, known as construct validity.
- Study Design and Findings: Checking that the research design and its findings reliably support the conclusions, referred to as internal validity.
- Extent of Applicability: Assessing how well the findings can be generalized to other settings or groups, which relates to external validity.
Reliability focuses on consistency:
- This includes maintaining dependability in the data collection and analysis processes to ensure that results are consistent over time and replicable under similar conditions.
Question 8-3: Elaborate on experimental designs, true experiments, and Randomized Controlled Trials.
Experimental designs involve manipulating independent variables and controlling others to measure effects on dependent variables. True experiments, like Randomized Controlled Trials (RCT), use random assignment to treatment or control groups to ensure validity. Quasi-experiments and non-random experimental designs, while useful, require careful interpretation due to potential biases.
Experimental Designs:
Experimental designs involve the assignment of participants to different conditions, manipulation of independent variables by the experimenter, and the measurement of the effects on dependent variables, while controlling for all other variables.
True Experiments and Randomized Controlled Trials (RCT):
True experiments, exemplified by Randomized Controlled Trials (RCT), employ random allocation of participants to intervention/treatment and control groups. Examples of true experiments include post-test-only RCTs, pre-test post-test RCTs, and matched pairs experiments, among others.
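Random allocation itself is mechanically simple. A minimal Python sketch, using hypothetical participant IDs and a fixed seed so the allocation is reproducible:

```python
import random

def randomize(participants, seed=42):
    """Randomly allocate participants to treatment and control groups."""
    rng = random.Random(seed)   # fixed seed: same allocation on every run
    shuffled = participants[:]  # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

participants = [f"P{i}" for i in range(1, 21)]  # hypothetical IDs P1..P20
treatment, control = randomize(participants)
print(len(treatment), len(control))  # 10 10
```

The random assignment, not the analysis, is what makes the design a true experiment: it balances unmeasured confounders across groups in expectation.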
Quasi-Experiments:
In the context of studying the impact of a new job safety analysis (JSA) template, the following designs should be avoided:
- A single group of workers asked about the usefulness of using the new template.
- Two groups representing different employers asked about usefulness after one group used the new template.
- A single group asked about usefulness before and after template implementation.
Experimental Design without Random Allocation:
While experimental designs without random allocation can be useful, they must be approached with caution. Single case experiments, a class of experimental designs studying a single group, use baseline conditions (A) and intervention phases (B) to measure changes in a dependent variable under different conditions.
Non-Experimental Fixed Designs:
Non-experimental fixed designs lack manipulation and are suitable in situations where manipulation is not possible, feasible, or ethical. Surveys are examples of non-experimental fixed designs.
Question 8-4: What are: Quasi-experiments, Single case experiments, Non-experimental fixed designs
Quasi Experiments:
Quasi experiments are research designs that lack the random assignment of participants to experimental and control groups. While they involve manipulation of an independent variable, the absence of randomization limits the ability to establish causal relationships. Common in real-world settings, they address practical and ethical constraints associated with randomization.
Single Case Experiments:
Single case experiments, also known as small N, single subject, or single case designs, study a single group. Utilizing baseline conditions (A) and intervention phases (B), they aim to show changes in a dependent variable under different conditions. Examples include interrupted time series designs, denoted as AB, ABA, ABAB, or even ABCA (introducing a second treatment phase C).
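The ABAB logic can be illustrated numerically. The weekly incident counts below are invented for illustration; the argument is that if the dependent variable drops in each B phase and rebounds in each A phase, the change tracks the intervention rather than an external trend:

```python
# Hypothetical weekly incident counts for an ABAB interrupted time series:
# A = baseline condition, B = intervention phase.
phases = {
    "A1": [8, 7, 9, 8],
    "B1": [5, 4, 4, 3],
    "A2": [7, 8, 7, 6],
    "B2": [3, 4, 3, 2],
}

def phase_means(phases):
    """Mean of the dependent variable in each phase."""
    return {label: sum(values) / len(values) for label, values in phases.items()}

means = phase_means(phases)
print(means)  # lower means in B1/B2 than in A1/A2 suggest an intervention effect
```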
Non-Experimental Fixed Designs:
Non-experimental fixed designs are characterized by a lack of manipulation of the situation, making them suitable in situations where manipulation is not possible, feasible, or ethical. Examples include surveys, particularly in scenarios where studying the phenomenon of interest does not involve intervention or treatment.
Question 8-5: How to determine Sample size in fixed designs?
Determining Sample Size in Fixed Designs:
The determination of sample size in fixed designs is not straightforward and depends on the types of analyses planned for the data. For simple experimental designs and tests, around 15 participants per group is often considered sufficient. In surveys, the required sample size is typically influenced by the number of variables included. Making statements about a population based on a sample requires considering the accuracy of the estimate, which is influenced by the sample size.
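For surveys estimating a population proportion, a common rule of thumb (a standard statistical formula, not from the lecture) is n = z²·p(1−p)/E², with p = 0.5 as the conservative worst case. A small Python sketch:

```python
import math

def sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Required n to estimate a population proportion within +/- margin_of_error.

    confidence_z=1.96 corresponds to 95% confidence; p=0.5 is the
    conservative (worst-case) assumption about the true proportion.
    """
    return math.ceil(confidence_z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size(0.05))  # 385 respondents for a +/-5% margin at 95% confidence
```

Note that this applies to simple random samples from large populations; clustered or stratified designs, and planned subgroup analyses, change the required n.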
Question 8-6: What are the benefits of multistrategy (mixed method) designs
- Triangulation: Corroboration between quantitative and qualitative data enhances the validity of findings.
- Completeness: Combining research approaches produces a more complete and comprehensive picture of the research topic.
- Offsetting Weaknesses: Multistrategy designs help neutralize the limitations of each approach while building on their strengths, leading to stronger inferences.
- Addressing Different Research Questions: These designs can address a wider range of research questions compared to single-method fixed or flexible designs.
- Dealing with Complexity: Valuable in real-world settings due to the complex nature of phenomena, requiring a range of perspectives for understanding.
- Explaining Findings: One research approach can be used to explain data generated from another, particularly useful for unanticipated or unusual findings.
- Illustration of Data: Qualitative data can illustrate quantitative findings, providing a richer understanding of the investigated phenomenon.
- Refining Research Questions: Qualitative phases refine research questions or develop hypotheses for testing in subsequent quantitative phases.
- Instrument Development and Testing: Qualitative phases may contribute to the development and testing of instruments used in quantitative phases.
Lecture 9 Collection of quantitative data
Listen to Lecture 9
Question 9-1: What are four typical features of a survey in risk and safety research?

Surveys often have a fixed research design, are predominantly non-experimental, utilize standardized data collection from many people, and use representative samples to study a population.
Elaborate Answer:
Surveys are a common methodology in applied risk and safety research due to their versatility and effectiveness in gathering data from diverse populations. The four typical features of surveys include:
- Fixed Research Design: Surveys usually follow a structured format where the design does not change once established. This allows for consistent data collection across various subjects.
- Non-experimental Design: Most surveys are observational and do not involve the manipulation of variables, which is characteristic of non-experimental designs. However, they can also be adapted to experimental setups where certain conditions are manipulated.
- Standardized Data Collection: Surveys collect data using standardized methods such as questionnaires or interviews that are administered to a large number of participants. This standardization ensures that the data are comparable and measurable across all respondents.
- Representative Samples: Surveys often use representative samples that reflect the characteristics of a larger population.
Question 9-2: What are common validity issues of surveys?

Ambiguous questions, biases in the sample, and differences between what respondents answer and what they actually do, feel, or believe.
From the lecture slides:
• Ambiguous or incomprehensible questions
• Low response rates (Note from me: This is a reliability issue, not a validity issue. Reliability is about the replicability of the study, validity is about measuring the right thing. The response rate does not tell if the right thing was measured or not).
• Bias in the sample
• Differences in what participants answer and what they do/how they feel/what they believe
Question 9-3: What are the three requirements to establish causation in research?

Causation requires that the variables correlate, that the cause precedes the effect, and that other explanations for the correlation can be ruled out.
- Variables Correlate: Suppose a survey investigates the relationship between the use of personal protective equipment (PPE) and the incidence of workplace injuries. To establish validity, there must be a demonstrable correlation where increased use of PPE correlates with decreased injuries.
- Cause Precedes Effect: For the survey findings to be valid, it must be shown that the use of PPE (cause) occurs before the reduction in injury rates (effect).
- Other Explanations Ruled Out: It must also be demonstrated that other potential causes of the reduction in injury rates, such as improved training or safer workplace practices, are not the actual reasons behind the observed correlation. This involves controlling for these variables or showing through statistical analysis that their impact is negligible compared to that of PPE.
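Of the three requirements, only the first is directly computable. With hypothetical monthly PPE-usage rates and injury figures, a Pearson correlation can be calculated as below; a strong negative r satisfies requirement one only, and says nothing about temporal order or rival explanations:

```python
# Hypothetical monthly data: PPE usage rate vs. injuries per 1000 workers.
ppe_rate = [0.60, 0.65, 0.70, 0.75, 0.80, 0.85]
injuries = [12, 11, 9, 8, 6, 5]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(ppe_rate, injuries)
print(round(r, 3))  # strongly negative: more PPE use, fewer injuries
```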
Question 9-4 Describe four different ways of conducting a survey and their advantages and disadvantages.

Surveys can be conducted via online, telephone, mail, or face-to-face methods. Online is cost-effective but may have sampling bias, telephone offers personal engagement but is costly, mail surveys have broad coverage but slow response times, and face-to-face surveys yield high-quality data but are expensive and time-intensive.
1. Online Surveys
Advantages:
- Cost-effective: Low cost as there are no physical materials or travel expenses.
- Broad Reach: Can reach a large and geographically dispersed audience quickly.
- Speed: Rapid deployment and real-time data collection.
Disadvantages:
- Sampling Bias: May not represent populations without internet access.
- Low Response Rates: Often lower than other methods due to survey fatigue or lack of personal engagement.
- Quality of Data: More prone to careless responses as participants may rush through the survey.
2. Telephone Surveys
Advantages:
- Personal Touch: Can increase the likelihood of participation through direct interaction.
- Higher Response Rates: Typically higher than online surveys.
- Controlled Sampling: Easier to target specific demographic groups.
Disadvantages:
- Cost: More expensive due to the need for trained staff and telecommunication costs.
- Time-Consuming: Takes longer to collect data.
- Potential Bias: Call screening or refusal to participate can skew results.
3. Mail Surveys
Advantages:
- Wide Coverage: Can reach individuals without phone or internet access.
- Visual Design Flexibility: Allows for complex question formats and branding.
- Response Consideration: Respondents can take their time to consider answers thoroughly.
Disadvantages:
- Low Response Rates: Typically lower than telephone surveys.
- Slow Data Collection: Takes longer to send, receive, and process.
- Higher Costs: Printing, mailing, and data entry costs are significant.
4. Face-to-Face Surveys
Advantages:
- High Quality Data: Interviewers can ensure comprehension and thoroughness.
- Flexibility: Questions can be adapted or clarified on the spot.
- High Response Rates: Personal interaction boosts participation.
Disadvantages:
- High Cost: Travel, training, and time make it the most expensive method.
- Time-Intensive: Requires significant time for both preparation and execution.
- Potential Interviewer Bias: The presence or demeanor of the interviewer might influence responses.
Question 9-5: What are the key elements in designing and using a questionnaire for survey research?

Questionnaires should use simple, clear language, avoid double-barreled and leading questions, and ensure questions are relevant, understandable, and designed to elicit accurate responses.
- Simplicity in Language: Questions should be straightforward and free from jargon to be universally understandable.
- Relevance to Research Questions: Each question should directly contribute to answering the main research questions, ensuring all data collected is relevant.
- Avoidance of Biased Questions: Questions should be neutral, avoiding any wording that might lead or influence respondents’ answers.
- Clarity and Specificity: Questions should be specific and clear, avoiding any ambiguity that could confuse respondents or lead to varied interpretations.
Question 9-6: What steps are involved in developing a Likert scale for survey research?
Developing a Likert scale involves creating relevant statements, obtaining responses from a sample, scoring responses, and refining the scale based on discriminative power of items.
To develop a Likert scale in survey research, researchers typically follow these steps:
- Item Creation: Develop a series of statements that respondents can agree or disagree with, relevant to the concept being measured.
- Pilot Testing: Administer these items to a representative sample to gather initial data.
- Scoring Responses: Assign numerical values to the degrees of agreement or disagreement (e.g., Strongly Agree = 5, Strongly Disagree = 1).
- Analyzing Item Responses: Use statistical methods to analyze the responses, identifying items that effectively discriminate between different levels of the attribute or construct being measured.
- Scale Refinement: Remove or revise items that do not contribute effectively to the scale’s overall reliability and validity.
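The analysis and refinement steps above can be sketched with corrected item-total correlations, a common way to judge an item's discriminative power. This is a minimal illustration on simulated (made-up) responses, not data from the course:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical responses: 100 respondents x 5 Likert items (1-5).
# Items 0-3 reflect one underlying attitude; item 4 is pure noise.
trait = rng.normal(0, 1, 100)
items = np.clip(np.round(3 + trait[:, None] + rng.normal(0, 0.8, (100, 4))), 1, 5)
noise_item = rng.integers(1, 6, (100, 1))
responses = np.hstack([items, noise_item])

# Corrected item-total correlation: each item vs. the sum of the others.
# Items with low values discriminate poorly and are removal candidates.
def item_total_correlations(data):
    totals = data.sum(axis=1)
    return np.array([
        np.corrcoef(data[:, i], totals - data[:, i])[0, 1]
        for i in range(data.shape[1])
    ])

r = item_total_correlations(responses)
keep = r >= 0.3   # a common rule of thumb for retaining items
```

On this simulated data the four coherent items correlate strongly with the rest of the scale, while the noise item does not and would be dropped in refinement.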
Lecture 10: Guest lecture by Safetec on quantitative data, part 1
Listen to Lecture 10
Question 10-1: What are the critical steps in creating a dataset for quantitative analysis?

Critical steps include data export from tools like SurveyXact, cleaning data for inaccuracies or missing entries, renaming and reprogramming variables, and creating dummy variables for analysis.
Creating a robust dataset involves several key activities:
- Data Export: Efficiently transfer data from collection platforms (like SurveyXact) to analysis software (e.g., SPSS).
- Data Cleaning: Address incomplete entries, remove or recode irrelevant responses like ‘Don’t know’, and ensure that all variables are correctly named and formatted.
- Variable Adjustment: Recode data to create groups or categories that are of interest for the specific analysis.
- Quality Control: Use statistical methods like frequencies and cross-tabs to verify the data’s integrity and consistency.
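The cleaning and recoding steps can be sketched in pandas. The column names and codes below are illustrative, not the actual SurveyXact export:

```python
import pandas as pd
import numpy as np

# Hypothetical raw survey export: 99 codes a "Don't know" response.
raw = pd.DataFrame({
    "q1_safety_climate": [1, 2, 5, 99, 3],
    "department": ["Ops", "HSE", "Ops", "Admin", "HSE"],
})

# Data cleaning: recode "Don't know" (99) to missing.
df = raw.copy()
df["q1_safety_climate"] = df["q1_safety_climate"].replace(99, np.nan)

# Variable adjustment: dummy variables for the categorical department.
df = pd.get_dummies(df, columns=["department"], prefix="dept")

# Quality control: frequencies to verify the recoding worked.
freq = df["q1_safety_climate"].value_counts(dropna=False)
```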
Question 10-2: What techniques are used to explore relationships between two variables in quantitative research?

Techniques include correlation analysis, cross-tabulation, and using statistical tests like Chi-square to measure associations between categorical variables.
Exploring relationships involves:
- Correlation Analysis: Determines if changes in one variable predict changes in another, with methods tailored to data types (Pearson, Spearman).
- Cross Tabulation: Examines relationships between two categorical variables by creating a matrix of frequencies for each category.
- Statistical Tests: Chi-square tests, for example, help confirm the strength of associations between categorical variables.
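A cross-tabulation with a chi-square test can be run in a few lines with SciPy. The counts below are invented for illustration (echoing the PPE-and-injury example from Lecture 9):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cross-tabulation: PPE use (rows) vs. injury in the
# last year (columns). All counts are made up.
table = np.array([
    [10, 90],   # PPE users:     10 injured, 90 not
    [30, 70],   # non-PPE users: 30 injured, 70 not
])

chi2, p, dof, expected = chi2_contingency(table)
# A small p-value suggests PPE use and injury are associated.
```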
Question 10-3: What statistical methods are used to analyze differences in quantitative research?

Common methods include t-tests, ANOVA for comparing means across groups, and regression analysis to understand how variables predict outcomes.
Analyzing differences involves:
- T-Tests: Assess if two groups differ significantly on some continuous variable.
- ANOVA (Analysis of Variance): Used when comparing more than two groups to determine if there are any statistically significant differences between the means.
- Regression Analysis: Helps in predicting a dependent variable based on one or more independent variables, identifying the strength and nature of the relationship.
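A t-test comparing two groups is a one-liner in SciPy. The data here are simulated, with group means chosen so a difference exists:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

# Hypothetical safety-climate scores before and after an intervention
# (simulated; the true means differ by 0.4 points).
before = rng.normal(3.0, 0.5, 50)
after = rng.normal(3.4, 0.5, 50)

t_stat, p_value = ttest_ind(before, after)
significant = p_value < 0.05   # reject H0 of equal means at the 5% level
```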
Question 10-4: How do researchers handle the challenge of making causal inferences from correlational data?

Researchers use statistical controls, longitudinal designs, or experimental setups to differentiate between mere correlation and true causation.
Causal inferences require careful design:
- Control for Confounders: Using statistical techniques like regression to control for variables that might affect the relationship.
- Temporal Precedence: Ensuring that the cause precedes the effect in time, which is crucial for establishing a causal link.
- Ruling Out Alternatives: Using experimental or quasi-experimental designs to rule out other potential causal explanations.
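Controlling for a confounder can be demonstrated with a small simulation: a third variable drives both x and y, producing a correlation that vanishes once the confounder is statistically controlled (here via residualization, equivalent to a partial correlation). The variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Simulated confounding: "experience" (z) drives both training hours (x)
# and injury-free days (y); x has no direct effect on y.
z = rng.normal(0, 1, n)
x = z + rng.normal(0, 1, n)
y = z + rng.normal(0, 1, n)

raw_r = np.corrcoef(x, y)[0, 1]

# Statistical control: regress x and y on z, then correlate the residuals.
def residuals(v, z):
    beta = np.polyfit(z, v, 1)
    return v - np.polyval(beta, z)

partial_r = np.corrcoef(residuals(x, z), residuals(y, z))[0, 1]
```

The raw correlation is substantial, but the partial correlation given z is near zero, showing the association was spurious.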
Lecture 11: Guest lecture by Safetec on quantitative data, part 2
Listen to Lecture 11
Question 11-1: What is factor analysis, and how is it applied in quantitative research?

Factor analysis is a statistical method used to identify underlying variables, or factors, that explain the pattern of correlations within a set of observed variables.
Factor analysis groups together interrelated variables into factors based on their correlations, simplifying data analysis by reducing the number of variables:
- Identifying Clusters: Variables that correlate highly are grouped into fewer factors.
- Principal Component Analysis (PCA): Used to extract factors, where factors are chosen based on their eigenvalues (usually greater than 1) and factor loadings (typically above 0.4).
- Interpretation: The factors are interpreted to understand the latent constructs they represent, often using varimax rotation to enhance the interpretability.
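The eigenvalue-based extraction can be sketched on simulated questionnaire data with a planted two-factor structure, using the Kaiser criterion (eigenvalues of the correlation matrix greater than 1):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

# Simulated questionnaire: items 0-2 load on one latent factor,
# items 3-5 on another (an illustrative two-factor structure).
f1 = rng.normal(0, 1, n)
f2 = rng.normal(0, 1, n)
data = np.column_stack([
    f1 + rng.normal(0, 0.6, n), f1 + rng.normal(0, 0.6, n),
    f1 + rng.normal(0, 0.6, n), f2 + rng.normal(0, 0.6, n),
    f2 + rng.normal(0, 0.6, n), f2 + rng.normal(0, 0.6, n),
])

# Eigenvalues of the correlation matrix; the Kaiser criterion retains
# components with eigenvalues greater than 1.
corr = np.corrcoef(data, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
n_factors = int((eigenvalues > 1).sum())
```

The six items collapse to two retained factors, matching the planted structure.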
Question 11-2: How does exploratory factor analysis (EFA) differ from confirmatory factor analysis (CFA)?

EFA is used to identify potential structures in data without predefined notions, whereas CFA tests the hypothesis about the structure and fits data to a pre-specified model.
EFA and CFA serve different purposes in research:
- Exploratory Factor Analysis (EFA): Often employed when a researcher wants to discover the underlying structure of a dataset without prior assumptions. It helps in identifying new factors.
- Confirmatory Factor Analysis (CFA): Used to test whether measures of a construct are consistent with a researcher’s understanding based on theoretical knowledge. CFA assesses the goodness of fit of a hypothesized model using various fit indices.
Question 11-3: What is structural equation modeling (SEM), and when is it used?
SEM is a comprehensive statistical technique that combines factor analysis and multiple regression, used to analyze structural relationships between measured variables and latent constructs.
SEM is used to confirm theory and test complex models involving multiple pathways and interrelationships among variables:
- Model Specification: Involves setting up a model that specifies the expected relationships among observed and latent variables.
- Model Estimation: The model’s parameters are estimated, usually through maximum likelihood estimation.
- Model Evaluation: Fit indices such as RMSEA, CFI, and TLI are used to determine how well the model fits the data.
Question 11-4: How are type I and type II errors relevant in the analysis of differences in quantitative research?
Type I errors occur when a true null hypothesis is incorrectly rejected, while type II errors happen when a false null hypothesis is not rejected.
Understanding these errors is critical for research validity:
- Type I Errors (False Positives): Erroneously concluding that there is an effect when there is none, often controlled by setting a lower alpha level (e.g., 0.05).
- Type II Errors (False Negatives): Failing to detect an effect when there actually is one, typically addressed by increasing the sample size to enhance statistical power.
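The meaning of the alpha level can be verified by simulation: when both groups come from the same distribution, every significant result is a Type I error, and the long-run rejection rate should be close to alpha. A minimal sketch:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
alpha = 0.05
n_sims = 2000

# Both groups drawn from the SAME distribution, so every rejection of
# the null hypothesis is a Type I error (false positive).
false_positives = 0
for _ in range(n_sims):
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 1, 30)
    if ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

type1_rate = false_positives / n_sims   # should be close to alpha
```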
Question 11-5: How does regression analysis help in understanding the relationships among variables?
Regression analysis determines the strength and form of the relationship between a dependent variable and one or more independent variables, predicting outcomes based on predictor inputs.
- Linear Regression: Assesses the linear relationship between dependent and independent variables, quantified by coefficients indicating the magnitude and direction of relationships.
- Multiple Regression: Involves several independent variables to understand their collective impact on the dependent variable, adjusting for other factors.
- Model Diagnostics: Regression also involves checking for the assumptions of linearity, normality, homoscedasticity, and absence of multicollinearity.
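A multiple regression can be fit by ordinary least squares with NumPy alone. The data are simulated with known coefficients (the variable names and effect sizes are invented for illustration), so the recovered estimates can be checked against the truth:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300

# Simulated example: injury rate predicted from training hours and
# workload, with true coefficients -0.3 and +0.1.
training = rng.normal(10, 2, n)
workload = rng.normal(40, 5, n)
injury_rate = 5.0 - 0.3 * training + 0.1 * workload + rng.normal(0, 0.5, n)

# Ordinary least squares: solve for intercept and slopes.
X = np.column_stack([np.ones(n), training, workload])
coef, *_ = np.linalg.lstsq(X, injury_rate, rcond=None)
intercept, b_training, b_workload = coef
```

The estimated slopes land close to the true values of -0.3 and 0.1, adjusting each predictor for the other.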
Question 11-6: What role does the analysis of variance (ANOVA) play in quantitative research?
ANOVA is used to compare the means of three or more groups to determine if there are any statistically significant differences among the groups.
ANOVA identifies group differences under controlled conditions:
- One-way ANOVA: Tests differences across multiple groups based on a single independent variable.
- Factorial ANOVA: Allows researchers to assess the interaction effects between two or more categorical independent variables on a continuous dependent variable.
- Post Hoc Tests: If ANOVA results are significant, post hoc tests like Tukey’s HSD are performed to pinpoint exactly which groups differ from each other.
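A one-way ANOVA across three groups is straightforward in SciPy. The groups below are simulated, with one group mean deliberately shifted; if the omnibus test is significant, a post hoc test such as Tukey's HSD would then locate the differing pair:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(9)

# Hypothetical safety-climate scores in three departments; the third
# group's mean is shifted to create a detectable difference (simulated).
ops = rng.normal(3.0, 0.5, 40)
hse = rng.normal(3.0, 0.5, 40)
admin = rng.normal(3.5, 0.5, 40)

f_stat, p_value = f_oneway(ops, hse, admin)
groups_differ = p_value < 0.05
```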
Lecture 12: Conceptual studies in risk, safety, and security research
Listen to Lecture 12
Question 12-1: What distinguishes conceptual research from empirical research in risk, safety, and security studies?
Conceptual research focuses on analyzing and reflecting on abstract ideas and entities such as concepts, principles, and methods, rather than collecting and analyzing data to answer real-world questions.
Unlike empirical research that relies on observable data to validate theories or models, conceptual research in risk, safety, and security studies involves:
- Theoretical Exploration: Engages with theoretical constructs and the formulation of frameworks without the direct use of empirical data.
- Methodological Development: Focuses on enhancing conceptual understanding and proposing new approaches to tackle complex problems.
- Normative Analysis: Often involves evaluating and recommending the best practices or definitions, which are inherently normative and not purely factual.
Question 12-2: What is Type A and Type B knowledge in the context of risk research, and how are they generated?
Type A knowledge involves understanding the “world” using empirical methods and traditional scientific approaches, whereas Type B knowledge has a normative dimension, seeking the best concepts through argumentation and reasoning.
Question 12-3: What are the challenges in making conceptual contributions in risk, safety, and security research?
Conceptual contributions require innovative thinking, rigorous argumentation, and the ability to synthesize and critique existing knowledge and methodologies.
- Innovativeness: Developing new insights that significantly advance the field.
- Rigorous Evaluation: Ensuring conceptual clarity, internal consistency, and empirical applicability of the new concepts.
- Peer Review and Adoption: Navigating the peer review process and gaining acceptance and implementation in practical settings.
Lecture 13: Exam prep
Listen to all the audio combined.
Best of luck with the exam! Leave a comment if you liked it, and email me if anything is unclear or needs improving.