Episode 27: Threat Modelling and the Threat Landscape

Welcome to The Bare Metal Cyber CRISC Prepcast. This series helps you prepare for the exam with focused explanations and practical context.
In risk management, a threat is not just a possibility—it’s a potential force. A threat is any circumstance, agent, or event with the ability to exploit a vulnerability. Threats can come from outside the organization or from within. They may be intentional—like a targeted cyberattack—or accidental, like a misconfiguration or an employee mistake. They can be natural, such as floods or earthquakes, or man-made, such as sabotage or data theft. Common examples include malware infections, unauthorized access, insider fraud, denial-of-service attacks, or service disruptions due to environmental events. But a threat alone does not equal risk. It only becomes a risk when it intersects with a vulnerability and an exposed asset. On the CRISC exam, you’ll be asked to isolate the threat element in a scenario. Don’t confuse it with the conditions that allowed it or the impact that followed. The threat is the actor or catalyst—the spark, not the fire.
Threat modeling matters because it helps you see where that spark might land. It is the proactive practice of identifying how a threat could compromise a system, process, or asset—before it does. Threat modeling allows risk professionals to visualize potential attack paths, weak points, and failure conditions. It enables the design of targeted controls, optimized for the threat in question. Rather than defending everything equally, threat modeling prioritizes based on exposure and likelihood. It’s especially useful during system design, architecture review, application development, and process redesign. In CRISC, threat modeling connects technical scenarios to business outcomes. A well-modeled threat gives context to the risk assessment and sharpens the treatment plan. It’s not a theory—it’s a design tool. The exam will reward answers that show how technical risks are mapped to business impacts, and how modeling is used to prevent vulnerabilities from maturing into events.
Threat modeling and risk assessment are closely related but not interchangeable. Risk assessment is the broader activity—it includes evaluating impact, likelihood, and treatment options. Threat modeling focuses specifically on how threats might exploit weaknesses. In other words, threat modeling feeds the risk assessment with structured insight into what can go wrong. It shows who or what might attack, which paths they might take, and how they might succeed. This allows organizations to address threats before they lead to risk events. While risk assessments may be repeated quarterly or annually, threat modeling often occurs during the design or reconfiguration of systems or processes. On the CRISC exam, scenarios may reference system development or policy planning phases. If no threat modeling is mentioned, the best answer may involve including it. CRISC professionals must know when to model, what to model, and how to integrate that insight into formal risk evaluation.
Several structured methodologies exist for performing threat modeling. STRIDE helps identify threats by category: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. DREAD assesses risk severity based on Damage potential, Reproducibility, Exploitability, Affected users, and Discoverability. Other models include OCTAVE, VAST, and the use of attack trees to map potential paths. Not all of these will be referenced by name on the exam, but CRISC candidates are expected to understand the purpose and structure behind them. If a scenario says a model was used but an important factor was missed, you need to know what the model should have included. STRIDE focuses on system-based threats. DREAD supports prioritization. Attack trees help identify dependencies. The key is not memorization—it’s understanding how these tools contribute to the analysis and what gaps emerge when modeling is incomplete or outdated.
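To make these models concrete, here is a minimal Python sketch of the two most commonly cited ones. The STRIDE categories and DREAD factors come from the models themselves; the 1-to-10 rating scale, the simple averaging, and the spoofing example are assumptions made purely for illustration, since organizations weight and score DREAD in different ways.

```python
from dataclasses import dataclass
from enum import Enum


class Stride(Enum):
    """STRIDE threat categories for system-based threat identification."""
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFORMATION_DISCLOSURE = "Information Disclosure"
    DENIAL_OF_SERVICE = "Denial of Service"
    ELEVATION_OF_PRIVILEGE = "Elevation of Privilege"


@dataclass
class DreadRating:
    """DREAD factors, each rated 1 (low) to 10 (high) in this sketch."""
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    def score(self) -> float:
        """Average the five factors to support prioritization (one common approach)."""
        factors = [
            self.damage,
            self.reproducibility,
            self.exploitability,
            self.affected_users,
            self.discoverability,
        ]
        return sum(factors) / len(factors)


# Hypothetical example: a credential-spoofing threat against a customer login portal.
threat_category = Stride.SPOOFING
rating = DreadRating(damage=8, reproducibility=6, exploitability=7,
                     affected_users=9, discoverability=5)
print(f"{threat_category.value}: DREAD score {rating.score():.1f} of 10")
```

Running the sketch prints a single combined score, which is the kind of output that supports the prioritization role DREAD plays in the discussion above, while STRIDE supplies the category label for the modeled threat.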
A credible threat model identifies not only the threats but also the agents and vectors that deliver them. Threat agents are the actors—individuals, groups, systems, or forces that may initiate harm. These could be cybercriminals, careless employees, nation-states, or even automated tools. Vectors are the methods or pathways used to carry out an attack—phishing emails, rogue APIs, USB drives, social engineering, or exposed ports. Modeling becomes precise when the agent is matched to the vector and mapped to the intended target. CRISC professionals must factor in motivation, capability, and opportunity. For example, an insider threat with high access rights and personal grievance poses a very different risk than an external attacker probing for public vulnerabilities. On the exam, threat misclassification leads to flawed controls. You’ll be expected to identify the actor, trace the path, and understand the context that enabled it. Modeling is as much about accuracy as it is about imagination.
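If it helps to see how agent, vector, and target line up in a model, here is a small illustrative Python sketch. The ThreatScenario structure and both example entries are hypothetical, not part of any CRISC framework; they simply restate the insider-versus-external contrast above in structured form.

```python
from dataclasses import dataclass


@dataclass
class ThreatScenario:
    """One modeled path: who might attack, how they get in, and what they reach."""
    agent: str          # actor: insider, cybercriminal, nation-state, automated tool
    motivation: str     # why they would act
    capability: str     # skills and resources available to them
    vector: str         # delivery path: phishing, rogue API, USB drive, exposed port
    target: str         # asset or process the path leads to


# Contrasting the two examples from the discussion above.
scenarios = [
    ThreatScenario(
        agent="Disgruntled insider",
        motivation="Personal grievance",
        capability="High: privileged access already granted",
        vector="Direct misuse of access rights",
        target="HR and payroll records",
    ),
    ThreatScenario(
        agent="External attacker",
        motivation="Financial gain",
        capability="Moderate: automated scanning tools",
        vector="Exposed port on an internet-facing server",
        target="Customer-facing web application",
    ),
]

for s in scenarios:
    print(f"{s.agent} -> {s.vector} -> {s.target}")
```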
The threat landscape is the broader environment in which your specific risks exist. It includes all known, emerging, and evolving threats across industries, geographies, and technologies: cybercrime patterns, changes in regulation, geopolitical instability, shifts in technology platforms, and even natural hazard trends. CRISC professionals must monitor this landscape continuously to keep their models and assessments relevant. Threat intelligence platforms, industry alerts, vulnerability databases, and CERT advisories are useful tools for staying informed. On the exam, scenarios may describe risks that weren’t reassessed after a major event or technology shift. The clue may be that a threat evolved, but the risk model did not. Your answer should restore awareness and update the analysis. Risk modeling without external input becomes stale. The landscape is always moving. CRISC professionals must move with it.
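As a rough sketch of what continuous landscape monitoring can look like in practice, the Python example below polls a single advisory feed and filters it against a watchlist tied to the organization's own assets. The feed URL, the JSON structure it assumes, and the watchlist keywords are all placeholders invented for this illustration.

```python
import requests

# Hypothetical advisory feed URL; real sources include vendor threat
# intelligence platforms, vulnerability databases, and CERT advisories.
FEED_URL = "https://example.org/advisories.json"

# Keywords drawn from the organization's own asset and technology inventory,
# so landscape monitoring stays tied to what the business actually runs.
WATCHLIST = {"vpn gateway", "payment api", "cloud storage"}


def fetch_relevant_advisories(feed_url: str, watchlist: set[str]) -> list[dict]:
    """Pull the latest advisories and keep only those touching watched assets."""
    response = requests.get(feed_url, timeout=10)
    response.raise_for_status()
    advisories = response.json()  # assumed shape: a JSON list of advisory objects

    relevant = []
    for advisory in advisories:
        text = (advisory.get("title", "") + " " + advisory.get("summary", "")).lower()
        if any(keyword in text for keyword in watchlist):
            relevant.append(advisory)
    return relevant


if __name__ == "__main__":
    for item in fetch_relevant_advisories(FEED_URL, WATCHLIST):
        print(item.get("title"))
```

The filtering step is the point: raw feeds are noisy, and the landscape only informs the risk model when new intelligence is matched back to the assets and threats already being tracked.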
Mapping threats to assets and controls makes threat modeling operational. It is not enough to identify threats—you must show how they affect specific business functions, systems, and data types. Controls must then be selected based on threat type and asset criticality. For example, denial-of-service threats may target customer-facing websites, requiring controls like rate limiting, geo-fencing, or upstream filtering. Data exfiltration threats call for encryption, monitoring, and access governance. The model must link threat to target and control. This alignment enables precision—focusing protection where it matters most. On the CRISC exam, if the question describes a DDoS threat and offers generic controls, choose the answer that reflects the modeled scenario. Don’t pick controls in isolation. Pick them in alignment with threat-path analysis, asset relevance, and control fit. That’s the essence of threat-informed governance.
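Here is a minimal Python sketch of that threat-to-asset-to-control linkage. The threat names, assets, criticality ratings, and control lists are illustrative placeholders rather than a prescribed mapping; the point is that each recommended control is traceable to a modeled threat and a specific asset.

```python
# Candidate controls keyed by threat type; the mapping itself is illustrative.
CONTROLS_BY_THREAT = {
    "denial_of_service": ["rate limiting", "geo-fencing", "upstream filtering"],
    "data_exfiltration": ["encryption at rest", "egress monitoring", "access governance"],
    "phishing": ["email filtering", "security awareness training", "MFA"],
}

# Assets with a criticality rating the model uses to prioritize treatment.
ASSETS = {
    "customer-facing website": {"criticality": "high", "threats": ["denial_of_service"]},
    "analytics data lake": {"criticality": "medium", "threats": ["data_exfiltration"]},
}


def recommend_controls(assets: dict, control_map: dict) -> None:
    """Link each modeled threat to its target asset and a tailored control set."""
    for asset, profile in assets.items():
        for threat in profile["threats"]:
            controls = control_map.get(threat, ["(no mapped control: review the model)"])
            print(f"[{profile['criticality']}] {asset} <- {threat}: {', '.join(controls)}")


recommend_controls(ASSETS, CONTROLS_BY_THREAT)
```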
Some threats evolve slowly. Others are deliberately engineered to evade detection. Advanced Persistent Threats—known as APTs—are multi-stage, long-term, stealthy attacks often carried out by nation-states or well-funded actors. These threats infiltrate, persist, and escalate silently. Emerging threats include AI-driven attacks, cloud misconfigurations, zero-day exploits, and third-party compromises. These threats cannot be managed with reactive controls alone. They require strategic threat modeling, real-time monitoring, and integrated response planning. CRISC professionals must evolve their models to capture these new patterns. On the exam, questions may involve cloud service providers, AI tools, or supply chain vulnerabilities. Look for the option that reflects awareness of novel threats—not just legacy patterns. The modern risk landscape is fast, distributed, and adaptive. Threat modeling must be just as agile.
No model is perfect, and a poorly executed model carries serious limitations. Overfocusing on known threats can leave organizations blind to novel methods. Using outdated data leads to inaccurate assumptions. Involving only technical staff may miss business relevance. Effective modeling requires diverse input, business context, and frequent review. On the CRISC exam, scenarios involving failed modeling often contain clues about narrow scope, missed stakeholders, or stale assumptions. The correct answer usually involves broadening input, updating threat intelligence, or revisiting the model with cross-functional collaboration. Threat modeling is not a checklist—it is a conversation. It must be iterative, contextual, and inclusive.
Threat-focused exam scenarios often start with clear clues. If the question says “the threat actor exploited a…” your next step is to identify the vector and analyze the failed or missing control. If it says “the model did not include…” that indicates a coverage gap. If it asks “what step was missing from the analysis?” the answer is often threat identification or asset mapping. If you’re asked which control would most effectively mitigate the threat, choose the one tailored to the scenario, not the most expensive or advanced. CRISC exams do not test terminology alone—they test judgment. Your job is to map the threat, identify the risk, and recommend the right response in context. Modeling threats is about seeing clearly before something happens—and preparing in ways that make outcomes predictable, not painful.
Thanks for joining us for this episode of The Bare Metal Cyber CRISC Prepcast. For more episodes, tools, and study support, visit us at Baremetalcyber.com.
