Episode 64: Emerging Technologies and Associated Risks
Welcome to The Bare Metal Cyber CRISC Prepcast. This series helps you prepare for the exam with focused explanations and practical context.
Emerging technology plays a critical role in modern organizations, offering new ways to operate, compete, and deliver value. However, with innovation comes uncertainty, and this uncertainty often leads to gaps in governance, policy enforcement, and control maturity. Technologies are evolving faster than the frameworks designed to manage them, which means CRISC professionals must stay ahead of these changes. The role of risk practitioners is not to block innovation but to guide it responsibly, identifying and addressing the risks that come with adoption. On the exam, many scenarios involving new technologies center on risks that were only discovered after implementation. The best answers will reflect the need to evaluate these risks early and embed oversight into the adoption process itself.
Emerging technologies span a wide range of categories, each with unique potential and distinctive risk profiles. Cloud computing has become foundational, including infrastructure as a service, software as a service, and multi-cloud strategies that increase flexibility but complicate control. Artificial intelligence and machine learning offer powerful automation and insight capabilities, while simultaneously introducing opaque decision-making and oversight challenges. The Internet of Things and edge computing bring computing power closer to where data is created, but also add thousands of unmanaged devices to the environment. Blockchain and distributed ledgers promise immutability and transparency, yet create new forms of execution risk and regulatory uncertainty. Robotic process automation and low-code platforms empower business users to build applications and automate workflows, often without traditional IT involvement. Quantum computing and advanced cryptography remain mostly theoretical for now but raise serious questions about the future of encryption and data protection. CRISC professionals must be aware of each category’s evolving risk landscape and know how to apply control thinking even when standards are still catching up.
Cloud computing introduces a mix of powerful opportunities and critical risks. One of the most common issues is the loss of visibility and control over systems and data, especially when services are fully managed by external vendors. Shared responsibility confusion also appears frequently—organizations may not understand which security tasks belong to the provider and which belong to them. This confusion can leave key controls unassigned. Application programming interfaces often serve as the glue between services, but insecure APIs and misconfigurations can expose sensitive data. Access management must be tightly enforced to prevent privilege escalation or misuse. Additional concerns include data residency and the legal limitations of storing information in different jurisdictions. Vendor lock-in is also a risk, where switching providers becomes difficult due to proprietary integrations. If a scenario describes a cloud provider breach where logs are unavailable, it likely signals a due diligence failure during onboarding or ongoing monitoring. On the exam, cloud-related answers should emphasize visibility, responsibility clarity, and strong integration of audit and security tools.
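To make the visibility point concrete, here is a minimal sketch, assuming AWS S3 and the boto3 library purely as one illustration, that flags storage buckets with no public access block and no access logging. The checks and findings wording are illustrative, not a complete cloud security review, and the same idea applies to any provider.
```python
# Illustrative sketch only: two common cloud blind spots noted above,
# public exposure and missing access logs, checked with AWS S3 via boto3.
# Adapt the idea to your own provider and tooling.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_findings(bucket_name):
    """Return a list of visibility and configuration gaps for a single bucket."""
    findings = []

    # 1. Is public access explicitly blocked? Absence of the setting is itself a finding.
    try:
        block = s3.get_public_access_block(Bucket=bucket_name)
        cfg = block["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            findings.append("public access not fully blocked")
    except ClientError:
        findings.append("no public access block configuration")

    # 2. Are access logs being generated? Without them, breach investigation stalls.
    logging_cfg = s3.get_bucket_logging(Bucket=bucket_name)
    if "LoggingEnabled" not in logging_cfg:
        findings.append("server access logging disabled")

    return findings

for bucket in s3.list_buckets()["Buckets"]:
    issues = bucket_findings(bucket["Name"])
    if issues:
        print(f"{bucket['Name']}: {', '.join(issues)}")
```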
Artificial intelligence and machine learning offer enormous potential but introduce significant governance challenges. One of the main risks is algorithmic bias, where models make decisions that unintentionally reflect societal or data-driven prejudices. Explainability is another concern—organizations may struggle to understand or justify why a model produced a certain output. Without transparency, accountability is difficult to enforce. Models are also vulnerable to data poisoning, where training data is manipulated to degrade performance, and model drift, where a model’s accuracy declines over time due to changing inputs. Oversight and version control are often missing, especially when models are deployed quickly. Regulatory guidance is still forming, which means uncertainty remains around how to ensure fairness, auditability, and compliance. Integration with legacy systems may introduce additional risk, as old controls might not align with new technology. On the exam, questions about AI often reflect this combination of high potential and low maturity. The best answers demonstrate proactive governance and control placement, even when standards are still unclear.
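As one way to picture proactive governance here, the sketch below watches for model drift by comparing the distribution of recent model scores against a baseline using the Population Stability Index. The sample data, bin count, and the 0.2 alert threshold are assumptions for illustration, not fixed standards.
```python
# A minimal sketch of one way to monitor model drift: compare recent score
# distributions to a baseline with the Population Stability Index (PSI).
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """Higher PSI means the recent score distribution has shifted away from the baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(recent, bins=edges)

    # Convert counts to proportions, avoiding zeros so the log term stays defined.
    expected_pct = np.clip(expected / expected.sum(), 1e-6, None)
    actual_pct = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical scores captured at deployment versus scores observed this week.
baseline_scores = np.random.beta(2, 5, size=5_000)
recent_scores = np.random.beta(3, 4, size=5_000)

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.2:  # common rule-of-thumb threshold; tune to your own risk appetite
    print(f"Model drift alert: PSI={psi:.3f}, schedule a model review")
```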
The Internet of Things and edge computing expand the digital footprint of organizations but often at the expense of security. Many IoT devices are inexpensive and lack the memory or processing power to support robust protections. Patching these devices in the field may be impossible or impractical, leaving them vulnerable for long periods. Physical access to devices, especially in industrial or public environments, increases the chance of tampering. Communications between devices and central systems may not be encrypted, exposing data or control channels. One of the biggest challenges is inventory—many organizations cannot track every IoT device connected to their network, making visibility and risk assessment difficult. For the exam, the strongest answers will emphasize segmenting these devices into controlled network zones, encrypting communications, and actively monitoring for anomalies. Answers that treat IoT like traditional infrastructure usually miss the unique exposure these systems create.
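A minimal sketch of those two basics, inventory visibility and anomaly monitoring, might look like the following. The device identifiers, network segment names, baselines, and the three-times-baseline rule are hypothetical values chosen only to show the shape of the check.
```python
# Illustrative sketch: flag devices that are not in the approved inventory,
# sit on the wrong network segment, or send abnormal traffic volumes.
APPROVED_DEVICES = {
    "sensor-014": {"vlan": "iot-segment-2", "baseline_kb_per_hour": 120},
    "camera-007": {"vlan": "iot-segment-1", "baseline_kb_per_hour": 4_000},
}

def review_observation(device_id, vlan, kb_last_hour):
    """Return alerts for unknown devices, segmentation violations, or traffic spikes."""
    profile = APPROVED_DEVICES.get(device_id)
    if profile is None:
        return [f"{device_id}: not in inventory - investigate, then onboard or remove"]
    alerts = []
    if vlan != profile["vlan"]:
        alerts.append(f"{device_id}: seen on {vlan}, expected {profile['vlan']}")
    if kb_last_hour > 3 * profile["baseline_kb_per_hour"]:
        alerts.append(f"{device_id}: traffic {kb_last_hour} KB/h exceeds 3x baseline")
    return alerts

# Example observations as they might arrive from network monitoring.
for obs in [("sensor-014", "corporate-lan", 90), ("thermostat-x", "iot-segment-2", 10)]:
    for alert in review_observation(*obs):
        print(alert)
```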
Blockchain and smart contracts present a different set of risk dynamics due to their immutability and decentralized architecture. Once information is written to a blockchain, it cannot be easily altered, which is beneficial for integrity but makes it difficult to correct mistakes. Smart contracts are code-based agreements that automatically execute when conditions are met, and if bugs or logic errors exist in that code, the results can be disastrous. Because there is no traditional override function, errors in smart contracts can result in financial loss or process failure. Integrating blockchain systems with traditional IT environments introduces complexity, especially around data validation and timing. The regulatory environment for blockchain continues to develop, which increases uncertainty around legal enforcement, privacy rights, and financial compliance. If the exam presents a scenario where a smart contract was exploited and there was no way to reverse the outcome, it is likely pointing to a lack of design controls and oversight. CRISC professionals must understand that code-based automation does not eliminate risk—it transforms and sometimes magnifies it.
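The sketch below is conceptual and written in ordinary Python rather than a smart contract language, but it shows where design controls have to live: in the validation that runs before an irreversible, automated action, because nothing can be corrected afterward. The escrow terms and limits are hypothetical.
```python
# Conceptual sketch: an append-only ledger with no override, so all checks
# must happen before the write, never after it.
class IrreversibleLedger:
    def __init__(self):
        self.entries = []

    def append(self, entry):
        self.entries.append(entry)  # append-only: no method exists to edit or remove

def release_payment(ledger, amount, delivery_confirmed, contract_limit):
    """Design controls belong here, ahead of the irreversible write."""
    if not delivery_confirmed:
        raise ValueError("condition not met: delivery not confirmed by the agreed source")
    if amount <= 0 or amount > contract_limit:
        raise ValueError("amount outside the agreed contract bounds")
    ledger.append({"action": "release", "amount": amount})

ledger = IrreversibleLedger()
release_payment(ledger, amount=9_500, delivery_confirmed=True, contract_limit=10_000)
print(ledger.entries)
```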
Robotic process automation and low-code platforms empower users but create governance gaps when not carefully monitored. Business users can build applications or automate workflows without involving IT, which means these tools often bypass standard approval and testing processes. If flawed logic is automated, those flaws are repeated at scale, potentially causing data integrity issues or unintended behavior. Security controls may be weak, with credentials stored in scripts, hardcoded keys, or incomplete audit trails. These tools also expand the organization’s shadow IT footprint, introducing new systems and data flows that risk professionals may not even be aware of. On the exam, look for scenarios where a business-built tool failed or exposed data, and ask whether there was any review process, access control, or testing in place. The best responses will support governance, validation, and integration of user-built tools into formal risk management processes.
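To illustrate the credential and audit-trail gaps, here is a minimal sketch contrasting a hardcoded secret with a runtime lookup plus basic action logging. The environment variable name, the logger setup, and the invoice workflow are assumptions for the example only.
```python
import logging
import os

# Risky pattern often found in user-built automations: a credential embedded in the script.
# API_KEY = "sk-live-abc123"   # hardcoded secret travels with every copy of the script

# Safer pattern: pull the secret from the environment or a secrets manager at runtime,
# and record every automated action so an audit trail exists.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("rpa-invoice-bot")

def get_api_key():
    key = os.environ.get("INVOICE_API_KEY")  # hypothetical variable name
    if not key:
        raise RuntimeError("INVOICE_API_KEY is not set; refusing to run without a credential")
    return key

def process_invoice(invoice_id, amount):
    api_key = get_api_key()
    # ... call the downstream system here using api_key ...
    log.info("processed invoice %s for %.2f via automated workflow", invoice_id, amount)

os.environ.setdefault("INVOICE_API_KEY", "demo-value")  # demo only; set outside the script in practice
process_invoice("INV-1042", 1250.00)
```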
Before adopting any emerging technology, organizations must conduct risk assessments that are tailored to the unique characteristics of that innovation. This begins with a threat surface analysis to identify how and where new exposures might appear. Next, organizations should assess how the technology affects data classification, especially if sensitive data will be created, processed, or stored. Regulatory implications must be reviewed—new technology may introduce cross-border data flows, privacy risks, or compliance conflicts. Another consideration is control compatibility: will existing monitoring tools, access systems, and auditing mechanisms work with the new solution? Will logs be generated in a usable format? Governance teams, including architecture and information security groups, should be involved from the very beginning. Adoption should not proceed until a governance board has evaluated the proposal in terms of risk appetite and mitigation strategies. On the exam, the right answer often involves early engagement, structured approval, and holistic evaluation—not just technical review.
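One way to operationalize those questions is to capture each proposal as a structured record that the governance board reviews in a consistent form. The field names and the readiness rule below are illustrative assumptions, not a prescribed format.
```python
# A minimal sketch of a pre-adoption risk assessment record.
from dataclasses import dataclass, field

@dataclass
class AdoptionRiskAssessment:
    technology: str
    threat_surface_reviewed: bool = False
    data_classification_impact: str = "unknown"   # e.g. "none", "confidential", "regulated"
    regulatory_review_done: bool = False
    controls_compatible: bool = False              # logging, access, and audit tooling all work
    open_issues: list = field(default_factory=list)

    def ready_for_governance_board(self):
        """The proposal only moves forward when the basic evidence exists."""
        return (self.threat_surface_reviewed
                and self.regulatory_review_done
                and self.controls_compatible
                and self.data_classification_impact != "unknown")

assessment = AdoptionRiskAssessment(technology="low-code workflow platform")
assessment.threat_surface_reviewed = True
assessment.open_issues.append("audit logs not exportable to central monitoring")
print("Ready for board review:", assessment.ready_for_governance_board())
```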
Once adopted, emerging technologies must be continuously monitored and governed to ensure that risks remain within acceptable boundaries. This includes defining new key risk indicators that reflect the unique behaviors of the technology. For artificial intelligence, this might involve tracking model drift or abnormal decision patterns. For APIs, it could involve measuring unusual load patterns or access attempts. For IoT, it might mean alerting on device behavior outside normal ranges. The risk register must be updated as new findings emerge, and treatment plans should be revised accordingly. Internal audit teams must include emerging technologies in their review cycles, and post-implementation assessments should check whether intended controls are working. Testing plans must be updated to reflect new dependencies, and rollback strategies must be prepared in case new systems fail or create unintended consequences. On the exam, lifecycle thinking is key. Look for answers that go beyond initial adoption and reflect long-term control and risk awareness.
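As a rough illustration, the key risk indicators mentioned above can be expressed as simple threshold checks evaluated each monitoring cycle, with any breach or missing metric feeding back into the risk register. The metric names, thresholds, and readings are assumptions chosen only to show the pattern.
```python
# Illustrative sketch: KRI definitions as threshold checks run on a schedule.
KRI_DEFINITIONS = [
    {"name": "ml_model_drift_psi", "threshold": 0.2, "direction": "above"},
    {"name": "api_requests_per_min", "threshold": 50_000, "direction": "above"},
    {"name": "iot_devices_off_baseline_pct", "threshold": 5.0, "direction": "above"},
]

def evaluate_kris(current_values):
    """Compare current metric values to thresholds and return any breaches or gaps."""
    breaches = []
    for kri in KRI_DEFINITIONS:
        value = current_values.get(kri["name"])
        if value is None:
            breaches.append(f"{kri['name']}: no data - monitoring gap")
        elif kri["direction"] == "above" and value > kri["threshold"]:
            breaches.append(f"{kri['name']}: {value} exceeds {kri['threshold']}")
    return breaches

# Hypothetical readings from this monitoring cycle; note the missing IoT metric.
readings = {"ml_model_drift_psi": 0.31, "api_requests_per_min": 12_000}
for breach in evaluate_kris(readings):
    print("KRI breach:", breach)   # feed breaches into the risk register and treatment plans
```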
In CRISC exam questions about emerging technology, you will often be asked to identify what was missed before adoption. Commonly missed elements include risk assessments, control mapping, and stakeholder engagement. Other questions may ask about the key risk in a given platform, and strong answers focus on visibility, accountability, or how the new system integrates with existing controls. Some questions may ask how to respond to a trend, and governance-based responses such as pilot testing, updating policy, or modifying oversight are usually best. You might also face scenarios that ask which control is missing, and strong responses typically involve access restrictions, monitoring tools, or audit log requirements. The best answers demonstrate proactive thinking, layered defense models, and an ability to align new technology with existing risk frameworks. Always consider whether the approach fits into the broader governance program and reflects a full view of the system’s impact.
Thanks for joining us for this episode of The Bare Metal Cyber CRISC Prepcast. For more episodes, tools, and study support, visit us at Baremetalcyber.com.
