AI Deepfakes and Shadow AI Are Emerging Corporate Risk Concerns

by | Mar 5, 2026

Artificial intelligence is being adopted across organisations at a rapid pace. Alongside the benefits, security analysts are reporting a rise in new forms of cyber and operational risk linked to AI tools.

AI Enabled Deception in Cybercrime

One development attracting attention is the use of AI generated deception in cybercrime. Security researchers report that artificial intelligence now enables attackers to generate convincing messages, voices, and digital content that can be used in fraud and impersonation attacks. These techniques are increasingly used in social engineering campaigns targeting organisations and individuals.

Microsoft’s Cyber Signals report highlighted the scale of the challenge. Between April 2024 and April 2025, Microsoft reported blocking approximately US$4 billion in fraud attempts, rejecting 49,000 fraudulent partnership enrolments, and blocking around 1.6 million bot sign up attempts every hour.

The report also noted that artificial intelligence has lowered the technical barrier for cybercriminals. Tasks that previously required significant effort, such as generating convincing phishing messages or gathering target information, can now be automated or completed in minutes.

The Rise of Shadow AI in the Workplace

Another issue gaining attention is the rise of “shadow AI”, which refers to employees using AI tools that have not been approved or monitored by their organisations.

Research commissioned by Microsoft found that 71 percent of employees in the United Kingdom have used unapproved AI tools at work, with more than half using them weekly.

While the motivation is often productivity, the practice can expose organisations to risks including data leakage, privacy breaches, and compliance failures. Employees may upload internal information into external AI systems or rely on outputs that have not been verified.

Emerging Security Vulnerabilities in AI Systems

Security experts also warn that AI systems themselves can become targets of manipulation. Techniques such as prompt injection can influence AI systems to reveal information or perform unintended actions.

Taken together, these developments highlight how artificial intelligence is reshaping the risk landscape for organisations. Cybersecurity, data governance, and enterprise risk management functions are increasingly expected to address the operational implications of AI adoption.

For risk professionals, the challenge lies in establishing governance frameworks that support innovation while maintaining oversight over how AI systems are used within organisations. As artificial intelligence continues to expand across business functions, understanding how these technologies influence operational risk will become an increasingly important part of enterprise risk management.


Sources: