Expert Analysis

Scams and AI: An urgent legal matter

By Sara Farr Guy and Giovann Martin

Law360 Canada (April 28, 2026, 2:41 PM EDT) --
Never has there been a more pressing need to reevaluate and reform labour law to encompass digital literacy, particularly at a time when AI tools are being deployed at an industrial scale for cyber-enabled fraud.

A recent Time magazine investigation revealed that cybercrime is costing up to US$1 trillion annually and that AI has fundamentally changed the threat landscape by increasing the volume and quality of attacks, lowering the skill threshold for scammers, enabling global reach and automating deception. Cybercrime has evolved into a professionalized, AI-driven, industrial-scale ecosystem that now targets millions of workers through sophisticated malware, impersonation and social engineering.

As a result, one can argue that cybercrime has become a foreseeable workplace hazard. Yet labour and employment laws continue to treat cyber threats as an IT issue, even though the evidence shows they are predictable, preventable and directly affect workers. There is a compelling argument to impose clearer employer duties around digital safety, training and risk mitigation.


Time’s investigation highlights scam compounds in Myanmar, Laos and Cambodia that house around 300,000 trafficked workers, where operations have shifted from traditional phishing to remote access trojans (malicious software that gives attackers full control of victims’ devices). These attacks impersonate everything from banks, airlines, tax authorities and police to platforms such as Google Play. The scale and believability of these scams increase with AI-generated scripts, fake photos, multilingual content and deepfakes, and such phishing and impersonation operations succeed because they exploit human factors.

From offensive risk to defensive potential

Earlier this month, Anthropic decided against releasing an internal AI model, known as Claude Mythos Preview, after it identified previously unknown software vulnerabilities and generated working exploits without step-by-step human guidance during controlled sandbox testing. The model could identify real zero-day vulnerabilities in complex, production-grade software; chain vulnerabilities together into functional exploits; and operate autonomously within defined testing parameters. Additionally, all this was done faster and more cheaply than traditional commercial penetration-testing workflows.

The company emphasized this was not an accidental deployment, but a controlled stress test meant to highlight this type of failure mode. Out of concern that cybercriminals and spies could abuse such capabilities, the model will not be made generally available.

More broadly, while AI is increasingly implicated in cybercrime, similar technologies can be explored for predictive and proactive use in the development of cyber-defence mechanisms — some of which could be encompassed in employer training and resilience planning. Models with capabilities similar to that of Mythos can be used to stress-test the resilience of a company’s cyber defences and, where appropriate, provide employee-upskilling regimens specifically designed to address identified weaknesses.

While cybercrime is unlikely to be fully preventable, mitigating it at scale would require a multinational, or potentially global, collaborative effort towards the development of a comprehensive preventative model.

Cybercrime and the workplace

The LexisNexis Risk Solutions 2026 Cybercrime Report confirms that cybercrime is accelerating faster than legitimate digital activity. In 2025, global attack rates jumped eight per cent year-on-year.

“Attack growth was predominantly through browser channels,” the report notes, with mobile app attacks halving across most regions while browser attacks increased dramatically. Desktop browser attack rates doubled to 4.3 per cent, while mobile app attack rates dropped 56 per cent to 0.4 per cent. Automated bot attacks grew 59 per cent over the same period.

The report also highlights the rapid rise of agentic AI, with agentic commerce traffic growing 450 per cent between the first and fourth quarters of 2025. These autonomous agents can take actions and make decisions based on initial human prompts, a development that adds complexity to the threat landscape. Fraud networks are increasingly multi-industry and cross-border, with stolen funds moved abroad to evade domestic controls.

Employees are targeted through work devices, personal devices used for work, and hybrid environments, and AI has fundamentally changed the threat landscape. Agentic AI, deepfake-enabled fraud and increasingly sophisticated bots make it unreasonable to expect untrained employees to defend themselves. Although workers are now exposed to predictable and foreseeable digital harms, labour regulations do not treat cyber-risk as part of the employer’s duty of care.

When a risk becomes widespread, predictable and preventable, it becomes foreseeable. Both the Time investigation and the LexisNexis Risk Solutions report show that cybercrime is no longer sporadic or opportunistic; it is industrialized, global and predictable. The evidence makes one conclusion unavoidable: digital safety is now a core component of workplace safety, and labour law has not kept pace.

Traditional labour frameworks were built for a physical workplace, namely machinery, ergonomics, hazardous substances and interpersonal risks. But the modern worker faces a different category of harm: AI-enabled cybercrime that targets employees directly, often through the very tools they must use to perform their jobs.

Research shows that trained employees are less likely to fall victim to cybercrime attempts, the most common of which against companies are business email compromise (BEC) and phishing scams. While many large multinational companies already provide training on these dangers, statutory recognition would classify cyber-risk as a workplace hazard, establish liability rules for cyber-enabled workplace harms and protect employees who fall victim to sophisticated attacks.

Employers already have a duty to provide a safe working environment and, in 2026, that should include digital safety. The scale of harm justifies legal reform. Existing legal frameworks are outdated and cannot keep pace with AI-driven cybercrime. Labour law reform is urgent, not optional.

Sara Farr Guy is an editor with LexisNexis North America.

Giovann Martin is an editorial manager with LexisNexis North America and UK Case Law and an LLD candidate in mercantile law at the University of South Africa, where his research focuses on cybercrime and labour law reform.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada, or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

Interested in writing for us? To learn more about how you can add your voice to Law360 Canada contact Analysis Editor Peter Carter at peter.carter@lexisnexis.ca or call 647-776-6740.