AI, protecting yourself from bad actors

By Connie L. Braun and Juliana Saxberg

Law360 Canada (April 23, 2024, 12:46 PM EDT) --
Many people believe that laws and regulations are enacted only to limit us, but they are actually in place to guide how we conduct ourselves in life and business. Common standards and shared values, consideration of others and responsible behaviour all contribute to a society that functions well. Even so, some people will use tools intended for good in harmful ways that disregard laws and regulations. The same applies to AI, with individuals finding ways to impersonate, cheat, manipulate and deceive, exploiting others however they can.

Global law enforcement authorities say that organized crime groups have quickly integrated AI technologies into their operations, including by evolving the Crime-as-a-Service (CaaS) business model. Anyone can purchase access to highly sophisticated digital technologies and services on the “dark web,” an enclave of Internet sites unindexed by search engines and accessible only through specialized anonymizing tools, such as the Tor browser.

Black market AIs, either built specifically for cybercrime or engineered to strip the safeguards from existing, frequently used tools, are emerging on user forums. In these forums, the most discussed use cases for intentionally harmful uses and abuses of AI, machine learning (ML) and large language model (LLM) technologies are:

Human impersonation on social networking platforms

These are intelligent tools that convincingly imitate human behaviour. Bad actors often use human impersonation to fool bot-detection systems or to manipulate web traffic and engagement by generating likes, creating fake accounts and more. The goal is to increase influence or to monetize content.

Online cheating in professional and lucrative e-sports

As online gaming competitions now involve substantial sums of prize money, hackers are getting better at harnessing AI to win; they also use online gaming for money laundering purposes.

ML-supercharged hacking

Hackers use open-source hacking tools that include an LLM trained on a large dataset of passwords recovered from public leaks, letting them predict how people will change and update their passwords. They also employ Wi-Fi hacking applications that use an ML gamification strategy to reward successful Wi-Fi cracks, permitting the tool to improve its performance freely and independently.

Deepfakes

Deepfakes are lifelike facsimiles of real or imagined individuals that are used in deceptive or harmful images and videos, including underage and exploitative pornography. Systems like these can be used to fool “Know Your Client” (KYC) checks and to fake or manipulate evidence used in legal proceedings. Voice and phone call scams are gaining in popularity, and politicians are loudly sounding the alarm over AI’s potential for interference with elections.

Social engineering (a.k.a. “human hacking”)

Social engineering, which manipulates or deceives users so that a hacker can gain control over a computer system, has increased in both frequency and sophistication thanks to generative AI applications. Scammers have used generative AI to create convincing emails and documents that carry malware, used social connections and deception to pressure or trick users into letting down their guard, and easily infiltrated secure systems. Often, these activities are followed by using that access, or selling it to others, to undertake digital crimes. Vendors on dark web networks sell access to internal systems, website accounts, databases, credentials, tools, malware, credit card details, exploit kits and more.

All of these uses and abuses are frightening, so it behooves us to learn how to recognize them in order to protect ourselves. Research currently underway is intended to help develop tools that algorithmically identify manipulated and fake information. These researchers also want to provide strategies and techniques that build public awareness of these technologies. The ultimate goal is to help you and me, our families and neighbours, the people we meet in our day-to-day activities and everyone else to think critically about the information and media we consume. In this way, detecting faked information and media becomes part of our day-to-day awareness.

The Canadian Centre for Cyber Security provides guidelines that help everyone evaluate information and media by taking the time to review the sources and the messaging conveyed. These guidelines offer a way to protect yourself. Consider the following characterizations:

  • When information is valid, it is factually correct and can be verified by other means; it is not misleading in any way.
  • When information is inaccurate, it is usually incomplete and contrived to be misleading.
  • When information is false, it is incorrect, and available data proves the opposite.
  • When information is unsubstantiated, there is not enough data available to validate or invalidate it.

Then, consider these questions:

  • Does the web page you are reading, the image you are viewing or the video you are watching provoke an emotional response?
  • Is there a bold statement that makes an extraordinary claim on a controversial matter?
  • Does the information or media contain clickbait, that is, text or an image with a link intended to attract attention and entice users to click through?
  • Is topical information presented in its proper context?
  • Are there small pieces of valid information that are exaggerated or distorted?
  • Has the information or media spread virally on unvetted or loosely vetted platforms?

Learning about and practising good Internet hygiene is a sound first step toward protecting yourself and those around you. To spread awareness, discuss these issues with family, friends and coworkers. Act responsibly, be smart and take the necessary steps to protect yourself.

Connie L. Braun is a product adoption and learning consultant with LexisNexis Canada. Juliana Saxberg is a practice area consultant in corporate and public markets with LexisNexis Canada.
 
The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada or any of its or their respective affiliates. This article is for general information purposes and is neither intended to be nor should be taken as legal advice.

