Preparing now for AI in your sphere

By Connie L. Braun and Juliana Saxberg

Law360 Canada (February 27, 2024, 11:32 AM EST) --
There is a growing corpus of commentary and advice recommending how Canadian entities can responsibly manage the risks presented by artificial intelligence (AI). For smaller entities that lack the resources or operational capacity to institute an AI governance committee, the prospect of following this advice can seem daunting. Existing technology governance frameworks and guidelines have already proven inadequate to corral the risks pervasive in AI applications. For those not already aware, these risks include bias and security vulnerabilities, along with the apparent absence of any means to prevent human misuse of AI tools or poisoning of data.

While it makes sense for Canadian entities to move ahead and actively develop AI risk governance architecture, there is considerable hesitancy due to the uncertain future of the proposed Artificial Intelligence and Data Act (AIDA). Yet waiting for AI regulations while AI continues its rapid pace of development and the risks loom large may itself be irresponsible. For this exact reason, entities should consider implementing internal policies, audits and impact assessments now, in anticipation of legislation and regulations. Doing so further enhances organizational preparedness by training professionals to responsibly implement anticipated governance goals ahead of need.

As is the case with all aspirational corporate governance prescriptions, organizations need to devise an AI risk governance strategy that fits their current risk appetite, resources and governance maturity. With the regulatory context changing day by day, AI risk professionals advise that the best approach is to establish an AI governance/responsible AI framework now. Organizations that do so will be ready for whatever AI regulation arrives in their jurisdiction, needing only to adjust their approach to fit the specific requirements rather than starting from nothing.

At minimum, business leaders need to accept that effective AI risk governance will require investing in internal capacity and committing resources to upskill people at every level of the entity. Google’s May 2023 Policy Agenda for Responsible AI Progress recommends that leaders “[b]uild technical and human capacity in the ecosystem to enable effective risk management.” This includes equipping all team members, regardless of role, with the proficiencies and vocabulary necessary to navigate AI risk.

With skills and terminology in place, encouraging social dialogue and lifelong learning programs will do much to ensure a fair transition for workers affected by the deployment of AI language models. Users of AI technologies within organizations have a critical role to play in AI governance. Consider, for example, that the entry point for social engineering attacks is the manipulation of users. As such, every person who touches data or AI products, whatever their role, should understand and embrace the company’s data and AI ethics framework. Creating a culture in which a data and AI ethics strategy is successfully deployed and maintained requires educating employees in a way that encourages them to ask questions, raise concerns and point out potential flaws.

Corporate AI governance can build on guidelines that appear consistently across other governance models. The human-in-the-loop role, for example, which is endorsed by all of these models, requires that humans check every AI-generated deliverable. Nowhere is this more important than when generative AI is used for decision-making or for providing legal advice or services. Transparency is likewise commonly recognized as a key guiding factor for AI risk governance in entities. Disclosing the use of AI, as well as the details of how AI output is human-supervised and quality-controlled, is key to mitigating legal risk.

Data is the gasoline that fuels innovation, including AI. This means that entity-wide policies and procedures for data and information governance are basic requirements for every well-governed Canadian entity, no matter its size. The government of Canada has produced and uses a digital standards playbook whose goals focus on transparency through document management, supporting organizational continuity and resilience, and allowing for independent evaluation, audit and review. Good data stewardship is needed to ensure that data is clean, reusable and responsibly managed so that it can reliably support data-driven decision-making.

Human-centredness offers helpful guidance for a future-proof information governance strategy, one that gives advantage to and amplifies human-created content. The good-governance to-do list for 2024 must include curating and maintaining an archive of your entity’s uniquely human content: decisions, processes, guidelines, writings, teachings, art, photographs and other artifacts.

Effective management of AI governance risk can seem endlessly resource-intensive, yet recognizing the transformative potential of AI technologies means understanding that interaction with technology across the enterprise will need to fundamentally change. Planning for and implementing these changes is the only way to move past AI as a source of distrust. Canadian entities cannot dodge the responsibility of contributing to a future-proof AI risk governance strategy that protects stakeholders, business assets and the integrity of Canadian democratic and legal systems. Some might say our very survival depends on it. Preparing your entity for AI now, even before legislation and regulations are in place, will save a great deal of time later.

Connie L. Braun is a product adoption and learning consultant with LexisNexis Canada. Juliana Saxberg is a practice area consultant, corporate & public markets with LexisNexis Canada.
 
The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada, or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.


Interested in writing for us? To learn more about how you can add your voice to Law360 Canada, contact Analysis Editor Peter Carter at peter.carter@lexisnexis.ca or call 647-776-6740.