Connie Braun
Juliana Saxberg
The democratic process of creating legislation is inclusive, and it offers a good model for AI governance and for advancing thinking toward “Responsible AI,” a framework the Canadian government has already endorsed in its 2023 Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems for business and its Directive on Responsible AI Use in government.
Application of human-centred design principles to AI governance problems aligns neatly with six key building blocks: accountability, inclusiveness, reliability and safety, fairness, transparency, and privacy and security.
1. Accountability highlights sensitivity to user needs through empathy, involvement and an evidence-based understanding of the human condition. By involving users throughout the design and development stages, human-centred design improves user satisfaction, diminishes the potential negative effects of AI on human health and safety, and strengthens human contributions to system performance. One fundamental way to operationalize human-centred AI governance is the “Human in the Loop” rule: no technology may be permitted to make decisions without human supervision. The final decision must always be made by a human.
2. Inclusiveness showcases concern for the uniquely human values of dignity, privacy and inclusion. Recognizing that developing advanced technologies carries a moral responsibility means that compassion must be central to the design process. This ethical imperative creates opportunities for contributions from the very people whose lives a technology could disrupt. In the end, it is simply excellent product design.
3. Reliability and safety create opportunities for algorithms that, as part of the training process, prefer outcomes that enable human choice. This kind of human empowerment recognizes the rights and privileges of human content creators through the concept of “data dignity”: people are not required to give up moral rights, privacy or personal data in exchange for using internet platforms, and can instead be inspired to contribute better content to them. In 2019, the Japanese government published the Social Principles of Human-Centric AI, which makes human dignity its North Star:
We should not build a society where humans are overly dependent on AI or where AI is used to control human behavior through the excessive pursuit of efficiency and convenience. We need to construct a society where human dignity is respected and, by using AI as a tool, a society where people can better demonstrate their various human abilities, show greater creativity, engage in challenging work, and live richer lives both physically and mentally.
4. Fairness can be seen in how different models of design thinking have already emerged with the goal of advancing human rights through the design process. This means that the design process enables accessibility for all people to the greatest extent possible and tailors every aspect to human diversity and inclusion.
5. Transparency means involving users in design and testing from the start. In this way, designers can ensure that AI applications are not only inclusive and perceived as ethical but also lawful.
6. Privacy and security recognize that the greatest threats from AI technologies may be their potential for weaponization in the wrong hands. The AI Now Institute recently unveiled its game-changing Zero Trust AI Governance framework, which relies on three principles:
1) Time is of the essence: begin by vigorously enforcing existing laws.
2) Bold, easily administered, bright-line rules are necessary.
3) At each phase of the AI system life cycle, the burden should be on companies to prove their systems are not harmful.
From these building blocks, we can discern that applying human-centred design principles to AI governance reflects a frank acknowledgement of the flaws inherent in the human condition. For this reason, AI applications must reflect a realistic appreciation of user needs and human psychology, accepting human behaviour as it occurs rather than as we might wish it to be. Thus, inclusive design supplies guided decision-making rubrics that mitigate human failings and protect against moral hazard.
Adherence to ethical and legal standards in the development of AI systems is vital to keeping these building blocks solid and mutually supporting, and to accommodating new blocks as they are added. Maintaining human oversight of training models and algorithms will help us adapt to a cultural shift in which innovation and continuous learning are adopted and incorporated. Empowering employees to contribute by teaching and learning creates more opportunities for fairness and equity.
A solid foundation of human-centred AI governance provides a safe and steady platform on which these building blocks can flourish. Knowing and understanding that machines must remain subject to effective oversight by people fosters an environment in which those who design and operate machines remain accountable to consumers and to humanity. The inclusive democratic process of creating legislation for AI governance will continue to advance thinking toward “Responsible AI.”
Connie L. Braun is a product adoption and learning consultant with LexisNexis Canada. Juliana Saxberg is a practice area consultant, corporate & public markets with LexisNexis Canada.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada, or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.