Importance of tempering AI with human perspective

By Courtney Mulqueen

Law360 Canada (January 21, 2025, 1:38 PM EST) -- While artificial intelligence (AI) has proven its worth in streamlining the disability insurance claim process, its use must be tempered with the human perspective.

I am for anything that will speed up the insurance application process so policyholders can get the benefits they need. However, the risks of replacing the human element with AI are obvious and foreseeable. There must be checks and balances.

After years of dealing with the insurance industry, my fear is that artificial intelligence could quickly become another tool used to deny claims instead of properly assessing and administering them.

AI’s use has gained traction throughout the insurance industry. It enables insurers to streamline their operations by performing the tedious, time-consuming tasks typically done by underwriters and evaluators. Because claims evaluators are generally overworked, AI could also help reduce the extended delays people experience while waiting for a resolution.

Proponents argue AI algorithms can also pick up irregularities in claims submissions and detect potential cases of fraud.

We are already seeing AI’s impact on insurance company staffing. Canadian Underwriter reported on a speech at Future of Insurance 2024, where brokerage CEO Stephen Billyard observed that artificial intelligence has taken “a whole lot of underwriters’ jobs out of our business.”

“Certainly, that’s not what we’re seeking…but the process for us has become so much more efficient that, in fact, yes, it is eliminating jobs,” Billyard said.

Take the human component out of the evaluation equation and deserving claimants could be more likely to be denied benefits.

I deal with insurance companies that use medical recovery guidelines, which are general and subjective. A guideline will say a person should have recovered by a certain point in time, based on how other people with similar injuries fared. Those guidelines are then used to terminate claims. At least now, however, the process involves a human element.

Artificial intelligence bases its assessment solely on the information available to it, such as the medical guidelines. It stands to reason that an AI-driven system could lead to an increase in claim denials.

An outcry about the rise of AI-powered claim denials came to the forefront late last year with the murder of UnitedHealthcare CEO Brian Thompson, who was shot in New York. It was reported that the words “deny,” “defend” and “depose” were written on the shell casings, terms critics say are used to describe how the insurance industry denies claims.

Business news website Quartz reported the murder “sparked public scrutiny of health insurers, especially regarding their use of AI in evaluating claims.”

I fear it will be more difficult to dispute claims if AI becomes the exclusive evaluator of applications for benefits. Challenging these denials will be problematic because AI will rely on information that the insurance company claims is objective.

Legislation might be necessary to limit the influence of artificial intelligence. Take California, which recently enacted new laws on AI. While the Physicians Make Decisions Act does not prohibit the use of AI, it mandates that human judgment remain central to coverage decisions, MSN reported. Artificial intelligence tools cannot be used to deny, delay or alter health care services that doctors deem necessary.

Canada could benefit from such legislation to prevent insurance companies from relying exclusively on AI to assess claims. As it stands now, I believe insurers derive much more benefit from the use of artificial intelligence than do claimants.

We may want to heed the ethics guidelines found in the National Library of Medicine report Artificial Intelligence in Evaluation of Permanent Impairment: New Operational Frontiers.

“While the use of AI models can offer significant advantages in terms of efficiency and standardization of assessment, it is crucial to carefully consider the potential risks and implications arising from this practice,” the report states. “The absence of direct human interaction may reduce the individual undergoing assessment to objective and numerical data, disregarding the complexity and uniqueness of the person. It is important to ensure that the use of AI does not result in dehumanizing treatment towards the individuals involved and does not disregard the subjective suffering of the person.

“The issue of transparency and interpretability of AI models emerges as an ethical concern. Since machine learning algorithms often operate in complex and non-linear ways, understanding how decisions are made and which factors influence those decisions can be challenging,” the report adds. “This raises concerns about accountability and the possibility of challenging assessments made by AI.”

It is therefore necessary to ensure that algorithms are “developed transparently and that the individuals involved have access to clear and understandable explanations of the decisions made,” the author contends.

How information is gathered and utilized is another important ethical concern.

“If the data used to train such models are incomplete or biased, AI may perpetuate and amplify these biases, leading to injustices in assessing personal damage,” according to the report. “Ensuring balanced and representative data collection, as well as careful validation of AI models, is essential to avoid systemic discrimination.”

The final decision must remain with an “expert evaluator,” the author states, adding it is also “imperative to ensure that AI systems respect the privacy of individuals being assessed and maintain data security.”

AI has its place, but it must not be the final arbiter in health care claims. It may well be able to help expedite claims by performing tedious tasks such as summarizing medical records.

My concern is how far the insurance industry will go in removing the human element for the sake of the bottom line. To prevent bias or discrimination, AI must be guided by ethical considerations, and systems should be reviewed at regular intervals to ensure fairness.

Courtney Mulqueen, of Mulqueen Disability Law, has over 20 years of experience litigating disability claims. Her focus and passion are representing plaintiffs with disabilities who live with complex “invisible conditions,” such as mental illness and chronic conditions that are difficult to prove, diagnose and treat.

The opinions expressed are those of the author and do not reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

Interested in writing for us? To learn more about how you can add your voice to Law360 Canada, contact Analysis Editor Peter Carter at Peter.Carter@lexisnexis.ca or call 647-776-6740.