The Interim Policy on the Use of Artificial Intelligence, announced by Chief Administrative Judge Joseph A. Zayas, grew from a committee formed in April 2024 to study the use of AI in the Unified Court System.
"This interim policy has truly been a collaborative effort to provide clear communication about this emerging technology and to address and alleviate concerns over its use within the court system," said Associate Justice Angela Iannacci, who co-chaired the advisory committee.
The seven-page policy for AI use across the Unified Court System, or UCS, describes the current state of artificial intelligence, details what sorts of models may be used for court work, and sets out general restrictions on how the technology may be used.
The policy applies to all UCS judges, justices and nonjudicial employees, and its guidance must be followed on essentially every UCS-owned device and in any UCS-related work.
"Simply stated, this new policy provides a strong base, guiding the court system on how to best leverage AI's potential to help fulfill the judiciary's core mission," Judge Zayas said in a statement on Friday. "While AI can enhance productivity, it must be utilized with great care. It is not designed to replace human judgment, discretion, or decision-making."
The policy touches on both benefits and problems associated with AI, and sets out guidelines and guardrails on its use within the courts, particularly in regard to generative AI. Use of AI tools is limited to those already approved by UCS, while initial and ongoing training is mandated for all judges and nonjudicial staff with computer access.
Staff who use AI tools to draft documents or summarize data must review all content produced and ensure all language is inclusive and respectful. The advisory committee highlighted AI's propensity to fabricate information, produce biased output and put confidential information at risk of exposure. The policy further prohibits judges from using AI tools to render decisions, and prohibits nonjudicial employees from using them in ways that would violate their ethical responsibilities.
The policy emphasizes that the rules governing the security and confidentiality of court records fully apply to AI technology, and that users should assume any information entered into a public AI model will immediately become public.
The approved private models include Microsoft Azure AI Services, Microsoft 365 Copilot, GitHub Copilot for Business or Enterprise, and Trados Studio. Additionally, the free version of OpenAI's ChatGPT, a public generative AI tool, is approved, while paid subscriptions are prohibited. A press release on Friday added that judicial officers should direct any questions about potential ethical concerns arising from AI technology to the Advisory Committee on Judicial Ethics.
"We have a duty to carefully explore — and fully understand — AI's strengths and limitations, so that we may use it responsibly, intelligently, and optimally, in furthering the delivery of justice across the state," First Deputy Chief Administrative Judge Norman St. George said in Friday's announcement.
--Editing by Rich Mills.
For a reprint of this article, please contact reprints@law360.com.