12 Angry fishpackers | Marcel Strigberger
Friday, September 15, 2023 @ 2:37 PM | By Marcel Strigberger
This experience qualifies me sufficiently to comment on the subject of AI. I don’t even like Google.
I gather artificial intelligence is intelligence demonstrated by computers. I got this information from a trusted source, Google. This brings to mind that 1960s film 2001: A Space Odyssey, where the computer HAL aboard a spaceship felt threatened by the humans and became quite aggressive. I guess it was intelligent enough to concern itself with self-preservation.
I also think about the Luddites, that group of English workers in the early 1800s who went around destroying machinery as they believed this equipment was threatening their jobs. I can somewhat empathize with the Luddites, though fortunately I never felt the urge to invade any law offices and unplug their computers. And given all the wiring, I probably would not even know how to achieve this task.
AI is scary. Even Elon Musk has concerns, noting that AI is one of the biggest risks to the future of civilization. I rest my case. Given this potential threat to civilization, I am now thinking twice about buying a Tesla.
I find human contact is waning. We telephone, say, a bank with a query, such as why the details of our bank account disappeared, and we are directed to go online, where most of our questions can be answered. Right.
And then you see that message online where you are supposed to prove that you are not a robot. Up pops that grid of about a dozen squares and you must tick off say all pictures showing a bus. I recently missed on that one as I counted one square that had only a bus driver in it. Maybe he was not even on the bus as the bus was being operated by a robot. Who knows?
Given that a robot probably is behind this harassing quiz, why does it care if you are a robot too? After all if you are a robot, aren’t you on his team? Robotic envy?
How might AI permeate a litigation practice? What do I see in my crystal ball?
Consider examinations for discovery. Lawyers are not supposed to coach their clients. But how many lawyers can swear they never gave their clients now and then during pointed questioning a kick in the shins? Will artificial intelligence eliminate this verboten practice?
Not necessarily. Technology will no doubt step up to the plate (or should I say, under the table) to make sure your client answers right. The lawyer and client will wear a set of earbuds, and when the lawyer’s brainwaves sense the client is about to say something stupid, the lawyer will squirm, activating from the client’s side a large swinging boot. Hey, they laughed at Edison. Light bulb? Pshaw!
Trials of course will be radically different with AI. The jurors will all be robots. This will speed up jury selection. The court registrar draws a card out of a drum, a robot stands up stating its name and occupation, and the lawyers can accept or challenge:
“Zarkon 768 — fish packer at Costco.”
Personally, I was always easy with jury selection, though I would likely challenge any robot with a face like a Picasso painting. I find something untrustworthy about anybody with one ear on his chin, no mouth and three noses.
And the judge would not have to send this jury out for a voir dire while the lawyers argue over admissibility of evidence. The judge would just say to the jury, “Pay no attention to these lawyers now. They’re just blabbing.”
The jury would likely respond in unison: “We obey.”
(During this hiatus, that Costco robot might just offer the visitors in the body of the court some samples of chopped herring).
My concern is that juries might resort to technical shortcuts to arrive at a verdict. I can see them retiring to the jury room and the foreperson bellows out, “Hey Siri. Is the guy guilty or not guilty?”
Which brings me back to the judge. Why not? After all, robots are putting together cars, performing surgery and beating grandmasters in chess. Do we see future trials adjudicated by robot judges?
“Oyez, oyez, oyez, all rise for the judge, Justice SOL.83.”
And to keep pace with reality, some judges would have to be created nasty. I can think of one local judge who insisted that male lawyers appearing before him wear black or gray pants. I saw a lawyer unwittingly wearing brown pants, and His Honour chewed him out, saying, “Counsel, I can’t hear you.”
We may not be far from the day when a robot judge says to a lawyer, “Error error. Please remove those red suspenders.”
I suppose the lawyers’ gossip about judges would be similar to what it is now, with some twists.
“How’s Justice X-311?”
“He’s a hanging judge. But fortunately my last appearance before him resulted in a mistrial. His battery ran out.”
With AI such as ChatGPT, I also see potential confidentiality problems. Is it possible the robot could turn against the lawyer and use extortion? “Hey insurer, I know you are prepared to offer $1 million to settle this action. What’s it worth to you for me not to divulge this information to the plaintiff’s lawyer?”
And we already see tech glitches happening. One AI chatbot recently listed the Ottawa Food Bank as the third most visited site in Ottawa. What else can go wrong in Ottawa?
But then of course there is no problem, as you always have the availability, comfort and ease of tech support. Just go online. Or hit some chatbot. Or ask your personal robot.
Marcel Strigberger retired from his Greater Toronto Area litigation practice and continues the more serious business of humorous author and speaker. His book Boomers, Zoomers, and Other Oomers: A Boomer-biased Irreverent Perspective on Aging is now available on Amazon, (e-book) and paper version. Visit www.marcelshumour.com. Follow him @MarcelsHumour.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada, or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.