Editorial Commentary
While ‘Artificial Intelligence’ seems increasingly woven into everyday life, we must be careful not to assume that everything ‘Artificial Intelligence’ tells us is fact rather than fiction. In other words, perhaps Artificial Intelligence is more ‘Artificial’ than ‘Intelligent’.
The Associated Press recently reported1 that two Manhattan lawyers had to appear in Federal Court after a filing they had prepared cited past court cases invented by the AI-powered chatbot, ChatGPT. One lawyer explained that he had used ChatGPT to search for legal precedents. When it returned several related cases he had not previously found, along with those he had already cited, he included the new cases without double-checking them.
The lawyer told the Federal Judge he had no idea “ChatGPT could fabricate (legal) cases.”
In other words, ChatGPT ‘artificially created’ cases to make the argument appear stronger than it otherwise would.
While the lawyer(s) in question didn’t do their due diligence to confirm the ChatGPT information, the Federal Judge did the proper research to identify the mistakes, rather than allow the ‘Artificial’ information to have any merit in the case. Had it not been for the Judge’s work, a case might have been decided erroneously.
So, the question becomes, if ChatGPT can ‘Artificially’ create legal precedents, what other artificiality can it produce?
Will we next see Financial Statements from accountants and auditors include ‘Artificially’ bolstered assets or diminished liabilities because ChatGPT has decided that the financial information is better reflected that way?
Will ChatGPT ‘Artificially’ frame earnings of major corporations in an overly optimistic light?
And who is really to blame if such irregularities occur? Will the fault lie with the underlings told to produce documentation using ChatGPT? Or will the responsibility lie with the accountants and auditors who sign off on the documents “with no idea that ChatGPT could simply ‘fabricate’ numbers”?
Perhaps the fault will lie with ChatGPT or OpenAI, the company behind ChatGPT.
Maybe the blame will land on Microsoft, which has invested over a billion dollars in OpenAI.
Perhaps the fault rests with ‘everyone’ who says, “ChatGPT (chatbots) are the future that will eliminate the need for tedious tasks.”
People used to ‘do math,’ and when it was critical, they had it checked, checked, and checked again to ensure it was correct. Even when the first computers came on the scene, ‘human computers’ (I prefer to call them “mathletes”) checked the work of those IBMs.
While the 2016 book ‘Hidden Figures’2 by Margot Lee Shetterly tells the story of the work behind Project Mercury, it is really a story about “the importance of the math being right” and the ‘human computers’ who ensured it was. It was about the necessity to check, check, and recheck the accuracy of the figures related to the orbital geometry needed to get those first astronauts into space.
Today we assume these learning machines can’t be wrong because they have artificial intelligence.
I would caution that we must always have qualified individuals in every profession who check, check, and recheck the work these AI tools, like ChatGPT, are doing, to make sure it is fact rather than the ‘Artificial’ data they apparently are at liberty to create.
Murph
Footnotes and Disclosures:
1 – Lawyers blame AI for citing bogus case, Larry Neumeister, Associated Press.
2 – Hidden Figures: The American Dream and the Untold Story of the Black Women Who Helped Win the Space Race, 2016, Margot Lee Shetterly, published by William Morrow & Company.