NEW YORK (AP) — It took only seconds for the judges on a New York appeals court to realize that the man addressing them from a video screen — a person about to present an argument in a lawsuit — not only had no law degree, but didn’t exist at all.
The latest bizarre chapter in the awkward arrival of artificial intelligence in the legal world unfolded March 26 under the stained-glass dome of New York State Supreme Court Appellate Division’s First Judicial Department, where a panel of judges was set to hear from Jerome Dewald, a plaintiff in an employment dispute.
“The appellant has submitted a video for his argument,” said Justice Sallie Manzanet-Daniels. “Ok. We will hear that video now.”
On the video screen appeared a smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater.
“May it please the court,” the man began. “I come here today a humble pro se before a panel of five distinguished justices.”
“Ok, hold on,” Manzanet-Daniels said. “Is that counsel for the case?”
“I generated that. That’s not a real person,” Dewald answered.
It was, in fact, an avatar generated by artificial intelligence. The judge was not pleased.
“It would have been nice to know that when you made your application. You did not tell me that sir,” Manzanet-Daniels said before yelling across the room for the video to be shut off.
“I don’t appreciate being misled,” she said before letting Dewald continue with his argument.
Dewald later penned an apology to the court, saying he hadn’t intended any harm. He didn’t have a lawyer representing him in the lawsuit, so he had to present his legal arguments himself. And he felt the avatar would be able to deliver the presentation without his own usual mumbling, stumbling and tripping over words.
In an interview with The Associated Press, Dewald said he applied to the court for permission to play a prerecorded video, then used a product created by a San Francisco tech company to create the avatar. Originally, he tried to generate a digital replica that looked like him, but he was unable to accomplish that before the hearing.
“The court was really upset about it,” Dewald conceded. “They chewed me up pretty good.”
Even real lawyers have gotten into trouble when their use of artificial intelligence went awry.
In June 2023, two attorneys and a law firm were each fined $5,000 by a federal judge in New York after they used an AI tool to do legal research, and as a result wound up citing fictitious legal cases made up by the chatbot. The firm involved said it had made a “good faith mistake” in failing to understand that artificial intelligence might make things up.
Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for President Donald Trump. Cohen took the blame, saying he didn’t realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.
Those were errors, but Arizona’s Supreme Court last month intentionally began using two AI-generated avatars, similar to the one that Dewald used in New York, to summarize court rulings for the public.
On the court’s website, the avatars — who go by “Daniel” and “Victoria” — say they are there “to share its news.”
Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he wasn’t surprised to learn of Dewald’s introduction of a fake person to argue an appeals case in a New York court.
“From my perspective, it was inevitable,” he said.
He said it was unlikely that a lawyer would do such a thing because of tradition and court rules and because they could be disbarred. But he said individuals who appear without a lawyer and request permission to address the court are usually not given instructions about the risks of using a synthetically produced video to present their case.
Dewald said he tries to keep up with technology, having recently listened to a webinar sponsored by the American Bar Association that discussed the use of AI in the legal world.
As for Dewald’s case, it was still pending before the appeals court as of Thursday.