
Sam Altman: A.I.'s "risk of extinction" is on par with pandemics and nuclear war

TRISTAN BOVE
2023-06-03

OpenAI CEO Sam Altman has joined a group of technologists in warning that A.I. poses a threat to humanity's survival on par with nuclear war and global pandemics.

OpenAI CEO Sam Altman warns that artificial intelligence (A.I.) could pose a risk of extinction. IMAGE CREDIT: WIN MCNAMEE—GETTY IMAGES



Technologists and computer science experts are warning that artificial intelligence poses threats to humanity’s survival on par with nuclear warfare and global pandemics, and even business leaders championing A.I. are cautioning about the technology’s existential risks.

Sam Altman, CEO of ChatGPT creator OpenAI, is one of over 300 signatories behind a public “statement of A.I. risk” published Tuesday by the Center for A.I. Safety, a nonprofit research organization. The letter is a single short statement capturing the risks associated with A.I.:

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The letter’s preamble said the statement is intended to “open up discussion” on how to prepare for the technology’s potentially world-ending capabilities. Other signatories include former Google engineer Geoffrey Hinton and University of Montreal computer scientist Yoshua Bengio, who are known as two of the Godfathers of A.I. due to their contributions to modern computer science. Both Bengio and Hinton have issued several warnings in recent weeks about what dangerous capabilities the technology is likely to develop in the near future. Hinton recently left Google so that he could more openly discuss A.I.’s risks.

It isn’t the first letter calling for more attention to be paid to the possible disastrous outcomes of advanced A.I. research without stricter government oversight. Elon Musk was one of over 1,000 technologists and experts to call for a six-month pause on advanced A.I. research in March, citing the technology’s destructive potential.

And Altman warned Congress this month that sufficient regulation is already lacking as the technology develops at a breakneck pace.

The more recent note signed by Altman did not outline any specific goals like the earlier letter, other than fostering discussion. Hinton said in an interview with CNN earlier this month that he did not sign the March letter, saying that a pause on A.I. research would be unrealistic given the technology has become a competitive sphere between the U.S. and China.

“I don’t think we can stop the progress,” he said. “I didn’t sign the petition saying we should stop working on A.I. because if people in America stop, people in China wouldn’t.”

But while executives from leading A.I. developers including OpenAI and even Google have called on governments to move faster on regulating A.I., some experts warn that it is counterproductive to discuss the technology’s future existential risks when its current problems, including misinformation and potential biases, are already wreaking havoc. Others have even argued that by publicly discussing A.I.’s existential risks, CEOs like Altman have been trying to distract from the technology’s current issues, which are already creating problems, including facilitating the spread of fake news just in time for a pivotal election year.

But A.I.’s doomsayers have also warned that the technology is developing fast enough that existential risks could become a problem faster than humans can keep tabs on. Fears are growing in the community that superintelligent A.I., which would be able to think and reason for itself, is closer than many believe, and some experts warn that the technology is not currently aligned with human interests and well-being.

Hinton said in an interview with the Washington Post this month that the horizon for superintelligent A.I. is moving up fast and could now be only 20 years away, and now is the time to have conversations about advanced A.I.’s risks.

“This is not science fiction,” he said.
