Microsoft’s chief economist offers the following reply to those concerned about A.I. being misused for crime or other nefarious purposes: Cars can be unsafe, too. The key is putting safeguards in place.
“We do have to worry a lot about [the] safety of this technology—just like with any other technology,” Michael Schwarz said about artificial intelligence during a World Economic Forum panel on May 3.
Vehicles, for example, get people to where they want to go. But they’re also a danger because of accidents and pollution.
“I hope that A.I. will never, ever become as deadly as [an] internal combustion engine is,” Schwarz said.
A.I.’s dangers can be hard to dismiss even for the companies working on the technology, and Schwarz acknowledges that it can cause harm in the wrong hands.
“I am quite confident that A.I. will be used by bad actors, and yes it will cause real damage,” Schwarz said. “It can do a lot of damage in the hands of spammers, people who want to manipulate elections and so on.”
But some of that can be avoided, he said. “Once we see real harm, we have to ask ourselves the simple question: ‘Can we regulate that in a way where the good things that will be prevented by this regulation are less important?’” Schwarz said. “The principles should be the benefits from the regulation to our society should be greater than the cost to our society.”
Microsoft is focused on the good that A.I. can bring society and is working to develop A.I. to “help people achieve more” by making them more efficient, Schwarz said.
“We are optimistic about the future of A.I., and we think A.I. advances will solve many more challenges than they present, but we have also been consistent in our belief that when you create technologies that can change the world, you must also ensure that the technology is used responsibly,” a Microsoft spokesperson told Fortune.
Microsoft has been a key player in the recent surge of generative A.I. technology. The company has built a chatbot using technology from OpenAI and has incorporated it into a number of products. Microsoft also plans to invest $10 billion in OpenAI over several years, after having already pumped money into the startup in 2019 and 2021.
Schwarz’s warning about A.I. echoed, to a point, recent remarks by Geoffrey Hinton, a former Google VP and engineering fellow who helped create some of the key technologies powering today’s widely used A.I. tools and who is referred to as “the Godfather of A.I.” He warned that it may be tough to stop A.I. from being used for fraud.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton told the New York Times in an interview published on May 1.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”
One of Hinton’s concerns is the availability of A.I. tools that can create images and aggregate information in a matter of seconds. They could lead to the spread of fake content whose accuracy the average person would have trouble discerning.
While Schwarz and Hinton worry about how bad actors may misuse A.I., the two experts diverge in how they think A.I. may impact certain jobs.
During the WEF panel, Schwarz said people are “paranoid” about their work being replaced by A.I. and that they “shouldn’t be too worried” about it. But Hinton, who worked at Google from 2013 until recently, said that there is a real risk to jobs in an A.I.-dominated work environment.
“It takes away the drudge work,” Hinton said. “It might take away more than that.”
Calls to halt advanced A.I. development
In March, over 25,000 tech experts—from academics to former executives—signed an open letter asking for a six-month pause in the development of advanced A.I. systems so that their impact could be better understood and regulated by governments. The letter argued that some systems, such as OpenAI’s GPT-4 introduced earlier that month, are “becoming human-competitive at general tasks,” threatening to help generate misinformation and potentially automating jobs at a large scale.
Executives at tech giants like Google and Microsoft have said that a six-month pause will not solve the problem. Among them is Alphabet and Google CEO Sundar Pichai.
“I think in the actual specifics of it, it’s not fully clear to me how you would do something like that today,” he said during a podcast interview in March, referring to the six-month moratorium. “To me, at least, there is no way to do this effectively without getting governments involved. So I think there’s a lot more thought that needs to go into it.”
Microsoft’s chief scientific officer told Fortune in an interview in April that there are alternatives to pausing A.I. development.
“To me, I would prefer to see more knowledge, and even an acceleration of research and development, rather than a pause for six months, which I am not sure if it would even be feasible,” said Microsoft’s Eric Horvitz. “In a larger sense, six months doesn’t really mean very much for a pause. We need to really just invest more in understanding and guiding and even regulating this technology—jump in, as opposed to pause.”