AI技術(shù)被大公司壟斷,這家公司能否打破舊有格局?
Element AI蒙特利爾辦公室的Orwell靜思室(左1);員工們?cè)诳偛颗ぷ?。Guillaume Simoneau for Fortune
在當(dāng)今的人工智能領(lǐng)域,,所有技術(shù)似乎都與加拿大不同大學(xué)的三位研究員有關(guān)。第一位是杰夫·欣頓,,這位70歲高齡的英國(guó)人在多倫多大學(xué)任教,,是深度學(xué)習(xí)這個(gè)子領(lǐng)域的先驅(qū),而這一領(lǐng)域業(yè)已成為人工智能的代名詞,。第二位是一位名為楊立昆的法國(guó)人,,57歲,上個(gè)世紀(jì)80年代曾在欣頓的實(shí)驗(yàn)室工作,,如今在紐約大學(xué)執(zhí)教,。第三位是54歲的約書(shū)亞·本吉奧,出生于巴黎,,成長(zhǎng)于蒙特利爾,,如今在蒙特利爾大學(xué)執(zhí)教。這三位人士是十分要好的朋友和合作伙伴,,正因?yàn)槿绱?,業(yè)界人士將他們?nèi)环Q(chēng)之為加拿大黑手黨。 然而在2013年,,谷歌將欣頓收入麾下,,F(xiàn)acebook聘請(qǐng)了楊立昆。這兩位人士依然保留著其學(xué)術(shù)職務(wù),并繼續(xù)任教,,但本吉奧并不是一位天生的實(shí)業(yè)家,。他的舉止十分謙虛,近乎謙卑,,略微勾著腰,,每天花大量的時(shí)間坐在電腦顯示器旁。盡管他曾為多家公司提供過(guò)咨詢(xún)服務(wù),,而且會(huì)經(jīng)常性地獲邀加入某一公司,,但約書(shū)亞堅(jiān)持追求他所熱衷的項(xiàng)目,而不是那些最有可能獲利的項(xiàng)目,。他的朋友,、人工智能初創(chuàng)企業(yè)Imagia聯(lián)合創(chuàng)始人亞歷山大·鮑斯利爾對(duì)我說(shuō):“人們應(yīng)意識(shí)到,他的志向異常遠(yuǎn)大,,其價(jià)值觀也是高高在上,。一些科技行業(yè)的工作者已經(jīng)忘卻了人性,但約書(shū)亞并沒(méi)有,。他真的希望這一科學(xué)突破能夠?qū)ι鐣?huì)有所助益,。” 科羅拉多大學(xué)波爾得分校的人工智能教授邁克·莫澤說(shuō)的更為直率:“約書(shū)亞并沒(méi)有背叛自己,?!? 然而,不背叛自己已經(jīng)成為了一種少數(shù)派行為,。大型科技公司,,例如亞馬遜、Facebook,、谷歌和微軟等,,都在大肆收購(gòu)創(chuàng)新初創(chuàng)企業(yè),并挖走了大學(xué)的精英,,為的是獲取頂級(jí)人工智能人才,。華盛頓大學(xué)人工智能教授佩德羅·多明戈斯稱(chēng),他每年都會(huì)詢(xún)問(wèn)學(xué)術(shù)界的熟人,,想了解是否有學(xué)生希望讀取博士后學(xué)位,。他對(duì)我說(shuō),上次他問(wèn)本吉奧時(shí),,“本吉奧表示,,‘他們還沒(méi)畢業(yè),我就已經(jīng)留不住他們了,。’”本吉奧受夠了當(dāng)前的這種態(tài)勢(shì),希望阻止大學(xué)的人才流失,。他堅(jiān)信,,實(shí)現(xiàn)這一目標(biāo)最好的方式就是以其人之道還治其人之身:資本主義重拳。 |
IN THE MODERN FIELD OF ARTIFICIAL INTELLIGENCE, all roads seem to lead to three researchers with ties to Canadian universities. The first, Geoffrey Hinton, a 70-year-old Brit who teaches at the University of Toronto, pioneered the subfield called deep learning that has become synonymous with A.I. The second, a 57-year-old Frenchman named Yann LeCun, worked in Hinton’s lab in the 1980s and now teaches at New York University. The third, 54-year-old Yoshua Bengio, was born in Paris, raised in Montreal, and now teaches at the University of Montreal. The three men are close friends and collaborators, so much so that people in the A.I. community call them the Canadian Mafia. In 2013, though, Google recruited Hinton, and Facebook hired LeCun. Both men kept their academic positions and continued teaching, but Bengio, who had built one of the world’s best A.I. programs at the University of Montreal, came to be seen as the last academic purist standing. Bengio is not a natural industrialist. He has a humble, almost apologetic, manner, with the slightly stooped bearing of a man who spends a great deal of time in front of computer screens. While he advised several companies and was forever being asked to join one, Bengio insisted on pursuing passion projects, not the ones likeliest to turn a profit. “You must realize how big his heart is and how well-placed his values are,” his friend Alexandre Le Bouthillier, a cofounder of an A.I. startup called Imagia, tells me. “Some people on the tech side forget about the human side. Yoshua does not. He really wants this scientific breakthrough to help society.” Michael Mozer, an A.I. professor at the University of Colorado at Boulder, is more blunt: “Yoshua hasn’t sold out.” Not selling out, however, had become a lonesome endeavor. Big tech companies—Amazon, Facebook, Google, and Microsoft, among others—were vacuuming up innovative startups and draining universities of their best minds in a bid to secure top A.I. talent. Pedro Domingos, an A.I. professor at the University of Washington, says he asks academic contacts each year if they know students seeking postdoc positions; he tells me the last time he asked Bengio, “he said, ‘I can’t even hold on to them before they graduate.’ ” Bengio, fed up by this state of affairs, wanted to stop the brain drain. He had become convinced that his best bet for accomplishing this was to use one of Big Tech’s own tools: the blunt force of capitalism. |
開(kāi)發(fā)人工智能的一些大型科技公司已經(jīng)實(shí)現(xiàn)了對(duì)資源的控制,,然而約書(shū)亞·本吉奧卻是行業(yè)內(nèi)為數(shù)不多抵制商業(yè)化運(yùn)作的人士,。他的公司Element AI改變了這一情況。Guillaume Simoneau for Fortune
2015年9月的一個(gè)暖意融融的下午,,本吉奧和四位最為要好的同事齊聚鮑斯利爾在蒙特利爾的家中,。這次聚會(huì)實(shí)際上是一次策略討論會(huì),主題是一家技術(shù)轉(zhuǎn)讓公司(由本吉奧數(shù)年前與他人創(chuàng)建),。然而,,對(duì)其專(zhuān)業(yè)領(lǐng)域的未來(lái)感到憂心忡忡的本吉奧也借此機(jī)會(huì)提出了一些自己一直在思考的問(wèn)題:是否有可能創(chuàng)建一家企業(yè),能夠幫助初創(chuàng)企業(yè)和大學(xué)這個(gè)龐大的生態(tài)系統(tǒng),,而不是傷害這一系統(tǒng),,而且這家企業(yè)對(duì)于整個(gè)社會(huì)也能有所助益?如果可行的話,,這家企業(yè)是否能夠在大型科技企業(yè)主宰的世界中與其他企業(yè)競(jìng)爭(zhēng),。 本吉奧尤其希望聽(tīng)到他的朋友讓-弗朗西斯·加格內(nèi)的意見(jiàn),后者是一位充滿活力的多產(chǎn)創(chuàng)業(yè)家,,比本吉奧小15歲多,。加格內(nèi)此前曾向如今名為JDA Software的公司出售了由自己聯(lián)合創(chuàng)建的初創(chuàng)企業(yè)。在該公司工作了三年之后,,加格內(nèi)離開(kāi)了該公司,,成為了加拿大風(fēng)投公司Real Ventures的常駐創(chuàng)業(yè)者。如果加格內(nèi)的下一個(gè)項(xiàng)目與自身的目標(biāo)相一致,,本吉奧非常希望參與其中,。湊巧的是,加格內(nèi)也在琢磨如何在這個(gè)由大科技公司主宰的世界中生存,。在經(jīng)過(guò)了三小時(shí)的會(huì)面之后,,也就是在太陽(yáng)開(kāi)始落山之時(shí),加格內(nèi)對(duì)本吉奧和其他人說(shuō):“行了,,我準(zhǔn)備把這個(gè)商業(yè)計(jì)劃再充實(shí)一下,。” 那年冬天,,加格內(nèi)和同事尼古拉斯·查帕多斯到訪了本吉奧在蒙特利爾大學(xué)不大的辦公室,。加格內(nèi)在本吉奧學(xué)術(shù)行頭——教科書(shū)、一沓沓的論文,、寫(xiě)滿了密密麻麻公式的白板——的環(huán)繞下宣布,,在Real Ventures的支持下,,他已經(jīng)拿出了一個(gè)商業(yè)計(jì)劃。他建議聯(lián)合創(chuàng)建一家初創(chuàng)企業(yè),,后者將致力于為初創(chuàng)企業(yè)和其他缺乏資源的機(jī)構(gòu)打造人工智能技術(shù),,這些機(jī)構(gòu)沒(méi)錢(qián)自行研發(fā),但有可能對(duì)非大型科技公司之類(lèi)的供應(yīng)商感興趣,。這家初創(chuàng)企業(yè)的主要賣(mài)點(diǎn)就是,,公司所擁有的人才可能是地球上最有才干的團(tuán)隊(duì)之一。它可以向來(lái)自于本吉奧的實(shí)驗(yàn)室以及其他大學(xué)的研究人員支付薪資,,讓他們每個(gè)月來(lái)公司工作幾個(gè)小時(shí),,同時(shí)保留其學(xué)術(shù)職務(wù)。通過(guò)這種方式,,公司便能以較低廉的價(jià)格獲得頂級(jí)人才,,而大學(xué)又可以保留其研究人員,同時(shí)主流客戶便有機(jī)會(huì)與財(cái)大氣粗的競(jìng)爭(zhēng)對(duì)手開(kāi)展競(jìng)爭(zhēng),。各方面都是贏家,,可能唯一吃虧的就是那些大型科技公司。 谷歌首席執(zhí)行官桑達(dá)爾·皮查伊在今年早些時(shí)候宣布:“人工智能是人類(lèi)正在攻克的最為重要的事情之一,。它可能比電或火更加復(fù)雜,。”谷歌和其他公司構(gòu)成了本吉奧所擔(dān)心的大型科技公司威脅,,他們已將自己標(biāo)榜為普及人工智能的生力軍,,其實(shí)現(xiàn)這一目標(biāo)的方式便是讓消費(fèi)者和不同規(guī)模的公司都能用得起人工智能技術(shù),而且用該技術(shù)來(lái)改善世界,。谷歌云首席科學(xué)家李飛飛向我透露:“人工智能將讓世界發(fā)生翻天覆地的變化,。它是一種能夠讓工作、生活和社會(huì)更加美好的力量,?!? 在本吉奧和加格內(nèi)討論成立初創(chuàng)企業(yè)之時(shí),這些大型科技公司還未卷入那些引人注目的人工智能倫理困境——關(guān)于出售人工智能并將其用于軍事,、可預(yù)測(cè)監(jiān)控,,以及在產(chǎn)品中不慎融入種族歧視和其他偏見(jiàn),而且它們很快因此而嘗到了苦果,。但即便是在那個(gè)時(shí)候,,知情人士都清楚地知道,大型科技公司正在部署人工智能來(lái)鞏固其巨大的權(quán)力和財(cái)富,。要理解這一現(xiàn)象,,我們得知道人工智能與其他軟件的區(qū)別。首先全球的人工智能專(zhuān)家相對(duì)較少,,這意味著他們可以輕松拿到6位數(shù)的薪資,,而這一點(diǎn)就會(huì)使組建大型工智能專(zhuān)家團(tuán)隊(duì)異常昂貴,,只有那些最有錢(qián)的公司才負(fù)擔(dān)得起。第二,,相對(duì)于傳統(tǒng)軟件,,人工智能對(duì)計(jì)算能力的要求更高,但也因此大幅提高了成本,,同時(shí)還需要更多的優(yōu)質(zhì)數(shù)據(jù),可是這些數(shù)據(jù)很難獲得,,除非你剛好是一家大型科技公司,,能夠無(wú)限制地獲得上述兩種資源。 本吉奧說(shuō):“如今,,人工智能的發(fā)展方式出現(xiàn)了一種新的特征……專(zhuān)長(zhǎng),、財(cái)富和權(quán)力集中在少數(shù)幾家公司手中?!备玫馁Y源能夠吸引到更好的研究人員,,后者會(huì)帶來(lái)更好的創(chuàng)新,為公司創(chuàng)造更多收入,,從而購(gòu)買(mǎi)更多的資源,。“有點(diǎn)類(lèi)似于自給自足,?!彼a(bǔ)充道。 本吉奧最早與人工智能結(jié)緣時(shí)恰逢大型科技公司的崛起,。本吉奧于上個(gè)世紀(jì)70年代在蒙特利爾長(zhǎng)大,,他尤其熱愛(ài)科幻類(lèi)書(shū)籍,例如菲利普·迪克的小說(shuō)《Do Androids Dream of Electric Sheep?》,。在這本書(shū)中,,由大型公司制造的有感情機(jī)器人失去了控制。在大學(xué),,本吉奧讀的是計(jì)算機(jī)工程專(zhuān)業(yè),,并在麥吉爾大學(xué)攻讀碩士學(xué)位,當(dāng)時(shí)他看到了由杰夫·欣頓撰寫(xiě)的一篇論文,,與兒童時(shí)代那篇他異常喜愛(ài)的故事發(fā)生了共鳴,,然后他就像著了魔一樣。他隨后回憶道:“我當(dāng)時(shí)覺(jué)得,,‘天哪,,這就是我想要從事的事情?!? 隨著時(shí)間的推移,,本吉奧與欣頓和楊立昆成為了深度學(xué)習(xí)這一領(lǐng)域的重量級(jí)人物,,后者涉及名為神經(jīng)網(wǎng)絡(luò)的計(jì)算機(jī)模型。但他們的研究出師不利,,不僅找錯(cuò)了方向,,而且目標(biāo)也很模糊。深度學(xué)習(xí)在理論上十分誘人,,但沒(méi)有人能夠讓它在實(shí)踐中有效運(yùn)行,。科羅拉多大學(xué)教授莫澤回憶道:“多年來(lái),,在機(jī)器學(xué)習(xí)會(huì)議上,,神經(jīng)網(wǎng)絡(luò)并不怎么受歡迎,而約書(shū)亞會(huì)在會(huì)上大談特談其神經(jīng)網(wǎng)絡(luò),。而我的感受是,,‘可憐的約書(shū)亞,真的是執(zhí)迷不悟,?!? 在00年代末期,研究人員明白了為什么深度學(xué)習(xí)未能發(fā)揮其效用,。對(duì)神經(jīng)網(wǎng)絡(luò)進(jìn)行高水平的培訓(xùn)需要更高的計(jì)算能力,,但當(dāng)時(shí)無(wú)法提供這樣的計(jì)算能力。此外,,神經(jīng)網(wǎng)絡(luò)需要優(yōu)質(zhì)的數(shù)字信息才能進(jìn)行學(xué)習(xí),,在消費(fèi)互聯(lián)網(wǎng)崛起之前,沒(méi)有足夠的信息可供神經(jīng)網(wǎng)絡(luò)來(lái)學(xué)習(xí),。在00年代末,所有一切都發(fā)生了變化,,而且大型科技公司很快便應(yīng)用了本吉奧和其同事的技術(shù),,實(shí)現(xiàn)了諸多的商業(yè)里程碑:翻譯語(yǔ)言,、理解發(fā)言,、面部識(shí)別等,。 在那之前,,本吉奧的弟弟薩米在谷歌工作,,他也是一名人工智能研究員。有人力邀本吉奧與他的弟弟和同事一道,,前往硅谷發(fā)展,,然而,在2016年10月,,他與加格內(nèi)和查帕多斯攜手Real Ventures創(chuàng)建了其自己的初創(chuàng)企業(yè):Element AI,。Element AI的投資公司DCVC的執(zhí)行合伙人馬特·歐克說(shuō):“除了Element AI之外,約書(shū)亞在任何人工智能平臺(tái)都沒(méi)有實(shí)質(zhì)性的所有權(quán),,然而在過(guò)去五年中,,有不少人都勸他這樣做。他在公司完全靠聲譽(yù)說(shuō)話,。” 為了贏得客戶,,Element利用了其研究人員的明星效應(yīng),,其出資方的聲譽(yù)光環(huán)以及提供比大型科技公司更個(gè)性化服務(wù)的承諾,。但公司的高管也在從另一個(gè)角度發(fā)力:在當(dāng)今這個(gè)年代,,谷歌曾競(jìng)相將人工智能技術(shù)賣(mài)給軍隊(duì),F(xiàn)acebook曾接待了一位影響了美國(guó)大選的激進(jìn)演員,,而亞馬遜則在貪婪地吞食全球經(jīng)濟(jì),,但Element可以將自己確立為一家并不是很貪婪、更注重倫理的公司,。 今春,,我到訪了Element位于蒙特利爾Plateau District的總部,。公司的人數(shù)有了大幅的增長(zhǎng),,達(dá)到了300人,而且根據(jù)墻上張貼的五顏六色的報(bào)事貼數(shù)量,工作量亦有了大幅增長(zhǎng),。在一次會(huì)面中,公司的12位Elemental(Elemental人,公司員工稱(chēng)呼自己的方式)觀看了一個(gè)正在開(kāi)發(fā)中的產(chǎn)品的演示。其中,工作人員可以在類(lèi)似于谷歌的界面上輸入問(wèn)題,,例如“公司的招聘預(yù)測(cè)”,然后獲得最新的答案。這些答案不僅基于現(xiàn)有的信息,而且還來(lái)源于人工智能根據(jù)業(yè)務(wù)目標(biāo)的理解對(duì)未來(lái)做出的預(yù)測(cè)。我所見(jiàn)到的員工看起來(lái)既興奮又疲憊不堪,這一現(xiàn)象在初創(chuàng)企業(yè)中十分常見(jiàn)。 Element所面臨的一個(gè)持續(xù)挑戰(zhàn)便是缺乏優(yōu)質(zhì)數(shù)據(jù),。培訓(xùn)人工智能模型最簡(jiǎn)單的方法就是在模型中錄入帶有詳細(xì)標(biāo)記的案例,例如數(shù)千張貓的圖片或翻譯后的文本。大型科技公司能夠獲得眾多的消費(fèi)導(dǎo)向型數(shù)據(jù),,這一優(yōu)勢(shì)在打造大規(guī)模消費(fèi)產(chǎn)品時(shí)都是其他公司所無(wú)法比擬的,。然而,,企業(yè)、政府和其他機(jī)構(gòu)擁有大量的隱私信息,。即便公司使用了谷歌的郵件服務(wù),,或亞馬遜的云計(jì)算服務(wù),這些科技巨頭也不會(huì)讓這些供應(yīng)商獲取其有關(guān)設(shè)備故障或銷(xiāo)售趨勢(shì)或處理時(shí)間的內(nèi)部數(shù)據(jù)庫(kù),。而Element正是看到了這一點(diǎn),。如果公司能夠獲取多家公司的數(shù)據(jù)庫(kù),例如產(chǎn)品圖片的數(shù)據(jù)庫(kù),,那么公司便可以在客戶允許的情況下,,使用所有這些信息來(lái)打造一個(gè)更好的產(chǎn)品推薦引擎。大型科技公司也在向企業(yè)銷(xiāo)售人工智能產(chǎn)品和服務(wù),,IBM正好也在專(zhuān)注于這項(xiàng)業(yè)務(wù),,但是沒(méi)有人能夠壟斷這一市場(chǎng)。Element認(rèn)為,,如果它可以將自己融入這項(xiàng)機(jī)構(gòu),,那么公司便可以建立企業(yè)數(shù)據(jù)方面的優(yōu)勢(shì),類(lèi)似于大型科技公司在消費(fèi)品數(shù)據(jù)方面的優(yōu)勢(shì),。 公司在這一方面并非只是停留在紙面上,。Element已與加拿大多家知名公司簽訂了協(xié)議,包括蒙特利爾港和加拿大廣播電臺(tái),,而且其客戶有十幾家都是全球1000強(qiáng)公司,,但公司高管并沒(méi)有透露具體數(shù)量,或任何非加拿大公司,。產(chǎn)品也處于早期開(kāi)發(fā)階段,。在問(wèn)題回答產(chǎn)品的演示期間,該項(xiàng)目的經(jīng)理弗朗西斯科·梅勒特(英語(yǔ)非母語(yǔ))曾索取了有關(guān)“雇員在某個(gè)產(chǎn)品上耗費(fèi)了多少時(shí)間”的信息,。梅勒特承認(rèn)這款產(chǎn)品的面世仍需很長(zhǎng)的時(shí)間,。但是他指出,Element希望它能夠變得超級(jí)智能,,甚至能夠回答最有深度的策略問(wèn)題。他給出了這樣一個(gè)例子:“接下來(lái)應(yīng)該怎么做,?”這看起來(lái)似乎已經(jīng)超越了策略的范疇,,聽(tīng)起來(lái)有點(diǎn)近乎祈禱的意味。 谷歌雇員曾反對(duì)公司將人工智能技術(shù)提供給五角大樓的決定便是一個(gè)很好的例子,。這一事實(shí)表明,,科技公司在人工智能軍事用途方面的立場(chǎng)已經(jīng)成為了倫理的試金石,。本吉奧和其他聯(lián)合創(chuàng)始人在最初便發(fā)誓,絕不將人工智能用于攻擊性軍事用途,。但是今年早些時(shí)候,,韓國(guó)的一所科研大學(xué)——科學(xué)技術(shù)高級(jí)研究院宣布,該機(jī)構(gòu)將與Element的主要投資方韓國(guó)國(guó)防部門(mén)韓華集團(tuán)合作,,打造軍事系統(tǒng),。盡管這兩家企業(yè)存在投資關(guān)系,但本吉奧簽署了一封公開(kāi)信,,聲討韓國(guó)的這家機(jī)構(gòu),,除非它承諾 “不開(kāi)發(fā)缺乏有效人類(lèi)控制的自主武器?!奔痈駜?nèi)則更為謹(jǐn)慎,,他在給韓華集團(tuán)的信件中強(qiáng)調(diào),Element不會(huì)與制造自主武器的公司開(kāi)展合作,。不久后,,加格內(nèi)和科學(xué)家們得到了保證:科學(xué)技術(shù)高級(jí)研究院和韓華集團(tuán)不會(huì)利用人工智能打造軍事系統(tǒng)。 自主武器并非是人工智能所面臨的唯一挑戰(zhàn),,也不是其所面臨的最嚴(yán)峻的挑戰(zhàn),。研究人工智能社會(huì)影響的紐約大學(xué)教授凱特·克勞福德寫(xiě)道,人工智能領(lǐng)域所有“束手無(wú)策”的問(wèn)題,,例如從現(xiàn)有問(wèn)題轉(zhuǎn)而成為未來(lái)存在的威脅,,以及“性別歧視、種族主義和其他形式的歧視”,,已被寫(xiě)入機(jī)器學(xué)習(xí)算法,。由于人工智能模型依靠工程師錄入的數(shù)據(jù)來(lái)進(jìn)行培訓(xùn),數(shù)據(jù)中的任何偏見(jiàn)都會(huì)讓獲得這一數(shù)據(jù)的模型出現(xiàn)問(wèn)題,。 推特曾部署了由微軟開(kāi)發(fā)的人工智能聊天機(jī)器人Tay,,以學(xué)習(xí)人類(lèi)如何交談,然而它很快便拋出了種族主義言論,,例如“希特勒是正確的,。”微軟對(duì)此道歉,,并將Tay下架,,同時(shí)表示公司正在著手解決數(shù)據(jù)偏見(jiàn)的問(wèn)題。谷歌的人工智能功能應(yīng)用使用自拍來(lái)幫助用戶尋找藝術(shù)作品中與自己長(zhǎng)相相似的人物,,非洲裔美籍人士的匹配對(duì)象基本上都是奴隸,,而亞洲裔美籍人士的匹配對(duì)象基本上都是斜眼的日本歌舞姬,可能是因?yàn)閿?shù)據(jù)過(guò)分依賴(lài)于西方藝術(shù)作品的緣故,。我自己是印度裔美籍女性,,當(dāng)我使用這款應(yīng)用時(shí),,谷歌給我發(fā)來(lái)的肖像是有著古銅色面孔、表情郁悶的印第安人首領(lǐng),。我對(duì)此也感到十分郁悶,,谷歌這一點(diǎn)倒是沒(méi)弄錯(cuò),。(一位發(fā)言人就此道歉,,并表示“谷歌正在致力于減少人工智能中不公平的偏見(jiàn)”,。) 像這類(lèi)問(wèn)題基本上源自于現(xiàn)實(shí)當(dāng)中存在的偏見(jiàn),然而雪上加霜的是,,外界認(rèn)為人工智能領(lǐng)域比廣泛的計(jì)算機(jī)科學(xué)領(lǐng)域更加缺乏多元化,,而后者是白人和亞洲人的天下。研究員提姆尼特·格布魯是一名埃塞俄比亞裔美籍女性,,曾在微軟等公司供職,,她說(shuō):“該領(lǐng)域的同質(zhì)性催生了這些巨大的問(wèn)題。這些人生活在自己的世界中,,并認(rèn)為自己異常開(kāi)明而且先知先覺(jué),,但是他們沒(méi)有意識(shí)到,他們讓這個(gè)問(wèn)題愈發(fā)嚴(yán)重,?!? 女性占Element員工總數(shù)的33%,占領(lǐng)導(dǎo)層的35%,,占技術(shù)員工數(shù)量的23%,,比很多大型科技公司的比例都要高。公司的雇員來(lái)自于超過(guò)25個(gè)國(guó)家:我遇到了一位來(lái)自塞內(nèi)加爾的研究人員,,他加入公司的部分原因在于,,盡管他拿到了富布萊特獎(jiǎng)學(xué)金來(lái)到美國(guó)深造,但無(wú)法獲得留在美國(guó)的簽證,。但是公司并沒(méi)有按照種族來(lái)劃分其員工,,而且在我到訪期間,公司大部分員工似乎都是白人和亞洲人,,尤其是管理層,。運(yùn)營(yíng)副總裁安妮·馬特爾是Element七名高管中唯一的女性,而工業(yè)解決方案高級(jí)副總裁奧馬爾·達(dá)拉是唯一的有色人種,。與Element相關(guān)聯(lián)的24名學(xué)院研究員中,,僅有3名是女性。在本吉奧實(shí)驗(yàn)室MILA網(wǎng)站上列出的100名學(xué)生當(dāng)中,,只有7名是女性,。(本吉奧表示,該網(wǎng)站的信息并沒(méi)有更新,,而且他并不知道當(dāng)前的性別比例,。)雖然格布魯與本吉奧的關(guān)系很好,但格布魯在批評(píng)時(shí)也是毫不留情,。她說(shuō):“我對(duì)他說(shuō),,你都簽署了反對(duì)自主武器的公開(kāi)信,并希望保持獨(dú)立,,但公司打造人工智能技術(shù)的員工大部分都是白人或亞洲男性,。你連自家實(shí)驗(yàn)室的問(wèn)題都沒(méi)解決,如何去解決世界性的問(wèn)題,?!? 本吉奧表示,他對(duì)這一局面感到羞愧,,并將努力解決這一問(wèn)題,,其中一個(gè)舉措便是擴(kuò)大招聘,并撥款幫助那些來(lái)自于被忽視群體的學(xué)生,。與此同時(shí),,Element已經(jīng)聘請(qǐng)了一位新人力副總裁安妮·梅澤,主要負(fù)責(zé)公司的多元化和包容性問(wèn)題,。為了解決產(chǎn)品中可能存在的倫理問(wèn)題,,Element將聘請(qǐng)倫理學(xué)家擔(dān)任研究員,并與開(kāi)發(fā)人員通力合作,。公司還在倫敦辦事處設(shè)立了AI for Good lab,,由前谷歌DeepMind的研究員朱莉安·科尼碧斯執(zhí)掌。在AI for Good lab中,,研究人員將以無(wú)償或有償?shù)姆绞?,圍繞能夠帶來(lái)社會(huì)福利的人工智能項(xiàng)目,與非營(yíng)利性和政府等機(jī)構(gòu)開(kāi)展合作,。 然而,,倫理挑戰(zhàn)依然存在。在早期研究中,,Element使用其自有數(shù)據(jù)來(lái)制造某些產(chǎn)品,;例如,問(wèn)題回答工具的部分培訓(xùn)資料便來(lái)自于內(nèi)部共享文件,。運(yùn)營(yíng)高管馬特爾對(duì)我說(shuō),,因?yàn)榱頔lement高管感到為難的是,如何在面部識(shí)別中使用人工智能技術(shù)才算是符合倫理,。他們計(jì)劃在自家雇員身上進(jìn)行試驗(yàn),。公司將安裝攝像頭,這些攝像頭在經(jīng)過(guò)員工的允許之后捕捉其面部圖像,,并對(duì)人工智能技術(shù)進(jìn)行培訓(xùn),。高管們將對(duì)員工進(jìn)行調(diào)查,,詢(xún)問(wèn)他們對(duì)該技術(shù)的感受,加深他們對(duì)于倫理維度的理解,。馬特爾說(shuō):“我們希望通過(guò)公司內(nèi)部試驗(yàn)把這個(gè)問(wèn)題弄明白,。” 當(dāng)然,,這意味著至少在最初的時(shí)候,,所有的面部識(shí)別模型都將基于并不能代表廣泛人群的面部圖片。馬特爾表示,,高管們意識(shí)到了這一問(wèn)題:我們對(duì)于代表性不充分問(wèn)題感到非常擔(dān)憂,,而且我們正在尋找對(duì)策。 即便是Element產(chǎn)品旨在為高管回答的問(wèn)題——接下來(lái)應(yīng)該怎么做,?——亦充滿了倫理挑戰(zhàn),。對(duì)于能夠?qū)崿F(xiàn)利潤(rùn)最大化的舉措,無(wú)論商用人工智能技術(shù)做出何種推薦,,人們也很難對(duì)其責(zé)難,。然而它如何做出這些決定?哪些社會(huì)代價(jià)是可以容忍的,?由誰(shuí)來(lái)決定,?正如本吉奧所承認(rèn)的那樣,隨著越來(lái)越多的機(jī)構(gòu)部署人工智能技術(shù),,盡管會(huì)有新的崗位涌現(xiàn)出來(lái),,但數(shù)百萬(wàn)人有可能會(huì)因此而失去工作。雖然本吉奧和加格內(nèi)最初計(jì)劃向小型機(jī)構(gòu)推銷(xiāo)其服務(wù),,但他們隨后還是將目光投向了排名前2000位的大公司,;事實(shí)證明,Element對(duì)大量數(shù)據(jù)集的需求遠(yuǎn)非小型機(jī)構(gòu)可以滿足,。尤為值得一提的是,,他們將目光投向了金融和供應(yīng)鏈公司,而其中規(guī)模最大的那些公司在這一方面并非就是毫無(wú)準(zhǔn)備的門(mén)外漢,。加格內(nèi)表示,,隨著技術(shù)的改善,Element預(yù)計(jì)也將向小一點(diǎn)的機(jī)構(gòu)銷(xiāo)售其技術(shù),。但到了那個(gè)時(shí)候,,Element向全球規(guī)模最大的公司提供人工智能優(yōu)勢(shì)的計(jì)劃似乎更適合為現(xiàn)有的大公司錦上添花,而不是面向大眾普及人工智能的福利,。 本吉奧認(rèn)為,,科學(xué)家的工作是繼續(xù)探索人工智能新成果。他說(shuō),各國(guó)政府應(yīng)加大對(duì)這一領(lǐng)域的規(guī)范力度,,同時(shí)更加公平地分配財(cái)富,,并投資教育和社會(huì)安全網(wǎng)絡(luò),規(guī)避人工智能不可避免的負(fù)面影響,。當(dāng)然,,這些主張的前提是,政府心懷民眾的最大利益,。與此同時(shí),美國(guó)政府正在削減富人的稅收,,而中國(guó)政府,,作為人工智能研究最大的資助者,將使用深度學(xué)習(xí)來(lái)監(jiān)控其民眾,。華盛頓大學(xué)教授多明戈斯表示:“在我看來(lái),,約書(shū)亞認(rèn)為人工智能可以符合倫理綱常,而且他的公司也能夠成為一家合乎倫理綱常的人工智能技術(shù)公司,。但是坦率地講,,約書(shū)亞有一點(diǎn)天真,眾多的技術(shù)專(zhuān)家亦是如此,。他們的看法過(guò)于理想化,。” 本吉奧并不贊成這一結(jié)論,。他說(shuō):“作為科學(xué)家,,我認(rèn)為我們有責(zé)任與社會(huì)和政府打交道,從而按照我們所堅(jiān)信的理念來(lái)引導(dǎo)人們的思想和心靈,?!? 今春一個(gè)清冷明朗的早晨,Element的員工齊聚一堂,,在一個(gè)已改造為活動(dòng)場(chǎng)地的高屋頂教堂開(kāi)展協(xié)同軟件設(shè)計(jì)場(chǎng)外培訓(xùn),。參加的員工按照?qǐng)A桌劃分為不同的組別,其任務(wù)就是設(shè)計(jì)一款教授人工智能基礎(chǔ)知識(shí)的游戲,。我與其中的6名員工一桌,,他們決定將這款人工智能游戲命名為Sophia the Robot(機(jī)器人索菲亞)。這個(gè)機(jī)器人發(fā)瘋了,,自然便需要使用人工智能技術(shù)來(lái)與之戰(zhàn)斗并進(jìn)行抓捕,。新人力副總裁梅澤剛好就在這一桌。她表示:“我很喜歡索菲亞這個(gè)名稱(chēng),,因?yàn)槲覀冃枰嗟呐?,但我不喜歡打打殺殺。”不少人在交頭接耳時(shí)都表示同意這一觀點(diǎn),。一位高管助理提出了建議:“可以將游戲的目標(biāo)設(shè)定為改變索菲亞的思考方式,,把它變?yōu)閹椭澜纭,!毙掳姹镜挠螒蚵?tīng)起來(lái)令人更加愉悅,,因?yàn)樗cElement的自我形象結(jié)合的更加緊密。一位雇員對(duì)我說(shuō):“在辦公室,,公司不允許談?wù)揝kynet(天網(wǎng))?!焙笳呤窃醋杂凇督K結(jié)者》授權(quán)的對(duì)抗性人工智能系統(tǒng),。任何不小心談?wù)撨@一話題的員工都必須將1美元放到一個(gè)特別準(zhǔn)備的罐子中。一位同事以非常高興的口吻補(bǔ)充說(shuō):“我們應(yīng)該持有積極樂(lè)觀的態(tài)度,?!? 隨后,我到訪了本吉奧在蒙特利爾大學(xué)的實(shí)驗(yàn)室,,由一個(gè)個(gè)點(diǎn)著日光燈的監(jiān)獄式房間組成,,房間里到處都是計(jì)算機(jī)顯示器和一摞摞的教科書(shū)。在其中一個(gè)房間里,,有十幾個(gè)年輕人一邊開(kāi)發(fā)其人工智能模型,,一邊開(kāi)著數(shù)學(xué)玩笑,同時(shí)還聊了聊自己的職業(yè)道路,。我在無(wú)意中聽(tīng)到:“微軟有著各種不俗的待遇,打折機(jī)票,、酒店,?!薄ⅰ拔颐恐苋lement AI一次,,然后拿到的是這臺(tái)電腦?!薄ⅰ八且粋€(gè)叛徒,?!?、“你可以在其他領(lǐng)域說(shuō)‘叛徒’這個(gè)詞,但是在深度學(xué)習(xí)領(lǐng)域不行,?!薄ⅰ盀槭裁??”,、“因?yàn)樵谏疃葘W(xué)習(xí)領(lǐng)域,每個(gè)人都是叛徒,。”看起來(lái),,本吉奧無(wú)償背叛的愿景還沒(méi)有完全實(shí)現(xiàn),。 然而,本吉奧能夠通過(guò)培養(yǎng)下一代研究人員來(lái)影響人工智能的未來(lái),,而且這種影響力可能是其他學(xué)者無(wú)法企及的。(他的一個(gè)兒子也成為了人工智能研究員,;另一個(gè)兒子是一名音樂(lè)家,。)一天下午,我到訪了本吉奧的辦公室,,在這個(gè)面積不大,、空曠的房間中,主要擺設(shè)是一個(gè)白板,,上面潦草地寫(xiě)著“襁褓中的人工智能”,,還有一個(gè)書(shū)架,上面擺放的書(shū)本包括《老鼠的大腦皮層》,。本吉奧承認(rèn),,盡管作為Element的聯(lián)合創(chuàng)始人,但由于一直忙于人工智能最前沿的研究工作,,自己并沒(méi)有在公司辦公室待過(guò)多長(zhǎng)時(shí)間,。這些研究離商業(yè)應(yīng)用還有很長(zhǎng)的路要走。 雖然科技公司一直專(zhuān)注于讓人工智能能夠更好地物盡其用,,即發(fā)現(xiàn)規(guī)律并對(duì)其進(jìn)行總結(jié),,但本吉奧希望越過(guò)這些最基本的用途,并開(kāi)始打造深受人類(lèi)智慧啟發(fā)的機(jī)器,。他并不愿意描述這類(lèi)機(jī)器的詳情,。但人們可以想象的是,在未來(lái),,機(jī)器不僅僅是用來(lái)在倉(cāng)庫(kù)搬運(yùn)產(chǎn)品,,而是能夠在現(xiàn)實(shí)世界中穿梭。它們不僅僅會(huì)對(duì)命令做出響應(yīng),同時(shí)還能理解并同情人類(lèi),。它們不僅僅能夠識(shí)別圖像,,還能夠創(chuàng)作藝術(shù)。為了實(shí)現(xiàn)這一目標(biāo),,本吉奧一直在研究人腦的工作原理,。他的一位博士后學(xué)生對(duì)我說(shuō),大腦便是“智能系統(tǒng)可能會(huì)成為現(xiàn)實(shí)的證據(jù)”,。本吉奧已將一款游戲作為其重點(diǎn)項(xiàng)目,。在游戲中,玩家通過(guò)與這位偽裝的嬰兒講話,、指路等等,,告訴一位虛擬兒童——也就是辦公室白板上寫(xiě)的“襁褓中的人工智能”——世界如何運(yùn)轉(zhuǎn)?!拔覀兛梢詮膵雰喝绾螌W(xué)習(xí)以及父母如何與自己的孩子互動(dòng)中吸取靈感,。”這看起來(lái)似乎很牽強(qiáng),,但是別忘了,,本吉奧曾經(jīng)看似荒誕的理念如今是大型科技公司最主流技術(shù)的理論支柱。 盡管本吉奧認(rèn)為人工智能可能會(huì)達(dá)到與人類(lèi)相仿的水平,,但他對(duì)埃隆·馬斯克這類(lèi)人所宣傳的影響深遠(yuǎn)的倫理問(wèn)題不以為然,,因?yàn)槠淝疤峋褪侨斯ぶ悄艿闹腔鬯揭叱鋈祟?lèi)。本吉奧對(duì)于人類(lèi)打造和使用人工智能時(shí)所做出的倫理選擇更感興趣,。他曾經(jīng)對(duì)一位采訪者透露:“最大的危險(xiǎn)在于,,人們可能會(huì)以不負(fù)責(zé)任的方式或惡意的方式來(lái)對(duì)待人工智能技術(shù),我是指用于謀取個(gè)人私利,?!逼渌目茖W(xué)家也同意本吉奧的看法。然而,,隨著人工智能研究的繼續(xù)向前邁進(jìn),,它依然由全球最強(qiáng)大的政府、企業(yè)和投資者提供資助,。本吉奧的大學(xué)實(shí)驗(yàn)室的運(yùn)轉(zhuǎn)資金基本上就來(lái)自于大型科技公司,。 在討論最大科技公司的會(huì)議期間,本吉奧曾一度對(duì)我說(shuō):“我們希望Element AI能夠發(fā)展成為這些科技巨頭中的一員,?!蔽覇?wèn)本吉奧,到那時(shí),,他是否會(huì)跟那些公司一樣關(guān)注他所不齒的財(cái)富和權(quán)力,。他回答說(shuō):“我們的理念不僅僅是為了創(chuàng)建一家公司,,然后成為全球最有錢(qián)的公司。我們的目標(biāo)是改變世界,,改變商業(yè)的運(yùn)行方式,,讓它不像現(xiàn)在這樣集中在少數(shù)企業(yè)當(dāng)中,并讓它變得更加民主,?!北M管我十分敬佩他的姿態(tài),也對(duì)他的抱負(fù)有信心,,但他的話與大型科技公司所奉行的口號(hào)沒(méi)有多大的區(qū)別,。不干壞事。讓世界更加開(kāi)放和互聯(lián),。打造一個(gè)符合道德規(guī)范的企業(yè)并不在于創(chuàng)始人的抱負(fù),,而是關(guān)乎企業(yè)所有者在一段時(shí)間之后如何在社會(huì)公益和企業(yè)利潤(rùn)之間進(jìn)行取舍。接下來(lái)應(yīng)該怎么做,?如果計(jì)算機(jī)仍然難以給出答案,,那么多少令其感到安慰的是,人類(lèi)在這個(gè)問(wèn)題上也沒(méi)有比計(jì)算機(jī)睿智多少,。(財(cái)富中文網(wǎng)) 本文最初刊于2018年7月1日的《財(cái)富》雜志,。 譯者:馮豐 審校:夏林 |
On a warm September afternoon in 2015, Bengio and four of his closest colleagues met at Le Bouthillier’s Montreal home. The gathering was technically a strategy meeting for a technology-transfer company Bengio had cofounded years earlier. But Bengio, harboring serious anxieties about the future of his field, also saw an opportunity to raise some questions he had been dwelling on: Was it possible to create a business that would help a broader ecosystem of startups and universities, rather than hurt it—and maybe even be good for society at large? And if so, could that business compete in a Big Tech–dominated world? Bengio especially wanted to hear from his friend Jean-Fran?ois Gagné, an energetic serial entrepreneur more than 15 years his junior. Gagné had earlier sold a startup he cofounded to a company now known as JDA Software; after three years working there, Gagné left and became an entrepreneur-inresidence at the Canadian venture capital firm Real Ventures. Bengio was keen on getting involved in Gagné’s next project, provided it aligned with his own goals. Gagné, as it happened, had also been wrestling with how to survive in a Big Tech–dominated world. At the end of the three-hour meeting, as the sun began to set, he told Bengio and the others, “Okay, I’m going to flesh out a business plan.” That winter, Gagné and a colleague, Nicolas Chapados, visited Bengio at his small University of Montreal office. Surrounded by Bengio’s professorial paraphernalia—textbooks, stacks of papers, a whiteboard covered in cat-scratch equations—Gagné announced that with Real Ventures’ blessing he had come up with a plan. He proposed cofounding a startup that would build A.I. technologies for startups and other under-resourced organizations that couldn’t afford to build their own and might be attracted to a non–Big Tech vendor. The startup’s key selling point would be one of the most talented workforces on earth: It would pay researchers from Bengio’s lab, among other top universities, to work for the company several hours a month yet keep their academic positions. That way, the business would get top talent at a bargain, the universities would keep their researchers, and Main Street customers would stand a chance of competing with their richer rivals. Everyone would win, except maybe Big Tech. GOOGLE CEO Sundar Pichai declared earlier this year, “A.I. is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire.” Google and the other companies that together constitute the Big Tech threat that occupies Bengio have positioned themselves as forces to democratize A.I., by making it affordable for consumers and businesses of all sizes, and using it to better the world. “A.I. is going to make sweeping changes to the world,” Fei-Fei Li, the chief scientist for Google Cloud, tells me. “It should be a force that makes work, and life, and society, better.” When Bengio and Gagné began their discussions, the largest tech companies hadn’t yet been embroiled in the high-profile A.I. ethics messes—about controversial sales of A.I. for military and predictive policing, as well as the slipping of racial and other biases into products—that would soon consume them. But even then, it was clear to insiders that Big Tech companies were deploying A.I. to compound their considerable power and wealth. Understanding this required knowing that A.I. is different from other software. First of all, there are relatively few A.I. experts in the world, which means they can command salaries well into the six figures; that makes building a large team of A.I. experts too expensive for all but the wealthiest companies. Second, A.I. often requires more computing power than traditional software, which can be expensive, and good data, which can be difficult to get, unless you happen to be a tech giant with nearly limitless access to both. “There’s something about the way A.I. is done these days … that increases the concentration of expertise and wealth and power in the hands of just a few companies,” Bengio says. Better resources attract better researchers, which leads to better innovations, which brings in more revenue, which buys more resources. “It sort of feeds itself,” he adds. Bengio’s earliest encounters with A.I. anticipated the rise of Big Tech. Growing up in Montreal in the 1970s, he was especially taken with science fiction books like Philip K. Dick’s novel Do Androids Dream of Electric Sheep?—in which sentient robots created by a megacorporation have gone rogue. In college, Bengio majored in computer engineering; he was in graduate school at McGill University when he came across a paper by Geoff Hinton and was lightning-struck, finding echoes of the sci-fi stories he had loved so much as a child. “I was like, ‘Oh my God. This is the thing I want to do,’ ” he recalls later. In time, Bengio, along with Hinton and LeCun, would become an important figure in a field known as deep learning, involving computer models called neural networks. But their research was littered with false starts and confounded ambitions. Deep learning was alluring in theory, but no one could make it work well in practice. “For years, at the machine-learning conferences, neural networks were out of favor, and Yoshua would be there cranking away on his neural net,” recalls Mozer, the University of Colorado professor, “and I’d be like, ‘Poor Yoshua, he’s so out of it.’ ” In the late 2000s it dawned on researchers why deep learning hadn’t worked well. Training neural networks at a high level required more computing power than had been available. Further, neural networks need good digital information in order to learn, and before the rise of the consumer Internet there hadn’t been enough of it for them to learn from. By the late 2000s, all that had changed, and soon large tech companies were applying the techniques of Bengio and his colleagues to achieve commercial milestones: translating languages, understanding speech, recognizing faces. By that time, Bengio’s brother Samy, also an A.I. researcher, was working at Google. Bengio was tempted to follow his brother and colleagues to Silicon Valley, but instead, in October 2016, he, Gagné, Chapados, and Real Ventures launched their own startup: Element AI. “Yoshua had no material ownership in any A.I. platform, despite being hounded over the last five years to do so, other than Element AI,” says Matt Ocko, a managing partner at DCVC, which invested in the company. “He had voted with his reputation.” To win customers, Element relied on the star power of its researchers, the reputational glitz of its funding, and a promise of more personalized service than Big Tech could provide. But its executives also worked another angle: In an age in which Google was competing to sell A.I. to the military, Facebook had played host to rogue actors who influence elections, and Amazon was gobbling up the global economy, Element could position itself as a less predaceous, more ethical A.I. outfit. This spring, I visited Element’s headquarters in Montreal’s Plateau District. The headcount had expanded dramatically, to 300, and judging from the colorful Post-it notes columned on the walls, so had the workload. In one meeting, a dozen Elementals, as employees call themselves, watched a demo of a product in development, in which a worker could enter questions on a Google-like screen—“What’s our hiring forecast?”—and get up-to-date answers. The answers would be based not just on existing information but also on the A.I.’s predictions about the future based on its understanding of business goals. As is typical at fast-growing startups, the employees I met seemed simultaneously energized and utterly exhausted. A persistent challenge for Element is the dearth of good data. The simplest way to train A.I. models is to feed them lots of well-labeled examples—thousands of cat images, or translated texts. Big Tech has access to so much consumer-oriented data that it’s all but impossible for anyone else to compete at building large-scale consumer products. But businesses, governments, and other institutions own huge amounts of private information. Even if a corporation uses Google for email, or Amazon for cloud computing, it doesn’t typically let those vendors access its internal databases about equipment malfunctions, or sales trends, or processing times. That’s where Element sees an opening. If it can access several companies’ databases of, say, product images, it can then—with customers’ permission—use all of that information to build a better product-recommendation engine. Big Tech companies are also selling A.I. products and services to businesses—IBM is squarely focused on it—but no one has cornered the market. Element’s bet is that if it can embed itself in these organizations, it can secure a corporate data advantage similar to the one Big Tech has in consumer products. Not that it has gotten anywhere close to that point. Element has signed up some prominent Canadian firms, including the Port of Montreal and Radio Canada, and counts more than 10 of the world’s 1,000 biggest companies as customers, but executives wouldn’t quantify their customers or name any non-Canadian ones. Products, too, are still in early stages of development. During the demo of the question-answering product, the project manager, Fran?ois Maillet, who is not a native English speaker, requested information about “how many time” employees had spent on a certain product. The A.I. was stumped, until Maillet revised the question to ask “how much time” had been spent. Maillet acknowledges the product has a long way to go. But he says Element wants it to become so intelligent that it can answer the deepest strategic questions. The example he offers—“What should we be doing?”—seemed to go beyond the strategic. It sounded quite nearly prayerful. LOOK NO FURTHER than Google’s employee revolt over its decision to provide A.I. to the Pentagon as evidence that tech companies’ stances on military use of A.I. have become an ethical litmus test. Bengio and his cofounders vowed early on to never build A.I. for offensive military purposes. But earlier this year, the Korea Advanced Institute of Science and Technology, a research university, announced it would partner with the defense unit of the South Korean conglomerate Hanwha, a major Element investor, to build military systems. Despite Element’s ties with Hanwha, Bengio signed an open letter boycotting the Korean institute until it promised not to “develop autonomous weapons lacking meaningful human control.” Gagné, more discreetly, wrote to Hanwha emphasizing that Element wouldn’t partner with companies building autonomous weapons. Soon Gagné and the scientists received assurances: The university and Hanwha wouldn’t be doing so. Autonomous weapons are neither the only ethical challenge facing A.I. nor the most serious one. Kate Crawford, a New York University professor who studies the societal implications of A.I., has written that all the “hand-wringing” over A.I. as a future existential threat distracts from existing problems, as “sexism, racism, and other forms of discrimination are being built into the machinelearning algorithms.” Since A.I. models are trained on the data that engineers feed it, any biases in the data will poison a given model. Tay, an A.I. chatbot deployed to Twitter by Microsoft to learn how humans talk, soon started spewing racist comments, like “Hitler was right.” Microsoft apologized, took Tay off-line, and said it is working to address data bias. Google’s A.I.-powered feature that uses selfies to help users find their doppelg?ngers in art matched African-Americans with stereotypical depictions of slaves and Asian-Americans with slant-eyed geishas, perhaps because of an overreliance on Western art. I am an Indian-American woman, and when I used the app, Google delivered me a portrait of a copperfaced, beleaguered-looking Native American chief. I also felt beleaguered, so Google got that part right. (A spokesman apologized and said Google is “committed to reducing unfair bias” in A.I.) Problems like these result from bias in the world at large, but it doesn’t help that the field of A.I. is believed to be even less diverse than the broader computer science community, which is dominated by white and Asian men. “The homogeneity of the field is driving all of these issues that are huge,” says Timnit Gebru, a researcher who has worked for Microsoft and others and is an Ethiopian-American woman. “They’re in this bubble, and they think they’re so liberal and enlightened, but they’re not able to see that they’re contributing to the problem.” Women make up 33% of Element’s workforce, 35% of its leadership, and 23% of technical roles—higher percentages than at many big tech companies. Its employees come from more than 25 countries: I met one researcher from Senegal who had joined in part because he couldn’t get a visa to stay in the U.S. after studying there on a Fulbright. But the company doesn’t break down its workforce by race, and during my visit, it appeared predominantly white and Asian, especially in the upper ranks. Anne Martel, the senior vice president of operations, is the only woman among Element’s seven top executives, and Omar Dhalla, the senior vice president of industry solutions, is the only person of color. Of the 24 academic fellows affiliated with Element, just three are female. Of 100 students listed on the website of Bengio’s lab, MILA, seven are women. (Bengio said the website is out of date and he doesn’t know the current gender breakdown.) Gebru is close with Bengio but does not exempt him from her criticisms. “I tell him that he’s signing letters against autonomous weapons and wants to stay independent, but he’s supplying the world with a mostly white or Asian group of males to create A.I.,” she said. “How can you think about world hunger without fixing your issue in your lab?” Bengio said he is “ashamed” about the situation and trying to address it, partly by widening recruitment and earmarking funding for students from underrepresented groups. Element, meanwhile, has hired a new vice president for people, Anne Mezei, who set diversity and inclusion as a top priority. To address possible ethical problems with its products, Element is hiring ethicists as fellows, to work alongside developers. It has also opened an AI for Good lab, in a London office directed by Julien Cornebise, a former researcher at Google DeepMind, where researchers are working, for free or at cost, with nonprofits, government organizations, and others on A.I. projects with social benefit. Still, ethical challenges persist. In early research, Element is basing some products on its own data; the question-answering tool, for example, is being trained partly on shared internal documents. Martel, the operations executive, tells me that because Element executives aren’t sure from an ethics standpoint how they might use A.I. for facial recognition, they plan to experiment with it on their own employees by installing video cameras that will, with employees’ permission, capture their faces to train the A.I. Executives will poll employees on their feelings about this, to refine their understanding of the ethical dimensions. “We want to figure it out through eating our own dog food,” Martel says. That means, of course, that any facial-recognition model will be based, at least at first, on faces that are not representative of the broader population. Martel says executives are aware of the issue: “We’re really concerned about not having the right level of representativeness, and we’re looking into solutions for that.” Even the question that Element’s product aims to answer for executives—What should we be doing?—is loaded with ethical quandaries. One could hardly fault a business-oriented A.I. for recommending whatever course of action maximizes profit. But how should it make those decisions? What social costs are tolerable? Who decides? As Bengio has acknowledged, as more organizations deploy A.I., millions of humans are likely to lose their jobs, though new ones will be created. Though Bengio and Gagné originally planned to pitch their services to small organizations, they have since pivoted to target the 2,000 largest companies in the world; Element’s need for large data sets turned out to be prohibitive for small organizations. In particular, they are targeting finance and supply-chain companies—the biggest of which aren’t exactly defenseless underdogs. Gagné says that as the technology improves, Element expects to sell it to smaller organizations as well. But until that happens, its plan to give an A.I. advantage to the world’s biggest companies would seem better-equipped to enrich powerful incumbent corporations than to spread A.I.’s benefits among the masses. Bengio believes the job of scientists is to keep pursuing A.I. discoveries. Governments should more aggressively regulate the field, he says, while distributing wealth more equally and investing in education and the social safety net, to mitigate A.I.’s inevitable negative effects. Of course, these positions assume governments have their citizens’ best interests in mind. Meanwhile, the U.S. government is cutting taxes for the rich, and the Chinese government, one of the world’s biggest funders of A.I. research, is using deep learning to monitor citizens. “I do think Yoshua believes that A.I. can be ethical, and that his can be the ethical A.I. company,” says Domingos, the University of Washington professor. “But to put it bluntly, Yoshua is a little naive. A lot of technologists are a little naive. They have this utopian view.” Bengio rejects the characterization. “As scientists, I believe that we have a responsibility to engage with both civil society and governments,” he says, “in order to influence minds and hearts in the direction we believe in.” ONE COLD, BRIGHT MORNING this spring, Element’s staff gathered for an off-site training in collaborative software design, in a high-ceilinged church that had been converted into an event space. The attendees, working in groups at round tables, had been assigned to invent a game to teach the fundamentals of A.I. I sat with some halfdozen employees, who had decided on a game about an A.I. named Sophia the Robot who had gone rogue and would need to be fought and captured, using, naturally, A.I. techniques. Mezei, the new VP for people, happened to be at this table. “I like the fact that it’s Sophia, because we need more women,” she interjected. “But I don’t like fighting.” There were murmurs of assent all around. An executive assistant suggested, “Maybe the goal is changing Sophia’s mindset so it’s about helping the world.” This was a more palatable version of the game, one better aligned with Element’s self-image. One employee told me, “At the office, we’re not allowed to talk about Skynet”—the antagonistic A.I. system from the Terminator franchise. Anyone who slips up has to put a dollar into a special jar. A colleague added, in a tone of great cheer, “We’re supposed to be positive and optimistic.” Later I visited Bengio’s lab at the University of Montreal, a warren of carceral, fluorescent-lit rooms filled with computer monitors and piled-up textbooks. In one room, some dozen young men were working on their A.I. models, exchanging math jokes, and contemplating their career paths. Overheard: “Microsoft has all these nice perks—you get cheaper airline tickets, cheap hotels.” “I go to Element AI once a week, and I get this computer.” “He’s a sellout.” “You can scream, ‘Sellout!’ in other fields, but not deep learning.” “Why not?” “Because in deep learning, everyone’s a sellout.” Bengio’s sellout-free vision, it seemed, had not quite been realized. Still, perhaps more than any other academic, Bengio has influence over A.I.’s future, by virtue of training the next generation of researchers. (One of his sons has become an A.I. researcher too; the other is a musician.) One afternoon I went to see Bengio in his office, a small, sparse room whose main features were a whiteboard across which someone had scrawled the phrase “Baby A.I.,” and a bookcase featuring such titles as The Cerebral Cortex of the Rat. Despite being an Element cofounder, Bengio acknowledged that he hadn’t been spending a lot of time at the offices; he had been preoccupied with frontiers in A.I. research that are far from commercial application. While tech companies have been focused on making A.I. better at what it does—recognizing patterns and drawing conclusions from them—Bengio wants to leapfrog those basics and start building machines that are more deeply inspired by human intelligence. He hesitated to describe what that might look like. But one can imagine a future in which machines wouldn’t just move products around a warehouse but navigate the real world. They wouldn’t just respond to commands but understand, and empathize with, humans. They wouldn’t just identify images; they’d create art. To that end, Bengio has been studying how the human brain operates. As one of his postdocs told me, brains “are proof that intelligent systems are possible.” One of Bengio’s pet projects is a game in which players teach a virtual child—the “Baby A.I.” from his whiteboard—about how the world operates by talking to the pretend infant, pointing, and so on: “We can use inspiration from how babies learn and how parents interact with their babies.” It seems far-fetched until you remember that Bengio’s once-outlandish notions now underpin some of Big Tech’s most mainstream technologies. While Bengio believes human-like A.I. is possible, he evinces impatience with the far-reaching ethical worries popularized by people like Elon Musk, premised on A.I.s outsmarting humans. Bengio is more interested in the ethical choices of the humans building and using A.I. “One of the greatest dangers is that people either deal with A.I. in an irresponsible way or maliciously—I mean for their personal gain,” he once told an interviewer. Other scientists share Bengio’s feelings, and yet, as A.I. research continues apace, it remains funded by the world’s most powerful governments, corporations, and investors. Bengio’s university lab is largely funded by Big Tech. At one point, during a discussion of the biggest tech companies, Bengio told me, “We want Element AI to become as large as one of these giants.” When I questioned whether he would then be perpetuating the same sort of concentration of wealth and power that he has decried, he replied, “The idea isn’t just to create one company and be the richest in the world. It’s to change the world, to change the way that business is done, to make it not as concentrated, to make it more democratic.” As much as I admired his position and believed in his intentions, his words didn’t sound much different from the corporate slogans once chosen by Big Tech. Don’t be evil. Make the world more open and connected. Creating an ethical business is less about founders’ intentions than about how, over time, business owners measure societal good against profit. What should we be doing? If computers are still struggling to answer that question, they should take some solace in knowing that we humans are not much better. This article originally appeared in the July 1, 2018 issue of Fortune. |
-
熱讀文章
-
熱門(mén)視頻