我想看一级黄色片_欧美性爱无遮挡电影_色丁香视频网站中文字幕_视频一区 视频二区 国产,日本三级理论日本电影,午夜不卡免费大片,国产午夜视频在线观看,18禁无遮拦无码国产在线播放,在线视频不卡国产在线视频不卡 ,,欧美一及黄片,日韩国产另类

首頁(yè) 500強(qiáng) 活動(dòng) 榜單 商業(yè) 科技 商潮 專(zhuān)題 品牌中心
雜志訂閱

人工智能亟需加強(qiáng)監(jiān)管

WILL HUNT
2020-12-15

人工智能創(chuàng)新需要明確的法規(guī),,而拜登政府可能會(huì)頒布這些法規(guī),。

文本設(shè)置
小號(hào)
默認(rèn)
大號(hào)
Plus(0條)

圖片來(lái)源:GROVE PASHLEY—GETTY IMAGES

安迪?泰勒的目標(biāo)既適度又雄心勃勃:首次將人工智能引入空中交通管制。作為一名職業(yè)空中交通管制員,,泰勒很快就意識(shí)到了計(jì)算機(jī)視覺(jué)技術(shù)的改進(jìn)可以為他的職業(yè)帶來(lái)潛在的好處,。

示例:每次飛機(jī)離開(kāi)跑道時(shí),空中交通管制員都必須進(jìn)行標(biāo)記,,并通知下一架飛機(jī)跑道可以使用,。這項(xiàng)簡(jiǎn)單、重復(fù)的任務(wù)占據(jù)了管制員所有的注意力,,使其無(wú)法觀察到停機(jī)坪上發(fā)生的其他任何情況,。即使稍有耽擱,在一天中累積起來(lái)所產(chǎn)生的影響也不能小覷,,尤其是在泰勒工作的倫敦希思羅機(jī)場(chǎng),,該機(jī)場(chǎng)的航班預(yù)訂從早上六點(diǎn)到晚上11點(diǎn)30分從不間斷。

如果人工智能系統(tǒng)能夠自動(dòng)處理此項(xiàng)工作,,會(huì)怎么樣,?NATS是英國(guó)唯一的空中交通管制服務(wù)提供商,泰勒現(xiàn)在帶頭進(jìn)行NATS的開(kāi)創(chuàng)性工作,,以回答該問(wèn)題,,并采用人工智能來(lái)承擔(dān)這項(xiàng)工作以及相關(guān)的空中交通管制任務(wù)。

他創(chuàng)新路上的最大障礙是什么,?人工智能航空安全法規(guī)尚未成型,。

缺乏法規(guī)可能會(huì)阻礙像泰勒這樣的創(chuàng)新者,有些人可能會(huì)認(rèn)為這種說(shuō)法有悖常理,。畢竟,,通常情況下,無(wú)阻礙創(chuàng)新的支持者與那些擔(dān)心不受約束的競(jìng)爭(zhēng)會(huì)造成社會(huì)危害的人們?cè)趪@監(jiān)管進(jìn)行爭(zhēng)論時(shí)一直都站在對(duì)立面,。

特朗普政府加入了前一陣營(yíng),,主張各機(jī)構(gòu)對(duì)新法規(guī)采取“低干預(yù)”的手段,其認(rèn)為新的法規(guī)“不必要且會(huì)阻礙人工智能創(chuàng)新和發(fā)展”,。

作為一個(gè)日益強(qiáng)大的政治主體,,許多硅谷的精英們也同意這種觀點(diǎn),,其對(duì)監(jiān)管的厭惡眾所周知。

但是,,盡管不干預(yù)的手段可能會(huì)促進(jìn)互聯(lián)網(wǎng),、航空和其他行業(yè)的創(chuàng)新,但它可能會(huì)成為進(jìn)步的障礙,。我在加利福尼亞大學(xué)伯克利分校(UC Berkeley)的人工智能安全倡議研究中心(AI Security Initiative)的報(bào)告中作出了解釋,。部分問(wèn)題在于,航空安全法規(guī)范圍太廣,,且與人工智能情況嚴(yán)重不符,,因此需要對(duì)現(xiàn)有規(guī)則進(jìn)行大量修訂和補(bǔ)充。

例如,,飛機(jī)認(rèn)證過(guò)程遵循基于邏輯的方法,,其中每一個(gè)可能的輸入和輸出都有人關(guān)注并對(duì)其進(jìn)行分析。但是這種方法通常對(duì)人工智能模型不起作用,,其中許多模型的反應(yīng)不同,,甚至對(duì)輸入的輕微擾動(dòng)反應(yīng)也不同,從而產(chǎn)生了幾乎無(wú)限量的結(jié)果需要考慮,。

應(yīng)對(duì)這一挑戰(zhàn)不僅僅是修改現(xiàn)有監(jiān)管語(yǔ)言的問(wèn)題:它需要人們進(jìn)行新的技術(shù)研究,,構(gòu)建行為可預(yù)測(cè)和可解釋的人工智能系統(tǒng),以及制定新的技術(shù)標(biāo)準(zhǔn)來(lái)對(duì)標(biāo)安全性和其他性能標(biāo)準(zhǔn),。在制定這些標(biāo)準(zhǔn)和法規(guī)之前,,企業(yè)必須完全從零開(kāi)始,為人工智能應(yīng)用程序構(gòu)建安全案例,。即使對(duì)于像NATS這樣的開(kāi)路先鋒企業(yè)來(lái)說(shuō),,這也是一項(xiàng)艱巨的任務(wù)。

“這絕對(duì)是個(gè)挑戰(zhàn),,”泰勒今年早些時(shí)候告訴我,,“因?yàn)闆](méi)有指南或要求,我沒(méi)辦法指著某個(gè)指南或要求說(shuō),,‘我遵循的就是這個(gè)要求,。’”

一個(gè)更深層次的問(wèn)題是,,空中交通管制企業(yè)以及波音公司和空中客車(chē)公司等制造商明白,,制定人工智能新規(guī)則不可避免。雖然他們渴望獲得人工智能帶來(lái)的成本和安全方面的好處,,但可以理解的是,,它們中大多數(shù)不愿意進(jìn)行重大投資,因?yàn)樗麄儧](méi)有信心,能夠保證由人工智能產(chǎn)生的產(chǎn)品將符合未來(lái)的法規(guī),。

其結(jié)果可能是采用人工智能的速度大幅放緩:如果監(jiān)管機(jī)構(gòu)無(wú)法獲得更多資源以及白宮不施加強(qiáng)有力的領(lǐng)導(dǎo),,設(shè)定標(biāo)準(zhǔn)和制定符合人工智能的法規(guī)的過(guò)程需要花費(fèi)幾年甚至幾十年的時(shí)間。

即將上任的拜登政府準(zhǔn)備進(jìn)行強(qiáng)有力的領(lǐng)導(dǎo),,這與特朗普政府“治理人工智能”所采取的不干預(yù)手段形成了鮮明對(duì)比,。

在影響拜登政府對(duì)待人工智能監(jiān)管的態(tài)度方面,商界領(lǐng)袖和技術(shù)專(zhuān)家可以發(fā)揮關(guān)鍵作用,。他們首先可能鼓勵(lì)政府優(yōu)先考慮進(jìn)行和樹(shù)立與支持航空業(yè)和其他行業(yè)創(chuàng)新的人工智能有關(guān)的安全研究和監(jiān)管框架,。或者,,他們可以發(fā)揮自己的專(zhuān)長(zhǎng):在私營(yíng)部門(mén)開(kāi)發(fā)原型解決方案(例如,,見(jiàn)OpenAI關(guān)于人工智能治理的監(jiān)管市場(chǎng)提案)。

如果他們的努力取得成功,,安迪?泰勒和其他企業(yè)家就可以放開(kāi)拳腳,,在從航空到醫(yī)療保健再到軍事等安全至上的行業(yè)中進(jìn)行創(chuàng)新。如果未能取得成功,,NATS等少數(shù)企業(yè)仍將嘗試在這些行業(yè)開(kāi)發(fā)新的人工智能應(yīng)用程序。但這并不容易,,而且會(huì)增加發(fā)生事故的風(fēng)險(xiǎn),。人工智能的潛在好處——改進(jìn)的醫(yī)療診斷、經(jīng)濟(jì)適用的城市空氣流動(dòng)性等等——在技術(shù)上將仍然可行,,但總是要幾年時(shí)間,。

因此,支持創(chuàng)新的商業(yè)領(lǐng)袖和技術(shù)專(zhuān)家應(yīng)該減少對(duì)新的法規(guī)會(huì)使人工智能發(fā)展減緩的擔(dān)心,,而是努力制定加快發(fā)展所需的智能法規(guī),。(財(cái)富中文網(wǎng))

威爾?亨特是喬治敦大學(xué)安全與新興技術(shù)中心(Georgetown University’s Center for Security and Emerging Technology)的研究分析師以及加州大學(xué)伯克利分校(University of California at Berkeley)的政治學(xué)在讀博士生。他與別人合作編寫(xiě)了《華爾街日?qǐng)?bào)》上的科技政策社評(píng),。先前,,他曾是加州大學(xué)伯克利分校人工智能安全倡議研究中心的研究生研究員。

翻譯:曉田

審校:汪皓

安迪?泰勒的目標(biāo)既適度又雄心勃勃:首次將人工智能引入空中交通管制,。作為一名職業(yè)空中交通管制員,,泰勒很快就意識(shí)到了計(jì)算機(jī)視覺(jué)技術(shù)的改進(jìn)可以為他的職業(yè)帶來(lái)潛在的好處。

示例:每次飛機(jī)離開(kāi)跑道時(shí),,空中交通管制員都必須進(jìn)行標(biāo)記,,并通知下一架飛機(jī)跑道可以使用。這項(xiàng)簡(jiǎn)單,、重復(fù)的任務(wù)占據(jù)了管制員所有的注意力,,使其無(wú)法觀察到停機(jī)坪上發(fā)生的其他任何情況。即使稍有耽擱,在一天中累積起來(lái)所產(chǎn)生的影響也不能小覷,,尤其是在泰勒工作的倫敦希思羅機(jī)場(chǎng),,該機(jī)場(chǎng)的航班預(yù)訂從早上六點(diǎn)到晚上11點(diǎn)30分從不間斷。

如果人工智能系統(tǒng)能夠自動(dòng)處理此項(xiàng)工作,,會(huì)怎么樣,?NATS是英國(guó)唯一的空中交通管制服務(wù)提供商,泰勒現(xiàn)在帶頭進(jìn)行NATS的開(kāi)創(chuàng)性工作,,以回答該問(wèn)題,,并采用人工智能來(lái)承擔(dān)這項(xiàng)工作以及相關(guān)的空中交通管制任務(wù)。

他創(chuàng)新路上的最大障礙是什么,?人工智能航空安全法規(guī)尚未成型,。

缺乏法規(guī)可能會(huì)阻礙像泰勒這樣的創(chuàng)新者,有些人可能會(huì)認(rèn)為這種說(shuō)法有悖常理,。畢竟,,通常情況下,無(wú)阻礙創(chuàng)新的支持者與那些擔(dān)心不受約束的競(jìng)爭(zhēng)會(huì)造成社會(huì)危害的人們?cè)趪@監(jiān)管進(jìn)行爭(zhēng)論時(shí)一直都站在對(duì)立面,。

特朗普政府加入了前一陣營(yíng),,主張各機(jī)構(gòu)對(duì)新法規(guī)采取“低干預(yù)”的手段,其認(rèn)為新的法規(guī)“不必要且會(huì)阻礙人工智能創(chuàng)新和發(fā)展”,。

作為一個(gè)日益強(qiáng)大的政治主體,,許多硅谷的精英們也同意這種觀點(diǎn),其對(duì)監(jiān)管的厭惡眾所周知,。

但是,,盡管不干預(yù)的手段可能會(huì)促進(jìn)互聯(lián)網(wǎng)、航空和其他行業(yè)的創(chuàng)新,,但它可能會(huì)成為進(jìn)步的障礙,。我在加利福尼亞大學(xué)伯克利分校(UC Berkeley)的人工智能安全倡議研究中心(AI Security Initiative)的報(bào)告中作出了解釋。部分問(wèn)題在于,,航空安全法規(guī)范圍太廣,,且與人工智能情況嚴(yán)重不符,因此需要對(duì)現(xiàn)有規(guī)則進(jìn)行大量修訂和補(bǔ)充,。

例如,,飛機(jī)認(rèn)證過(guò)程遵循基于邏輯的方法,其中每一個(gè)可能的輸入和輸出都有人關(guān)注并對(duì)其進(jìn)行分析,。但是這種方法通常對(duì)人工智能模型不起作用,,其中許多模型的反應(yīng)不同,甚至對(duì)輸入的輕微擾動(dòng)反應(yīng)也不同,,從而產(chǎn)生了幾乎無(wú)限量的結(jié)果需要考慮,。

應(yīng)對(duì)這一挑戰(zhàn)不僅僅是修改現(xiàn)有監(jiān)管語(yǔ)言的問(wèn)題:它需要人們進(jìn)行新的技術(shù)研究,,構(gòu)建行為可預(yù)測(cè)和可解釋的人工智能系統(tǒng),以及制定新的技術(shù)標(biāo)準(zhǔn)來(lái)對(duì)標(biāo)安全性和其他性能標(biāo)準(zhǔn),。在制定這些標(biāo)準(zhǔn)和法規(guī)之前,,企業(yè)必須完全從零開(kāi)始,為人工智能應(yīng)用程序構(gòu)建安全案例,。即使對(duì)于像NATS這樣的開(kāi)路先鋒企業(yè)來(lái)說(shuō),,這也是一項(xiàng)艱巨的任務(wù)。

“這絕對(duì)是個(gè)挑戰(zhàn),,”泰勒今年早些時(shí)候告訴我,,“因?yàn)闆](méi)有指南或要求,我沒(méi)辦法指著某個(gè)指南或要求說(shuō),,‘我遵循的就是這個(gè)要求,。’”

一個(gè)更深層次的問(wèn)題是,,空中交通管制企業(yè)以及波音公司和空中客車(chē)公司等制造商明白,,制定人工智能新規(guī)則不可避免。雖然他們渴望獲得人工智能帶來(lái)的成本和安全方面的好處,,但可以理解的是,,它們中大多數(shù)不愿意進(jìn)行重大投資,因?yàn)樗麄儧](méi)有信心,,能夠保證由人工智能產(chǎn)生的產(chǎn)品將符合未來(lái)的法規(guī),。

其結(jié)果可能是采用人工智能的速度大幅放緩:如果監(jiān)管機(jī)構(gòu)無(wú)法獲得更多資源以及白宮不施加強(qiáng)有力的領(lǐng)導(dǎo),設(shè)定標(biāo)準(zhǔn)和制定符合人工智能的法規(guī)的過(guò)程需要花費(fèi)幾年甚至幾十年的時(shí)間,。

即將上任的拜登政府準(zhǔn)備進(jìn)行強(qiáng)有力的領(lǐng)導(dǎo),這與特朗普政府“治理人工智能”所采取的不干預(yù)手段形成了鮮明對(duì)比,。

在影響拜登政府對(duì)待人工智能監(jiān)管的態(tài)度方面,,商界領(lǐng)袖和技術(shù)專(zhuān)家可以發(fā)揮關(guān)鍵作用。他們首先可能鼓勵(lì)政府優(yōu)先考慮進(jìn)行和樹(shù)立與支持航空業(yè)和其他行業(yè)創(chuàng)新的人工智能有關(guān)的安全研究和監(jiān)管框架,?;蛘撸麄兛梢园l(fā)揮自己的專(zhuān)長(zhǎng):在私營(yíng)部門(mén)開(kāi)發(fā)原型解決方案(例如,,見(jiàn)OpenAI關(guān)于人工智能治理的監(jiān)管市場(chǎng)提案),。

如果他們的努力取得成功,安迪?泰勒和其他企業(yè)家就可以放開(kāi)拳腳,,在從航空到醫(yī)療保健再到軍事等安全至上的行業(yè)中進(jìn)行創(chuàng)新,。如果未能取得成功,NATS等少數(shù)企業(yè)仍將嘗試在這些行業(yè)開(kāi)發(fā)新的人工智能應(yīng)用程序,。但這并不容易,,而且會(huì)增加發(fā)生事故的風(fēng)險(xiǎn),。人工智能的潛在好處——改進(jìn)的醫(yī)療診斷、經(jīng)濟(jì)適用的城市空氣流動(dòng)性等等——在技術(shù)上將仍然可行,,但總是要幾年時(shí)間,。

因此,支持創(chuàng)新的商業(yè)領(lǐng)袖和技術(shù)專(zhuān)家應(yīng)該減少對(duì)新的法規(guī)會(huì)使人工智能發(fā)展減緩的擔(dān)心,,而是努力制定加快發(fā)展所需的智能法規(guī),。(財(cái)富中文網(wǎng))

威爾?亨特是喬治敦大學(xué)安全與新興技術(shù)中心(Georgetown University’s Center for Security and Emerging Technology)的研究分析師以及加州大學(xué)伯克利分校(University of California at Berkeley)的政治學(xué)在讀博士生。他與別人合作編寫(xiě)了《華爾街日?qǐng)?bào)》上的科技政策社評(píng),。先前,,他曾是加州大學(xué)伯克利分校人工智能安全倡議研究中心的研究生研究員。

翻譯:曉田

審校:汪皓

Andy Taylor has a goal both modest and ambitious: bring artificial intelligence, or A.I., to air traffic control for the first time. A career air traffic controller, Taylor was quick to see the potential benefits that advances in computer vision technology could bring to his profession.

Example: Every time a plane clears its runway, an air traffic controller must flag it and notify the next plane that the runway is free. This simple, repetitive task takes controllers’ attention away from everything else that’s happening on the tarmac. Even short delays can add up considerably over the course of a day—especially at airports such as London’s Heathrow, where Taylor works, which has flights booked end-to-end from six in the morning till 11:30 at night.

What if an A.I. system could handle this work autonomously? Taylor now leads the groundbreaking effort by NATS, Britain’s sole air traffic control provider, to answer that question, and to bring A.I. to bear on this and related air traffic control tasks.

His biggest obstacle to innovation? The nonexistence of A.I. safety regulations for aviation.

That a lack of regulations might obstruct innovators like Taylor might be counterintuitive to some. After all, arguments around regulation usually pit proponents of unencumbered innovation against those concerned about social harms resulting from unchecked competition.

The Trump administration falls into the former camp, advocating that agencies adopt a light-touch approach toward new regulations, which it feels could “needlessly hamper A.I. innovation and growth.”

So do many Silicon Valley elites—an increasingly powerful political constituency with a well-documented distaste for regulation.

But while a hands-off approach might foster innovation on the Internet, in aviation and other industries it can be an obstacle to progress. In a report from UC Berkeley’s AI Security Initiative, I explain why. Part of the problem is that safety regulations for aviation are both extensive and deeply incompatible with A.I., necessitating broad revisions and additions to existing rules.

For example, aircraft certification processes follow a logic-based approach in which every possible input and output receives attention and analysis. But this approach often doesn’t work for A.I. models, many of which react differently even to slight perturbations of input, generating a nearly infinite number of outcomes to consider.

Addressing this challenge isn’t a mere matter of modifying existing regulatory language: It requires novel technical research on building A.I. systems with predictable and explainable behavior and the development of new technical standards for benchmarking safety and other performance criteria. Until these standards and regulations are developed, firms will have to build safety cases for A.I. applications entirely from scratch—a tall order, even for pathbreaking firms like NATS.

“It’s absolutely a challenge,” Taylor told me earlier this year, “because there’s no guidance or requirements that I can point to and say, ‘I’m using that particular requirement.’”

A further issue is that air traffic control firms, as well as manufacturers such as Boeing and Airbus, know that new rules for A.I. are inevitable. While they are eager to reap the cost and safety benefits offered by A.I., most are understandably reluctant to make serious investments without confidence that the resulting product will be compatible with future regulations.

The result could be a major slowdown in A.I. adoption: Without more resources for regulators and strong leadership from the White House, the process of setting standards and developing A.I.-appropriate regulations will take years or even decades.

The incoming Biden administration is poised to offer that leadership, striking a contrast with the Trump administration’s light-touch approach to A.I. governance.

Business leaders and technologists have a key role to play in influencing the Biden administration’s attitude toward A.I. regulation. They might start by encouraging the administration to prioritize A.I. safety research and regulatory frameworks for A.I. that support innovation in aviation and other industries. Or they could do what they do best: develop prototype solutions in the private sector (for a great example, see OpenAI’s proposal of regulatory markets for A.I. governance).

If successful, these efforts could free up Andy Taylor and other entrepreneurs to innovate in safety-critical industries from aviation to health care to the military. If not, a handful of companies like NATS will still try to develop new A.I. applications in these industries. But it won’t be easy and could increase the risk of accidents. The potential benefits of A.I.—improved medical diagnoses, affordable urban air mobility, and much more—would remain technically feasible, but always a few years away.

Pro-innovation business leaders and technologists should therefore worry less about new regulations slowing down progress and instead work on developing the smart regulations required to speed it up.

Will Hunt is a research analyst at Georgetown University’s Center for Security and Emerging Technology and a political science Ph.D. student at the University of California at Berkeley. He has coauthored commentary on technology policy in the Wall Street Journal, and he was previously a graduate researcher at the UC Berkeley AI Security Initiative.

財(cái)富中文網(wǎng)所刊載內(nèi)容之知識(shí)產(chǎn)權(quán)為財(cái)富媒體知識(shí)產(chǎn)權(quán)有限公司及/或相關(guān)權(quán)利人專(zhuān)屬所有或持有,。未經(jīng)許可,,禁止進(jìn)行轉(zhuǎn)載、摘編,、復(fù)制及建立鏡像等任何使用,。
0條Plus
精彩評(píng)論
評(píng)論

撰寫(xiě)或查看更多評(píng)論

請(qǐng)打開(kāi)財(cái)富Plus APP

前往打開(kāi)
熱讀文章