The War on Bots Is Coming. Are We Ready?
Fake news. Fake social media accounts. Fake online poll takers. Fake ticket buyers. And behind them all: the prolific fakery of botnets. When will we get real and stop them?

Malicious bots account for nearly 20% of all Internet traffic. These automated computer scripts have been responsible for stealing content from commercial websites, shutting websites down, swaying advertising metrics, spamming forums, and snatching away Hamilton tickets for exorbitant resale.
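To make that concrete for technically minded readers, here is a minimal sketch, in Python, of the kind of first-pass heuristic a site operator might use to spot automated traffic in web server logs: flag clients that identify themselves as scripts, or that request pages at inhuman rates. The field names, keywords, and threshold below are assumptions invented for this illustration, not a description of any real product's detection logic.

    from collections import defaultdict

    # Illustrative values only; real systems tune these per site and per endpoint.
    MAX_REQUESTS_PER_MINUTE = 120
    SCRIPTED_AGENT_KEYWORDS = ("curl", "python-requests", "scrapy", "headless")

    def flag_bot_candidates(log_entries):
        """Return the set of client IPs whose traffic looks automated.

        Each log entry is assumed to be a dict such as:
            {"ip": "203.0.113.7", "minute": "2018-02-16T10:31", "user_agent": "curl/7.58.0"}
        """
        requests_per_minute = defaultdict(int)
        flagged = set()
        for entry in log_entries:
            key = (entry["ip"], entry["minute"])
            requests_per_minute[key] += 1
            agent = entry.get("user_agent", "").lower()
            if any(keyword in agent for keyword in SCRIPTED_AGENT_KEYWORDS):
                flagged.add(entry["ip"])  # the client admits to being a script
            if requests_per_minute[key] > MAX_REQUESTS_PER_MINUTE:
                flagged.add(entry["ip"])  # far more requests than a human reader would make
        return flagged

A filter this crude catches only the laziest offenders; serious bots rotate IP addresses and spoof ordinary browser user-agents, which is why the problem has outgrown simple IT hygiene and become the policy question this piece is about.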
But revelations about Russian bots meddling in the U.S. election and a scorching New York Times investigation into the selling of fake Twitter followers and retweets vividly illustrate that the bot epidemic is even more severe than most people realized.

And yet the bots march on, aided by a double whammy: murky laws governing their creation and sale, and social media companies that have too often turned a blind eye to the veracity of their reported user numbers.

Tightening our defenses against malicious bots won't be easy, but recent events show that the effort is warranted. Bots should be considered nothing less than a public enemy.

Bots infiltrate social media

Not long ago, bots were mainly thought of as an IT problem, or a somewhat esoteric business one: the main culprits behind web scraping, brute force attacks, competitive data mining, account hijacking, unauthorized vulnerability scans, spam, and click fraud. But the use of bots to manipulate elections and political discussion via the major social media platforms is a new and unnerving trend.

In October, members of Congress hauled executives from Facebook, Twitter, and Google into a hearing to explain Russian interference via their platforms in the 2016 presidential campaign. The executives promised to do better. And yet in late January, top congressional Democrats called on Facebook and Twitter to analyze the role of Russian bots in the online campaign to release a memo containing classified information about the federal investigation into Russia's meddling.

On Feb. 16, Special Counsel Robert Mueller filed an indictment accusing 13 Russians of running a bot farm and disinformation operation that spread pro-Donald Trump propaganda on social media.

Bots are more prevalent on Twitter than many realize. While Twitter testified before Congress that about 5% of its accounts are run by bots, some studies have put the number as high as 15%. In November, Facebook told shareholders that around 60 million of its average monthly users, roughly 2%, may be fake accounts.

Social media companies, just like online publishers, have a vested interest in letting bots exist on their platforms, because monthly active users are one of their main measurements of success. Accounts, human or not, are accounts.

Stopping the madness

Social media companies' disingenuous Captain Renault act must stop. (Renault is the police captain in Casablanca who gambles in the hero's bar and then declares, "I'm shocked, shocked, to find that gambling is going on here.") With its ability to influence opinions, social media does remarkable harm by playing a role in the rigging of elections and public debate. So social media companies must step up and more aggressively self-police.

We know they can do it. Look at how more than a million followers disappeared from the accounts of dozens of prominent Twitter users right after the New York Times investigation was published. I doubt that was a coincidence.

Twitter should consider extending its "verified" program, the blue badge that tells people an account of public interest is authentic, to all human users. This would be a huge technological undertaking; bots are hard to prevent precisely because they behave the way a legitimate user would. But the same artificial intelligence technologies that allow bots to emulate humans could be used to verify humans.
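As a rough illustration of what "using AI to verify humans" could look like, here is a minimal sketch of a supervised classifier that scores accounts on a few behavioral features. It uses scikit-learn; the feature names, training rows, and threshold are hypothetical, and nothing here reflects how Twitter's own verification or bot-detection systems actually work, since those are not public.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    # Hypothetical per-account features:
    # [posts_per_day, followers_to_following_ratio, has_profile_photo, account_age_days]
    X_train = np.array([
        [2.5,   1.80, 1, 2400],  # labeled human
        [4.0,   0.90, 1, 1100],  # labeled human
        [450.0, 0.01, 0,   12],  # labeled bot: extreme volume, brand-new account
        [300.0, 0.02, 0,   30],  # labeled bot
    ])
    y_train = np.array([0, 0, 1, 1])  # 0 = human, 1 = bot

    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X_train, y_train)

    def looks_automated(features, threshold=0.5):
        """Return True when the model rates the account more likely bot than human."""
        prob_bot = model.predict_proba([features])[0][1]
        return prob_bot >= threshold

    # A high-volume, photo-less, week-old account scores as a likely bot.
    print(looks_automated([380.0, 0.05, 0, 7]))

The point is not that four features settle the question; it is that the behavioral signals bots rely on to blend in, such as posting cadence, follower graphs, and profile completeness, can be turned around to tell humans and machines apart at scale.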
The government's role

Meanwhile, government needs to join the fight against bad bots. This won't be easy, as bot promulgators are anonymous, and it is difficult to legislate against those you cannot identify.

The bot problem didn't prompt its first piece of federal legislation until September 2016, when Congress passed the anti-ticket-scalping Better Online Ticket Sales (BOTS) Act. Interestingly, the ticket problem persists despite the law, in part because the Federal Trade Commission has done little to enforce it.

A good next move for Congress would be a long-overdue update of the Computer Fraud and Abuse Act, the 1986 law that makes it unlawful to break into a computer to access or alter information and that, astoundingly, still serves as a legal guidepost today. U.S. law needs a clearer definition of what is allowed and what is not.

States can play a role too, as evidenced by New York Attorney General Eric Schneiderman's laudable decision to investigate Devumi, the company that sells fake social media followers and was the subject of the New York Times investigation.

Enough is enough

Finally, we as consumers should say we're tired of these shenanigans. To be fair, there are two victims here: the social media companies and their users. Twitter's founders didn't build the platform expecting it to come under attack from the Russians; they wanted people to communicate. Users didn't expect their profiles to be stolen and their accounts to be abused. Nevertheless, we can demand that social media platforms be more transparent, or else we won't use them.

It's high time to recognize that bad bots are a serious threat and to start addressing the problem head-on. The fakery can't be allowed to continue, or we all suffer.

Rami Essaid is co-founder and chairman of Distil Networks, a bot detection and mitigation company.