“Face-Swap” Porn Videos Have Women on Edge. How Can They Be Stopped?
財(cái)富中文網(wǎng) | Chinese translation by 樸成奎
In the darker corners of the Internet, you can now find celebrities like Emma Watson and Salma Hayek performing in pornographic videos. The clips are fake, of course—but it’s distressingly hard to tell. Recent improvements in artificial intelligence software have made it surprisingly easy to graft the heads of stars, and ordinary women, to the bodies of X-rated actresses to create realistic videos.

These explicit movies are just one strain of so-called “deepfakes”—clips that have been doctored so well they look real. Their arrival poses a threat to democracy; mischief makers can, and already have, used them to spread fake news. But another great danger of deepfakes is their use as a tool to harass and humiliate women.

There are plenty of celebrity deepfakes on pornographic websites, but Internet forums dedicated to custom deepfakes—men paying to create videos of ex-partners, co-workers, and others without their knowledge or consent—are proliferating. Creating these deepfakes isn’t difficult or expensive in light of the proliferation of A.I. software and the easy access to photos on social media sites like Facebook.

Yet the legal challenges for victims seeking to remove deepfakes can be daunting. While the law may be on their side, victims also face considerable obstacles—ones that are familiar to those who have sought to confront other forms of online harassment.

The First Amendment and Deepfakes

Charlotte Laws knows how devastating non-consensual pornography can be. A California author and former politician, Laws led a successful campaign to criminalize so-called “revenge porn” after someone posted nude photos of her teenage daughter on a notorious website. She is also alarmed by deepfakes.

“The distress of deepfakes is as bad as revenge porn,” she says. “Deepfakes are realistic, and their impact is compounded by the growth of the fake news world we’re living in.”

Laws adds that deepfakes have become a common way to humiliate or terrorize women.
In a survey she conducted of 500 women who had been victims of revenge porn, Laws found that 12% had also been subjected to deepfakes.

One way to address the problem could involve lawmakers expanding state laws banning revenge porn. These laws, which now exist in 41 U.S. states, are of recent vintage and came about as politicians began to change their attitudes toward non-consensual pornography.

“When I began, it wasn’t something people addressed,” Laws says. “Those who heard about it were against the victims, from media to legislators to law enforcement. But it’s really gone in the other direction, and now it’s about protecting the victims.”

New criminal laws could be one way to fight deepfakes. Another approach is to bring civil lawsuits against the perpetrators. As the Electronic Frontier Foundation notes in a blog post, those subjected to deepfakes could sue for defamation or for portraying them in a “false light.” They could also file a “right of publicity” claim, alleging the deepfake makers profited from their image without permission.

All of these potential solutions, however, could bump up against a powerful obstacle: free speech law. Anyone sued over deepfakes could claim the videos are a form of cultural or political expression protected by the First Amendment.

Whether this argument would persuade a judge is another matter. Deepfakes are new enough that courts haven’t issued any decisive ruling on which of them might count as protected speech. The issue is even more complicated given the messy state of the law related to the right of publicity.

“The First Amendment should be the same across the country in right of publicity cases, but it’s not,” says Jennifer Rothman, a professor at Loyola Law School and author of a book about privacy and the right of publicity.
“Different circuit courts are doing different things.”

In the case of deepfakes involving pornography, however, Rothman predicts that most judges would be unsympathetic to a First Amendment claim—especially in cases where the victims are not famous. A free speech defense to claims of false light or defamation, she argues, would turn in part on whether the deepfake was presented as true and would be analyzed differently for public figures. A celebrity victim would have the added hurdle of showing “actual malice,” the legal term for knowing the material was fake, in order to win the case.

Any criminal laws aimed at deepfakes would likely survive First Amendment scrutiny so long as they narrowly covered sexual exploitation and did not include material created as art or political satire.

In short, free speech laws are unlikely to be a serious impediment for targets of deepfake pornography. Unfortunately, even if the law is on their side, the victims nonetheless have few practical options to take down the videos or punish those responsible for them.

A New Takedown System?

If you discover something false or unpleasant about you on the Internet and move to correct it, you’re likely to encounter a further frustration: There are few practical ways to address it.

“Trying to protect yourself from the Internet and its depravity is basically a lost cause … The Internet is a vast wormhole of darkness that eats itself,” actress Scarlett Johansson, whose face appears in numerous deepfakes, recently told the Washington Post.

Why is Johansson so cynical? Because the fundamental design of the Internet—distributed, without a central policing authority—makes it easy for people to anonymously post deepfakes and other objectionable content. And while it’s possible to identify and punish such trolls using legal action, the process is slow and cumbersome—especially for those who lack financial resources.

According to Laws, it typically takes $50,000 to pursue such a lawsuit.
That money may be hard to recoup, since defendants are often broke or based in a far-flung location. This leaves the option of going after the website that published the offending material, but this, too, is likely to prove fruitless.

The reason is a powerful law known as Section 230, which creates a legal shield for website operators as to what users post on their sites. It ensures that a site like Craigslist, for instance, isn’t liable if someone uses its classified ads to write defamatory messages.

In the case of sites like 8Chan and Mr. Deepfakes, which host numerous deepfake videos, the operators can claim immunity because it is not them but their users who are uploading the clips.

The legal shield is not absolute, however. It contains an exception for intellectual property violations, which obliges websites to take down material if they receive a notice from a copyright owner. (The process also lets site operators file a counter-notice and restore the material if they object.)

The intellectual property exception could help deepfake victims defeat the websites’ immunity, notably if the victim invokes a right of publicity. But here again the law is muddled. According to Rothman, courts are unclear on whether the exception applies to state intellectual property laws—such as the right of publicity—or only to federal ones like copyright and trademark.

All of this raises the question of whether Congress and the courts, which have been chipping away at Section 230’s broad immunity in recent years, should change the law and make it easier for deepfake victims to remove the images. Laws believes this would be a useful measure.

“I don’t feel the same as Scarlett Johansson,” Laws says. “I’ve seen the huge improvements in revenge porn being made over the past five years.
I have great hope for continual improvement and amendments, and that we’ll get these issues under control eventually.”

Indeed, those who share Laws’ views have momentum on their side, as more people look askance at Internet platforms that, in the words of the legal scholar Rebecca Tushnet, enjoy “power without responsibility.” And in a closely watched case involving the dating app Grindr, a court is weighing whether to require website operators to be more active in purging their platforms of abusive behavior.

Not everyone is convinced this is a good idea, however. Section 230 is regarded by many as a visionary piece of legislation, which allowed U.S. Internet companies to flourish in the absence of legal threats. The Electronic Frontier Foundation has warned that eroding immunity for websites could stifle business and free expression.

This raises the question of whether Congress could draft a law narrow enough to help victims of deepfakes without such unintended consequences. As a cautionary tale, Annemarie Bridy, a law professor at the University of Idaho, points to the misuse of the copyright takedown system, in which companies and individuals have acted in bad faith to remove legitimate criticism and other legal content.

Still, given what’s at stake with pornographic deepfake videos, Bridy says, it could be worth drafting a new law.

“The seriousness of the harm from deepfakes, to me, justifies an expeditious remedy,” she says. “But to get the balance right, we’d also need an immediate, meaningful right of appeal and safeguards against abusive notices intended to censor legitimate content under false pretenses.”