In May 2019 a group of machine learning engineers released an audio clip, created with their RealTalk technology, that sounded like podcaster Joe Rogan talking about investing in a new hockey team made up of chimpanzees.897 It wasn’t perfect, but if you didn’t know it was fake before you heard it, you might have been fooled into thinking it was real. The researchers admitted, “the societal implications for technologies like speech synthesis are massive. And the implications will affect everyone.”898

“Right now, technical expertise, ingenuity, computing power and data are required to make models like RealTalk perform well. So not just anyone can go out and do it. But in the next few years (or even sooner), we’ll see the technology advance to the point where only a few seconds of audio are needed to create a life-like replica of anyone’s voice on the planet. It’s pretty f*cking scary,” the creators wrote on their blog.899

They went on to list some of the ways this technology could be abused “if the technology got into the wrong hands.” These include, “Spam callers impersonating your mother or spouse to obtain personal information. Impersonating someone for the purposes of bullying or harassment. Gaining entrance to high security clearance areas by impersonating a government official,” and “An ‘audio deepfake’ of a politician being used to manipulate election results or cause a social uprising.”900

They raise some great points. What’s to stop people from creating deepfakes of politicians, CEOs of major corporations, or popular YouTubers that make them appear to say racist, hateful, or violent things? A forger could claim the clip came from a coworker or a “friend” who secretly recorded it, or from an old YouTube video that was uploaded to someone’s channel and later deleted.

National Security Concerns

In July 2017 researchers at Harvard, backed by the U.S. Intelligence Advanced Research Projects Activity (IARPA), published a report titled Artificial Intelligence and National Security in which they detailed the growing risk of deepfake forgeries, warning, “The existence of widespread AI forgery capabilities will erode social trust, as previously reliable evidence becomes highly uncertain,” and describing some of the horrific possibilities that are right around the corner.901

The report then quotes part of an article one of the researchers wrote for Wired magazine about these dangers: “Today, when people see a video of a politician taking a bribe, a soldier perpetrating a war crime, or a celebrity starring in a sex tape, viewers can safely assume that the depicted events have actually occurred, provided, of course, that the video is of a certain quality and not obviously edited. But that world of truth—where seeing is believing—is about to be upended by artificial intelligence technologies.”902

The article continues, “When tools for producing fake video perform at higher quality than today’s CGI and are simultaneously available to untrained amateurs, these forgeries might comprise a large part of the information ecosystem.”903

The Artificial Intelligence and National Security report goes on to warn that, “A future where fakes are cheap, widely available, and indistinguishable from reality would reshape the relationship of individuals to truth and evidence. This will have profound implications for domains across journalism, government communications, testimony in criminal justice, and of course national security… In the future, people will be constantly confronted with realistic-looking fakes.”904

It concludes that, “We will struggle to know what to trust. Using cryptography and secure communication channels, it may still be possible to, in some circumstances, prove the authenticity of evidence. But, the ‘seeing is believing’ aspect of evidence that dominates today—one where the human eye or ear is almost always good enough—will be compromised.”905
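To make the report’s point about cryptography concrete: one way a camera, newsroom, or agency could prove a recording is authentic is to digitally sign the file at the moment of capture, so that any later tampering breaks the signature. The sketch below is a minimal illustration in Python, assuming the third-party cryptography library and a hypothetical file named clip.wav; it shows the general idea, not any system actually in use.

```python
# Minimal sketch: sign a media file at capture so later tampering is detectable.
# Assumes: pip install cryptography. "clip.wav" is a hypothetical example file.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # held securely by the recorder
public_key = private_key.public_key()       # published for anyone to verify

with open("clip.wav", "rb") as f:
    media = f.read()

signature = private_key.sign(media)  # produced at the moment of capture

# Later, anyone holding the public key can check the clip is unmodified:
try:
    public_key.verify(signature, media)
    print("Signature valid: clip matches what was originally signed.")
except InvalidSignature:
    print("Signature invalid: clip was altered after signing.")
```

Note that this only proves a file hasn’t changed since it was signed; it says nothing about whether the original recording was staged, which is why the report hedges with “in some circumstances.”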

Elon Musk helped fund a non-profit organization called OpenAI, which is trying to ensure that the development of artificial intelligence will be “safe,” but the group created an AI text-generation tool so powerful that they refused to release the full version to the public, out of concern that it could produce forgeries and fake news articles difficult to distinguish from real ones. “Due to our concerns about malicious applications of the technology, we are not releasing the trained model,” the organization wrote on their blog.906
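OpenAI did release scaled-down versions of that model, known as GPT-2, and generating plausible-sounding text with them now takes only a few lines of code. Below is a minimal sketch assuming the third-party Hugging Face transformers library; the prompt is an arbitrary example, and the model simply invents a fluent continuation:

```python
# Minimal sketch: generate text with the small, publicly released GPT-2 model.
# Assumes: pip install transformers torch
from transformers import pipeline, set_seed

set_seed(42)  # fix the random sampling so the output is repeatable
generator = pipeline("text-generation", model="gpt2")

# An arbitrary example prompt; GPT-2 fabricates a confident continuation.
result = generator("Breaking news: officials confirmed today that",
                   max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

Even this small model illustrates the core concern: fluent, confident-sounding prose with no connection to reality, produced at essentially no cost.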

Others are equally concerned. Sean Gourley, founder and CEO of Primer, a company that mines social media posts for U.S. intelligence agencies to track issues of concern and possible threats, warns, “The automation of the generation of fake news is going to make it very effective.”907

Nothing may be safe from the weaponization of artificial intelligence. In 2017 a group of researchers at the University of Chicago developed an AI system that could write fake Yelp reviews. Sites like Yelp and Amazon use machine learning algorithms designed to detect fake reviews written by trolls or bots, but when the researchers unleashed their review-writer on the site, those safeguards had a hard time detecting the forgeries.908
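The study doesn’t spell out how Yelp’s filters work, but one common heuristic for spotting machine-generated text (not the Chicago team’s method, and not any platform’s actual detector) is to score how statistically predictable the text is to a language model, a quantity called perplexity; machine-written text often reads as suspiciously predictable. A minimal sketch, again assuming the Hugging Face transformers library:

```python
# Rough heuristic sketch: score text's perplexity under GPT-2. Lower scores
# mean "more predictable," one weak signal that text may be machine-generated.
# Assumes: pip install transformers torch. Not any platform's real detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # average negative log-likelihood
    return torch.exp(loss).item()

print(perplexity("Great food, great service, will definitely come back!"))
```

As the Chicago experiment showed, signals like this are easy to fool, which is exactly why the arms race between generators and detectors worries researchers.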

Ben Zhao, one of the researchers who worked on the project, said, “We have validated the danger of someone using AI to create fake accounts that are good enough to fool current countermeasures,” and warned, “more powerful hardware and larger data for training means that future AI models will be able to capture all these properties and be truly indistinguishable from human-authored content.”909

This makes the forged documents purporting to be George W. Bush’s National Guard service record, or the infamous “Steele Trump-Russia Dossier” created by Fusion GPS, seem like child’s play. The New York Observer reported that multiple fake “Trump sex tapes” were already circulating among those working in intelligence agencies, and suggested they were created to “muddy the waters” in the event that a “real” Trump sex tape surfaces. Some believe such a tape was made by the Kremlin when Trump visited Russia in 2013 for the Miss Universe Pageant, as what the KGB called “kompromat,” or compromising material.910

Trump has insisted that even before his trip to Russia he was well aware of the hidden cameras in hotel rooms there and of the government’s attempts to gain blackmail material on high-profile individuals like himself, and that he made sure not to get ensnared in their trap.911 His bodyguard testified that prior to the trip the two of them had discussed how the Russians used such tactics, and that Trump knew not to take the bait.912

So it’s highly unlikely that a real Trump sex tape exists, but Deep State operatives within our own CIA may well have manufactured such fakes for the same reason the agency once floated the idea of creating phony tapes of Saddam Hussein and Osama bin Laden: to discredit a target and use the footage as propaganda to fan the flames of an insurgency hoping to bring him down.

As Winston Churchill said, “A lie gets halfway around the world before the truth has a chance to get its pants on.”913 Nobody is safe from being smeared by deepfakes, whether they’re an ordinary person targeted by a jealous ex-lover, a disgruntled coworker, or a classmate, or the President of the United States, whom political opponents or a foreign adversary want to bring down.