We have all witnessed manipulation, blackmail, and extortion. Why? Well, because we live on planet Earth. That's what humans do to each other. We have also seen people go to great lengths to embarrass or defame others. All these mean-spirited methods were conceived by evil, conspiring, and, I might add, often highly intelligent human beings. Or maybe some jilted lover just wanted to get even. The latest ruse is a relatively new technology called deepfakes, or AI-generated videos: videos that depict real people doing and saying whatever the creator chooses. Yes, you read that correctly. You can now make a video of another person saying and doing anything you want, in their own voice and exact likeness!
A few years ago, I began following the news reports that flooded media outlets about this new technology. It intrigued me; I imagined all the possibilities of exactly replicating the body movements and voice of virtually any person to say and do anything. The possibilities seemed endless to my creative mind. But I also saw the other, horrible side of this technology. What if someone decided to make an enemy, an ex-spouse, a revered religious leader, or a politician the star of a deepfake depicting an embarrassing or illegal act?
‘Deepfake’ is a term coined in 2017 for videos made by superimposing or combining existing images and video onto source footage using machine learning technology. You might initially think, “Well, like other counterfeit media I’ve seen, I will be able to pick out the fake stuff without any problem.” Well, that may not be as easy as you think. According to a May/June 2019 Foreign Affairs article, “Intelligence agencies will face the Herculean task of exposing deepfakes. The technology, known as generative adversarial networks, pits two computer algorithms against each other, one generating images while the other attempts to spot fakes.” Because the algorithms learn by competing with each other, any deepfake detectors are unlikely to work for long before being outsmarted. In other words, it could be virtually impossible to detect a deepfake, even using available technology.
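To see why the generator eventually outruns the detector, it helps to watch the adversarial idea in miniature. The toy sketch below is not a real deepfake system: the "real" data is just numbers centred at 4.0, the generator is a single learnable mean, and the detector is a one-feature logistic classifier. All the specific values (the means, the learning rate, the step count) are illustrative assumptions. The point is the dynamic the Foreign Affairs quote describes: every update that sharpens the detector hands the generator a gradient to exploit, so the fakes drift toward the real distribution until the detector can no longer tell them apart.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples centred at 4.0 (a stand-in for authentic footage).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

mu = 0.0        # generator: produces fakes centred at a learnable mean
w, b = 0.0, 0.0 # detector: p(real) = sigmoid(w * x + b)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for step in range(2000):
    x_real = real_batch(64)
    x_fake = rng.normal(mu, 1.0, 64)

    # Detector update: push p(real) toward 1 on real data, toward 0 on fakes.
    p_real = sigmoid(w * x_real + b)
    p_fake = sigmoid(w * x_fake + b)
    grad_w = np.mean((p_real - 1) * x_real) + np.mean(p_fake * x_fake)
    grad_b = np.mean(p_real - 1) + np.mean(p_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator update: shift mu so fresh fakes score as "real".
    x_fake = rng.normal(mu, 1.0, 64)
    p_fake = sigmoid(w * x_fake + b)
    mu -= lr * np.mean(-(1 - p_fake) * w)

# After training, the fake distribution's mean has drifted toward the
# real mean (4.0), and the detector's edge has largely evaporated.
print(mu)
```

The detector never stops improving in absolute terms; it loses because the generator is optimized directly against whatever the detector currently checks, which is exactly why fixed deepfake detectors tend to have a short shelf life.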
Recently, a South Korean news anchor opened her show by running through the day's headlines. It was the 'normal' list of stories for the end of the year: pandemic and COVID-19 updates. This particular show, however, was far from normal. The anchor had been replaced by a 'deepfake' version of herself, a computer-generated copy that perfectly reproduced her voice, gestures, and facial expressions. At first glance, it was impossible to tell the real person from the fake.
Viewers had been informed before the show began that what they were about to see was not the 'real' version of their favorite news anchor. Responses included, “I am worried how people will make a living in the future if AI replaces real people!” Another viewer remarked, “Is there a need for actual newscasters? AI programs might articulate words better than humans.” Regardless of what people think about it, deepfake technology is here, and its ramifications are as broad as the imaginations of the people creating the videos.
Some watchdog groups are calling for the technology companies and individuals who develop deepfake or AI-generated video apps to bear the responsibility of developing verification techniques and building them into the software. These additions would include markers that alert a user that the media is synthetic, not real. Communications of the ACM, an online technology journal, recently published an article, “What to do about Deepfakes.” The article stated, “Technical experts should develop and evaluate verification strategies, methods, and interfaces. The enormous potential of deepfakes to deceive viewers, harm subjects, and challenge the integrity of social institutions such as news reporting, elections, business, foreign affairs, and education, makes verification strategies an area of great importance.”
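One concrete shape such a built-in marker could take is a cryptographic provenance tag: the tool that generates the media signs the file's bytes, and any later edit breaks the signature. The sketch below is a simplified illustration, not any vendor's actual scheme; the key name and the use of a shared-secret HMAC (real provenance systems use asymmetric signing keys) are assumptions for the example.

```python
import hashlib
import hmac

# Hypothetical signing key. In a real deployment this would be an
# asymmetric key managed by the tool vendor or a provenance standard,
# not a shared secret baked into the app.
SIGNING_KEY = b"example-publisher-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce a provenance tag to be embedded in the file's metadata."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Check that the tag still matches the content; any edit breaks it."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

clip = b"...synthetic video bytes..."
tag = sign_media(clip)

print(verify_media(clip, tag))         # untouched file: tag verifies
print(verify_media(clip + b"!", tag))  # tampered file: tag fails
```

A tag like this only proves the file is unmodified since signing; making it trustworthy across the open internet is the harder institutional problem the ACM article is pointing at.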
Unfortunately, in many democratic nations with free markets and freedom of speech and of the press, controlling deepfake technology will eventually fall to the respective legislative bodies, which must pass laws regulating the creation and use of such media. But don't expect that to begin until the damage is well underway, thousands of people's reputations and dignity have been trashed, and defamation lawsuits have flooded the court systems. And don't expect developers to regulate themselves, as Communications of the ACM strongly suggests they should.
To protect yourself from becoming a victim of deepfake technology, there are a few things you can do. Keep in mind, however, that if you have already participated in social media by sharing pictures, friend lists, and personal information, you are already at risk. Lock down your friend lists and make them invisible to others. Be very selective with whom you share pictures and personal information. Set up all security features on every social media platform you use. And report any suspicious activities or weird behaviors. Good luck; you’ll need it.
You want to know what deepfake technology looks like? Check out this video; it’s a deepfake.