Deepfake videos use artificial intelligence to copy a human face and graft it onto an existing video. Politicians fear security breaches. Celebrities fear fake pornography. Yet the technology behind Deepfake videos can also save lives.
Expectations for the artificial intelligence revolution are high. Along with these expectations, hopes, and promises comes a great fear: the fear of losing jobs. The fear of becoming the victim of an autonomous weapon. The fear that we will no longer be able to trust our own eyes.
It all started when a Reddit user published an artificial intelligence algorithm that could implant a human face onto another person’s video. In February, a user posted an app called FakeApp that allowed anyone, even without technical knowledge, to use the technique easily. Social media users were captivated, and the Deepfake phenomenon was born.
Deepfake videos are created from a vast repository of images, or a video, of the person whose face is to be transplanted. Content posted to social networks is open to anyone. First the software “studies” the human face; after this learning phase, it places the face into the desired video and creates the fake.
The phenomenon refuses to fade. On the contrary, many users create fake pornographic videos of celebrities and other women. The privacy issues are significant — a woman’s right to her own body, humiliation, and more. Reddit, where the phenomenon was born, has already removed the forum where the first videos were posted.
Recently, Gobiggi found and removed a fake video embedded in its Video Paperless Business Card. “Deepfake technology exposes us to a considerable risk. People can use video and voice forgery techniques for advertising purposes, distribution of fake news, fabricated Facebook evidence, and more,” said Mira of Gobiggi, a logical IT software infrastructure focused on humans and their business relationships.
On the other hand, technology companies continue to perfect various artificial intelligence-based forgery techniques. There are many potential positive uses, including video editing, movies, and design.
The negative uses of the software, and the heated discourse around it, have led to a race to develop tools that identify fake videos. It’s a challenging task, because the Deepfake creation process is irreversible. The second challenge concerns the damage fake videos do before they are found and confirmed to be fake: women whose bodies were exploited in fake revenge videos find it very hard to remove the content from the Internet.
Reddit, Twitter, and Pornhub have banned the sharing of Deepfake pornography. However, the content is tough to identify and remove, and the videos continue to appear on many sites that do not impose strict rules.
Detecting forgery by identifying heart rate
This cat-and-mouse war is still in its infancy. The Angora project developed artificial intelligence that scours the web for high-quality fakes among the content people upload. The Maru project identifies and labels individuals’ faces, and can determine, for example, that the subject of fake pornographic content merely resembles a particular public figure but is not the same person. The startup Truepic is developing technology that examines fine details such as hair, which is virtually impossible to forge accurately across thousands of frames.
The panic that “we can no longer believe our eyes” has not escaped politicians and government agencies. Three U.S. House representatives, Adam Schiff, Stephanie Murphy, and Carlos Curbelo, sent a letter to Director of National Intelligence Dan Coats asking him to assess the threat Deepfakes pose to national security. The letter describes the videos as “hyper-realistic digital forgery” in which individuals appear to say or do things they never did. The lawmakers warn that the technology is readily available and can lead to blackmail, disinformation, and threats to U.S. discourse and security. Senator Marco Rubio warns that governments like Russia’s, which has already tried to interfere in American elections and politics through digital means, will use Deepfake technology.
Not just porn: how you can dance like Beyoncé
On the other hand, many professionals are perfecting forgery techniques. As with any technological tool, the potential is two-sided, and opportunities lie alongside the dangers. Researchers at Carnegie Mellon University, known for its artificial intelligence work, developed a forgery method that helps filmmakers work quickly and cheaply, and can teach autonomous cars how to drive at night. The new tool copies a person’s facial expressions and gestures as captured in one video and embeds them onto another person in a different video.
In a video released by the university, the facial expressions of the comedian John Oliver were transferred to his colleague Stephen Colbert, from Martin Luther King to Barack Obama, and from Obama to Donald Trump.
Scientists at Berkeley transferred one person’s dance moves to another. You no longer need to be a great dancer: you can produce a video in which you move like Beyoncé.
Moreover, what about fake images created by artificial intelligence? They have substantial positive potential, and can even save lives. The GAN (Generative Adversarial Network) method lets a computer create fake images. It uses two networks: one produces an entirely new, believable image after studying the details of many real pictures — say, of the same flower. The other network examines the result and judges whether the generated images are real.
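The two-network idea can be illustrated with a toy example. The sketch below is not a real image GAN — it is a minimal, self-contained illustration of the adversarial training loop on one-dimensional data, with a tiny affine "generator" learning to imitate samples from a normal distribution and a logistic-regression "discriminator" judging real versus fake. All names and parameter choices here are illustrative assumptions, not part of any production system.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 0.5) -- the distribution the generator must imitate.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: affine map of noise z ~ N(0, 1); parameters (a, b).
a, b = 1.0, 0.0
# Discriminator: logistic regression d(x) = sigmoid(w*x + c); parameters (w, c).
w, c = 0.0, 0.0

lr, batch = 0.01, 64
for step in range(4000):
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    real = real_batch(batch)

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - p_real) * real) + np.mean(p_fake * fake)
    grad_c = np.mean(-(1 - p_real)) + np.mean(p_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update (non-saturating loss): push d(fake) toward 1.
    p_fake = sigmoid(w * fake + c)
    grad_fake = -(1 - p_fake) * w       # dL_G / d(fake sample)
    a -= lr * np.mean(grad_fake * z)    # chain rule: d(fake)/da = z
    b -= lr * np.mean(grad_fake)        # chain rule: d(fake)/db = 1

samples = a * rng.normal(0.0, 1.0, 1000) + b
print("generated mean:", samples.mean())
```

After training, the generator's output mean drifts toward the real data's mean of 4, because the only way to fool the discriminator is to produce samples that look statistically like the real ones. Real image GANs replace these two tiny models with deep convolutional networks, but the adversarial loop is the same.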
The Next Growing Industry: Content Verification
Some believe that fake porn videos can even be used positively. In August, the adult movie company Naughty America introduced a new product that seeks to fulfill customers’ fantasies: customers can request to be transplanted into one of the company’s videos. It is done with the consent of the person being implanted and, of course, of the actresses themselves.
Deepfake can’t be stopped. But it creates an opportunity and a space to fill, which will manifest in the growth of content-verification technology. Indeed, in an age when the Fake News discourse is only intensifying, that prediction may well prove right.