Taking Deepfakes Seriously

by Synthesys AI Studio, April 18th, 2022

Too Long; Didn't Read

Deepfakes are images, voices, and videos created by or with the help of AI algorithms and intended to mislead an audience. The term "deepfake" was first used in late 2017 by a Reddit user of the same name, who shared pornographic videos made with open-source face-swapping technology. The technology can also be used to make real people appear in video and audio saying or doing things they never said or did, to replace people in existing videos, or to create video content featuring entirely non-existent characters, celebrities, or prominent politicians.


Artificial intelligence isn't going away. This game-changing technology can improve efficiency by simulating human thought and can be trained to solve specific problems. According to Statista, AI will be a multibillion-dollar industry by 2025, with applications ranging from personalized learning in education to improved customer service in e-commerce and business.


The landscape of artificial intelligence advancements is vast and fast-moving, with new breakthroughs occurring daily. In AI video and voice, for example, we can expect new features and increasingly realistic, controllable video generation in the coming years; one such development is the deepfake.


In many ways, this is a new frontier for ethics and risk assessment, just as it is for other emerging technologies. This has led organizations to adopt AI codes of ethics that formally specify the role of artificial intelligence in advancing humanity. The objective of an AI code of ethics is to offer stakeholders much-needed direction when faced with an ethical decision surrounding the use of artificial intelligence.

An Introduction to Deepfake Technology


Deepfakes are images, voices, and videos created by or with the help of AI algorithms and intended to mislead an audience.


The term "deepfake" was first used in late 2017 by a Reddit user of the same name, who shared pornographic videos using open-source face-swapping technology on the Reddit site. The term has since been expanded to include "Synthetic Media Applications" that existed prior to the Reddit page, as well as new creations such as STYLE-GAN – "realistic-looking still images of people that don't exist."


Deepfake technology uses someone's behaviour – such as their voice, face, common facial expressions, or bodily movements – to generate new audio or video content that is barely distinguishable from the real thing. This technology can also be used to make real people appear in video and audio saying or doing things they never said or did, to replace people in existing videos, or to create video content featuring entirely non-existent characters, celebrities, or prominent politicians; this has raised numerous concerns about the ethics of deepfakes.


Deepfake effects once took experts in high-tech studios at least a year to create, but thanks to machine learning, the rapid development of deepfake technology in recent years has made producing truly convincing fake content much easier and faster.

Networks Underlying AI Images, AI Videos and AI Audio


Deepfakes began with the development of Artificial Neural Networks (ANNs). An ANN is a machine learning model built on a network of artificial neurons loosely modelled on the human brain. Generative networks differ from ordinary ANNs in that, rather than making predictions about new data supplied to them, they create new data. The best-known such algorithms are Generative Adversarial Networks (GANs), and recent breakthroughs have fueled research and development, resulting in the emergence of deepfakes.
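

To make this concrete, the sketch below shows a minimal GAN training step. It is illustrative only, written in PyTorch as an assumed framework (the article names none), with toy layer sizes rather than the architecture of any real deepfake system: a generator turns random noise into synthetic samples, and a discriminator learns to tell them apart from real data, each improving by competing with the other.

```python
# Minimal GAN sketch (assumed PyTorch; illustrative sizes, not a real deepfake model).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. a flattened 28x28 grayscale image

# The generator maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
# The discriminator scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor):
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Train the discriminator: real samples should score 1, generated ones 0.
    fake_batch = generator(torch.randn(n, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: try to make the discriminator score fakes as real.
    g_loss = bce(discriminator(generator(torch.randn(n, latent_dim))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

As the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing output – the same adversarial pressure that makes GAN-generated imagery so realistic.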


Convolutional Neural Networks (CNNs), which build on ANNs, mimic the way the visual cortex processes images in order to perform computer image recognition. Artificial and Convolutional Neural Networks lay the foundation for deep learning programs and underlie the algorithms that generate deepfakes today: Generative Adversarial Networks.
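

As a sketch of the convolutional building block, the snippet below defines a tiny image classifier, again in PyTorch with illustrative layer sizes and a hypothetical 10-class output. Stacked convolution and pooling layers pick out increasingly abstract visual patterns, which is the role CNN components also play inside the image models behind deepfakes.

```python
# Tiny CNN sketch (assumed PyTorch; illustrative sizes and class count).
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # detect low-level edges and colours
    nn.ReLU(),
    nn.MaxPool2d(2),                              # downsample 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine edges into larger patterns
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 10),                  # score 10 hypothetical classes
)

scores = cnn(torch.randn(1, 3, 64, 64))           # one random 64x64 RGB "image"
print(scores.shape)                               # torch.Size([1, 10])
```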


Face-swapping apps such as Zao and FaceApp (among the earliest deepfake successes) allow users to swap their faces with another person's, often a celebrity's, creating a deepfake image or AI video in seconds. These advances stem from deep generative modelling, a breakthrough that makes it possible to duplicate existing faces and build new, breathtakingly realistic representations of people who do not exist.


This new technology has justifiably raised worries about privacy and identity. But if an algorithm can reconstruct our appearance, could it replicate even more of our digital identity, such as our voice – or perhaps create a full-body double?

Threats Deepfakes Pose


Deepfakes pose a significant threat to our community, political system, and business because they put pressure on journalists who are struggling to distinguish between real and fake news, endanger national security by publishing propaganda and disrupting elections, undermine citizen trust in authorities, and raise cybersecurity concerns for individuals and organizations.


Deepfakes likely pose the greatest danger to journalism. They are more dangerous than "conventional" fake news because they are harder to detect and audiences are more inclined to believe the fake is real. The technology also enables the creation of seemingly credible news videos, putting journalists' and the media's reputations at risk. Videos can now be created from only a few images, and misattributed footage – such as a real protest march or violent clash captioned to imply it took place elsewhere – is a growing problem that the rise of deepfakes will only compound.


Reuters, for example, while searching for eyewitness videos of the mass shooting in Christchurch, New Zealand, discovered a video claiming to show the moment a suspect was shot dead by police. They quickly realised that it was from a different incident in the United States and that the suspect in the Christchurch shooting had not been killed.


Intelligence agencies have reason to worry as well, since deepfakes can be used to endanger national security by spreading political propaganda and interfering with election campaigns.


US intelligence authorities have repeatedly warned of the dangers of foreign involvement in American politics, particularly in the run-up to elections. Putting words in someone's mouth in a viral video is a powerful weapon in today's disinformation wars, and doctored footage can easily sway voter opinion. While such fabricated recordings are likely to generate domestic unrest, riots, and election disturbances, other nation-states may choose to conduct foreign policy based on deception, potentially leading to international crises and wars.


A continuous stream of such recordings is also likely to erode digital literacy and citizens' trust in information provided by authorities. Phoney recordings of government officials saying things that never happened – which can easily be generated with a text-to-speech feature such as Synthesys' AI voices – cause people to distrust authorities. Furthermore, people may dismiss authentic videos as false simply because they have learned to assume that anything they do not want to accept must be fake. In other words, the greatest danger is not that people will be tricked, but that they will come to treat everything as deception.


Deepfakes also pose cybersecurity problems. They could be used to manipulate markets and stock prices, for instance by depicting a CEO uttering racist obscenities, announcing a fake merger, or appearing to commit a crime. Deepfake porn or fake product announcements could be used to damage a company's brand, or to blackmail or humiliate management. Deepfake technology can also enable the digital impersonation of an executive, for example to request an urgent cash transfer or confidential information from an employee.

The Positive Side to Deepfakes


Despite the potential hazards presented by deepfake technology, it can have positive applications in areas such as entertainment, educational media, digital communications, games, social media, and healthcare.


For example, when actors have lost their voices to disease, deepfake technology can help create synthetic voices, and it can update film footage rather than requiring a reshoot. Moviemakers can use it to reproduce classic scenes, create new films starring long-dead actors, apply CGI effects and complex face editing in post-production, and elevate amateur videos to a professional standard.


Deepfake technology also enables natural voice dubbing for films in any language, letting various audiences enjoy films and educational materials more effectively. A 2019 worldwide malaria awareness commercial starring David Beckham broke down language boundaries by using visual and voice-altering technologies to make him appear multilingual.



https://www.youtube.com/watch?v=QiiSAvKJIHo


Deepfake technology provides improved telepresence in online games and virtual chat worlds, natural-sounding and natural-looking smart assistants, and virtual replicas of individuals. This contributes to better human relationships and online engagement.


Businesses also have much to gain from brand-applicable deepfake technology, which has the potential to significantly transform e-commerce and advertising.


For example, deepfake technology can power virtual fittings that let customers preview how an outfit would look on them before purchasing, generate personalized fashion commercials that vary with time, weather, and viewer, and create AI avatars that personalize communications with customers and enable hyper-personal content that turns people into models. Trying on clothes online is an obvious potential use: the technology not only lets people make digital clones of themselves, but also lets them try on bridal wear digitally and, in turn, virtually experience a wedding venue.

Ethical Practices to Embrace


Broad-scale innovation is an ethical issue, since ethics is fundamentally concerned with anything that can enhance or hinder human well-being. Ethics therefore matters in judging the goals of an innovation such as deepfakes, as well as the process by which it is carried out and the outcomes that result from it. The fundamental questions are: Who are deepfakes designed for? What is the purpose of their creation? How can the most severe consequences be mitigated?


Answering these questions can help organizations and individuals align with the following ethical framework, which UNICEF maintains, regardless of whether they work on an image, audio, or video deepfake.


They include:


  • Design with the user in mind.
  • Understand the existing ecosystem.
  • Design for scale.
  • Build for sustainability.
  • Be data-driven.
  • Use open standards, open data, open-source, and open innovation.
  • Reuse and improve.
  • Do no harm.
  • Be collaborative.