An opinion on deepfakes and their growing use
By Aarushi Patel and Nari Funke

(Graphic source: Ofcom)
[ Open a social media feed. Any feed, preferably one that allows image comments. Check the comments and the most popular videos; what do you notice? There’s a good chance you’ll stumble upon celebrity bodies hastily mapped onto meme videos, famous movie scenes, music videos. A woman, uncannily staring through the screen, talking about the products she was forced to use to lose weight. Joe Biden and Donald Trump engaged in amorous activities. ]
We are in the era of artificial intelligence (AI). Some say it is an exciting era and that we should embrace embedding AI into our daily lives; many others voice concerns about the dangers that come with it. Artificial intelligence has progressed rapidly in the span of just a few short years. ChatGPT, one of the most famous AI tools, is remarkably versatile, with capabilities spanning video summarization, homework help, and image generation.
Yet among all of these exciting advances, AI brings a new sense of fear. Fear of losing jobs, as many companies would rather use AI for free than pay humans. Fear of losing problem-solving ability, as many students use AI to do their work and thereby neglect their cognitive skills. Most recently, a new fear has arisen: the fear of failing to recognize reality, of being unable to distinguish what is real from what is AI. While AI has many benefits, from a moral standpoint the disadvantages seem to outweigh the advantages.
A notorious product of artificial intelligence is the AI face. Deepfakes, digital avatars, and similar forms of media all share one common thread: they were fabricated by a machine and do not truly exist. AI faces are created by analyzing images of a person’s face, sometimes several, sometimes as few as one, and generating a model, or “avatar” as it is often called. These avatars can then move and speak digitally with very little to mark them as AI. Occasionally a watermark appears on such a video, but with easy-to-access watermark-removal software floating around the web, its effectiveness is practically nonexistent.
Artificially generated avatars are used for a plethora of reasons, the most common and benign being customer-support chats, informational videos, and free hyper-realistic profiles for generating content. These avatars have no name tied to their face and do little more than a hired actor would. But where an actor is real and fairly compensated, an AI is neither. Instead, it takes the job from that same actor and turns it into an automated task.
Whether these AI faces are used for a clear-cut “good” or “bad” purpose, they are most often, and some would argue always, immoral. Many digital avatars on the internet use the face of a real person, often without consent, and have this persona say or do things unbeknownst to the one whose face was taken. Deepfake makers, for example, can be found with a simple Google search, are open to the public, and accept anyone’s face. A checkbox asking users to confirm they have consent means little; people with malicious intent have no qualms about lying.
Disinformation, distinct from its counterpart “misinformation,” is false information spread with the intention to deceive. Disinformation has always existed, but it has become a heightened threat with the rise of deepfakes. You have likely seen videos of famous celebrities crudely pasted onto a soccer player scoring a winning goal, or dancing. The invalidity of those videos may seem obvious to the average viewer, but that does not account for gullible elderly people and children, nor for the increasing quality of open-domain video generators like Sora. Even setting those factors aside, this is only surface-level content. Artificial personas have been used for political propaganda, SEM, false evidence in trials, and many other forms of manipulation. In an article for Stanford University, the author writes of Michael Tomz, a professor of political science at the Stanford School of Humanities and Sciences and a faculty affiliate of the Stanford Institute for Human-Centered AI (HAI), that “he saw a headline in the Taipei Times reporting that the Chinese government was using AI-generated social media posts to influence voters in Taiwan and the United States” (Walsh). You may also remember the AI-generated clothless deepfakes of Taylor Swift. Those are only two of the many capabilities AI possesses. [ One is left wondering how many victims’ stories were buried simply because they lacked the renown of a government or of Swift. ]
“Harmless” or not, this is illegal. Users of AI face software who knowingly feed someone’s face into it are actively stripping that person of their autonomy. It does not matter that the person never actually did those things; the people watching the video do not always know that.
Even if one argues that it is not a big deal, that people will always know what is human and what is not, research shows that AI can warp one’s judgment of what truly exists. As a Psychology Today article reports, “people [are] increasingly unable to distinguish AI from real human faces” (Wei). This not only gives an advantage to those using deepfakes for malicious and immoral ends, but also leaves the average person deeply confused about whom to trust. Many people complain about videos on their shorts feeds where it is impossible to tell AI from reality. That is the effect of deepfakes: they make you lose your sense of reality even while you are focused on trying not to fall for AI.
In essence, AI faces are ethically fraught: they are a tool easily turned to malicious intent, and they cut too close to humanity, among a myriad of other concerns. On the other hand, the technology has aided progress in a variety of fields, improved efficiency, and contributed to society in many ways. We live in a time of rapid progress. In the last 200 years alone, we went from horse-drawn carriages to planes, and now to things such as AI that our predecessors could not have comprehended. Nevertheless, we must keep our sense of humanity in mind and hold AI at arm’s length. AI faces may be interesting, but they do more harm than good.
Works Cited
“People Now See AI-Generated Faces as More Real than Human Ones.” Psychology Today, Sussex Publishers, www.psychologytoday.com/us/blog/urban-survival/202311/people-now-see-ai-generated-faces-as-more-real-than-human-ones. Accessed 5 Dec. 2025.
Walsh, Dylan. “The Disinformation Machine: How Susceptible Are We to AI Propaganda?” Stanford HAI, hai.stanford.edu/news/disinformation-machine-how-susceptible-are-we-ai-propaganda. Accessed 15 Dec. 2025.
