Rise of the deepfakes: Blurring the line between the fake and real worlds
This kind of technology can have far-reaching consequences, especially in democratic countries, where bad actors can use it to erode trust in institutions and to cast doubt on actual events and news.

There's more to the scandalous videos of Alia Bhatt, Katrina Kaif and Priyanka Chopra than meets the eye.
What is true in a world where, for a small payment to one of a multitude of websites, anyone can fake a video of anyone saying anything?
In recent years the rise of deepfake technology, with its ease of use and accessibility to the average consumer, has led to a disturbing trend: women having their faces put into porn videos, politicians appearing to say self-damaging or dangerous things, and even scammers using the faces and voices of family members to convince you of their legitimacy.
In fact, studies have shown that since the technology's proliferation into the public domain, 95% to 99% of its use has been to create nonconsensual pornography of women, with the targets usually celebrities.
Perhaps more troublingly, this technology can be used to claim plausible deniability even in the face of direct evidence of a crime.
Welcome to the world of Deepfakes.
The technology began to surface on the internet in 2017, mainly appearing on social media sites like Facebook, Twitter and Reddit.
Initially, it seemed harmless enough, with many social media users using its precursor, face swap, to make funny videos to share online.
It was not long, however, before bad actors began to apply it to more malicious ends.
What is Deepfake?
Deepfakes, a portmanteau of "deep learning" and "fake", are a form of synthetic media that can be used to replace one person's likeness with another's, or to portray events that never occurred.
They are made using artificial neural networks, which allows them to be produced ever more easily while looking authentic and convincing.
Examples such as photos of Donald Trump being arrested and dragged away by a crowd of officers, or of Pope Francis wearing designer jackets took the social media world by storm when they were released.
While many understood that the images were fake, many more were unaware of the possibility and believed that the events pictured had actually taken place.
They have gained widespread attention in recent years as a result, and their potential use in creating child sexual abuse material, celebrity pornographic videos, revenge porn, fake news, hoaxes, bullying, and financial fraud has only grown.
They have also been used in the creation of "sock puppets", entirely artificial yet eerily realistic videos and images of non-existent people that can then be used online as fake personas.
Ten years ago this kind of technology was the realm of research institutes, IT companies, and Hollywood, requiring days of data processing and powerful server banks to create.
Now it is as simple as using Google to find a website that offers it as a service, paying a nominal fee, as low as $20, and submitting the video you want to deepfake and the subject you want to place in it.
How are they made?
Deepfakes are made using two competing deep-learning algorithms. The first creates its best possible replica of a real image or video, while the second checks whether it can detect the result as fake.
If the second algorithm detects a fake, it reports the differences, and the first uses them to remake the video into something more realistic. The process repeats until the second algorithm can no longer tell that the video is fake.
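This two-algorithm setup is known as a generative adversarial network (GAN). A minimal toy sketch of the adversarial loop, with a simple number-matching "generator" standing in for a full video model (everything here is simplified for illustration only):

```python
import random

# Toy illustration of the adversarial loop behind deepfakes.
# "Real" data: samples centred on REAL_MEAN.
# Generator: produces samples centred on a parameter it keeps adjusting.
# Discriminator: flags a sample as fake if it sits too far from a real one.

random.seed(0)
REAL_MEAN = 5.0

def real_sample():
    return random.gauss(REAL_MEAN, 0.1)

class Generator:
    def __init__(self):
        self.mean = 0.0  # starts far from the real distribution

    def sample(self):
        return random.gauss(self.mean, 0.1)

    def improve(self, feedback):
        # Use the reported difference to close the gap with the real data.
        self.mean += 0.1 * feedback

class Discriminator:
    def __init__(self, tolerance=0.5):
        self.tolerance = tolerance

    def is_fake(self, sample, real_reference):
        return abs(sample - real_reference) > self.tolerance

gen, disc = Generator(), Discriminator()
for step in range(1000):
    real = real_sample()
    fake = gen.sample()
    if disc.is_fake(fake, real):
        # Discriminator caught the fake: report the difference so the
        # generator can remake its output -- the loop described above.
        gen.improve(real - fake)
    else:
        break  # discriminator can no longer tell real from fake
```

In a real GAN both sides are neural networks trained by gradient descent on images or video frames, but the feedback loop has the same shape: the generator only stops improving when the discriminator is fooled.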
This technology has been evolving rapidly.
Three years ago it was easy enough for someone paying attention to tell when a video was faked.
Companies made tools that exploited the weaknesses of deepfakes at the time, such as subjects that never blinked or teeth that looked fake. Unfortunately, it only took a few months before these gaps were closed and it became harder and harder to distinguish the real from the false.
Now it's near impossible for the human eye to spot the difference.
So what is the solution to this problem?
In a refrain that is becoming all too common in this day and age, AI is both the problem and the solution.
Artificial intelligence systems can be trained to spot fakes, but the approach comes with a serious weakness.
These detectors need to be trained on real footage of a person in order to spot fakes of them, so the less footage there is of you, the less likely this is to protect you.
The best protected therefore tend to be celebrities, who have hours of available footage on which a detection AI can be trained; if you don't have hours and hours of footage online, things become much more difficult.
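To see why detection depends on real footage, consider a deliberately simplified, hypothetical detector that learns one per-person statistic, blink rate, which was an early tell for deepfakes, from genuine clips and flags anything far outside that norm. All numbers are invented for illustration:

```python
import statistics

# Hypothetical sketch: learn a person's typical blinks-per-minute from
# their real footage, then flag clips that fall far outside that norm.
# Real detectors learn thousands of such statistics with neural networks.

def train_detector(real_clip_blink_rates, k=3.0):
    """Learn the typical blink rate and spread from real footage."""
    mean = statistics.mean(real_clip_blink_rates)
    stdev = statistics.stdev(real_clip_blink_rates)

    def looks_fake(blink_rate):
        # Flag clips whose blink rate is far from this person's norm.
        return abs(blink_rate - mean) > k * stdev

    return looks_fake

# Plenty of real footage -> a calibrated, person-specific detector.
detector = train_detector([17, 15, 18, 16, 14, 17, 16, 15])
print(detector(2))    # early deepfakes barely blinked
print(detector(16))   # an ordinary clip
```

With only one or two real clips there is no reliable baseline to learn, which is exactly why people with little footage online are harder to protect.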
Another potential solution would be a blockchain ledger system.
Yes, the same sort of system used for cryptocurrencies and the now nearly defunct NFTs could be used to hold tamper-proof records of digital media. This would allow people to confirm a video's origins and check it for manipulation.
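The core idea can be sketched without any cryptocurrency machinery: each ledger entry stores a hash of the media file plus a hash of the previous entry, so altering either the footage or the record breaks the chain. This is a minimal illustration; the field names and flow are invented, not any real standard:

```python
import hashlib
import json

# Minimal tamper-evident ledger sketch for media provenance.

def media_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Ledger:
    def __init__(self):
        self.entries = []

    def register(self, media: bytes, source: str):
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {"media_hash": media_hash(media), "source": source, "prev": prev}
        payload = json.dumps(record, sort_keys=True).encode()
        record["entry_hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)

    def verify(self, media: bytes) -> bool:
        # A video checks out only if its hash appears in an unbroken chain.
        target, prev = media_hash(media), "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("media_hash", "source", "prev")}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["entry_hash"] or e["prev"] != prev:
                return False  # the chain has been tampered with
            prev = e["entry_hash"]
        return any(e["media_hash"] == target for e in self.entries)

ledger = Ledger()
ledger.register(b"original press conference footage", source="newsroom camera 3")
print(ledger.verify(b"original press conference footage"))   # True
print(ledger.verify(b"deepfaked press conference footage"))  # False
```

A doctored copy of the video produces a different hash, so it simply never matches a registered entry; the hard part in practice is getting cameras and publishers to register footage at the point of capture.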
What are the consequences?
There have already been incidents in both the US and Russia, for example, where deepfake technology was used to promote false information as if it came straight from the government's mouth.
Video evidence will become less trusted, as blackmail material or crimes caught on camera can be dismissed as deepfakes in order to claim plausible deniability.
Fake news can be spread with alarming speed and devastating effect when people see those they trust sharing false information.
Identity theft can be used to create fake accounts that impersonate others on social media, trick victims into fraudulent payments, or even convince employees to divulge restricted information or make money transfers on the orders of their supposed superiors in a company's hierarchy.
This technology does have positive uses, however those cases have been overrun by the masses of dangerous and abusive material that have begun to spread like wildfire across the world.
It has become the new rat race of the 21st century, with companies and governments racing to mitigate or adapt to a new paradigm. But with AI being both the problem and the solution, it's like asking your right hand to defeat your left in an arm-wrestling match.
Like all other paradigm shifting technologies that have been created in the last 200 years, it will take time for humanity and our societies to adapt.
Only time will tell if the world will be the same in the aftermath.