In early March, a doctored video of Ukrainian President Volodymyr Zelenskyy began circulating online. In it, a computer-generated version of Zelenskyy ordered the Ukrainian army to surrender. The video spread widely before being identified as a deepfake: a hyper-realistic but fabricated video made using artificial intelligence. It was debunked quickly.
Although Russian disinformation appears to be having only limited influence, this troubling case demonstrated the possible repercussions of deepfakes.
On the other hand, deepfake technology is already being used successfully in assistive applications. People with Parkinson's disease, for instance, can use voice cloning as a means of communication.
Education is another application of deepfakes; for example, the speech synthesis company CereProc, based in Edinburgh, Scotland, generated a synthetic voice for John F. Kennedy, allowing him to "deliver" a famous address.
Deepfakes have the potential to be extremely lifelike and nearly undetectable by the naked eye.
Because of this, the same voice-cloning technology could be used for phishing, defamation, and blackmail. When deployed deliberately to influence public opinion, instigate social conflict, and manipulate election results, deepfakes have the potential to undermine democracy.
Causing chaos
Deepfakes are built on a method known as generative adversarial networks (GANs), in which two algorithms train each other: one generates images while the other tries to detect fakes, and each improves through the contest.
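The adversarial idea can be sketched in a few dozen lines. The following is a toy illustration only, not actual deepfake software: a linear "generator" learns to mimic a one-dimensional Gaussian distribution while a logistic "discriminator" tries to tell real samples from generated ones. All parameter names and the target distribution are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(3, 0.5), the distribution the generator must mimic.
def sample_real(n):
    return rng.normal(3.0, 0.5, n)

# Generator: a linear map G(z) = a*z + b applied to noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator: logistic classifier D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, batch = 0.1, 64
for step in range(3000):
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    real = sample_real(batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push D(fake) toward 1 by moving fakes toward the real data.
    d_fake = sigmoid(w * fake + c)
    dL_dx = -(1 - d_fake) * w  # gradient of -log D(fake) w.r.t. each fake sample
    a -= lr * np.mean(dL_dx * z)
    b -= lr * np.mean(dL_dx)

print(f"generated mean after training: {b:.2f} (real data mean is 3.0)")
```

Real deepfake systems replace these one-parameter players with deep convolutional networks operating on images, but the training dynamic is the same: the generator only ever sees the discriminator's gradient, yet that pressure alone drags its output toward the real data.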
Although the technology behind deepfakes may sound sophisticated, creating one is not difficult. A plethora of software available online, such as Faceswap and ZAO Deepswap, can generate deepfakes in a matter of minutes.
Example code for producing fake images and videos can also be found on Google Colaboratory, an online platform for writing and running code in the browser. Because this software is so easy to obtain, it is not hard to see how ordinary individuals could create deepfakes and cause havoc without being aware of the potential security hazards.
The widespread use of face-swapping applications and websites like Deep Nostalgia shows how rapidly and extensively deepfakes could be adopted by the general population. Roughly 15,000 deepfake videos were identified in 2019, and that number is expected to keep rising.
Deepfakes are an ideal instrument for disinformation campaigns because the fake news they generate is so convincing that it takes considerable effort to disprove. Meanwhile, the damage deepfakes can do, particularly to people's reputations, is typically lasting and irreparable.
Is seeing believing?
One of the most potentially harmful repercussions of deepfakes is the ease with which they can be used to spread disinformation during political campaigns.
This was demonstrated when Donald Trump labeled as "fake news" any media coverage critical of him or his administration. By accusing his detractors of spreading false news, Trump was able to use disinformation both as a defense for his wrongdoings and as a tool for propaganda.
By asserting that actual events and stories are fake news or deepfakes, Trump was able to keep his base of support while operating in an atmosphere rife with mistrust and false information.
The credibility of both authorities and the media is being called into question, feeding an atmosphere of mistrust. And as the production of deepfakes increases, politicians will find it easier to deny responsibility for developing scandals. If the person shown in a video simply denies being that person, how can their identity, and the truth, be established?
Democracies, however, have always struggled to battle disinformation while protecting the right to free speech. Human-artificial intelligence collaboration, in which humans verify flagged content, can help mitigate the growing threat posed by deepfakes. There is also scope to introduce new legislation, or apply existing laws, to penalize producers of deepfakes for fabricating information and impersonating people.
To defend democratic societies from being corrupted by misleading information, multidisciplinary approaches involving national and international governments, private enterprises, and other organizations are essential.