
“Seeing is no longer believing”: Cybercriminals are using AI for creating deepfakes. Beware!


Deepfake cybercrimes have been in the news for quite some time. Continuing the series on how cybercriminals are using Artificial Intelligence (AI) tools for cybercrime, last week I spoke about how ChatGPT was being misused, and this week I am going to talk about how cybercriminals are using deepfakes created with the help of AI tools.

The term deepfake originated in 2017, when an anonymous Reddit user adopted the name “Deepfakes.” In simple words, a deepfake is AI-based technology used to produce or alter image, audio, or video content so that it presents something that didn’t, in fact, occur. Deepfake technology works by using deep learning neural networks to manipulate video (faces) and audio (voice). Neural networks process information in ways loosely similar to how our brain does. For example, in the case of deepfake videos, the images of the targets are morphed and merged; afterward, a voice is overlaid and the lips are synced.

Cybercriminals use facial mapping technologies to build an accurate data set of a person’s facial geometry. They then use AI to swap one person’s face onto another’s. In addition, voice-matching technology is used to closely copy the target’s voice. The result is used to generate fake news and commit financial fraud, among other wrongdoings.
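To make the “facial mapping” step concrete, here is a minimal sketch using the open-source dlib library to extract the 68 standard facial landmark points. This is illustrative only: the landmark model file is an assumption (it ships separately from dlib), and the image file name is hypothetical.

```python
# Minimal sketch: extract facial landmark points, the raw material of
# "facial mapping". Assumes dlib and OpenCV are installed and that the
# 68-point landmark model has been downloaded separately.
import dlib
import cv2

detector = dlib.get_frontal_face_detector()
# Hypothetical local path; the pretrained model is a separate download.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_landmarks(image_path):
    """Return the 68 (x, y) landmark points describing facial geometry."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]

# Landmark sets like this, collected over many frames, form the facial
# data set that face-swapping models are trained against.
print(facial_landmarks("target_face.jpg"))
```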

Some of the major deepfakes that have caused a storm in recent times include:

Various governments, organisations like the UN, and experts are warning about the risks posed by deepfakes, such as:

There are many AI apps, such as Zao, deepfakesweb, deepfacelab, and Wombo, that put high-end AI technology in the hands of cybercriminals to create sophisticated deepfakes.

How cybercriminals are using AI for creating deepfakes :-

A deepfake is made using a form of artificial intelligence known as deep learning. In simple terms, the process involves feeding an AI algorithm millions of images and videos. The algorithm “learns” from this footage and can then create fake videos in which the faces and features of the people shown are replaced.
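A minimal sketch of this learning process, in PyTorch, is shown below. It follows the classic face-swap training idea: one shared encoder learns identity-independent face structure, while a separate decoder per person learns to render that person’s appearance. The layer sizes, step count, and training data (random tensors standing in for aligned face crops) are all illustrative assumptions, not anyone’s actual implementation.

```python
# Sketch of classic deepfake training: shared encoder + one decoder per
# identity. Real systems use convolutional networks and huge face datasets;
# this toy version just shows the training structure.
import torch
import torch.nn as nn

def make_decoder():
    return nn.Sequential(nn.Linear(128, 512), nn.ReLU(),
                         nn.Linear(512, 64 * 64 * 3))

encoder = nn.Sequential(nn.Flatten(),
                        nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
                        nn.Linear(512, 128))
decoder_a, decoder_b = make_decoder(), make_decoder()  # one per identity

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-ins for batches of aligned 64x64 face crops of person A and person B.
face_batch_a = torch.rand(8, 3, 64, 64)
face_batch_b = torch.rand(8, 3, 64, 64)

for step in range(1000):  # real models train far longer on far more data
    # Each decoder learns to reconstruct only its own person's faces.
    recon_a = decoder_a(encoder(face_batch_a)).view(-1, 3, 64, 64)
    recon_b = decoder_b(encoder(face_batch_b)).view(-1, 3, 64, 64)
    loss = loss_fn(recon_a, face_batch_a) + loss_fn(recon_b, face_batch_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The swap: encode person A's face, but render it with person B's decoder.
fake_b = decoder_b(encoder(face_batch_a)).view(-1, 3, 64, 64)
```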

Deepfakes are notorious for their use as vehicles for false news and cybercrime. Here are some negative ways in which deepfake technology is used:

How to prevent or detect deepfake crimes :-

You can detect whether a picture or a video is a deepfake by the following methods:
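One simple forensic check that can be automated is error level analysis (ELA), a classical image-forensics heuristic (not specific to deepfakes): re-save a JPEG at a known quality and compare it to the original, since pasted or regenerated regions often re-compress differently. The sketch below uses the Pillow library; the file names are hypothetical.

```python
# Minimal sketch of error level analysis (ELA). Manipulated regions often
# show a different compression error and appear brighter in the output.
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)   # re-compress at known quality
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)  # per-pixel compression error
    # Scale the (usually faint) differences up so they are visible.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    return diff.point(lambda px: min(255, px * (255 // max_diff)))

error_level_analysis("suspect_photo.jpg").save("suspect_photo_ela.png")
```

ELA is only a hint, not proof: heavily re-shared or re-compressed images can produce misleading results, so treat it as one signal alongside the other checks.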

If you have been scammed :-

1. Immediately call the 1930 cyber helpline or file a complaint at cybercrime.gov.in.
2. Call the concerned bank and lodge a complaint to freeze the funds.
3. If you have shared your Aadhaar with someone, lock your Aadhaar card at uidai.gov.in.
4. Change the user IDs and passwords/PINs of exposed banking accounts.
5. Don’t panic: inform friends and family, and post on your social media that the content is fake and that you have been extorted or defamed.

Legal remedies available to the victim (India) :-


If you find this useful, please like and share it. If you have any questions or topics you want me to cover, please add them in the comments section. You can also share the one-page poster covering the entire content with your friends and relatives on other social media.

Frequently Asked Questions :-

Is there any positive use of deepfake technology?

There are quite a lot of positive use cases for deepfake technology: for example, it can help in dubbing movies into different languages easily, preparing engaging educational materials, etc.

Can AI be used to detect deepfakes?

Yes, there is a lot of work going on to use AI to detect whether an image, video, or audio clip is a deepfake. In the near future, we may have real-time AI deepfake detection software.
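Most such detectors are, at heart, image classifiers trained on labelled real and fake examples. Below is a minimal PyTorch/torchvision sketch of that idea; the dataset layout (“faces/real”, “faces/fake”) and all hyperparameters are illustrative assumptions, not a reference to any specific product.

```python
# Minimal sketch: fine-tune a standard image classifier to label face crops
# as real vs. deepfake. Assumes torchvision >= 0.13 (weights= argument) and
# a hypothetical folder layout faces/real/*.jpg and faces/fake/*.jpg.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([transforms.Resize((224, 224)),
                                transforms.ToTensor()])
dataset = datasets.ImageFolder("faces", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)          # backbone; pretrained optional
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real vs. fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch; real training runs many
    logits = model(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, detectors trained this way struggle to generalise to deepfake generators they have never seen, which is why this remains an active research area rather than a solved problem.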

Who created the first deepfake program?

Deepfakes started with the Video Rewrite program, created in 1997 by Christoph Bregler, Michele Covell, and Malcolm Slaney. The program altered existing video footage to create new content of someone mouthing words they didn’t speak in the original version.
