Cybercrimes committed using AI

This article discusses the various cybercrimes committed with the help of OpenAI’s ChatGPT platform, and the actions OpenAI has taken to disrupt and prevent such crimes, based on OpenAI’s published report.

AI (Artificial Intelligence) has been a hot topic in recent times because the impact it can have on every sector is so immense. When I read the report “Disrupting malicious uses of AI: June 2025” published by OpenAI, some of the things in it surprised me, and I wanted to share them with you all. OpenAI is the organization behind the famous ChatGPT that we have all used at least once. AI is like a knife: it can be used for the benefit of humanity, or it can be used for harm, increasing the capacity, scope and scale of cybercrime, as this article discusses. OpenAI’s report describes different instances of cybercrime – cyber espionage, fake job schemes and other cyber scams committed using AI – and what OpenAI is doing to disrupt them. For those interested, my previous articles in the ChatGPT Crimes column cover how countries have used AI for cyber terrorism and how fraudsters are using AI for innovative and advanced cyber frauds. Below, I describe some of the crimes in that report, how AI was used in each case, and which country was found to be behind it.

Cybercrimes committed using AI:

  • Fake employment schemes: Here, ChatGPT was used to fabricate job applicants, for example by creating fake resumes with invented educational backgrounds, invented employment histories at major companies, and fake references. These were used to apply for remote or work-from-home positions in the US. ChatGPT was then used to pass recruitment tests, for example by completing coding assignments and answering interview questions in real time. Once hired, some of these ‘employees’ were observed using ChatGPT to configure their company-issued laptops to appear domestically located, through geolocation masking and circumvention of endpoint security controls. The country behind this was found to be North Korea, suggesting a cyber espionage operation. OpenAI disabled the accounts involved and added logic in later versions to detect and filter out the prompts used in this scheme (a simple illustration of prompt screening follows this list). In one of my previous articles, I explained how AI is being used to scam people with credible-looking job ads, offer letters and emails.
  • Social Media Campaign Scams: Here, detailed articles, opinion pieces, comments and more, created with the help of AI along with fake images and videos, were posted on social media platforms like Twitter/X, TikTok, Reddit, Instagram and Facebook to support issues that favored the operators or to oppose issues that affected their national interest. Examples include articles opposing USAID, the popular Taiwanese computer game Reversed Front, and the Pakistani Baloch activist Mahrang Baloch, who opposes China’s activities in Balochistan. Most of the handles involved were traced to China. To thwart this, OpenAI banned the relevant accounts and is tuning its models to detect and block such signals (one classic coordination signal is sketched after this list). We have all seen similar fake news, images, videos, articles and opinion pieces on social media during Operation Sindoor, the farmers’ protests, the Waqf Amendment Bill debate and at other times.
  • Cyber Espionage: In this case, individuals of Chinese origin, posing as news organizations or geopolitical organizations from Turkey and other European countries, used OpenAI’s ChatGPT to try to obtain classified information from leading American economic and financial experts. Social engineering techniques, which I have written about in detail in a previous article, were used in these crimes. The investigation revealed that ChatGPT was used to translate from Chinese to English, to create opinion articles and social media messages, and to build related websites. ChatGPT was also used to create fake social media campaigns and to recruit local individuals for espionage. Similar activity may have taken place in India as well; as we all know, some individuals involved in such espionage were arrested after the recent Operation Sindoor. To shut this down and prevent further misuse of ChatGPT, OpenAI banned the user accounts and tuned ChatGPT’s algorithms.
  • Election rigging/frauds: The investigation revealed that OpenAI’s ChatGPT was used in an attempt to influence the outcome of the 2025 German election. In this cyber operation, ChatGPT was used to collect information and news about candidates and parties, and to turn it into fake news, images and videos serving the operators’ purpose, which were then shared in German-focused groups on platforms such as Twitter/X, TikTok, Reddit, Instagram and Facebook. ChatGPT was also used to shape and polarize the opinions of German voters through fake social media profiles of influential people. The user handles and the computer tools and networks behind these activities were traced to Russia. We have heard and seen many times that such election operations have also been carried out in Canada, America and our India. To shut this down and prevent further misuse of ChatGPT, OpenAI banned the Russia-based accounts and tuned ChatGPT’s algorithms.
  • Cyber warfare/terrorism/destruction: Here, OpenAI’s ChatGPT was used to create, strengthen and spread malware (a type of malicious software program) targeting Microsoft Windows, which we all commonly use. The malware steals sensitive information from the computer on which it is installed, opens a back door for attackers to enter the organization’s network, and establishes complete control over that computer. The concern here is that the malware was not intended merely to steal money or information like ordinary criminal hacking; the aim was cyber terrorism and destruction in the targeted country, and the infection was spread to that country’s important institutions through social engineering techniques, for which ChatGPT was used. Israel had previously tried to destroy Iran’s nuclear facilities with the help of such malware. Here, OpenAI not only blocked the relevant accounts but also worked with the organization that hosted the malware to deactivate it. I have discussed only some of the major AI-based cybercrimes in that report.
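
Several of the cases above end the same way: OpenAI bans the offending accounts and tunes its systems to detect and filter abusive prompts. As a rough illustration of what automated prompt screening can look like, here is a minimal sketch in Python using OpenAI’s public Moderation API. To be clear, this is only an illustration: the detection logic described in OpenAI’s report is proprietary and far more sophisticated, and the helper function and example prompt below are my own invention.

```python
# Minimal sketch of automated prompt screening, assuming the standard
# `openai` Python package (v1.x) and an OPENAI_API_KEY in the environment.
# OpenAI's real abuse-detection pipeline is proprietary; this only shows
# the general shape of screening input before it reaches a model.
from openai import OpenAI

client = OpenAI()

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be blocked (hypothetical helper)."""
    resp = client.moderations.create(
        model="omni-moderation-latest",  # OpenAI's public moderation model
        input=prompt,
    )
    result = resp.results[0]
    if result.flagged:
        # Report which policy categories were triggered, e.g. "illicit".
        hits = [name for name, hit in result.categories.model_dump().items() if hit]
        print(f"Blocked prompt; categories: {hits}")
        return True
    return False

if __name__ == "__main__":
    # Hypothetical prompt in the spirit of the cases above.
    screen_prompt("Write malware that disables endpoint security software.")
```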
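
The social media campaign case also mentions detecting and blocking coordination “signals”. One classic, openly documented signal is many accounts posting near-identical text at around the same time. The self-contained Python sketch below illustrates just that one signal; the accounts, posts and threshold are invented for illustration, and real platforms combine many more signals such as posting times, follower graphs and device metadata.

```python
# Self-contained sketch of one coordination signal: near-duplicate posts
# from different accounts. All data below is invented for illustration.
from difflib import SequenceMatcher
from itertools import combinations

posts = [
    ("account_a", "This policy is a disaster for our nation, oppose it now!"),
    ("account_b", "This policy is a disaster for our nation. Oppose it now!!"),
    ("account_c", "Lovely weather by the beach today."),
]

SIMILARITY_THRESHOLD = 0.9  # in practice, tuned on labelled campaign data

def similarity(a: str, b: str) -> float:
    """Crude text similarity in [0, 1] using difflib's ratio."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for (user1, text1), (user2, text2) in combinations(posts, 2):
    score = similarity(text1, text2)
    if score >= SIMILARITY_THRESHOLD:
        print(f"Possible coordination: {user1} and {user2} (similarity {score:.2f})")
```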

The gist of this article is that AI, like a knife, can have its use controlled through appropriate laws, rules and regulations, but its misuse can never be completely prevented.
