Is ChatGPT the Newest Gateway to Fraud?

Cybercriminals and hackers are frequently admired, albeit begrudgingly, for their skills. Hacking and cybercrime require significant knowledge, which thankfully restricts these activities to a small select group. But the new bot ChatGPT could change all that. Here’s why and what fraud examiners can do to fight back.

By Mary Breslin, CFE


Just weeks after OpenAI, a U.S. artificial intelligence (AI) research laboratory, launched ChatGPT in November last year, the U.S.-Israeli cybersecurity firm Check Point noticed a spike in dark web conversations. The AI chatbot has astounded, but also unnerved, the public with its ability to perform many human tasks, from coding to passing an MBA-level exam to writing flawless text. However, in a popular hacking forum, a post titled “ChatGPT – Benefits of Malware” described how one fraudster used the AI tool to recreate malware strains, such as so-called stealers that infiltrate computer systems, search for specific types of files and then remove those files without the owners’ knowledge. While the creation of malware is nothing new, the discussion on the dark web was disturbing because, as Check Point notes, its apparent purpose was to show less tech-savvy fraudsters that they too could use ChatGPT to code for malicious purposes. (See “Cybercriminals starting to use ChatGPT,” Check Point, Jan. 6, 2023.)

 

As evidence of this, Check Point highlighted another dark web post where a cybercriminal dubbed USDoD talked about how OpenAI helped him create his first-ever script, a series of instructions that resembles a computer program but is usually simpler to write. While scripting isn’t malicious in and of itself, Check Point notes that USDoD probably has limited technical skills and is known in the underground community for selling access to compromised companies and stolen data.

 

What is ChatGPT?

With every invention come unexpected and unintended consequences. ChatGPT stands for chat generative pre-trained transformer. Simply put, it’s AI, developed by the company OpenAI, that creates text-based content. I asked ChatGPT to explain itself, and it responded:

ChatGPT is a conversational language model developed by OpenAI. It is based on the GPT (Generative Pretrained Transformer) architecture and has been trained on a large corpus of text data to generate human-like responses to a wide range of questions and prompts. ChatGPT is capable of performing a variety of tasks, including:

  • Answering questions: ChatGPT can answer questions on a wide range of topics, including history, science, geography, and more.
  • Generating text: ChatGPT can generate text based on a prompt or topic, such as writing a story, generating a poem, or summarizing a news article.
  • Conversing: ChatGPT can engage in natural language conversations with users, allowing them to have an experience similar to chatting with a human.
  • Completing sentences or paragraphs: ChatGPT can complete a sentence or paragraph based on a given prompt, allowing users to generate text more efficiently.

Overall, ChatGPT is a powerful tool for natural language processing and can be used in a variety of applications, such as chatbots, language translation, and text summarization.

When I prompted ChatGPT to explain itself simply to lay people, it responded:

ChatGPT is an AI language model developed by OpenAI that can understand and generate human-like text. It’s been trained on a huge amount of text data, so it can answer questions, have conversations, complete sentences or paragraphs, and generate text on its own. In simple terms, it’s like having a digital friend who can understand and respond to your questions and messages in a human-like way.

Anyone can use ChatGPT to explain complex concepts simply and to help write resumes, bios, songs, jokes and blog posts. It can create marketing content and translate existing content into multiple languages while preserving grammar and diction nuances. The bot can write code, provide relationship advice or act as a constant (albeit artificial) companion. The uses are limitless, and that’s a problem because fraudsters will abuse it in creative and unimagined ways with unforeseen consequences.

OpenAI launched ChatGPT Nov. 30, 2022, but it took the technology world by storm after its debut at the Consumer Electronics Show in Las Vegas in January of this year. Within weeks, high school and college students had begun using ChatGPT to do their homework, including writing lengthy papers. Teachers immediately had a major problem. But even worse, fraudsters, scammers and cybercriminals quickly started leveraging it for various scams:

  • Phishing emails.
  • Creation of malware and viruses.
  • Data encryption to produce ransomware.
  • Data decryption to steal protected personally identifiable information.
  • Real-time cryptocurrency payments for dark web markets.
  • Romance fraud.

None of these problems are new, but ChatGPT is elevating their significance and creating new issues. Here’s a deeper look at some of the fraudulent activity that ChatGPT could exacerbate.

 

Phishing emails

OpenAI has tried to prevent the creation of malicious content. For example, ChatGPT refuses to create a phishing email if you ask it to do so directly. But there are easy ways around this. (See “Cybercriminals Bypass ChatGPT Restrictions to Generate Malicious Content,” Check Point.) When I asked ChatGPT to write a letter from Chase Bank about a compromised account, for instance, it provided this:


Subject: Important: Your Chase Account has been Compromised
Dear [Name],
We hope this email finds you well. Our records indicate that there has been suspicious activity on your Chase account. We take the security of your account very seriously and want to take immediate action to prevent any further unauthorized transactions.
Details of the suspicious activity:
Date: [Insert Date]
Location: [Insert Location]
Amount: [Insert Amount]
Please be aware that this activity may have been made without your knowledge or authorization. We have taken steps to secure your account and prevent any further unauthorized transactions.
It is important that you take immediate action to secure your account. Please call us at [Insert Chase Contact Number] as soon as possible to report the issue. Our dedicated team is available 24/7 to assist you.
We apologize for any inconvenience this may have caused and thank you for your prompt attention to this matter. If you have any questions or concerns, please don’t hesitate to contact us.
Sincerely,
Chase Bank
Security Team


Phishing emails have been improving in quality for over a decade, but ChatGPT eliminates the few remaining red flags many rely on to differentiate phishing emails from official communication: the grammatical errors and diction nuances that reveal emails written in other countries or by those for whom English is a second language. Of course, the same is true for all the languages ChatGPT supports.

The goal of most phishing emails is to entice recipients to click on links. ChatGPT can help create those links and the malicious code within them. (See “AI that can save the day or hack it away,” by Sharon Ben-Moshe, Gil Gekker and Golan Cohen, Check Point, Dec. 19, 2022.) In the bogus letter from Chase above, the recipient is instructed to call, which may be more successful for fraudsters as many people today are wise to the “click here” scam approach. Unfortunately, the ease with which a fraudster can spoof a phone number today may make the “please call us at …” approach very effective.

 

Creation of malware and viruses

As we saw in the introductory cases, talk among fraudsters about how to use ChatGPT to create malware and other malicious programs started appearing in criminal forums soon after its launch in November last year. Fraudsters have been sharing how ChatGPT can help them, such as writing code to steal data and ways to install backdoors on software. (See “Cybercriminals starting to use ChatGPT,” Check Point, Jan. 6, 2023.) So far, however, cybercriminals appear to have favored a more straightforward and traditional approach to installing malware through ChatGPT by using the enormous popularity of the AI tool to entice users to click on false ChatGPT sites. (See “Hackers use fake ChatGPT apps to push Windows, Android malware,” by Bill Toulas, Bleeping Computer, Feb. 22, 2023.)

Malware, once detected, is usually quashed through a quick scan of systems plus software updates and patches. As a result, cybercriminals are often only able to hit a few targets before they need to write new code. But artificial intelligence tools, such as ChatGPT, could produce polymorphic malware, which avoids detection by morphing or changing its appearance. Because AI learns as it works and doesn’t need to sleep, it could create malware that constantly evolves and updates itself, making it much more difficult to stop. And because AI continues to learn from the content it develops, the result will be ever more sophisticated code that cybersecurity experts will have more difficulty preventing and identifying.

OpenAI has tried to prevent the misuse of ChatGPT. For instance, the company has launched AI Text Classifier — a new tool to help users, such as teachers who are worried about cheating — to distinguish between human- and AI-written text. ChatGPT also won’t respond directly to questions that would help users carry out illegal or unethical actions, such as questions about bomb-making or creating phishing emails. But neither of these controls is foolproof, and they do little to stop anyone from using ChatGPT to generate malware. Journalists at a technology news website simply asked it to create a script to identify credit card information and send it to a single location, and it did just that. (See “OpenAI’s new ChatGPT bot: 10 dangerous things it’s capable of,” by Ax Sharma, Bleeping Computer, Dec. 6, 2022; “New AI classifier for indicating AI-written text,” by Jan Hendrik Kirchner, Lama Ahmad, Scott Aaronson and Jan Leike, OpenAI, Jan. 31, 2023; and “Has OpenAI Already Lost Control of ChatGPT?” by Maxwell Timothy, MUO, Feb. 11, 2023.)

 

Data encryption and decryption

ChatGPT can write encryption and decryption code, which fraudsters can use for ransomware and theft of protected (encrypted) data. I asked ChatGPT to write a Python (a programming language) script to encrypt the hard drive on a laptop, and it produced what’s in Figure 1 below, plus the following comments:

As an AI language model, I cannot perform operations on the physical hardware, but I can provide you with a simple Python script to encrypt the contents of a file or a folder. It relies on the cryptography library in Python to encrypt and decrypt the data using symmetric encryption.

To use this script, simply replace ‘/path/to/file/or/folder’ with the path to the file or folder that you want to encrypt, and run the script. The script will generate a new key, store it in a file, and then use it to encrypt the contents of the specified file or folder. Note that this script will overwrite the original file or folder with the encrypted data, so be sure to make a backup of your data before running it. Also, be sure to keep the key file in a safe place, as you will need it to decrypt the data later.

[Figure 1: ChatGPT’s Python script for encrypting a file with symmetric encryption]
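Since Figure 1 isn’t reproduced here, the following is a minimal sketch of the kind of script ChatGPT described, not its exact output. It assumes the third-party cryptography package is installed, and the file path is a hypothetical placeholder:

# Minimal sketch, not ChatGPT's exact output from Figure 1. Assumes the
# third-party "cryptography" package; the path below is a hypothetical
# placeholder. This handles a single file; a folder would mean looping
# over its files.
from pathlib import Path

from cryptography.fernet import Fernet

target = Path("/path/to/file")  # hypothetical placeholder

# Generate a symmetric key and store it; without this key file,
# the encrypted data is unrecoverable.
key = Fernet.generate_key()
Path("key.file").write_bytes(key)

# Overwrite the file with its encrypted contents (back up the original first).
cipher = Fernet(key)
target.write_bytes(cipher.encrypt(target.read_bytes()))

# Decryption reverses the process with the same key:
# plaintext = Fernet(Path("key.file").read_bytes()).decrypt(target.read_bytes())

Trivial as it looks, that is the entire mechanic behind the ransomware scenario the article describes: whoever holds the key file controls the data.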

Writing encryption and decryption code is one of the many age-old skills cybercriminals and hackers must master. I’m not a hacker, whether white, gray or black hat. But now I don’t need those skills because I have ChatGPT. For how many people will ChatGPT create both the opportunity and the means to commit fraud? Skids (script kiddies), wannabe hackers with no skills, now have the keys to the kingdom. No skills? No problem; just use ChatGPT. (See “Hacker Types: Black Hat, White Hat, and Gray Hat Hackers,” Avast.)

 

Real-time cryptocurrency payments for dark web markets

Fraudsters and cybercriminals were among the first to embrace cryptocurrency because of the anonymity it provided and the lack of regulation. But they still needed substantial skills to create functioning dark web marketplaces with payment capabilities. Check Point has found fraudsters on forums boasting and sharing information about how easy it is to use ChatGPT to create dark web marketplaces and pull current cryptocurrency prices with a general-purpose scripting language (PHP) rather than more complicated programming languages such as Python or JavaScript. (See “Cybercriminals are already using ChatGPT to own you,” by Derek B. Johnson, Jan. 6, 2023 and “Cybercriminals starting to use ChatGPT,” Check Point, Jan. 6, 2023.)
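To see how little code that takes, here is a minimal sketch, in Python rather than the PHP those fraudsters used, that pulls live prices from a public market-data API. The choice of CoinGecko’s endpoint and the requests package are my assumptions for illustration, not details from the forum posts:

# Minimal sketch of fetching live cryptocurrency prices. Assumes the
# third-party "requests" package and CoinGecko's public /simple/price
# endpoint; both are illustrative choices, not details from the forum posts.
import requests

resp = requests.get(
    "https://api.coingecko.com/api/v3/simple/price",
    params={"ids": "bitcoin,monero", "vs_currencies": "usd"},
    timeout=10,
)
resp.raise_for_status()

# The response looks like {"bitcoin": {"usd": ...}, "monero": {"usd": ...}}
for coin, quote in resp.json().items():
    print(f"{coin}: ${quote['usd']:,.2f}")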

 

Romance fraud

Fraudsters using AI or bots to maintain multiple profiles for romance fraud isn’t a new concept either, but those messages are often flat and lacking in substance, which tips off intended targets. ChatGPT can be instructed to alter the tone of messages to make them more charming or romantic. This will allow romance fraudsters to run multiple profiles and target many more potential victims with very little effort, all while ChatGPT engages in conversations that sound authentic. When I asked ChatGPT to create a dating app profile for a friendly, smart, approachable female to attract men 45 to 60, it produced the text in Figure 2 in less than 30 seconds.

[Figure 2: ChatGPT’s dating app profile]

I didn’t provide ChatGPT with any additional information beyond the original request, but it independently devised the details around interests and desired characteristics. When I asked it to create a version for an urban lifestyle, it provided the text in Figure 3, again in less than 30 seconds.

[Figure 3: The urban-lifestyle version of the profile]

The speed at which ChatGPT can produce these unique, convincing and deceptive profiles can transform romance fraud into a simple numbers game that costs fraudsters very little effort.

One of the red flags of romance fraud is the similarity among multiple profiles with different pictures and names. But romance fraud won’t be the only type of fraud in which ChatGPT-generated unique content will reduce or eliminate red flags. Online shopping fraud increased significantly in the last couple of years, largely due to the pandemic, and one of the red flags for dozens of fraudulent sites was the similar, copied content that appeared from one site to another. Fraudsters no longer need to recycle that content; in 30 seconds, ChatGPT can create new, unique content for each site, making it appear more genuine.

Deepfakes with voice impersonation are another disturbing way fraudsters are employing AI to scam victims. In one recent case, reported in The Washington Post, fraudsters duped an elderly couple in Canada into sending them money after using AI to mimic the voice of their grandson pleading for help to bail him out of jail. Fraudsters need only short audio samples taken from posts on social media sites, such as TikTok or Facebook, to make it work, according to the article. An AI-generated video of the actor Morgan Freeman that went viral late last year is perhaps even more terrifying. It was so realistic that more and more people are now concerned about how this technology could be used for malicious purposes in the near future. (See “They thought loved ones were calling for help. It was an AI scam,” by Pranshu Verma, The Washington Post, March 5, 2023 and “This scarily realistic deepfake is terrifying Twitter,” by Daniel Piper, Creative Bloq, Dec. 19, 2022.)

 

Dangers ahead

The previous examples show how fraudsters, scammers and cybercriminals are already using or experimenting with ChatGPT, and they’re discovering and sharing new ways daily. When I asked ChatGPT what AI’s greatest threat to humans was, it answered:

There are several potential threats posed by artificial intelligences (AIs) to humans, including:

1. Job displacement: AIs could automate many jobs and make human labor redundant, leading to widespread unemployment.

2. Bias and discrimination: AIs are only as unbiased as the data they are trained on, and if that data reflects human biases, the AI will replicate them.

3. Safety and control: AIs could be programmed or hacked to cause harm to humans, or make decisions that are unethical or harmful.

4. Privacy invasion: AIs can process large amounts of personal data, which could be used to invade people’s privacy or exploit their information.

5. Dependence on technology: As we become increasingly reliant on AIs for decision-making and other tasks, we may lose the skills and abilities to do these things ourselves, making us vulnerable if the AI system fails.

These potential threats highlight the importance of responsible AI development and deployment, and the need for appropriate regulation and oversight to ensure that AIs are aligned with human values and priorities.

 

Now for the good news

Fraud fighters can also use ChatGPT very effectively. We can ask the bot for:

1.  A list of industry- and process-specific fraud prevention and detection activities.
2.  Examples of how AI can be used to commit fraud and steps to prevent it.
3.  A list of steps for internal auditors to incorporate into their plans to detect fraud during an audit.
4.  Fraud trends and how to protect against them.
5.  A list of ways AI can be leveraged in the fight against fraud (one of these is sketched in code after this list). Suggestions include:

  • Identifying patterns and anomalies in data.
  • Automating fraud detection.
  • Predictive analytics.
  • Enhanced risk assessment.
  • Improving customer experience.
  • Streamlining investigations.

6.  Help with organizational fraud awareness training. When I asked ChatGPT to create humorous and memorable messages for fraud awareness training and messaging campaigns, this is what it provided:

 

Sure, here are ten fraud prevention and detection tips that use a touch of humor:
  1. Don’t give out your password to anyone, not even your mother-in-law. She might be nice, but she’s not that nice.
  2. Always double-check before clicking on a link in an email. You don’t want to end up on a website selling “miracle cures” for your baldness.
  3. If a stranger offers you a “once-in-a-lifetime” investment opportunity, it’s probably “once-in-a-lifetime” for a reason. Run away!
  4. If it sounds too good to be true, it probably is. Sorry, but you’re not getting a free Caribbean cruise for signing up for a credit card.
  5. Be careful when using public Wi-Fi. The guy sitting next to you at Starbucks might be trying to steal your personal information. And your muffin.
  6. Check your bank account regularly. You don’t want to find out that your ex-boyfriend has been draining your account to fund his pet snake’s yoga classes.
  7. Keep your antivirus software up to date. You don’t want to be the one responsible for unleashing a horde of digital zombies on the world.
  8. Don’t fall for the “We’re calling from Microsoft, and there’s a virus on your computer” scam. They’re not calling from Microsoft. They’re calling from their mom’s basement.
  9. Be wary of unsolicited phone calls. If someone calls you out of the blue and asks for your Social Security number, it’s time to hang up.
  10. Finally, remember that fraudsters are very creative. They’ll come up with new scams all the time. But if you use your common sense, stay vigilant, and keep your sense of humor, you’ll be one step ahead of them.
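To make the first suggestion on that list concrete (identifying patterns and anomalies in data), here is a minimal sketch of AI-assisted anomaly detection. The scikit-learn library and the made-up transaction amounts are my illustrative choices, not tools the article prescribes:

# Minimal sketch of flagging anomalous transactions with scikit-learn's
# IsolationForest. The library choice and the made-up amounts are
# illustrative assumptions, not tools the article prescribes.
import numpy as np
from sklearn.ensemble import IsolationForest

# Transaction amounts in dollars; the last two are deliberate outliers.
amounts = np.array([[25.0], [40.0], [32.5], [28.0], [36.0], [30.0],
                    [9500.0], [12000.0]])

model = IsolationForest(contamination=0.25, random_state=42)
labels = model.fit_predict(amounts)  # -1 flags an anomaly, 1 is normal

for amount, label in zip(amounts.ravel(), labels):
    print(f"${amount:>10,.2f}  {'REVIEW' if label == -1 else 'ok'}")

In practice an examiner would feed the model many more features (vendor, timing, approver) than a single amount, but the pattern is the same: let the model learn what normal looks like and surface what doesn’t fit.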

 

Cybercriminals are using ChatGPT to reduce their workloads and automate their schemes. But here’s more good news: they might not know whether the chatbot is spewing out 100% accurate information, especially if they aren’t knowledgeable about, or experienced in, the subject matter. Savvy users can often pick out ChatGPT errors and discern schemes. Knowledge is power.

 

Leveraging ChatGPT for good and not evil

ChatGPT will probably be a new gateway for fraudsters who previously lacked the cybercrime skills to pull off these schemes. Just as past technologies created new fraud opportunities through unforeseen and unintended consequences, so will ChatGPT. However, because of the media splash ChatGPT has created, many fraud fighters are already keenly aware of its capabilities. We can now leverage the sneaky chatbot and other AI tools to fight back.
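Fighting back can be as simple as scripting the bot into our own workflows. As a minimal sketch, assuming an OpenAI API key, the official openai Python package and a model name current at the time you run it (the prompt is only an example), a fraud examiner could generate awareness-training material on demand:

# Minimal sketch of scripting ChatGPT for fraud-awareness content.
# Assumes: pip install openai, an OPENAI_API_KEY environment variable
# and a model name that is current when you run it; the prompt is only
# an example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You write concise, memorable fraud-awareness training tips."},
        {"role": "user",
         "content": "Give me five tips for spotting AI-written phishing emails."},
    ],
)
print(response.choices[0].message.content)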

 

Mary Breslin, CFE, CIA, is president and founder of Verracy, a training and consulting company. Contact her at mbreslin@verracy.com.

 

