July 20

ChatGPT Used By Cybercriminals to Expand Threats

Since ChatGPT's Release

In late 2022, OpenAI released the beta version of ChatGPT, a tool that generates text and code from user prompts. Since its release, cyber threat actors have been attempting to abuse the tool and touting its capabilities on cybercrime forums.

Just weeks after ChatGPT debuted, Israeli cybersecurity company Check Point demonstrated how the web-based chatbot, when used in tandem with OpenAI’s code-writing system Codex, could create a phishing email capable of carrying a malicious payload. Check Point threat intelligence group manager Sergey Shykevich told TechCrunch that he believes use cases like this illustrate that ChatGPT has the “potential to significantly alter the cyber threat landscape,” adding that it represents “another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities.”

More Deceptive Phishing Schemes

Specifically, threat actors are developing phishing schemes and could use ChatGPT to craft unique, more convincing email messages in multiple languages that prompt recipients to hand over valuable information. In the past, phishing emails were often easy to spot by their spelling errors and grammatical problems. With ChatGPT, bad actors can now generate professional-looking malicious messages without the tell-tale flaws that were common to earlier phishing scams.

According to the Harvard Business Review: “While more primitive versions of language-based AI have been open-sourced (or available to the general public) for years, ChatGPT is far and away the most advanced iteration to date. In particular, ChatGPT’s ability to converse so seamlessly with users without spelling, grammatical, and verb tense mistakes makes it seem like there could very well be a real person on the other side of the chat window. From a hacker’s perspective, ChatGPT is a game changer.”


Be More Vigilant About Phishing Attacks

Everyone should assume that malicious actors are already using ChatGPT's capabilities in attacks and other malicious schemes, which may be harder to detect thanks to the tool's polished output. With these risks in mind, organizations should be even more vigilant about legitimate-looking phishing emails and other social engineering schemes, among other threats.

Want to learn more about cyber threats and how to protect your organization in today's ever-changing world? Contact Workplace Technologies today. We look forward to hearing from you!