Threat actors riding on the popularity of ChatGPT are peddling copycat hacker tools that offer chatbot services similar to the real generative artificial intelligence (AI) based app. The difference, however, is that these apps promote malicious activity. FraudGPT starts at $200 per month and goes up to $1,700 per year, with more than 3,000 confirmed sales and reviews reported for the product. Another similar AI-driven hacker tool, WormGPT, was outlined in detail in a report by SlashNext. Like ChatGPT, these emerging adversarial AI tools are large language models that generate human-like text based on the input they receive.
The tools “appear to be among the first indications that threat actors are building generative AI features into their tooling,” said John Bambenek, principal threat hunter at Netenrich, a cloud data analytics security company in Plano, Texas. “Before this, our discussion of the threat landscape has been theoretical.” FraudGPT — which in ads is touted as a “bot without limitations, rules, and boundaries” — is sold by a threat actor who claims to be a verified vendor on various underground Dark Web marketplaces, including Empire, WHM, Torrez, World, AlphaBay, and Versus.
In this blog, we’ll review these two fraud-driven chatbots, exploring the risks they pose and how your organization can protect itself.
Generative AI tools across the board give criminals the same core benefits they give technology professionals: speed and scale. Attackers can now generate phishing campaigns quickly and launch more of them simultaneously. Because human behavior is predictable, exploiting human vices, habits, and choices remains relatively simple. Even with sophisticated malware, hackers still need to get inside an individual's head, which is where phishing comes into play.
The primary use of FraudGPT and WormGPT remains helping attackers craft convincing phishing campaigns for business email compromise attacks. FraudGPT's proficiency at this is even touted in promotional material on the Dark Web, which demonstrates how the tool can produce a draft email enticing recipients to click a supplied malicious link. While ChatGPT can also be exploited as a hacker tool to write socially engineered emails, ethical safeguards limit that use. The growing prevalence of malicious AI tools like WormGPT and FraudGPT demonstrates that it isn't difficult to re-implement the same technology without those safeguards.
AI isn’t just employed in phishing and impersonation schemes. AI is currently used to create undetectable malware, locate targets and weaknesses, disseminate false information, and execute attacks with a high level of intelligence.
There are increasing reports of con artists using AI to carry out sophisticated attacks. They create voice clones, pose as real individuals, and conduct highly targeted phishing attempts. In China, a hacker used AI to generate a deepfake video, impersonating the victim’s acquaintance and convincing them to send money. Additionally, con artists have abused client identification procedures on crypto exchanges like Binance using deepfakes.
These AI-driven tools can also help attackers in other ways, such as writing malicious code, building phishing pages, and researching targets, leaked data, and vulnerabilities.
The pressing question is, “How can businesses safeguard themselves against the growing threat posed by AI?”
The answer lies in implementing a comprehensive security strategy that transcends conventional cybersecurity measures and acknowledges the human factor.
As phishing remains one of the primary ways cyber attackers gain initial entry into an enterprise system to conduct further malicious activity, it’s essential to implement conventional security protections against it. These defenses can still detect AI-enabled phishing and, more importantly, subsequent actions by the threat actor.
“Fundamentally, this doesn’t change the dynamics of what a phishing campaign is, nor the context in which it operates,” John Bambenek, principal threat hunter at Netenrich, says. “As long as you aren’t dealing with phishing from a compromised account, reputational systems can still detect phishing from inauthentic senders, i.e., typosquatted domains, invoices from free Web email accounts, etc.” He says that implementing a defense-in-depth strategy with all the security telemetry available for fast analytics can also help organizations identify a phishing attack before attackers compromise a victim and move on to the next phase of an attack.
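As a rough illustration of the kind of reputational check Bambenek describes, the sketch below flags inbound senders whose domain closely resembles a trusted one (typosquatting) and free-webmail senders pushing invoice-themed lures. The domain lists, keywords, and similarity threshold are illustrative assumptions, not the configuration of any particular product.

```python
# A minimal sketch of a reputational sender check, under assumed domain lists
# and thresholds. Not a reference implementation of any specific email gateway.
import difflib

TRUSTED_DOMAINS = {"example.com", "examplecorp.com"}        # assumed allowlist
FREE_WEBMAIL = {"gmail.com", "outlook.com", "yahoo.com"}    # common free providers
INVOICE_WORDS = {"invoice", "payment", "wire", "overdue"}   # simple lure keywords

def sender_risk(sender: str, subject: str) -> list[str]:
    """Return human-readable risk flags for one inbound message."""
    flags = []
    domain = sender.rsplit("@", 1)[-1].lower()

    # Typosquat check: near-miss similarity to a trusted domain, but not an exact match.
    for trusted in TRUSTED_DOMAINS:
        ratio = difflib.SequenceMatcher(None, domain, trusted).ratio()
        if domain != trusted and ratio > 0.85:
            flags.append(f"domain '{domain}' resembles trusted domain '{trusted}'")

    # Free-webmail invoice check: billing language coming from a consumer mailbox.
    if domain in FREE_WEBMAIL and INVOICE_WORDS & set(subject.lower().split()):
        flags.append(f"invoice-themed subject from free webmail domain '{domain}'")

    return flags

# Example: a lookalike of the assumed trusted domain gets flagged.
print(sender_risk("billing@examplecorp.co", "Overdue invoice attached"))
```

A production system would layer this with authentication checks such as SPF, DKIM, and DMARC; the point is simply that these signals don't depend on whether a human or an AI wrote the email body.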
“Defenders don’t need to detect every single thing an attacker does in a threat chain; they just have to detect something before the final stages of an attack — that is, ransomware or data exfiltration — so having a strong security data analytics program is essential,” Bambenek says. Other security professionals also promote using AI-based security tools, the numbers of which are growing, to fight adversarial AI, in effect fighting fire with fire to combat the increased sophistication of the threat landscape.
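To make the "detect something before the final stages" idea concrete, here is a minimal sketch of correlating telemetry per host and surfacing machines whose activity spans multiple attack stages, before ransomware or exfiltration ever appears. The event names, stage mapping, and alert threshold are assumptions for illustration only.

```python
# A minimal sketch of cross-telemetry correlation: group events by host and
# alert once several distinct attack stages are observed on the same machine.
from collections import defaultdict

# Map raw event types (as they might arrive from email, endpoint, and network
# telemetry) to coarse attack stages. These names are illustrative assumptions.
EVENT_STAGE = {
    "phishing_link_clicked": "initial_access",
    "macro_document_opened": "execution",
    "new_scheduled_task": "persistence",
    "unusual_admin_share_access": "lateral_movement",
}

def hosts_to_investigate(events, min_stages=2):
    """Return hosts showing activity across at least `min_stages` distinct stages."""
    stages_by_host = defaultdict(set)
    for host, event_type in events:
        stage = EVENT_STAGE.get(event_type)
        if stage:
            stages_by_host[host].add(stage)
    return {host: stages for host, stages in stages_by_host.items() if len(stages) >= min_stages}

telemetry = [
    ("ws-042", "phishing_link_clicked"),
    ("ws-042", "macro_document_opened"),
    ("ws-107", "new_scheduled_task"),
]
print(hosts_to_investigate(telemetry))  # ws-042 spans two stages -> worth a look
```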
Generative AI is still a developing technology, and its full potential is yet to be revealed. Even so, it already presents serious implications for your organization's cybersecurity. Learn more about the opportunities and threats of generative AI in these supporting blogs.
Four AI Attack Types Threatening Your Cybersecurity
ChatGPT-3 and now ChatGPT-4 — What Does it Mean for Cybersecurity?