News
Jul 27, 2023
A troubling development in the world of cybercrime has surfaced, as criminals are now employing AI-powered chatbots to automate hacking and data theft. One such chatbot, known as FraudGPT, has been discovered lurking on the Dark Web and Telegram. Its capabilities include generating realistic phishing emails, crafting cracking tools, and making purchases with stolen credit card information.
The creator of FraudGPT has been actively promoting the malicious tool on a hacking forum, touting it as a "cutting-edge tool" that will revolutionize the hacking community. The chatbot's ability to craft convincing emails, particularly for business email compromise (BEC) phishing campaigns, makes it a significant threat to organizations and individuals alike.
To gain access to FraudGPT, cybercriminals must pay a subscription fee of $200 per month, or $1,700 for a full year. Once in possession of the chatbot, attackers have an array of harmful capabilities at their disposal, from writing malicious code and creating undetectable malware to finding vulnerabilities, data leaks, and cardable sites.
The ease of use and rapid deployment of FraudGPT make it particularly concerning, as even inexperienced cybercriminals can launch large-scale attacks without the need for advanced technical skills. This reflects the dangerous potential of generative AI in the wrong hands.
Moreover, another AI-powered cybercrime tool called WormGPT has emerged on underground forums, further highlighting the growing threat AI poses in the hands of criminals. Both FraudGPT and WormGPT operate without ethical boundaries, raising significant alarm in the cybersecurity community. As these AI-powered chatbots become more prevalent, it is crucial for businesses and individuals to stay vigilant and implement robust security measures to safeguard against potential attacks.