How Unrestricted AI Models Are Weaponizing Web3 Attacks
Unrestricted AI models are weaponizing Web3 attacks through fake support, social engineering, and malicious code generation. Fine-tuning of open-source models has produced an entire arsenal, including WormGPT, FraudGPT, GhostGPT, and DarkBERT, forcing a dramatic shift in security strategies.
Supply Chain Attacks via AI Tools: The Cursor Case and Malicious npm Packages
We have already reviewed the interim numbers on Web3 security incidents, as well as the key trends in attack vectors. One of the most critical factors behind this growth and transformation, however, is the advancement of the tools in attackers' hands. Although AI can be an incredibly powerful tool in crypto trading and has already shown impressive results where it has been integrated, its darker side now demands more caution than ever.
One manifestation of this risk is supply chain compromise. SlowMist describes a case in which a project developer lost control over a smart contract after malicious code was injected through a tampered AI-assisted development tool. The affected developer installed a copy of Cursor purchased on Taobao. The installation pulled in third-party packages (sw-cur, aiide-cur, and sw-cur1) that deeply modified the behavior of the development environment.
After activation, the packages embedded a backdoor into the local application, opened a remote-control channel, intercepted commands, and injected arbitrary code fragments. The attackers inserted into the contract a hardcoded address that had permission to access the funds. The developer claimed he had not written this line manually, and the investigation confirmed that the change was made without his knowledge.
However, since the commit came from his account, the attribution of responsibility became legally ambiguous. According to an internal analysis, this attack chain affected more than 4,200 developers, primarily on macOS, and was distributed through so-called "cheap AI access" offerings disguised as IDE assistants.
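In this case, the injected line was only discovered after the fact. A lightweight pre-commit or CI guard can surface this class of tampering much earlier. The sketch below is a minimal example of that idea; the script name, directory layout, and allowlist contents are assumptions for illustration, not details from the SlowMist report. It fails the run whenever a Solidity source contains an address literal the team has not explicitly approved.

```typescript
// check-addresses.ts: a minimal sketch of a pre-commit/CI guard (hypothetical file names).
// It scans Solidity sources for hardcoded 20-byte address literals and fails the run
// if any of them is not on an explicitly approved allowlist.
import { readFileSync, readdirSync } from "fs";
import { join } from "path";

// Addresses the team has deliberately hardcoded (multisig, known routers, etc.),
// stored lowercase. The zero address is listed purely as an example entry.
const ALLOWLIST = new Set<string>([
  "0x0000000000000000000000000000000000000000",
]);

const ADDRESS_RE = /0x[0-9a-fA-F]{40}/g;

function scanDir(dir: string): string[] {
  const findings: string[] = [];
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) {
      findings.push(...scanDir(path));
    } else if (entry.name.endsWith(".sol")) {
      const source = readFileSync(path, "utf8");
      for (const match of source.match(ADDRESS_RE) ?? []) {
        if (!ALLOWLIST.has(match.toLowerCase())) {
          findings.push(`${path}: unexpected hardcoded address ${match}`);
        }
      }
    }
  }
  return findings;
}

const findings = scanDir("contracts"); // assumed source directory
if (findings.length > 0) {
  console.error(findings.join("\n"));
  process.exit(1); // block the commit / fail the CI job
} else {
  console.log("No unexpected hardcoded addresses found.");
}
```

Wired into a pre-commit hook or CI job, a check like this forces any address introduced by a compromised toolchain to be acknowledged by a human before it can reach a deployment.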
Unrestricted LLMs as a New Threat – Generative Models in Web3 Attacks
The second threat category involves LLMs stripped of safety filters and ethical constraints. These models, referred to as unrestricted, are used to generate malicious content and code, phishing campaigns, and fraudulent smart contracts.
WormGPT and FraudGPT – Content Generation Tools for Offensive Operations
WormGPT is a modified version of GPT-J 6B trained on scam-related datasets. It is used to create phishing materials, infected code, and fake documentation. FraudGPT is positioned as an advanced variant aimed at constructing full-scale fake projects. The model generates whitepapers, landing pages, Discord chats, and wallet connection interfaces, and can adapt its writing style to the target audience. FraudGPT has been actively used to mimic MetaMask and Trust Wallet interfaces, create tokens with malicious logic, and distribute fake KYC notifications on behalf of centralized platforms.
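On the defensive side, front-end teams sometimes add an origin check before any wallet interaction, so that a legitimate bundle re-hosted on a look-alike domain refuses to run. The sketch below is a minimal illustration of that idea; the domain list is hypothetical, and a clone rewritten from scratch will simply omit the check, so it complements rather than replaces domain monitoring and user education.

```typescript
// origin-guard.ts: an illustrative sketch; the domain list is hypothetical.
// A dApp front end can refuse to start a wallet connection when it is served
// from an origin the team does not control, which blunts wholesale re-hosting
// of the genuine bundle on a look-alike phishing domain.
const OFFICIAL_ORIGINS = new Set<string>([
  "https://app.example-dapp.xyz", // hypothetical production domain
  "http://localhost:3000",        // local development
]);

export function assertTrustedOrigin(): void {
  const origin = window.location.origin;
  if (!OFFICIAL_ORIGINS.has(origin)) {
    // Do not continue silently: a cloned UI on a look-alike domain would
    // otherwise walk the user straight into a malicious approval.
    throw new Error(`Refusing wallet connection from untrusted origin: ${origin}`);
  }
}

// Usage, before any account or signature request:
//   assertTrustedOrigin();
//   await window.ethereum.request({ method: "eth_requestAccounts" });
```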
GhostGPT and Polymorphic Malware – Code Generation with Variable Signatures
GhostGPT specializes in generating advanced malicious scenarios. It has been used to create contracts with non-revocable admin privileges, dynamic logic, and embedded asset-drain mechanisms. Of particular importance are polymorphic stealers: malware that changes its signature at each generation stage, evading conventional signature-based detection. GhostGPT has also been used to produce audio files that mimic the voices of executives, which were deployed in business email compromise (BEC) attacks.
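The point about polymorphism is easy to demonstrate: two payloads with identical behavior but trivially different identifiers produce completely different hashes, so a signature database keyed on one never matches the other. The snippet below uses harmless placeholder strings standing in for real payloads to illustrate the effect.

```typescript
// polymorphism-demo.ts: illustrative only; the "payloads" are harmless strings.
// It shows why fixed signatures (file hashes, byte patterns) miss regenerated
// variants: a trivial rename changes every hash while behavior stays identical.
import { createHash } from "crypto";

// Two functionally equivalent snippets, as an LLM might re-emit them per victim.
const variantA = `const k = localStorage.getItem("seed"); send(k);`;
const variantB = `const walletSeed = localStorage.getItem("seed"); send(walletSeed);`;

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

console.log("variant A:", sha256(variantA));
console.log("variant B:", sha256(variantB));
// Different hashes, same behavior. A signature database keyed on either variant
// never matches the other; detection has to look at what the code does at
// runtime (seed storage access, clipboard hooks, approval calls), not its bytes.
```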
DarkBERT and Targeted Attacks via Darknet
DarkBERT began as an academic initiative: it was developed by KAIST and S2W as a research model trained on darknet corpora. In attackers' hands, it is used to tailor content to specific targets. The model allows attackers to collect open-source information about project teams, previously published audits, and geographic activity, and to build personalized phishing chains, including simulated internal correspondence, insider alerts, and marketing communications.
Venice.ai as a Gateway to a Fleet of Unfiltered LLMs
A critical component of this landscape is Venice.ai – a platform that offers access to numerous unrestricted LLMs, providing tools for generating, testing, and deploying malicious prompts. The platform enables the simulation of thousands of user interaction scenarios, creates attack content tailored to specific channels (Telegram, Discord, email), and uses feedback to improve effectiveness. It also offers integration with Telegram bots that automate data collection, instruction delivery, simulated customer support, and the distribution of fake "verification" pages.
Structural Features of the Threats and the Limits of Traditional Defensive Methods
LLM-based attacks possess a set of characteristics that make them particularly dangerous.
- First, they are scalable: the model can generate tens of thousands of unique texts and interface clones without using templates.
- Second, they are adaptive: upon receiving feedback from the victim or the system, the model can change its attack logic in real time.
- Third, they are difficult to attribute: the origin of the generated text cannot be proven without special watermarking, and the models can be fine-tuned on private logs of past attacks.
Blacklists, keyword-based filtering, and behavioral heuristics are not effective against LLM-generated content. Furthermore, even fraud detection systems powered by machine learning – especially those trained on legacy phishing datasets – cannot detect dialog-based, grammatically correct, and lexically unique attacks produced by these models.
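A toy comparison makes the limitation concrete. The blocked terms and both messages below are invented for illustration: a template-style scam trips a keyword blacklist immediately, while an LLM-written lure that conveys the same request in clean, context-aware prose passes untouched.

```typescript
// keyword-filter-demo.ts: illustrative sketch of why blacklist filtering breaks down.
// The blocked terms and both sample messages are invented for demonstration.
const BLOCKED_TERMS = ["seed phrase", "private key", "urgent!!!", "free airdrop"];

function flaggedByBlacklist(message: string): boolean {
  const lower = message.toLowerCase();
  return BLOCKED_TERMS.some((term) => lower.includes(term));
}

// A crude, template-style scam trips the filter immediately:
const templateScam =
  "URGENT!!! Verify your seed phrase now to claim your free airdrop!";

// An LLM-written lure makes the same ask in clean, unique prose and sails through:
const llmScam =
  "Hi Maria, following yesterday's audit call we rotated the treasury signer config. " +
  "Could you re-authorise your wallet on the staging portal before 6 pm so the " +
  "deployment window isn't blocked? The link is in the usual channel.";

console.log(flaggedByBlacklist(templateScam)); // true
console.log(flaggedByBlacklist(llmScam));      // false: nothing on the list appears
```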
Conclusions
It was fairly evident that AI would become a double-edged sword, used to produce malicious content, code, and entire attack scenarios. To be candid, offensive strategies have always been ahead of defensive ones, but with LLMs the scale is significantly larger.
All industries must realistically assess the new threat landscape and aim not merely to catch up, but to act proactively. This includes developing advanced systems for generating and validating watermarks in LLM content, restricting access to LLM-based tools in production pipelines, isolating development environments from external dependencies, and fully reevaluating how smart contract behavior is verified in light of the possibility of automated generation.
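As one concrete, if modest, starting point for the dependency-isolation item above, a CI step can refuse to build whenever the lockfile contains a package nobody on the team has approved. The sketch below assumes an npm v2/v3 lockfile layout and a team-maintained approved-deps.json allowlist; both file names are hypothetical.

```typescript
// audit-deps.ts: a minimal sketch assuming an npm v2/v3 lockfile layout and a
// team-maintained allowlist file; "approved-deps.json" is a hypothetical name.
import { readFileSync } from "fs";

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
const approved = new Set<string>(
  JSON.parse(readFileSync("approved-deps.json", "utf8")) as string[],
);

// npm v2/v3 lockfiles key installed packages as "node_modules/<name>" (possibly nested).
const installed = Object.keys(lock.packages ?? {})
  .filter((key) => key.startsWith("node_modules/"))
  .map((key) => key.split("node_modules/").pop()!);

const unexpected = installed.filter((name) => !approved.has(name));
if (unexpected.length > 0) {
  console.error("Unapproved packages in lockfile:", unexpected.join(", "));
  process.exit(1); // fail the CI job until a human reviews the new dependency
}
console.log(`All ${installed.length} locked packages are on the allowlist.`);
```

Combined with pinned versions and disabling lifecycle scripts during installs, a check like this narrows the window in which a tampered tool can quietly add new dependencies.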
Stay with us to keep up with the latest updates and opportunities in crypto, blockchain, and DeFi.