As artificial intelligence continues to transform how we do business, cybercriminals are finding equally innovative ways to weaponize it. Over the past few weeks, security researchers from Intel 471 and Proofpoint have uncovered a disturbing trend: AI-powered phishing kits are now being sold openly on Telegram, many of them boasting integrations with ChatGPT-style language models and LinkedIn scraping capabilities.
This isn’t theoretical anymore. The era of scalable, hyper-personalized social engineering is here, and it’s cheap, easy to access, and alarmingly effective.
Traditionally, phishing kits were rudimentary: templated emails with poor grammar, vague threats, and minimal personalization. The success of these campaigns relied on sheer volume. But AI changes the equation.
Now, phishing kits are leveraging generative AI to craft believable, context-aware emails in multiple languages. Some kits even use scraped LinkedIn data to customize messages based on the target’s company, role, and connections, turning what used to be a blunt instrument into a precision tool.
We’re seeing:
And the barrier to entry? Practically nonexistent. Some kits are subscription-based or even offered for free with “premium” add-ons, mirroring legitimate SaaS models.
One of the more surprising insights from the research is the sheer volume of phishing activity occurring on Telegram. Threat actors are using the platform to market, sell, and support their phishing kits, complete with changelogs, walkthrough videos, and even customer support groups.
In these forums, buyers can select from templates targeting Office 365, banking portals, or HR login screens. Many of the AI-powered kits now include easy-to-use interfaces for training language models on specific industries or companies.
This shift signals a dangerous democratization of capability. It no longer takes a skilled attacker to launch a sophisticated phishing campaign, just a few dollars and a Telegram account.
Whether you’re securing an MSP stack, defending an enterprise network, or managing threat detection across multiple tenants, the game has changed. AI has drastically lowered the bar for launching convincing, large-scale phishing attacks, and legacy defenses aren’t built to handle it.
Key takeaways for security leaders:
At Seceon, we’re closely tracking how attackers are adapting to the AI era. Our platform is designed to detect what others miss, even when initial access looks “normal.”
What makes our approach different:
We’re not just watching login pages; we’re watching what happens after.
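To make the idea of post-login detection concrete, here is a minimal sketch of behavioral scoring after authentication. This is an illustrative toy, not Seceon’s actual detection logic: the event fields, baseline structure, heuristics, and point values are all assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class SessionEvent:
    user: str
    country: str          # geolocation of the session
    hour: int             # 0-23, hour the session started
    downloads: int        # files pulled in the first hour of the session
    new_inbox_rules: int  # mail-forwarding rules created during the session

def risk_score(event: SessionEvent, baseline: dict) -> int:
    """Score post-login behavior against a per-user baseline.

    Even a phishing victim's "successful" login can be flagged by what
    happens next. Heuristics and thresholds here are illustrative.
    """
    score = 0
    if event.country not in baseline["countries"]:
        score += 40  # session from a country never seen for this user
    lo, hi = baseline["active_hours"]
    if not (lo <= event.hour <= hi):
        score += 20  # activity far outside the user's normal hours
    if event.downloads > baseline["typical_downloads"] * 10:
        score += 30  # bulk data pull immediately after sign-in
    if event.new_inbox_rules > 0:
        score += 30  # auto-forward rules are a classic BEC follow-up

    return score

# Hypothetical baseline for one user, learned from historical activity.
baseline = {"countries": {"US"}, "active_hours": (8, 18), "typical_downloads": 5}

suspicious = SessionEvent("j.doe", country="RO", hour=3,
                          downloads=120, new_inbox_rules=2)
print(risk_score(suspicious, baseline))  # → 120
```

The point of the sketch is that none of these signals depend on the phishing email itself; they fire after credentials are already in the attacker’s hands, which is exactly where AI-polished lures defeat content-based filters.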
AI isn’t just transforming how businesses operate; it’s redefining the threat landscape. Phishing kits that once took hours to build now take minutes. And they’re more believable than ever.
The takeaway isn’t fear, it’s awareness. If we understand how attackers evolve, we can stay a step ahead. It’s time to stop relying on legacy assumptions and start preparing for a world where threat actors are just as agile, and just as AI-enabled, as we are.