As artificial intelligence continues to transform how we do business, cybercriminals are finding equally innovative ways to weaponize it. Over the past few weeks, security researchers from Intel 471 and Proofpoint have uncovered a disturbing trend: AI-powered phishing kits are now being sold openly on Telegram, many of them boasting integrations with ChatGPT-style language models and LinkedIn scraping capabilities.
This isn’t theoretical anymore. The era of scalable, hyper-personalized social engineering is here—and it’s cheap, easy to access, and alarmingly effective.
Traditionally, phishing kits were rudimentary—templated emails with poor grammar, vague threats, and minimal personalization. The success of these campaigns relied on sheer volume. But AI changes the equation.
Now, phishing kits are leveraging generative AI to craft believable, context-aware emails in multiple languages. Some kits even use scraped LinkedIn data to customize messages based on the target’s company, role, and connections—turning what used to be a blunt instrument into a precision tool.
We’re seeing:

- AI-generated lures that are fluent, context-aware, and available in multiple languages
- Personalization pulled from scraped LinkedIn data: company, role, even mutual connections
- Ready-made templates for Office 365, banking portals, and HR login screens
And the barrier to entry? Practically nonexistent. Some kits are subscription-based or even offered for free with “premium” add-ons—mirroring legitimate SaaS models.
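To see why this matters, here is a deliberately simplified sketch of the mechanics described above: a generic template merged with scraped profile fields. The field names and template are hypothetical, and real kits layer an LLM rewrite on top of this step, but the economics are the point: once the data is scraped, each “personalized” message costs nothing extra to produce.

```python
# Illustration only: how scraped profile fields turn a generic template into a
# targeted lure. The field names and template text are hypothetical; actual
# kits pair this merge step with an LLM call that rewrites the text per target.
from string import Template

# Hypothetical record resembling scraped LinkedIn profile data.
target = {
    "name": "Dana Reyes",
    "company": "Acme Logistics",
    "role": "Payroll Manager",
    "connection": "Sam Ortiz",
}

lure = Template(
    "Hi $name, $connection mentioned you handle payroll at $company. "
    "We flagged an issue with this month's run for the $role team; "
    "please review the attached report before 5pm."
)

print(lure.substitute(target))
```

The template itself isn’t the threat. What changed is that the per-target cost of a believable, context-aware message has collapsed to near zero, so attackers no longer have to choose between volume and quality.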
One of the more surprising insights from the research is the sheer volume of phishing activity occurring on Telegram. Threat actors are using the platform to market, sell, and support their phishing kits—complete with changelogs, walkthrough videos, and even customer support groups.
In these forums, buyers can select from templates targeting Office 365, banking portals, or HR login screens. Many of the AI-powered kits now include point-and-click interfaces for tuning their language models to specific industries or companies.
This shift signals a dangerous democratization of capability. It no longer takes a skilled attacker to launch a sophisticated phishing campaign—just a few dollars and a Telegram account.

Whether you’re securing an MSP stack, defending an enterprise network, or managing threat detection across multiple tenants, the game has changed. AI has drastically lowered the bar for launching convincing, large-scale phishing attacks, and legacy defenses aren’t built to handle it.
Key takeaways for security leaders:

- Assume lures will be fluent, personalized, and multilingual; “bad grammar” heuristics no longer work
- Expect sophisticated campaigns from low-skill actors, because quality no longer requires expertise
- Move detection beyond the inbox and the login page to post-authentication behavior
At Seceon, we’re closely tracking how attackers are adapting to the AI era. Our platform is designed to detect what others miss—even when initial access looks “normal.”
What makes our approach different is where we look: we’re not just watching login pages, we’re watching what happens after.
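As a concrete illustration of that idea, the sketch below correlates a first-seen login location with high-risk follow-on actions inside a short window. The event schema, action names, and 30-minute window are assumptions made for the example, not a description of Seceon’s actual detection logic.

```python
# Minimal sketch of post-login behavior correlation (illustrative only; field
# names, event types, and the 30-minute window are assumptions, not any
# vendor's production detection logic).
from datetime import datetime, timedelta

# Actions that matter far more when they follow a suspicious sign-in.
HIGH_RISK_ACTIONS = {"new_forwarding_rule", "mass_download", "oauth_consent_grant"}
WINDOW = timedelta(minutes=30)

def flag_sessions(events, known_locations):
    """Yield alerts when a login from a first-seen location is followed by a
    high-risk action within WINDOW. `events` must be sorted by timestamp."""
    recent_suspicious = {}  # user -> timestamp of last first-seen-location login
    for e in events:
        user, ts = e["user"], e["ts"]
        if e["type"] == "login" and e["geo"] not in known_locations.get(user, set()):
            recent_suspicious[user] = ts
        elif e["type"] in HIGH_RISK_ACTIONS:
            start = recent_suspicious.get(user)
            if start and ts - start <= WINDOW:
                yield {"user": user, "action": e["type"], "login_at": start, "at": ts}

# Example: a "normal" credential login from a new country, followed minutes
# later by an inbox forwarding rule change.
events = [
    {"user": "dana", "ts": datetime(2024, 5, 1, 9, 0), "type": "login", "geo": "RO"},
    {"user": "dana", "ts": datetime(2024, 5, 1, 9, 12), "type": "new_forwarding_rule"},
]
for alert in flag_sessions(events, known_locations={"dana": {"US"}}):
    print(alert)
```

Each event in isolation looks routine, which is exactly why phished credentials slip past legacy controls; it’s the sequence that gives the attacker away.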
AI isn’t just transforming how businesses operate—it’s redefining the threat landscape. Phishing kits that once took hours to build now take minutes. And they’re more believable than ever.
The takeaway isn’t fear—it’s awareness. If we understand how attackers evolve, we can stay a step ahead. It’s time to stop relying on legacy assumptions and start preparing for a world where threat actors are just as agile—and AI-enabled—as we are.
