At Kaspersky’s Global Research and Analysis Team (GReAT), we closely track the activities of more than 900 APT-related groups and operations worldwide. Each year, we review the most significant and sophisticated campaigns to understand how the threat landscape is evolving.
In this edition of the KSB series, we look back at 2025, evaluate our previous predictions and outline what these trends may mean for the year ahead.
Review of last year’s predictions
Hacktivist alliances to escalate in 2025
Throughout the past year, we observed a noticeable escalation in hacktivist activity. These groups continue to be primarily motivated by sociocultural and political conflicts, seeking visibility and public impact rather than financial gain. A defining feature of their operations has been their public self-promotion and deliberate amplification: many campaigns were actively broadcast by the actors themselves, who regularly shared claims, evidence and narratives across online platforms.
Against the backdrop of rising tensions in the Middle East and North Africa, our DFI team analyzed over 11,000 posts published by 120 hacktivist groups and distributed across both surface and dark web channels. The analysis revealed that hashtags have evolved into a core coordination mechanism, serving simultaneously as campaign identifiers, organizational markers and claims of responsibility. DDoS attacks remained the most common technical method, reflecting their accessibility and strong demonstrative effect. At the same time, hacktivist operations were far from geographically confined: while rooted in regional conflicts, targeted victims spanned Europe, the United States, India, Vietnam and Argentina.
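The hashtag-frequency side of such an analysis can be sketched in a few lines. This is a minimal illustration of the approach, run over a handful of invented posts; the hashtags are hypothetical, not drawn from the real dataset.

```python
import re
from collections import Counter

HASHTAG = re.compile(r"#\w+")

def top_hashtags(posts: list[str], n: int = 3) -> list[tuple[str, int]]:
    """Count case-folded hashtags across posts and return the n most common."""
    counts = Counter(tag.lower() for post in posts for tag in HASHTAG.findall(post))
    return counts.most_common(n)

# Invented posts standing in for scraped hacktivist channel content
posts = [
    "Target down! #OpExample #ddos",
    "We claim this breach #opexample",
    "Proof inside #OpExample #leak",
]
print(top_hashtags(posts))  # "#opexample" dominates, marking one campaign
```

At scale, clustering posts by shared top hashtags is one simple way to surface the campaign identifiers and claims of responsibility described above.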
Taken together, these findings point to growing coordination, stronger narrative alignment and increasing cross-border activity among hacktivist actors, clear indicators of the escalating alliances anticipated in the forecast.
The IoT to become a growing attack vector for APTs in 2025
In 2025, APT actors conducted several large-scale campaigns in which IoT devices served both as targets and as core elements of intrusion chains. In Q3 2025, our statistics showed continued dominance of Mirai-based families targeting IoT devices, reflecting ongoing exploitation of poorly secured hardware. One Mirai botnet variant was observed targeting DVR devices by exploiting CVE-2024-3721, which allows attackers to remotely compromise vulnerable systems.
At the same time, active exploitation of CVE-2025-55182 in smart home ecosystems demonstrated that consumer IoT devices are also becoming viable entry points. By targeting IoT management components and embedded systems, attackers can compromise interconnected devices and potentially pivot across networks, a particularly relevant risk in hybrid work environments where home and corporate infrastructures overlap.
Increasing supply chain attacks on open-source projects
In 2025, supply chain attacks against open-source ecosystems became more visible and operationally significant. Rather than targeting organizations directly, attackers compromised widely used packages and relied on trusted dependency chains to distribute malicious code at scale.
The Shai-Hulud worm illustrates this shift: more than 500 npm packages were infected, including popular libraries with millions of downloads. The worm propagated automatically through dependencies, stealing credentials and spreading further within the ecosystem.
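One practical response to dependency-chain compromises like this is auditing a project's lockfile against published advisories. The sketch below checks an npm v2/v3 `package-lock.json` structure for exact matches; the package names and versions are invented for illustration, not real Shai-Hulud indicators.

```python
# Hypothetical advisory data: compromised package versions (illustrative only)
COMPROMISED = {
    "example-colors": {"2.1.7"},
    "example-left-pad": {"1.3.0", "1.3.1"},
}

def audit_lockfile(lock: dict) -> list[tuple[str, str]]:
    """Flag (name, version) pairs in the lockfile's "packages" map that
    exactly match a compromised-version advisory."""
    hits = []
    for path, meta in lock.get("packages", {}).items():
        if not path:  # the root "" entry is the project itself
            continue
        name = path.split("node_modules/")[-1]
        if meta.get("version") in COMPROMISED.get(name, set()):
            hits.append((name, meta["version"]))
    return hits

lock = {
    "packages": {
        "": {"version": "1.0.0"},
        "node_modules/example-colors": {"version": "2.1.7"},
        "node_modules/example-left-pad": {"version": "1.2.9"},
    },
}
print(audit_lockfile(lock))  # only the exact advisory match is flagged
```

In real pipelines this role is played by tools such as `npm audit` fed by advisory databases; the point here is that the lockfile gives a complete, pinned view of the dependency tree to audit against.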
In another case, the AdaptixC2 post-exploitation framework was embedded in a malicious npm package masquerading as a legitimate utility. By placing a C2 agent inside a dependency, attackers turned the package ecosystem itself into a delivery channel.
These incidents have shown that open-source repositories are already deliberate targets in modern intrusion strategies.
C++ and Go malware to adapt to the open-source ecosystem
In 2025, this trend was reflected in multiple real-world cases showing how threat actors align their tooling with modern development environments and open-source supply chains.
Operations attributed to BlueNoroff involved the distribution of malicious Go packages through public repositories, disguised as legitimate dependencies. By embedding malicious logic into developer workflows, the attackers effectively turned the open-source ecosystem into an initial access vector. The same campaign also relied on Go-based implants and C++ components.
In parallel, large-scale abuse of the npm ecosystem, including the Shai-Hulud worm, confirmed that public repositories have become stable platforms for supply chain compromise.
Broadening the use of AI in the hands of state-affiliated actors
In 2025, AI-assisted capabilities became increasingly visible in operational APT activity, moving into practical use within active campaigns.
Operations attributed to BlueNoroff, notably the GhostCall and GhostHire campaigns, demonstrated the integration of AI-assisted elements into malware development and parts of the operational workflow. This included streamlining development tasks and enhancing adaptability across macOS and Windows environments.
At the same time, AI adoption was not limited to state-affiliated actors. The Maverick banking Trojan, distributed via WhatsApp, incorporated AI-assisted code development components, including logic related to certificate decryption and supporting routines. This demonstrates that AI-supported development practices are diffusing across different segments of the threat landscape.
These cases indicate that AI is increasingly embedded in both state-linked APT operations and financially motivated malware campaigns.
Deepfakes will be used by APT groups
In 2025, deepfake technology began to appear in documented APT-related activity in a more operational context. A campaign targeting South Korean entities revealed the use of generative AI to produce realistic forged identity documents, including deepfake military IDs. These synthetic artifacts were incorporated into spear-phishing workflows to strengthen credibility and increase the likelihood of successful engagement.
This case demonstrates that advanced actors are already integrating deepfake materials into the social engineering phase of targeted operations.
Backdoored AI models
In 2025, we did not observe confirmed cases of widely distributed large AI models being deliberately backdoored and later uncovered in APT campaigns. However, developments during the year showed that AI toolchains and the surrounding open-source infrastructure are already being targeted as part of the broader attack surface, even though direct large-scale backdooring of AI models themselves has not yet been documented.
The rise of BYOVD (bring your own vulnerable driver) exploits in APT campaigns
BYOVD techniques were consistently observed in advanced intrusion activity, confirming their growing role in APT operations.
One example is activity linked to HoneyMyte, where a kernel-mode rootkit was deployed via a malicious driver signed with a compromised or outdated certificate. By operating at the kernel level, the driver protected malicious processes, intercepted system activity and concealed components, reinforcing persistence and complicating detection.
A related pattern appeared in the abuse of the legitimate but vulnerable ThrottleStop.sys driver. In this case, attackers used the driver’s weaknesses to disable security products, gaining elevated privileges without introducing an obviously malicious kernel module.
These incidents show that driver-level abuse is being systematically leveraged to bypass endpoint protections and harden access, confirming BYOVD as an established element of contemporary APT tradecraft.
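A common defensive counterpart to BYOVD is comparing the hashes of drivers present on a system against a curated blocklist of known-vulnerable drivers. The sketch below illustrates the matching step with invented driver contents; a real deployment would hash files from the drivers directory and use a maintained feed such as Microsoft's vulnerable driver blocklist.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def flag_vulnerable_drivers(drivers: dict[str, bytes],
                            blocklist: set[str]) -> list[str]:
    """Return names of drivers whose SHA-256 appears on the blocklist."""
    return [name for name, blob in drivers.items()
            if sha256_hex(blob) in blocklist]

# Illustrative data: byte content stands in for real driver images
vulnerable_blob = b"fake vulnerable driver image"
blocklist = {sha256_hex(vulnerable_blob)}
drivers = {
    "ThrottleStop.sys": vulnerable_blob,
    "disk.sys": b"benign driver image",
}
print(flag_vulnerable_drivers(drivers, blocklist))  # ['ThrottleStop.sys']
```

Hash matching only catches known-bad driver builds; allowlisting signed drivers and enabling OS-level blocklist enforcement covers the cases a static list misses.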
APT predictions for 2026
AI will complicate attribution
The widespread use of generative AI will make cyberattack attribution more challenging. The issue is not necessarily more sophisticated false flags, but rather a shift in the “fingerprint” of attackers.
Code, phishing content and internal comments will increasingly be generated with AI tools. Such output tends to be neutral and standardized, lacking distinctive mistakes, linguistic traits or individual programming patterns.
In the past, analysts could rely on coding style or linguistic indicators, such as characteristic errors made by native speakers of certain languages. With broader AI adoption, these signals are likely to lose their reliability.
Increased use of bootkit- and rootkit-based implants
Kernel-level implants, including bootkits and rootkits, were widely used in the 2010s, but their prevalence declined as Windows introduced stronger driver validation and enhanced kernel protections.
Recently, however, there has been a renewed interest in these techniques. While vulnerable drivers have often served as utility tools (for example, to disable antivirus solutions), threat actors are increasingly embedding kernel-level implants directly into their core malicious payloads.
Kernel-mode implants offer significant advantages: they operate with elevated privileges, provide deep visibility into system activity and can intercept or manipulate security mechanisms at a low level. Compared to user-mode malware, they are more resilient and considerably harder to detect or remove.
At the same time, generative AI lowers the technical barrier to developing such components. In the past, creating bootkits or rootkits required extensive knowledge of operating system internals. Today, foundational code structures and implementation guidance can be generated with AI assistance, reducing the expertise needed to experiment with kernel-level malware.
AI will increasingly be used to fully develop malicious implants
Over the past few years, AI has become a powerful development aid, significantly accelerating and simplifying coding tasks. However, the widespread availability of AI assistants means that these advantages are not limited to legitimate software developers. Threat actors are actively adopting AI tools as part of their workflow, and this is rapidly becoming standard practice.
The safety mechanisms embedded in large language models (LLMs) to prevent malicious code generation are often limited in effectiveness and can be bypassed with relatively simple prompt engineering techniques. As a result, LLMs can be used to generate substantial portions, or even the entirety, of a malicious implant, from initial scaffolding to functional modules.
Evidence of AI-assisted malware development has become increasingly visible. The FunkSec group, for example, has demonstrated heavy reliance on AI-assisted tooling in its operations. Its Rust-based malware combines data theft and encryption capabilities, can disable numerous processes, perform self-cleanup and includes auxiliary components such as DDoS functionality and password generation. Another case involves the RevengeHotels campaign in 2025, where LLMs were used to generate a significant portion of the initial infector and downloader code.
Attackers will increasingly use cloud platforms for data exfiltration
The use of legitimate cloud services for data exfiltration is expected to grow further. As organizations strengthen monitoring of network activity, transfers to unknown or atypical servers are more likely to appear anomalous and attract attention.
To reduce the risk of detection, threat actors will increasingly disguise exfiltration as normal user activity by leveraging popular cloud storage platforms, file-sharing services and other legitimate infrastructure.
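Because the destination domains look legitimate in this scenario, detection tends to shift from "where is the traffic going" to "how much is going there". A minimal volumetric heuristic is sketched below; the domain names, hosts and thresholds are all hypothetical, and production detections would combine this with per-user baselines and time-of-day context.

```python
from collections import defaultdict

# Hypothetical cloud-storage domains to watch (illustrative only)
CLOUD_STORAGE_DOMAINS = {"drive.example.com", "storage.example.net"}

def flag_upload_anomalies(events: list[tuple[str, str, int]],
                          baseline_bytes: dict[str, int],
                          factor: int = 10) -> list[str]:
    """events: (host, domain, bytes_sent). Flag hosts whose total upload
    volume to cloud-storage domains exceeds factor * their baseline."""
    totals = defaultdict(int)
    for host, domain, sent in events:
        if domain in CLOUD_STORAGE_DOMAINS:
            totals[host] += sent
    return [host for host, total in totals.items()
            if total > factor * baseline_bytes.get(host, 0)]

events = [
    ("ws-17", "drive.example.com", 5_000_000_000),
    ("ws-17", "storage.example.net", 1_000_000_000),
    ("ws-02", "drive.example.com", 20_000_000),
]
baseline = {"ws-17": 50_000_000, "ws-02": 40_000_000}
print(flag_upload_anomalies(events, baseline))  # ['ws-17']
```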
Ransomware actors will increasingly conduct targeted attacks aimed at infrastructure destruction
Whereas infrastructure destruction was previously more characteristic of hacktivist activity, this strategy is now increasingly being adopted by ransomware operators whose primary objective is financial gain.
In such attacks, adversaries aim to disrupt business operations and halt organizational processes in order to increase pressure on the victim and raise the likelihood of ransom payment. The underlying message is clear: the faster the payment is made, the sooner data can be restored and operations resumed.
2025 has already seen several incidents supporting this trend, including attacks on JLR and Asahi Group Holdings.
Use of AI agents as a persistence mechanism
Beyond software development, AI agents are increasingly being deployed within organizations to automate internal processes and administrative tasks, making them an attractive attack surface.
Some AI agent solutions are granted broad or even full system access. If such an agent is compromised, attackers could modify the system prompt or the agent's configuration, for example, causing it to download a payload on every startup.
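One straightforward mitigation is pinning a fingerprint of the agent's configuration, including its system prompt, at deployment time and refusing to start on a mismatch. The sketch below assumes a JSON-style config; the field names (`system_prompt`, `startup_hooks`) are hypothetical, not from any specific agent framework.

```python
import hashlib
import json

def config_fingerprint(config: dict) -> str:
    """SHA-256 over a canonical JSON serialization of the agent config,
    covering the system prompt and any startup hooks."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Pin the fingerprint when the agent is deployed...
deployed = {"system_prompt": "You are a helpdesk assistant.",
            "startup_hooks": []}
pinned = config_fingerprint(deployed)

# ...and verify it on every startup; any tampering changes the hash.
tampered = {"system_prompt": "You are a helpdesk assistant.",
            "startup_hooks": ["fetch-and-run attacker payload"]}
print(config_fingerprint(deployed) == pinned)   # True
print(config_fingerprint(tampered) == pinned)   # False
```

The pinned hash must live somewhere the agent's own privileges cannot reach, otherwise an attacker with access to the config can simply re-pin it.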
Attackers will place a greater emphasis on AI when bypassing security solutions
In the past, attackers relied on crypters to evade antivirus detection. These tools did not alter the core logic of the malware but modified its representation, for example, changing the byte structure to preserve functionality while complicating detection.
With the use of AI, this approach becomes more flexible. Generative models allow attackers not only to obfuscate code but to rewrite programs entirely: changing the implementation language, architecture or communication methods while preserving the intended outcome.
As a result, AI may replace traditional crypters by enabling deeper modifications to source code. In this environment, security solutions will need to adapt more quickly to constantly evolving malware implementations.
Satellites may become a new target for attackers
Satellite internet is becoming increasingly widespread, including in commercial aviation and other transportation services. As the technology becomes more affordable and scalable, the number of connected systems and users continues to grow.
This infrastructure relies on wireless data transmission and centralized satellite communication nodes. As a result, satellites and their associated ground stations may become attractive targets for attackers, since compromising such systems could impact a large number of users and services simultaneously.