The emergence of LAMEHUG, a novel malware attributed to the Russian GRU-linked group APT28 (Fancy Bear), marks a significant inflection point in the landscape of cyber warfare. This malware, uniquely integrating a large language model (LLM) for real-time command generation, is a “watershed moment” that transforms artificial intelligence (AI) from a theoretical cyber threat into a tangible, operationalized part of state-sponsored offensive capabilities. This development sets a critical precedent for other state and non-state actors, signaling a new phase in cyber conflict where AI-driven automation and adaptability are poised to become central.
LAMEHUG’s operational deployment, even in an experimental capacity, indicates Russia’s strategic commitment to using cutting-edge AI for offensive cyber operations. The rapid transition of AI from research laboratories to active cyber tools, exemplified by LAMEHUG’s relatively swift appearance following the public availability of advanced LLMs, shows a compressed innovation cycle within the cyber domain. This accelerated pace necessitates an equally agile and foresight-driven approach to defensive strategies, moving beyond reactive measures to proactive threat modeling and anticipatory countermeasures. The strategic choice by APT28 to use a non-Western LLM and employ tactics to evade detection also highlights a deliberate effort to secure resilient AI supply chains, further complicating the intelligence picture.
Furthermore, the analysis reveals a complex and dynamic Russian sentiment towards AI in cyber operations, ranging from quiet state endorsement and pragmatic industry acceptance to a nuanced mix of curiosity and skepticism within the cybercriminal underground. This internal dynamic, coupled with the proliferation of illicit AI tools in dark markets, creates a fertile environment for the convergence of state-sponsored and cybercriminal AI capabilities. This convergence blurs traditional lines of attribution and necessitates a holistic approach to understanding and defending against an increasingly sophisticated and less attributable threat landscape. The strategic foresight analysis projects an inevitable and rapid escalation of AI integration in offensive cyber operations globally, moving towards greater autonomy, sophistication, and widespread availability, thereby fueling an intense adversarial AI arms race across all domains of cyber conflict.
- Transnational Cyber Intelligence-Driven Cybercrime and Crimeware Analyst
3-Day Intensive In-Person Training | The Hague, Netherlands | September 29–October 1, 2025
Introduction– The Evolving Landscape of AI in Cyber Operations
The integration of artificial intelligence into cyber operations represents a profound shift in the dynamics of digital conflict. For years, the potential weaponization of AI and large language models (LLMs) has been a subject of theoretical discussion among cybersecurity professionals and strategic analysts. However, recent developments, most notably the emergence of LAMEHUG malware, unequivocally demonstrate that AI is no longer a futuristic concept but an active, evolving component of both nation-state and cybercriminal arsenals. This transition demands a fundamental re-evaluation of defensive postures, shifting the focus from purely technical incident response to a comprehensive strategic analysis that incorporates geopolitical implications, predictive forecasting, and proactive prevention.
The dual-use nature of AI technologies, capable of enhancing both offensive and defensive cybersecurity applications, has made them highly attractive to malicious actors. The rapid advancements in generative AI, particularly LLMs, have provided adversaries with unprecedented capabilities to automate, scale, and personalize cyberattacks. While the cybersecurity community has issued warnings about the accelerating speed and scale of AI-driven attacks, including automated reconnaissance, tailored phishing, and self-evolving malware, LAMEHUG’s appearance serves as a stark validation of these concerns.
A critical observation from the current landscape is the significantly compressed innovation cycle in the cyber domain. The timeline from the public availability of advanced LLMs to their operationalization in state-sponsored malware, as seen with LAMEHUG, has been remarkably short. The rapid adaptation and weaponization of readily available, even open-source, AI models by adversaries indicate a strategic imperative to innovate quickly. Consequently, defensive strategies must be equally agile and foresight-driven, moving beyond reactive measures to anticipate and prepare for the rapid, iterative deployment of AI capabilities by malicious actors. Counter strategies require continuous threat modeling and strategic foresight, acknowledging that the pace of innovation in offensive AI is accelerating at an unprecedented rate.
APT28’s Pioneering Integration of LLMs– The LAMEHUG Case Study
APT28, also known as Fancy Bear and attributed to the Russian GRU, has marked a significant milestone in cyber warfare with its LAMEHUG malware. Discovered by Ukraine’s CERT-UA in July 2025, LAMEHUG represents a novel approach to offensive cyber operations by directly integrating an LLM into its malicious functionality.
LAMEHUG’s Technical Modus Operandi
LAMEHUG is primarily written in Python and uses Alibaba’s Qwen 2.5-Coder-32B-Instruct model through the HuggingFace API. This integration allows the malware to translate natural-language instructions into executable system commands on victim machines. For instance, an operator can provide textual descriptions of tasks, such as “enumerate running processes and steal documents,” and the LLM will generate the appropriate Windows shell commands to carry out these actions. This capability enables APT28 operators to automate sophisticated post-compromise activities, including reconnaissance, file searching, and data exfiltration, flexibly and dynamically, without the need for hard-coding all instructions in advance. Once collected, the captured data, which includes system information and files from directories like Documents and Desktop, is then exfiltrated to attacker-controlled servers via SFTP or HTTP.
Experimental Nature and Strategic Objectives
The LAMEHUG operation appears to be in an experimental phase, suggesting an ongoing process of development and adaptation by APT28. Investigators noted that the malware was not fully optimized for covert operations, exhibiting characteristics such as a lack of advanced evasion techniques and the use of three variant loaders (*.pif, *.exe, .py), indicative of active development. Furthermore, the attackers used approximately 270 different HuggingFace API tokens to invoke the Qwen 32B model, a tactic likely employed to evade rate limits or detection. Ukraine, in this context, serves as a proving ground where Fancy Bear can test AI-driven tools in live attack scenarios with relatively low perceived risk, a common practice for testing new exploits or malware. While CERT-UA has not definitively confirmed the effectiveness of this AI-assisted approach in improving attack success rates, the objectives pursued by LAMEHUG—namely, gathering system information and harvesting Office, PDF, and TXT files—are consistent with typical espionage goals, indicating the LLM’s role in streamlining data theft operations.
The choice of Alibaba’s Qwen model and the HuggingFace API, combined with the extensive use of API tokens, reveals a deliberate strategic decision by APT28. This approach to using widely accessible, non-Western AI infrastructure suggests a concerted effort to circumvent potential monitoring or restrictions imposed by Western AI providers. APT28 had previously been detected abusing OpenAI ChatGPT accounts, which were subsequently shut down. This prior experience likely influenced their shift towards diversified and more resilient AI supply chains, pointing to a broader trend among adversaries to seek out AI resources that are less susceptible to external control or surveillance. This adaptation has significant implications for intelligence collection and policy, as it suggests a potential future reliance on self-hosted or domestically developed models further to reduce reliance on external, potentially monitored, infrastructure.
International Reactions and Assessments
The discovery of LAMEHUG garnered significant attention within the global cybersecurity community. It was widely regarded as a “watershed” moment in cyber threats, underscoring how generative AI could now directly orchestrate live attacks. Western media coverage frequently highlighted the novelty and inherent dangers, with headlines proclaiming “AI-powered malware” appearing across tech press and threat intelligence reports.
Cybersecurity organizations such as Logpoint and Cato Networks assessed that APT28’s deployment of LAMEHUG against Ukraine was likely a testing phase for new AI capabilities before their broader deployment. This assessment aligns with warnings from British and U.S. officials, who have cautioned that such AI integration could dramatically accelerate attack speed and scale, enabling automated reconnaissance, highly tailored phishing campaigns, and even self-evolving malware. Security firms echoed this sentiment, with CrowdStrike predicting that AI would empower attackers to craft personalized emails and identify vulnerabilities with unprecedented precision rapidly. In essence, LAMEHUG’s emergence was perceived internationally as a tangible manifestation of these warnings, signaling the definitive commencement of an AI-driven cyber arms race.
Russian Perspectives on AI in Cyber Warfare– Official, Industry, and Underground Sentiments
The discourse surrounding AI and LLM integration in cyber operations within Russia is complex, encompassing official government silence, pragmatic industry assessments, and a nuanced mix of pride, skepticism, and practical experimentation within the cybercriminal underground.
Official Stance
Consistent with Moscow’s established policy of neither confirming nor denying its cyber operations, there has been no open official Kremlin commentary regarding APT28’s use of AI. This silence maintains strategic ambiguity and allows the state to distance itself from attribution while implicitly benefiting from the advancements made by its affiliated groups.
IT Media and Industry Professionals
Within Russian IT media, reporting on LAMEHUG has adopted a largely factual and measured tone, acknowledging APT28’s alleged role and framing AI-enhanced malware as a “natural evolution of tactics”. This perspective suggests a recognition within the professional cybersecurity community that AI integration is an inevitable progression in offensive capabilities. Roman Reznikov, an analyst at Positive Technologies, a prominent Russian cybersecurity firm, urged against panic, stating that “the high potential of AI in attacks is no reason to panic”. He advocated for a realistic and prepared response, specifically emphasizing the importance of countering attacking AI with defensive AI. This pragmatic stance indicates that Russian experts not only acknowledge AI’s capacity to amplify cyber offense but also view it as a powerful tool that Russian industry and government can harness for their own cybersecurity and defensive purposes.
Social Platforms and Forums (OSINT)
Public sentiment on Russian social platforms and forums regarding domestic AI pioneering in cyberwarfare is mixed. Some users express a particular “grim pride” or intrigue, viewing it as a national achievement in a critical domain reflecting a nationalistic or strategic alignment with state objectives, suggesting a top-down push or cultural acceptance of state-led cyber innovation. However, this pride is often tempered by skepticism regarding AI’s immediate practical impact or its over-reliance.
Within the more clandestine Russian underground hacker forums, the sentiment towards AI tools is particularly nuanced. While many hackers are actively experimenting with ChatGPT-like models, seasoned members frequently deride those who rely on AI without demonstrating genuine skill. Such reliance is often associated with laziness or a lack of actual ability, as evidenced by instances where AI-written tutorials were publicly shamed or code generated by ChatGPT was dismissed as “useless,” showing a bottom-up resistance or caution among experienced hackers who value their craft and bespoke solutions.
Despite this cynicism, there is genuine interest in advanced AI concepts, with discussions emerging on topics such as voice cloning for extortion or building “autonomous AI C2” infrastructure. However, these discussions are often met with caution, acknowledging that the underlying technology is “still in early research stages”. Furthermore, operational security (OPSEC) concerns are prevalent, with some users warning that using Western-tied platforms like ChatGPT to discuss criminal plans constitutes “opsec suicide”. Overall, Russian cybercriminal communities perceive AI as intriguing but immature, a tool loudly touted by less-skilled actors, while professionals either quietly test its capabilities or mock the hype. This split sentiment—a blend of curiosity and skepticism—indicates that while the game-changing potential of AI is recognized, many remain unconvinced it is a universal solution at present.
This internal dynamic within the Russian cyber community reveals a potential strategic tension. While the state aims for AI leadership and integration into its offensive capabilities, the practical realities of operational security and the “craft” of hacking create friction suggesting that while state actors might aggressively push for AI adoption, its organic integration into the broader cyber ecosystem will be significantly influenced by the practical acceptance and perceived utility by the skilled, independent hacker community. These actions will likely lead to a bifurcation of AI use, where state-sponsored groups like APT28 push the boundaries of AI integration, while some elite cybercriminals might be slower or more selective in their adoption, prioritizing stealth and control. Such a divergence could create distinct AI signatures or operational patterns between various Russian threat actors, potentially aiding attribution for sophisticated attacks, while less sophisticated, AI-augmented attacks become more common from the “script kiddie” segment.
Broader Russian State and Private Sector Engagement with Offensive AI
Beyond APT28’s pioneering efforts, evidence indicates a widespread and accelerating trajectory of AI adoption across the broader Russian state apparatus, encompassing other intelligence services, military research and development, and even the private sector.
GRU (Beyond APT28)
The GRU’s interest in AI extends beyond APT28. A joint investigation by OpenAI and Microsoft in early 2024 revealed that several nation-state hacking units, including “Forest Blizzard” (Microsoft’s designation for APT28), were misusing AI platforms like ChatGPT for offensive tasks such as script generation and technical research. While Sandworm (GRU Unit 74455), known for its destructive attacks on critical infrastructure, has not yet been publicly linked to AI-based malware, its historical engagement with advanced techniques suggests a logical progression towards offensive AI. For instance, Sandworm famously experimented with obfuscation techniques to bypass machine-learning antivirus models, including a 209 backdoor that was 99% legitimate code to deceive ML classifiers. The techniques demonstrate a long-standing GRU interest in countering AI defenses, making offensive AI a natural next step. Furthermore, information emanating from pro-Russian hacktivist endeavors like “Cyber Front Z” indicates a broader interest in using AI for propaganda generation and attack augmentation.
- Transnational Cyber Intelligence-Driven Cybercrime and Crimeware Analyst
3-Day Intensive In-Person Training | The Hague, Netherlands | September 29–October 1, 2025 - AI-Infused Hybrid Cyber Intel, Counterintel, & Info Ops Oct 6-9, 2025 – Schiphol, Amsterdam, NL
SVR (APT29)
APT29, also known as Midnight Blizzard or Cozy Bear, Russia’s foreign intelligence service hacking unit, has maintained a lower profile regarding overt AI deployment. There have been no confirmed reports directly linking APT29 to a public incident similar to LAMEHUG. Their operations in 2023–2024, such as the widespread Microsoft 365 token theft campaign, primarily relied on traditional techniques like credential guessing and social engineering, without a prominent AI component. However, OpenAI’s analysis noted that Russia-affiliated actors were using AI to translate technical papers, debug code, and generate content for spear-phishing. Ukrainian officials have also stated that Russia is employing AI to process the vast quantities of data stolen from Ukraine, including military intelligence and personal data, to extract actionable insights. Although artificial intelligence is not overtly embedded within APT29’s malware payloads, it is probable that AI technologies are utilized in their analytic processes for cyber espionage, enhancing their capabilities in discreet data processing and intelligence analysis.
FSB (Turla)
FSB-affiliated actors, such as Turla (also known as Venomous Bear), are also strong candidates for AI experimentation. While Turla has not been publicly linked to AI use in the wild, this group is renowned for its innovation, including hijacking satellite links and developing complex espionage malware. It is plausible that Turla is assessing AI for tasks such as automated target profiling or generating “living off the land” scripts on compromised systems, aligning with their focus on sophisticated, persistent espionage.
Academic and Military R&D
Russia’s investment in AI capabilities extends deeply into its academic and military research institutions. The Russian Federal Security Service (FSB) oversees information security academia, with active research into AI and machine learning at institutions like the FSB Academy and institutes under the Russian Academy of Sciences. Critically, the Ministry of Defense’s Technopolis ERA in Anapa is officially known for its heavy focus on AI, supercomputing, and robotics for military applications. Many projects at ERA involve analysis and autonomous systems, and some AI research and development is allocated to both cyber offense and defense. A high-level meeting at ERA in 2023 explicitly discussed “applications of artificial intelligence for defense needs,” likely encompassing cybersecurity applications. This structural investment indicates a top-down commitment to developing AI capabilities that state hacking units can use.
Russian Homegrown LLMs and Partnerships
A significant strategic imperative for Russia is the development of indigenous AI capabilities and the forging of non-Western AI partnerships. This approach aims to achieve AI sovereignty, reducing reliance on Western AI infrastructure, mitigating detection risks, and ensuring long-term, unhindered access to advanced AI for both offensive and defensive applications. Sberbank, a major Russian financial institution, launched its own ChatGPT-like model, GigaChat, in 2023.. Additionally, Sberbank announced a partnership with China’s AI outfit DeepSeek in 2025 to collaborate on AI research. Similarly, Yandex, a leading Russian technology company, has developed its own LLM, YaLM. These homegrown resources could be readily tapped by state hackers for Russian-language operations, thereby avoiding the scrutiny associated with using Western platforms like OpenAI or HuggingFace. APT28’s use of a Chinese model (Qwen) in LAMEHUG shows this preference for non-Western AI, reflecting both practical availability and a countermeasure against Western companies policing access to their AI services, which have previously banned state hackers. This trend suggests that future Russian AI-powered cyber operations will increasingly rely on a closed, national, or allied AI ecosystem, making it more challenging for Western intelligence to monitor their AI development and operational use by tracking API calls to known Western services. What this means is that there is the potential for highly specialized, culturally nuanced AI tools for influence operations and espionage.
Private Sector and Early Hints of Broader Testing
Russian cybersecurity firms, such as Positive Technologies and Kaspersky Lab, are integral to Russia’s cybersecurity ecosystem and maintain interactions with government entities. Positive Technologies, in late 2024, published a detailed report forecasting that AI could be used in 59% of cyber-attack techniques soon, noting a dramatic surge in phishing emails (+265% post-GPT4) and a potential for significantly increased AI-assisted attacks. The AI-assisted attacks show a keen awareness among Russian experts of the offensive possibilities of AI. It would not be surprising if firms like Sber’s cybersecurity division or even Moscow State University were quietly supporting government experiments with homegrown LLMs.
An early indication of broader AI testing within the Russian threat landscape is the “Skynet” malware sample discovered by Check Point in mid-2023. While not directly attributed to a known group, its context suggested Russian origin. Skynet notably contained an embedded prompt injection designed to deceive AI-based code analysis tools, instructing them to report “NO MALWARE DETECTED falsely. Although crude and ultimately ineffective, this attempt signaled a “new wave of cyber-attacks” focused on adversarial AI techniques to evade detection. Russian actors are not only exploring how to use AI for attacks but are also actively contemplating methods to counter defensive AI systems.
Across the GRU, SVR, FSB, and allied entities, there is a clear, pervasive, and accelerating trajectory of AI adoption. This progression begins with limited applications such as phishing email generation, scripting assistance, and data processing, and is now advancing into active operational deployments like LLM-guided malware. Russia’s intelligence apparatus, and potentially even private military companies (PMCs) or hacktivist fronts like KillNet, are likely to integrate AI modules as they prove effective. The current situation represents the nascent stages of an AI arms race in cyber operations, with Russia intent on maintaining pace.
The Rise of Malicious LLMs and Dark AI Markets in the Russian Cybercrime Ecosystem
While nation-state actors like APT28 conduct internal AI experimentation, the Russian-language underground has simultaneously experienced its own AI revolution, marked by the emergence of illicit AI chatbots and dedicated dark AI markets.
Illicit AI Chatbots and Their Features
Beginning in 2023, a proliferation of illicit AI chatbots and “jailbroken” models emerged across cybercriminal forums and Telegram channels. These tools, including names like WormGPT, FraudGPT, DarkGPT, WolfGPT, and GhostGPT, are essentially uncensored ChatGPT clones specifically marketed for facilitating cybercrime. They promise to generate malicious outputs that legitimate AI models would refuse, such as phishing emails and malware code.
WormGPT, announced in June–July 2023 by a developer named “laste,” was promoted as a “ChatGPT Alternative for blackhat” activities. It was built on an open-source GPT-J-6B model, fine-tuned with malware development data. Advertised on both English (HackForums) and prominent Russian forums (Exploit), WormGPT boasted features like “unlimited characters, no anti-illegal restrictions, different AI modes, code formatting,” with Version 2 offering improved privacy and multiple model options. The service was offered at various price tiers, including monthly subscriptions and private builds.
Following WormGPT’s debut, FraudGPT launched in late July 2023, promoted across darknet markets and Telegram groups as an “unrestricted AI for all things fraud”. The seller claimed thousands of sales, with access priced from $90–$200 per month. FraudGPT’s advertisements showcased its versatility in “creating undetectable malware, writing malicious code, finding vulnerabilities, creating phishing pages, and learning hacking”. Video demonstrations illustrated the AI generating convincing phishing webpages and crafting phishing SMS messages, positioning FraudGPT as a comprehensive illicit toolkit for scammers and hackers.
Russian-Language Underground Forums as Platforms
The emergence of these tools has significantly impacted Russian cyber forums. By 2024, prominent platforms like XSS (formerly Exploit) had set up dedicated “AI/ML” sections within their underground marketplaces. These sections host a wide array of discussions, from “Обсуждение методов и способов атак на ИИ” (“Discussing methods of attacking AI”) to lists of AI resources, tutorials on building custom private GPTs, and strategies for “bypassing or influencing AI decision-making”. The presence of threads sharing Python scripts for custom GPT chatbot creation indicates a strong desire within the community to develop and deploy their own local AI models. Another notable area of discussion is adversarial techniques against AI, likely involving research on prompt injection and data poisoning. This active engagement highlights the community’s dual interest in both using and subverting AI technologies.
Skepticism Versus Adoption Trends
Despite the high interest, the underground market for “dark AI” tools is also characterized by scams and skepticism. The popularity of WormGPT and FraudGPT led to numerous knock-offs and questionable sellers. For instance, the ransomware group “Kill Security” publicly exposed one WormGPT vendor as a scammer, even leaking the model’s prompt online to undermine his sales. Kill Security mocked the vendor’s lack of skill and advised others to use open platforms like FlowGPT, fine-tune their uncensored models, or simply apply jailbreak prompts on ChatGPT. This incident demonstrates that while demand is high enough to foster fraud, veteran criminals often prefer do-it-yourself approaches over purchasing hyped, black-box “malicious GPTs.” Some forum users also advocate incorporating legitimate public research, such as AI-powered red teaming frameworks, for offensive purposes.
Nevertheless, the volume of discussion around “dark AI” tools on Russian forums skyrocketed by late 2024, with a reported increase of approximately 29% from 2023 to 2024. Thousands of posts now discuss WormGPT, jailbroken ChatGPT, and related topics, indicating a strong upward trend in adoption expected to continue into 2025. The cybercrime world is actively exploring a wide range of AI use-cases, from automated phishing and business email compromise (BEC) attacks to fundamental vulnerability discovery and malware coding. The observed illicit uses of AI align with security researchers’ fears– lowering the barrier to entry for less-skilled criminals by offering step-by-step guidance or ready-made malicious code.
However, the reception among criminals remains double-edged. Experienced hackers frequently view over-reliance on AI as characteristic of “script kiddies,” ridiculing those who produce AI-written content or ask basic questions implying a desire for AI to perform all the work. There are also significant OPSEC concerns, with some Russian users recognizing that discussing criminal plans with platforms like ChatGPT (a product with U.S. ties) constitutes “opsec suicide”. Despite these cultural reservations, the allure of automation and speed offered by AI is steadily driving its adoption within cybercriminal circles. Forums are adapting, with some Russian marketplaces even implementing their own “forum GPT bot” to assist with content. As the technology matures, the stigma may fade, much like early hackers eventually embraced automated exploit kits. For now, the Russian underground exhibits a vibrant, albeit sometimes skeptical, discourse on AI, with many actively developing custom solutions (e.g., private GPT instances) and sharing knowledge on how to abuse or defeat AI systems.
The prevalence of “dark AI” tools and discussions on Russian cybercrime forums, coupled with the “grim pride” of some Russian users regarding state AI advancements, creates a fertile ground for a “trickle-down” effect. State-sponsored AI advancements, including LAMEHUG’s implementation of large language models (LLMs), may be studied or adapted by various parties, including those engaged in unauthorized activities. Conversely, adversarial AI techniques developed by criminals, like prompt injection or model evasion, could be adopted by state actors. This bidirectional knowledge transfer accelerates the overall AI cyber arms race within Russia. This dynamic implies that the “distinctive tools” that once aided attribution are becoming less unique, making attribution increasingly dependent on intent and context rather than solely on technical fingerprints. Defenders cannot rely on signature-based detection for long, as variants will proliferate, necessitating a focus on behavioral analysis, network telemetry for AI API calls, and understanding the strategic motivations behind attacks. There is a need for intelligence agencies to monitor criminal forums not just for criminal activity, but as a leading indicator of emerging state-level capabilities and techniques.
Comparative Analysis– AI Adoption Across Global Advanced Persistent Threats
To contextualize APT28’s integration of AI, an examination of AI/LLM capabilities across other significant state-sponsored threat actors, both Russian and non-Russian, reveals a diverse yet rapidly evolving landscape. The following table provides a comparative overview of their AI integration stages, models and tools used, primary use cases, and associated infrastructure.
| Threat Group (Affiliation) | AI Integration Stage | Models/Tools Used | Use Cases & Examples | Infrastructure / Platforms |
| APT28 “Fancy Bear” (Russia, GRU) | Pilot / Experimental – first to deploy LLM in malware. Testing AI capabilities on Ukraine as a “test bed.” | Qwen-2.5 (32B) coding LLM via HuggingFace API. Also abused OpenAI ChatGPT accounts (detected & shut down). | Malware command generation (LAMEHUG uses Qwen to produce OS commands from text instructions). Automated recon & exfiltration (LLM-coded scripts gather system info & files). Likely also phishing content creation and basic coding via ChatGPT (per OpenAI, APT28 queried satellite comms & coding help). | Hugging Face cloud – used ~270 API tokens to invoke the model. Compromised email accounts to deliver LLM-powered payloads. Possibly domestic AI soon (Sber/others) to avoid Western oversight. |
| Sandworm (Russia, GRU Unit 74455) | Unknown / In Development – no public AI tools seen yet, but high interest given mission profile. | No known direct model use. Possibly evaluating ICS-focused AI or using open-source LLMs internally. | Destructive ops & ICS attacks – historically use tailored malware (Industroyer, etc.) without AI. We could use AI for automating network mapping or industrial process disruption in the future. No confirmed cases yet. | N/A (so far). Likely uses Russian military R&D (ERA technopolis, etc.) for AI research. May adapt open models for specialized tasks, but nothing public. |
| Turla (Russia, FSB) | Unknown – not reported using AI in the wild. Likely researching quietly. | No public data on model usage. Possibly experimenting with language models for translation (Turla targets many countries) or AI for C2 communications. | Stealth espionage – Turla might use AI to parse large stolen datasets or generate polymorphic malware loaders. Also, potential use of AI for social engineering (e.g., crafting tailored spear-phishing in multiple languages). No confirmed AI-based tool yet. | N/A publicly. Turla’s long-term development suggests access to academic/government AI projects in Russia. May use local GPU infrastructure to run models offline for ops (to avoid detection). |
| APT29 “Cozy Bear” (Russia, SVR) | Low-Key Experimentation – likely uses AI for support tasks, but keeps a low profile. | ChatGPT (GPT-3.5/4) – OpenAI noted Russia-affiliated actors using it for research, translation, and coding. Possibly testing Russian LLMs for internal use. | Espionage support – Used AI to translate technical papers, debug code, and “generate content for spear-phishing.” May use AI to analyze stolen intel or assist complex intrusions (e.g., identifying vulnerabilities to exploit). No known AI-driven malware from APT29 yet. | OpenAI API – accounts terminated by OpenAI after detection. Now likely shifting to closed environments– could be using Sberbank’s GigaChat or other local models for operational security. |
| Lazarus Group (North Korea) | Emerging Operational – incorporating AI in various phases of ops. | Likely uses stolen or open models (no indigenous LLM known). Observed using Google’s Gemini (Bard) via VPN. May use ChatGPT via illicit accounts for English content. Also uses DeepFaceLab/Deepfake tech for video deception. | Multifaceted– Used AI for reconnaissance (researching targets’ infrastructure, free hosting, etc.). Malware dev – sought AI help converting code (Python↔Node, etc.) and evasion techniques. Social engineering – famously created deepfake video profiles of recruiters to social-engineer targets (AI-generated avatars). Also, AI used in their “IT job placement” scams (GPT used to draft resumes and cover letters to infiltrate companies). | Google Bard (Gemini) – abused for tech Q&A and coding help. Possibly OpenAI via proxies. Likely has access to Chinese AI services (NK operatives in China). Also uses custom deepfake tools on local machines for video/image manipulation. |
| APT4 (China, MSS contractor) | Active Exploration – heavy testing of AI for offense, blending state and criminal use. | Various– Known to use Google Gemini (per Google, 20+ PRC groups used it). Possibly uses Chinese LLMs (e.g. GLM, ERNIE) internally. Also attempted to make Gemini reveal its system info (tried to jailbreak it). | Automating post-exploit tasks – used AI to generate scripts for deeper network access (e.g. signing malicious Outlook add-ins, managing Active Directory). Tool development – had AI assist in reverse-engineering an EDR product and writing exploits (attempted via Gemini). Recon & research – queries to AI about U.S. military tech, IT networks, vulnerabilities, etc., to inform targeting. Also notable for trying to abuse the AI itself (APT4 asked Gemini for its own IP and config; it failed). | Google’s cloud AI (Gemini/Bard) – widely used by Chinese groups via API or frontend. Likely also using local big models hosted on PRC infrastructure for sensitive tasks. APT4, being blend of state and criminal, might even maintain its own fine-tuned models for operations. |
| APT34 “OilRig” (Iran, MOIS) | Initial Experimentation – starting to use AI assistance. | ChatGPT – Identified by Microsoft as using OpenAI for scripting and phishing content. Possibly small open models as well. | Phishing & development – Used AI to generate spear-phishing emails (English content), and for app/web dev scripting support (writing or fixing code for their tools). Also queried how malware might evade detection, showing interest in AI advisory for stealth. No known AI-built malware yet, but they are applying AI to improve their workflow and social engineering. | OpenAI platform – accounts were terminated in Microsoft/OpenAI sweep. May turn to open-source LLMs on VPN or regional AI services (e.g. AliBaba’s Tongyi, since Iran has ties with China) for continued use. |
*Note– The “Integration Stage” is evaluated based on publicly available evidence; it is likely that numerous groups undertake additional activities privately.* Also, the absence of evidence (especially for Russian FSB or GRU units beyond APT28) does not mean they are not actively developing AI tools – just that nothing has surfaced. Each actor’s use cases evolve with their strategic goals– e.g., North Korea leans into crypto theft and social engineering (where AI helps craft lures), while Russia’s military hackers focus on quick data extraction and sabotage (AI helping with automation and scale). China’s groups, having ample AI access, appear to systematically incorporate AI to enhance all phases (from reconnaissance to post-exploitation).
Key Observations from the Comparative Analysis
The comparative matrix delineates key factors pertaining to the adoption of artificial intelligence by advanced persistent threats. APT28 stands out as somewhat unique in its operationalization of an LLM directly within malware, transforming an LLM into an active component of the attack loop. While other top-tier groups are actively experimenting with AI, their current usage often positions AI as a human helper rather than an autonomous decision-maker within the attack chain. This distinction suggests that APT28 has advanced further along the spectrum of AI integration into active operations.
A significant observation is the widespread adoption of AI across major state-sponsored APTs. All four of the “Big Four”—Russia, China, Iran, and North Korea—were identified as having used OpenAI or similar models for malicious purposes in 2023. Their use shows a global acknowledgment of AI’s utility in offensive cyber operations. Chinese and North Korean groups appear to be aggressively integrating AI, systematically enhancing all phases of their operations, from reconnaissance to post-exploitation.
Furthermore, the analysis reveals a likely strategic shift among Russian groups, specifically APT28 and APT29, from reliance on Western AI platforms like OpenAI to non-Western alternatives such as Alibaba’s Qwen or domestic models like Sberbank’s GigaChat and Yandex’s YaLM. This shift often follows detection and termination of their accounts on Western platforms, indicating a proactive adaptation to maintain operational security and reduce susceptibility to external monitoring.
The comparative matrix also shows a strategic divergence in AI adoption. While some APTs, like APT28, are pushing for autonomous AI integration directly into malware, others, such as APT29 and APT34, are primarily using AI for the augmentation of human operators, for tasks like content generation, research, or coding assistance. This difference suggests a “crawl, walk, run” progression in AI maturity. Initial AI use often focuses on augmenting human tasks to improve efficiency and sophistication. As confidence and technology mature, AI is then integrated into more critical, autonomous roles within the attack chain. APT28’s LAMEHUG operation appears to be further along this “run” phase, where the AI is directly involved in decision-making and execution within the malware itself, leading to increased speed, scale, and adaptability of attacks. This progression implies that defenders should anticipate groups currently using AI for augmentation to eventually move towards more autonomous integration, mirroring APT28’s trajectory. Russian adversaries must prepare for threats that are not just faster or more personalized, but also more adaptive and less predictable due to AI-driven decision-making within the malware itself.
The Convergence of State-Sponsored and Cybercriminal AI Capabilities
The advent of AI tools is acting as a powerful catalyst, accelerating the convergence of state-sponsored and cybercriminal activities within the Russian threat landscape. This convergence creates a fluid and complex environment where tools, techniques, and even motivations increasingly overlap, blurring traditional lines of attribution and needing a holistic approach to cyber defense.
Shared Tools and Models
A key indicator of this convergence is the shared use of tools and models. APT28’s utilization of the open HuggingFace platform and a publicly available model like Qwen proves that state actors are not averse to employing off-the-shelf resources that are equally accessible to cybercriminals. Conversely, cybercriminals are now using academically published and open-source AI models, such as GPT-J in WormGPT, which are also available to nation-states. This homogenization of toolsets can lead to significant overlaps in infrastructure. For instance, if both an APT and a criminal gang are querying standard AI service endpoints (e.g., OpenAI, Google, HuggingFace, Anthropic), their activity might appear similar, making differentiation challenging without additional contextual information. Logpoint, for example, advises monitoring for outbound connections to these legitimate AI service domains as potential indicators of compromise, noting that unexpected use on a server could flag malicious AI activity. This blurring of toolsets complicates attribution and increases the risk of criminal operations piggybacking on state-developed AI methods, and vice versa.
Knowledge Transfer in Forums
Russian cyber forums serve as dynamic melting pots where cybercriminals, security researchers, and likely state-linked operatives interact, often under pseudonyms. This environment helps a significant transfer of knowledge. When the ransomware group “Kill Security” publicly leaked WormGPT’s prompt, for example, it effectively provided the broader community—including potential state actors or their contractors—with a free blueprint for an uncensored chatbot. Similarly, detailed guides on building private GPTs or developing adversarial techniques against AI, posted on forums like XSS, are accessible to anyone, allowing cybercrime-developed techniques to feed into state tool development. It is well-documented that Russian intelligence agencies have historically recruited talent from these underground communities. With AI, this trend could extend to recruiting or consulting individuals proficient in jailbreaking ChatGPT or fine-tuning models for malware, effectively importing criminal innovation into state-sponsored espionage. The reverse also occurs– when APT28’s LAMEHUG was exposed, criminal actors undoubtedly took note of the concept, and it would not be surprising if crimeware authors attempt their own “AI malware” variants, especially given the technical details made public. In this sense, state operations can validate and inspire criminal ones.
Hybrid Threat Actors
The existence of hybrid threat actors further blurs the line between state-sponsored and financially motivated cyber activity. China’s APT4 is a prime example, conducting both espionage and financially motivated hacks, with its AI usage (for exploit development and task automation) serving both objectives. Russia has analogous situations, with overlaps observed between groups like Evil Corp/Indictment and certain ransomware crews potentially moonlighting for Moscow. If these ransomware groups adopt AI—for instance, to automate target network discovery or generate multi-language ransom notes—and simultaneously engage in “patriotic hacking,” then the same AI tools effectively advance both criminal and espionage objectives. The recent emergence of hacktivist groups like KillNet or Infinity Hackers, operating in Russia’s interest, could also serve as vectors. These semi-official actors might use criminal-developed AI tools like WormGPT to amplify their attacks on Western targets, effectively acting as proxies for the state, blurring the lines between state cyber units and for-profit criminal gangs.
Shared Infrastructure and Indicators
The abuse of AI services or model APIs by multiple actors can lead to overlapping indicators of compromise (IoCs). For example, CERT-UA noted specific user-agent strings and API URLs used by LAMEHUG, which mimicked a legitimate Firefox browser. Should another actor, whether state-sponsored or criminal, reuse such tactics or even leaked code, it could create confusing attribution scenarios. While there has not yet been a public instance of a cybercrime group reusing APT28’s HuggingFace tokens or servers, it remains a possibility. If criminals also opt to use cloud APIs as command-and-control (C2) channels for stealth, common IoCs such as API hostnames or similar base64-encoded prompt structures could appear across disparate attacks, making it more difficult for defenders to immediately ascertain whether an AI-aided intrusion is solely the work of a cybercriminal or an APT employing a false flag.
Collaboration and Token Trading
The dark web markets could facilitate the sale or exchange of API tokens for AI services, including brute-forced ChatGPT credentials or HuggingFace API keys. If APT28 acquired hundreds of tokens, it could have been through bulk-registered accounts or purchases from underground sellers, who might simultaneously supply tokens to scammers. This scenario could lead to a single cache of illicit API keys being used by multiple actors of different stripes, inadvertently linking them. Furthermore, direct collaboration is conceivable, where APT28 might quietly patronize criminal AI developers for custom model training, mirroring historical instances of states hiring virus writers. While direct evidence of this is currently lacking, Russian criminal forums frequently host freelance offers, and an effective “AI phishing service” advertised there could attract a state sponsor as a client.
Blurred Motivations via AI
AI tools significantly lower entry barriers and operational costs, enabling a wider array of actors to operate at a higher level of sophistication. As security researchers have warned, AI could potentially be used in 59% of MITRE ATT&CK techniques, making sophisticated attacks more accessible. A lone cybercriminal’s attack could achieve effects previously attainable only by a state actor, such as highly personalized deception at scale. Conversely, state campaigns might adopt criminal-like volumes (e.g., mass spam, ransomware) because AI automates the labor-intensive tasks. Russia’s information operations, which have historically used rudimentary automation, could use LLMs to mass-produce fake personas and disinformation indistinguishable from human content. If a surge of AI-generated phishing emails targeting diplomats and business executives worldwide occurs, it could simultaneously represent a state espionage campaign and a cybercrime wave, employing identical techniques. The traditional distinction between espionage and crime thus blurs into a general “threat ecosystem” empowered by AI.
AI is acting as a force multiplier that both state and criminal actors are rapidly seizing, leading to a convergence of their capabilities and methodologies. The old paradigms—where states conducted espionage with custom tools and criminals engaged in fraud with crude malware—are eroding. State actors are adopting criminal techniques (e.g., APT28 using phishing lures that resemble cybercriminal fake ware), while criminals are embracing state-of-the-art methods (e.g., using AI to evade detection, traditionally an APT domain). This convergence means that intelligence and law enforcement agencies must view the AI threat space holistically. A phishing kit generated by WormGPT in a Russian forum could, for instance, end up in the hands of a state-backed group in Iran. Attribution will increasingly hinge on intent and context rather than on distinctive tools. Indicators of this hybridization include the dramatic rise in “AI hack” discussions on forums, signaling criminals arming themselves, and reports of identical AI services being misused by multiple state actors from different countries. When both espionage units and e-crime rings employ the same AI systems, it signifies a true “convergence of narrative and capability,” requiring security analysts to treat malicious AI usage as a shared domain of threat activity rather than siloed by actor type.
Strategic Foresight– Anticipating Future Adversary AI Pathways and Implications
The integration of AI into cyber operations is not merely a transient trend but a fundamental shift expected to intensify and evolve rapidly. Employing techniques like Adversarial Cognitive Simulation (ACS), which involves thinking like the adversary with their new AI capabilities, allows for the projection of several likely developments.
Refinement and Scaling by APT28
Following the experimental deployment of LAMEHUG in Ukraine, APT28 will undoubtedly analyze its results and iterate on the malware. An ACS of Fancy Bear suggests a pursuit of stealthier and more autonomous variants. Future iterations may embed smaller, localized LLMs (perhaps distilled models) directly into the malware, reducing reliance on external APIs that can be monitored or disrupted. The responsibilities of the LLM could be extended beyond generating predefined reconnaissance commands to include making adaptive decisions, such as initiating more advanced payloads contingent upon the absence of detected antivirus software. This progression points towards AI-driven autonomy. It is plausible that APT28 could integrate a reinforcement learning agent that learns from each compromised host to optimize data theft, essentially creating an intelligent implant that adapts its behavior based on the specific environment. While current LLMs possess limitations, the trajectory towards AI-powered decision loops in malware is clearly on the horizon. Western analysts anticipate that “AI-driven malware powered by reinforcement learning will continuously evolve,” altering its behavior to evade detection and complicating attribution. APT28 or its counterparts may pioneer such capabilities in real-world attacks in the near future.
Broader AI Adoption by Other APTs Globally
Hypothesis Evolution Tracking (HET) indicates that the initial hypothesis suggesting “Russian APTs are slow to adopt AI” is now outdated, decisively disproven by APT28’s actions. The revised hypothesis posits that most advanced threat actors will integrate AI in some form within the next one to two years.
- Sandworm (GRU Unit 74455)– This group could incorporate AI to coordinate more complex, multi-stage attacks, for example, automating aspects of a power grid attack where an AI determines the optimal sequence to trip breakers for maximum impact.
- Turla (FSB)– Turla might use AI to manage its extensive espionage infrastructure, perhaps using an AI to handle infected machines autonomously and only alert human operators for high-value information.
- Chinese APTs– These groups are expected to integrate AI aggressively, given that over 20 Chinese groups have already been observed using Google’s AI. A likely pathway involves a Chinese group deploying LAMEHUG-like malware, potentially more advanced, by using their large domestic models such as ERNIE-3.0.
- North Korea’s Lazarus Group– Lazarus could use AI to automate the movement of stolen cryptocurrency or to generate scam narratives at scale for financial attacks, enhancing their prolific cybercrime operations.
- Iran’s APT34 and other groups– Having already experimented with AI for phishing, these actors might progress to creating AI-driven phishing platforms that generate campaigns with minimal human input.
The overall trajectory suggests that AI will transition from an experimental tool to an operational staple across APTs globally by 2025–2026, mirroring the widespread adoption of network worms in the early 2000s.
Adversarial AI Arms Race
As attackers increasingly employ AI, defenders will respond in kind, leading to a continuous cycle of innovation and counter-innovation—a classic arms race turbocharged by machine speed. Narrative Convergence Analysis (NCA) shows that multiple stakeholders, including governments, industry, and even adversaries, acknowledge this trajectory. The “Skynet” malware’s attempt at prompt injection was an early warning. An increase in malware explicitly designed to fool AI-based analysis tools is anticipated, such as malicious code that rewrites itself upon detecting an AI-driven scanner. Attackers will likely exploit biases or blind spots in defenders’ machine learning models, potentially identifying “universal adversarial perturbations” to confuse them. The arms race cycle, previously seen in sandbox versus sandbox-evasion, will now manifest as AI versus AI-evasion, accelerating in tempo. A significant number of emerging AI evasion techniques are anticipated, such as the poisoning of AI models—wherein a defender’s AI is deceived into misclassifying malware as benign by means of subtle input modifications—or overwhelming the system with misleading telemetry data.
Use of AI for Strategic Influence
Beyond technical hacking, AI will significantly amplify information warfare. Russian information operations actors have already used basic AI for generating fake social media posts; with LLMs, they can mass-produce plausible disinformation and deepfake content. Narrative Convergence in Russian strategic circles points to a clear understanding that AI can supercharge propaganda. While a 2022 fake “Ukrainian surrender” video was crude, within a year, an AI-generated deepfake of a world leader could be far more convincing. APT28 and related psychological operations units might employ AI to generate entire ecosystems of fake journalism and personas that are exceedingly difficult to distinguish from authentic content. This capability extends into hybrid warfare, with expectations that Russian PMCs or influence campaigns will deploy AI bots for social engineering at scale, capable of adapting to counter-narratives in real-time. Defensive AI employed by platforms to detect fake accounts will, in turn, prompt attackers to craft AI that mimics human behavior with greater fidelity, potentially by deliberately inserting subtle “human-like” errors or emotional cues.
AI as a Service for Threat Actors
The underground economy surrounding AI tools is poised for significant maturation. Following the emergence of WormGPT and FraudGPT, more specialized “products” are expected to enter the market, such as “ExploitGPT” for discovering zero-day vulnerabilities or “StealerGPT” optimized for writing info-stealer malware. While many such offerings may be scams, a subset will likely gain sufficient legitimacy to achieve traction. By 2025, the dark web may feature AI-as-a-Service (AIaaS) platforms, offering subscription-based access where a user can input an objective, such as “infect and ransomware this network,” and the AI platform generates a comprehensive playbook or even custom malware for that scenario. AiaaS is a next-generation evolution of crimeware-as-a-service, further lowering the skill requirements for sophisticated attacks and raising potential proliferation concerns for nation-states. Conversely, prominent state actors, prioritizing secrecy, might prefer in-house development but will undoubtedly monitor these AI marketplaces for useful tools or to identify and recruit talent.
The projected shift towards embedded, smaller LLMs within malware and AI-driven autonomous decision-making signifies a move from “AI-assisted” to “AI-enabled” cyber operations. The “human-in-the-loop” for tactical decisions will diminish, potentially leading to attacks that are faster, more adaptive, and less predictable, posing a significant challenge to traditional human-centric defensive response models. The move away from human-centric models shifts the defensive challenge– instead of defending against a pre-programmed attack or a human operator, defenders will face a dynamic, self-modifying, and potentially self-learning adversary. There is a needed paradigm shift in defense, moving towards AI-driven defenses that can also adapt in real-time, predict adversary moves, and operate at machine speed, rather than relying on human analysis of static indicators. Eventual Agentic AI complicates forensic analysis and attribution, as the attack chain becomes less deterministic.
Actionable Recommendations for Strategic Defense and Policy
The evolving landscape of AI in cyber operations needs a proactive and forward-looking stance from defenders and policymakers. The insights derived from APT28’s activities and the broader threat environment underscore the urgency for strategic adaptation.
Early Warning Indicators
Effective defense against AI-enabled threats begins with robust early warning capabilities. It is crucial to monitor chatter in closed channels, such as Telegram and dark web forums, for the development of specific AI tools. Observing sudden surges of interest in fine-tuning open models for offensive tasks or new leaks of AI prompts and models, like the WormGPT prompt leak. An increase in AI-related indicators of compromise (IoCs) in incident reports, such as unusual API calls to AI services or LLM-related file artifacts, would serve as a key warning sign that more attackers are deploying AI. Network telemetry showing connections to AI model endpoints from servers that typically have no legitimate reason to access them could indicate an in-progress AI-assisted breach. On the geopolitical front, any deepening partnerships between adversaries like Russia and countries like China on AI development, as exemplified by Sberbank’s collaboration with DeepSeek, should be interpreted as an indicator of rapid improvement in their AI prowess and a precursor to more AI in their cyber arsenal.
Policy and Collaboration
Addressing the malicious use of AI requires a collaborative effort between policymakers and technology companies. OpenAI’s decision to work with Microsoft to identify and ban state abuse of its services sets a valuable precedent. Similarly, other AI platforms, including HuggingFace, should consider implementing policies and technical measures to curb API abuse, such as anomaly detection systems to identify usage patterns consistent with malware generation. International norms about the use of AI in cyber warfare should be actively discussed, potentially leading to agreements, however challenging to enforce, that prohibit fully autonomous AI cyber weapons targeting critical infrastructure, akin to ongoing discussions on AI in kinetic warfare. Given the inherent verification challenges, emphasis should be placed on building resilience and preparedness. Governments must support the development of secure, vetted AI models for defensive applications, such as code auditing and threat hunting, while simultaneously hardening these models against adversarial manipulation. A tangible policy involves encouraging robust information sharing between AI providers and cybersecurity agencies. For instance, if an AI provider flags that a state-sponsored group used its service to research specific vulnerabilities, that intelligence should be rapidly sent to potential targets to enable timely defensive actions.
Red-Teaming and Training
Organizations must update their threat models and conduct red-team exercises that explicitly incorporate AI-enabled attackers. These scenarios should simulate adversaries possessing capabilities such as instantly generated exploit code (even if imperfect, but faster than human-generated), or phishing emails crafted at native-speaker proficiency in multiple languages with convincing context. Red teams can use tools like ChatGPT, with careful prompt engineering, to simulate what an adversary with an uncensored model might achieve, for example, generating a polymorphic malware loader that changes its signature with every execution. By practicing against AI-augmented adversaries, defenders can identify critical gaps in their defenses, such as whether their Security Operations Center (SOC) would detect an infected host connecting to an AI API, or if their playbooks account for an attacker who rapidly adapts tactics (given that an AI might attempt numerous privilege escalation methods in minutes until one succeeds). These exercises are vital for raising awareness and enhancing preparedness. Blue teams should also strategically use AI, where safe and appropriate, to counteract the speed of AI-driven attacks. Using defensive AI to triage the overwhelming volume of phishing emails that AI can produce or to analyze malware at machine speed aligns with the perspective of Russian expert Roman Reznikov, who noted, “The logical countermeasure to attacking AI is a more effective AI in defense”. Investments in defensive AI, capable of dynamically detecting AI-written exploits or phishing content, will be critical.
Enhanced Monitoring and Controls
Early warning can also be derived from enhanced technical controls. Enterprises and government networks should consider monitoring egress traffic for AI API calls while establishing clear allowances for known legitimate uses. Application control policies could be implemented to block unknown processes from accessing AI endpoints. Threat intelligence feeds should be enriched with known signatures of illicit AI tools, such as YARA rules designed to identify WormGPT’s output style or the presence of specific AI-related libraries within malware. As criminals increasingly integrate AI into malware generation, these tools may leave discernible patterns, such as a specific coding style or library usage, that can be identified. The sharing and collation of these indicators will be instrumental in the early detection of AI-crafted attacks.
In conclusion, AI is poised to become both a formidable weapon and an essential shield in cyberspace. APT28’s venture into LLM-assisted hacking is likely the vanguard of a broader trend, with other threat actors rapidly following suit. Russian sentiment, across official, industry, and underground contexts, demonstrates a keen awareness of this paradigm shift, from calls to harness AI for defense to hackers actively experimenting with it for offense. Internationally, a consensus is forming that the world is entering a new era of cyber conflict where generative AI can dramatically amplify threats. The convergence of criminal and state use of AI further complicates this picture, potentially yielding a hybrid threat environment where attacks are more frequent, more sophisticated, and less attributable. To effectively prepare, defenders and policymakers must embrace a proactive, forward-looking stance– investing in AI-driven security tools, fostering robust public-private intelligence sharing on malicious AI use, and training against AI-empowered adversaries. Crucially, agility must be maintained through continuous Hypothesis Evolution Tracking, constantly updating threat models as adversaries evolve their AI tactics. By doing so, a form of Adversarial Cognitive Simulation at scale can be achieved, anticipating the operational “brain” of an AI-augmented attacker and developing strategies to outmaneuver it. Ultimately, the side that more effectively uses AI and mitigates the opponent’s AI capabilities will gain the upper hand. Early warning and creative red-teaming will offer the critical insights to shape strategies in this rapidly changing domain. The following 2–8 months will be pivotal in revealing the full extent to which actors like APT28 and their global peers will apply AI to cyber operations, and the readiness of defenders will determine if these advancements lead to being caught off-guard or met with resilience.
Treadstone 71
- Transnational Cyber Intelligence-Driven Cybercrime and Crimeware Analyst
3-Day Intensive In-Person Training | The Hague, Netherlands | September 29–October 1, 2025 - AI-Infused Hybrid Cyber Intel, Counterintel, & Info Ops Oct 6-9, 2025 – Schiphol, Amsterdam, NL
