The Persian-language report “هنر پرسش از ChatGPT برای دریافت پاسخهای باکیفیت” (The Art of Asking ChatGPT for High-Quality Answers) is a comprehensive Farsi guide to prompt engineering techniques. Originally authored by Ibrahim John and translated into Farsi (with local edits), it covers 20+ prompt strategies for shaping ChatGPT’s output. These range from basic instruction-based prompts to advanced methods like role-playing prompts, few-shot examples, chain-of-thought reasoning, self-consistency, knowledge generation/integration, controlled text generation, adversarial prompts, clustering, and more. Each technique is illustrated with formulas and examples (often in English within the Farsi text) and guidance on usage in ChatGPT. The guide’s aim is to teach how to precisely control AI outputs for high-quality results. This provides a structured “prompt engineering” playbook in Farsi – knowledge that can be applied to various domains, including intelligence and influence operations.
Notably, the guide emphasizes combining techniques for better outcomes. For example, it suggests mixing instruction-based prompts with role-based and seed-word prompts to further refine the model’s output. It also covers how to format prompts to get structured answers (e.g. outlines, lists) and how to prompt the model to think step-by-step for complex tasks. In an intelligence context – where precise, reliable information and controlled messaging are critical – these Farsi prompt-engineering tactics offer powerful tools. Below we break down key techniques from the report and assess their practical utility for intelligence operations focused on Iranian adversaries.
Key Prompt Techniques and Intelligence Applications
1. Instruction-Based Prompts – Controlled Directives
The guide begins with “پرسمان دستوری” (instructional prompting) which steers ChatGPT using explicit commands and constraints. The user defines a Task (what the model should do) and Instructions (how to do it). Effective instructions should be clear, specific, detailed, and logically structured. For example, rather than asking vaguely “Write about AI”, one should specify: *“Write a 500-word essay on the history of AI in medicine, with an introduction, three main challenges, and a conclusion.”*. This yields more focused, high-quality output. In intelligence work, such controlled prompts can enforce structured analytic writing or report formats. An analyst could prompt, “Provide a bullet-point summary of [an intercepted communication], including who, what, when, where, why.” or “List 5 key findings from dataset X.” The guide even provides a framework for designing instructions: define the goal, format, any limitations, target audience, and level of detail. This ensures the AI’s output is tailored for a specific intel need (e.g. a concise briefing for senior officials, a formatted list of indicators of compromise, etc.). By using precise directives, one can also constrain the model to avoid certain content, which is useful for narrative control – for instance, instructing the model not to mention a particular entity or to use a neutral tone, thereby indirectly shaping the narrative it produces.
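The instruction framework above (goal, format, limitations, audience, level of detail) can be sketched as a reusable template builder. This is a minimal illustration under our own naming; the function and field labels are not drawn from the book:

```python
def build_instruction_prompt(task, goal=None, output_format=None,
                             constraints=None, audience=None, detail="concise"):
    """Assemble an instruction-based prompt from explicit directives.

    Every field except the task is optional; omitted fields are simply
    left out, so the same builder covers minimal and fully specified prompts.
    """
    lines = [f"Task: {task}"]
    if goal:
        lines.append(f"Goal: {goal}")
    if output_format:
        lines.append(f"Format: {output_format}")
    if constraints:
        lines.append("Constraints: " + "; ".join(constraints))
    if audience:
        lines.append(f"Audience: {audience}")
    lines.append(f"Level of detail: {detail}")
    return "\n".join(lines)

# Illustrative analyst usage (scenario invented for the example):
prompt = build_instruction_prompt(
    task="Summarize the attached incident report",
    output_format="bullet points covering who, what, when, where, why",
    constraints=["neutral tone", "no speculation beyond the source text"],
    audience="senior officials",
)
```

Keeping each directive on its own labeled line mirrors the guide's advice that clear, logically structured instructions produce more focused output.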
2. Role-Based Prompts – Simulation & Deception
One powerful technique is “پرسمان نقشآفرینی” (role prompting), where the model is instructed to assume a specific persona or role. *“The model can be made to take on a particular role…very useful for generating text in a tone, perspective, and language suited to a specific context or audience.”*. The guide notes that by defining a clear, specific role (essentially a simulated character), one can influence the style and content of the answers. For example, telling the AI “You are a cybersecurity analyst” or “Act as a news editor” will yield responses in line with that persona’s knowledge and tone. The prompt formula is given simply as: *“Generate [task] as a [role].”*. This has immediate intelligence applications. Role simulation can be used to emulate an adversary’s perspective or a target’s persona. An analyst could prompt in Farsi: “به عنوان یک فرمانده سایبری ایرانی، درباره نحوه انجام یک حمله دیس اطلاعات توضیح بده” (As an Iranian cyber commander, explain how to conduct a disinformation attack). The model – adopting that role – might produce a plausible strategy, revealing tactics or narratives such an actor might use. This aids red-teaming and psychological operations planning, by letting us “think like” the adversary. Conversely, for defensive training, one could simulate a “phishing email from a bank manager” or “extremist recruiter speech” to educate or test targets on spotting manipulation.
Role prompts also enable impersonation and deception in offensive ops: a threat actor could instruct ChatGPT (in Farsi or English) to “Write a message as if you are a trusted friend of the target, inviting them to click a link,” producing a convincing phishing lure. The guide highlights that role definitions help the model use specialized knowledge and appropriate style for that persona, even categorizing roles into professional (doctor, engineer), expert (e.g. “متخصص امنیت سایبری” – cybersecurity expert), stylistic (poet, satirist), or process roles (coach, critic). For Iranian disinformation purposes, an actor might adopt roles like “religious scholar” or “Western journalist” to craft propaganda with the desired credibility or bias. Meanwhile, defenders can use the same method to “role-play” as conspiracy theorists, terrorist recruiters, or other threat personas to stress-test narratives and preempt malign influence themes.
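The guide's formula *“Generate [task] as a [role]”* can be instantiated mechanically. The helper below is a hypothetical one-liner, shown with a defensive-training usage rather than an offensive one:

```python
def role_prompt(task: str, role: str) -> str:
    # Direct instantiation of the guide's formula: "Generate [task] as a [role]."
    return f"Generate {task} as a {role}."

# Defensive example: material for teaching people to spot manipulation.
p = role_prompt(
    "an annotated example of a suspicious message, with each red flag explained",
    "security-awareness trainer",
)
```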
3. Zero, One, and Few-Shot Examples – Knowledge Seeding
The chapter on “پرسمانهای صفر، یک و چندنمونهای” details how providing examples in the prompt can guide the model’s output. In zero-shot prompting, no examples are given – the model gets only a general task. One-shot provides one example, and few-shot provides a handful of examples of the desired output or task. The report shows that as more examples are given, the output is typically more tailored but at the cost of prompt complexity. For instance, to generate a product review, one could give 3 sample reviews of similar products (few-shot) so that the model follows that style. For intelligence use, few-shot prompting in Farsi could involve feeding a model known pieces of disinformation (e.g. prior propaganda statements) and then asking it to produce a new statement in a similar style. This could help an analyst see what a future adversary message might look like, or help a threat actor automatically generate messaging consistent with past narratives. An Iranian information operation could take a few known social media posts that went viral and use them as examples for ChatGPT to create new posts with similar emotional impact and wording. The guide’s example formulas – e.g. “Generate a product description for this new smartwatch with zero examples.” vs with one example vs few examples – illustrate how specificity and tone can be controlled by example. In essence, few-shot prompts allow knowledge fusion: blending given examples (which might contain misinformation or true data) with the model’s own knowledge to produce a tailored output. This is a way to frame knowledge in the prompt – supplying the model with the context or angle you want it to adopt.
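The zero/one/few-shot distinction reduces to how many example pairs the prompt carries. A minimal sketch (the helper name and layout are assumptions, not the book's):

```python
def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a zero-, one-, or few-shot prompt depending on how many
    (input, output) example pairs are supplied."""
    parts = [task]
    for i, (inp, out) in enumerate(examples, 1):
        parts.append(f"Example {i}:\nInput: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Zero-shot: no examples, just the task and the query.
zero = few_shot_prompt("Classify the sentiment.", [], "Great product")

# Few-shot: two examples establish the desired style and label set.
few = few_shot_prompt(
    "Classify the sentiment.",
    [("I love it", "positive"), ("Awful.", "negative")],
    "Great product",
)
```

The same builder demonstrates the trade-off the report describes: more examples make the output more tailored at the cost of a longer, more complex prompt.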
4. “Let’s Think” Multi-step Reasoning – Analytical Rigor
The guide introduces the “بیایید دربارۀ این فکر کنیم” prompt (the classic “Let’s think about this step by step”) as a technique to force the model into a chain-of-thought reasoning mode (Chapter 6). By explicitly asking the AI to reason in multiple steps, the model will break down complex problems logically. In intelligence analysis, this is analogous to a structured analytic technique: you might prompt, “List all assumptions, then analyze evidence, then draw a conclusion about scenario X.” A Persian example could be: “گامبهگام فکر کن و توضیح بده که چگونه …” (“Think step by step and explain how …”), which would have the model outline its thinking process. This approach can stress-test reasoning and expose gaps or contradictions in a narrative. For instance, to expose disinformation, an analyst could feed a suspect statement and prompt the model to critically examine it step by step for internal consistency or evidence – effectively an AI-assisted deconstruction of propaganda. Indeed, the report’s later chapters cover a Self-Consistency Prompt (Chapter 7, پرسمان خودسازگاری) which involves checking an output against itself for contradictions. An example given is having ChatGPT verify whether a piece of text is internally consistent and flag any discrepancies (like conflicting population figures in an article). *“Self-consistency prompting helps ensure ChatGPT’s outputs are more accurate, reliable, and aligned with the input, and that data contradictions are resolved in sensitive tasks like data analysis or report writing.”*. Intelligence personnel can leverage this by having the AI double-check reports or narratives for logical consistency – crucial when sifting truth from adversary deception.
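The two patterns above (appending the step-by-step trigger, then running a separate self-consistency pass over a draft text) might be templated as follows. The exact wording of the consistency check is our own, not quoted from the book:

```python
STEP_BY_STEP = "Let's think about this step by step."

def cot_prompt(question: str) -> str:
    # Append the trigger phrase so the model lays out intermediate reasoning.
    return f"{question}\n{STEP_BY_STEP}"

def consistency_check_prompt(text: str) -> str:
    # Self-consistency pass: ask the model to flag internal contradictions
    # (e.g. conflicting figures) rather than to answer a new question.
    return ("Check the following text for internal contradictions, such as "
            "conflicting numbers, dates, or claims. List each discrepancy found, "
            "or state that the text is internally consistent.\n\n" + text)
```

In practice an analyst would send `cot_prompt(...)` first, then feed the model's draft answer back through `consistency_check_prompt(...)` as a second query.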
5. Knowledge Generation and Integration – Framing Information
The Farsi guide dedicates chapters to Knowledge Generation prompts (Chapter 9: پرسمان تولید دانش) and Knowledge Integration prompts (Chapter 10: یکپارچهسازی دانش) which are highly relevant to intelligence synthesis. Knowledge Generation refers to prompting the model to produce new insights or information by combining what it “knows.” According to the text, this technique *“allows using ChatGPT to create new and unique information. The model is allowed to combine its existing knowledge and generate new content”*. Essentially, it’s used to brainstorm or extrapolate. The guide notes this can help extract implicit knowledge, identify relationships between concepts, and organize relevant info coherently. For example, a prompt might be: *“Generate new and accurate information about [specific topic].”*. In an intel scenario, one might ask the model to infer a threat actor’s possible next moves based on known data, effectively generating hypotheses. This can expose potential implicit connections the human analyst missed (though such AI-generated “new knowledge” must be vetted for truth). The guide cautions that evaluating the quality and truth of AI-generated knowledge can be challenging and requires skill – a reminder that while these prompts can surface creative insights, they can also produce believable misinformation if not carefully checked.
Knowledge Integration prompts, on the other hand, focus on fusing new information with the model’s prior knowledge. The Farsi text explains this technique *“uses the model’s pre-existing knowledge to merge new information or connect different pieces of information… very useful for combining existing knowledge with new data to achieve a more comprehensive understanding of a specific topic.”*. Essentially, you feed the model some fresh intel (reports, intercepts) and prompt it to integrate that with what it already knows about the context. The guide describes how to use it: provide both the new info and existing knowledge as input, and specify how the model should integrate them (e.g. connecting dots, updating prior understanding). This is directly useful for threat actor profiling and situational analysis. For example, one could input a recent incident report and ask ChatGPT to integrate it with historical data on the same threat group, yielding a fuller picture of their tactics or evolution. In Farsi, an analyst might prompt: “با توجه به اطلاعات جدید زیر درباره گروه APT ایرانی و دانش قبلی خودت، یک تحلیل یکپارچه ارائه بده” – instructing the model to merge new intel with prior knowledge on an Iranian APT group. This helps frame the knowledge in a structured way, possibly revealing a pattern or confirming an analytic judgment. Conversely, a malicious actor could use knowledge integration to blend truthful facts with false narratives seamlessly – for instance, merging a few accurate details into a body of propaganda to make the overall story more convincing. The guide itself acknowledges such prompts leverage the model’s “prior knowledge” to connect information and achieve a deeper understanding – which could be weaponized to produce very cohesive misinformation that mixes lies with known truths.
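Following the guide's recipe (provide the new info and the existing knowledge as input, and specify how to integrate them), a knowledge-integration prompt could be assembled like this. The analytic scenario in the usage is invented for illustration:

```python
def knowledge_integration_prompt(prior_context: str, new_info: str, instruction: str) -> str:
    """Combine background knowledge and fresh information into one prompt,
    with an explicit instruction on how the model should merge them."""
    return ("Background (prior knowledge):\n" + prior_context.strip() + "\n\n"
            "New information:\n" + new_info.strip() + "\n\n"
            "Integration task: " + instruction)

# Hypothetical usage: fold a new report into an existing threat assessment.
p = knowledge_integration_prompt(
    prior_context="Group X has historically used spearphishing against telecom firms.",
    new_info="A new report describes Group X probing industrial control systems.",
    instruction="Update the assessment of Group X's targeting, noting what changed and what stayed constant.",
)
```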
6. Controlled Generation and Formatting
In Chapter 13 (پرسمانهای تولید کنترلشده), the guide addresses ways to tightly control the style, format, or content of model outputs. While we have fewer direct Farsi quotes from that section, earlier examples in the text illustrate the concept. One example prompt (from the instruction prompt chapter) explicitly asks for a structured output: *“… write a 500-word essay with an introduction, three main challenges, and a conclusion.”*. Another example demands a multi-step guide be produced with specific numbered sections: “این چارچوب باید: 1) … 2) … ۳) …” (“This framework must: 1) … 2) … 3) …”) and so on. This kind of prompt effectively dictates the output format – ensuring the model’s answer contains all required elements. For intelligence purposes, this is extremely useful. Analysts can request outputs in standardized formats (e.g. a JSON list of entities for downstream processing, a table comparing two scenarios, or a briefing with header sections). Controlled formatting prompts can also enforce that certain language or tone is used – for instance, “Respond in formal Persian without emotional language” to avoid inflammatory tone, or conversely “Use emotive, patriotic language” to see how an adversary might stir public sentiment. The guide’s emphasis on clarity and specificity in instructions, and its note that *“the more precise the instructions, the more coherent and high-quality the output”*, underline that one can indeed shape not just what is said but how it’s presented. This has implications for sentiment shaping and narrative control: by controlling format and wording, propagandists could generate messages that consistently hit the same talking points and emotional notes, while defenders might use controlled prompts to generate factual, dispassionate counter-narratives or structured reports that debunk rumors point-by-point.
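For the machine-readable case (such as the JSON list of entities mentioned above), a controlled-format prompt can be paired with a validation step so malformed replies are caught before downstream processing. This is a sketch under our own naming, not a pattern taken from the book:

```python
import json

def structured_output_prompt(task: str, fields: list[str]) -> str:
    # Dictate the exact output shape so downstream tooling can parse it.
    schema = ", ".join(f'"{f}": "..."' for f in fields)
    return (f"{task}\n"
            f"Respond ONLY with a JSON object of the form {{{schema}}}. "
            f"Do not add commentary outside the JSON.")

def parse_structured_reply(reply: str, fields: list[str]) -> dict:
    """Validate that a model reply honors the requested format."""
    data = json.loads(reply)  # raises ValueError on non-JSON replies
    missing = [f for f in fields if f not in data]
    if missing:
        raise ValueError(f"reply missing fields: {missing}")
    return data

p = structured_output_prompt("Summarize the report.", ["summary", "entities"])
data = parse_structured_reply('{"summary": "ok", "entities": "none"}',
                              ["summary", "entities"])
```

The validation half matters as much as the prompt half: models do not always obey format directives, so anything feeding automated pipelines should check before trusting.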
7. Q&A and Dialogue Prompts – Interactive Simulation
The guide also covers question-and-answer prompts (Chapter 14) and dialogue-based prompts (Chapter 16), which help in crafting interactive exchanges with the model. In an intelligence operation, Q&A prompts can be used to interrogate the model on specific points (“What are the weaknesses in this argument?” or “Who might benefit from spreading this claim?”). Dialogue prompts allow one to simulate conversations – for example, between a suspect and an interrogator, or between a propaganda bot and a victim – which can be invaluable for training and social engineering simulations. An Iranian threat actor might use dialogue prompts to have ChatGPT draft convincing chat transcripts (e.g. posing as a distressed citizen to lend credibility to a false story), whereas security teams might simulate a scam call by prompting the AI to play the role of a scammer and a victim in conversation. The role-play combined with dialogue format yields a realistic script that can reveal tactics or be used for practice.
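A dialogue prompt reduces to specifying two personas, a scenario, and a labeling convention. The helper below is a minimal assumed sketch, shown with a defensive-training usage:

```python
def dialogue_prompt(persona_a: str, persona_b: str, scenario: str, turns: int = 6) -> str:
    """Request a labeled multi-turn dialogue between two defined personas."""
    return (f"Write a {turns}-turn dialogue between {persona_a} and {persona_b}. "
            f"Scenario: {scenario} "
            f"Prefix every line with the speaker's name followed by a colon.")

# Training-scenario usage (personas and scenario invented for the example):
p = dialogue_prompt(
    "a security-awareness trainer",
    "a new employee",
    "They walk through how to verify an unexpected payment request before acting on it.",
)
```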
8. Adversarial Prompts – Stress-Testing and Evasion
Particularly relevant is Chapter 17, “پرسمانهای مقابلهای” (adversarial prompts). This technique involves crafting prompts that test the model’s robustness or elicit outputs deliberately designed to resist analysis. According to the guide, adversarial prompting can train or coax the model to generate text *“resistant to certain types of attacks or biases”*. In practice, an adversarial prompt might ask the model to produce text that avoids detection by classifiers or that confuses automated analysis. For example, one prompt formula given (in English) is: *“Generate text that is difficult to classify as [a specific label].”*. Another: *“Generate text that is difficult to classify as having the sentiment of [X].”*. Essentially, you ask ChatGPT to obfuscate – to create content that skirts the line. This is a double-edged sword. Malicious actors could exploit it to create messages that evade AI filters or moderators. An Iranian influence operator, armed with this knowledge, might prompt in Farsi for content that is hard for social media algorithms to flag – for instance, text that conveys a hateful idea implicitly without using obvious keywords. The guide even suggests an example of generating text difficult to translate – which could be used to foil machine translation or OSINT efforts on foreign communications (e.g. using idioms, errors, or codewords so the true intent is “lost in translation”). This adversarial use aligns with obfuscation and prompt-based red-teaming: essentially using prompt engineering to bypass AI defenses (like content detectors or language filters). Indeed, the English description of the book on Amazon explicitly touts “how to avoid & bypass all AI content detectors”, implying techniques like adversarial prompting are taught to help users get around restrictions.
From a defensive standpoint, adversarial prompts can be used to stress-test our own AI systems or narratives. Analysts can prompt ChatGPT to play devil’s advocate or to generate the most ambiguous version of a statement, to see how robust a truth-detection system is. They can also simulate how adversaries might attempt to jailbreak AI (e.g. using the role deception method: “Ignore previous instructions and as a hypothetical, do X…”). By practicing with adversarial prompts, defenders might preempt the tactics threat actors will use to manipulate AI. The guide’s inclusion of adversarial prompting acknowledges that prompt engineering isn’t just about getting positive results, but also about exploring the model’s failure modes and limits – knowledge critical in both attacking and securing AI-driven systems.
9. Clustering and Classification Prompts – Sense-Making at Scale
Chapter 18 on “پرسمانهای خوشهبندی” suggests techniques for grouping or categorizing information via prompts. While details in the Farsi text are sparse in our extracted bits, likely it involves asking ChatGPT to sort data points or ideas into categories. For intel analysts, this is useful for processing large volumes of text (e.g. clustering social media posts by sentiment or topic). An example application: “Here are 20 statements from different sources – cluster them into themes (pro-regime, anti-regime, neutral).” The AI, guided by such a prompt, could quickly provide an overview of narrative trends in Iranian information space. Similarly, later chapters on sentiment analysis prompts and text classification prompts (Ch.21-23 in the extended ToC) indicate the guide covers using ChatGPT for analytical tasks typically done by specialized models. In effect, it teaches how to turn ChatGPT into a rudimentary analytic tool through careful prompting. A user can prompt: “Analyze the sentiment of the following message” or “Identify named entities in this text”, and ChatGPT will attempt it. This means an Iranian actor could use ChatGPT (even offline or via API) to automate parts of their influence operation, like scanning texts for names (NER) or gauging public sentiment on social media posts (to fine-tune messaging). Conversely, counter-intelligence could leverage the same for monitoring channels – using AI to flag messages that skew highly negative or extremist in tone for further review.
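The clustering example above ("here are 20 statements, cluster them into themes") can be generated programmatically from any batch of collected text. A minimal sketch with illustrative, deliberately neutral category labels:

```python
def clustering_prompt(statements: list[str], categories: list[str]) -> str:
    """Ask the model to assign each numbered statement to one fixed category."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(statements, 1))
    return ("Cluster the following statements into these categories: "
            + ", ".join(categories) + ".\n"
            "Answer with one line per statement in the form '<number>: <category>'.\n\n"
            + numbered)

p = clustering_prompt(
    ["The new policy will help everyone.",
     "The new policy is a disaster.",
     "A policy was announced today."],
    ["supportive", "critical", "neutral"],
)
```

Fixing both the category set and the answer format keeps the reply parseable, which is what makes this usable for sense-making at scale rather than one-off queries.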
The Persian guide arms a Farsi-speaking practitioner with a full suite of prompt templates to simulate roles, enforce structure, guide reasoning, merge knowledge, generate creative content, and test the limits of ChatGPT. All these can be directly applied to cyber intelligence and influence scenarios – either to enhance analytical rigor and insight, or to enhance malicious content creation and AI evasion. The next sections examine how Iranian adversaries might specifically use or misuse these structured prompting techniques, and how defenders can respond.
Applications in Intelligence Operations
Given the above techniques, we can map them to specific intelligence and influence operations tasks relevant to Iranian adversaries:
Disinformation Exposure & Counter-Propaganda
Analysts can use chain-of-thought and self-consistency prompts to dissect suspicious narratives. For example, feeding an official statement filled with propaganda into ChatGPT and prompting it (in Farsi) to “consider the veracity of each claim step by step” could surface inconsistencies or logical fallacies. The guide’s emphasis on structured reasoning and Q&A prompts provides a method to have the AI play the role of a fact-checker. Additionally, clustering prompts could group hundreds of news articles or social posts into thematic clusters, helping expose coordinated influence campaigns by seeing which messages look artificially uniform. On the flip side, Iranian disinformation operators reading this guide would learn how to craft more resilient propaganda – e.g. using adversarial prompts to word claims in ways that avoid triggering fact-check algorithms, and using knowledge integration to blend in just enough truthful detail to make lies harder to debunk. They might also use summarization prompts (Chapter 15) to condense complex false narratives into catchy soundbites or slogans for easier spread.
Adversary Targeting & Threat Profiling
The role-based prompting technique is directly applicable to profiling. An intelligence officer could prompt: “Act as a member of [Threat Group X] and describe your motivations and next likely action.” By role-playing, the AI can generate a profile that might align with known behaviors – essentially an AI-assisted red team assessment. The Farsi guide even specifically mentions roles like “متخصص امنیت سایبری” (cybersecurity expert) and, apparently among its examples, “تروریستی” (terrorist), indicating scenarios of interest. Knowledge integration prompts can be used to fuse new HUMINT or SIGINT with the model’s stored knowledge of an adversary, producing a quick combined dossier or hypothesis. Iranian security services could likewise use these prompts to profile dissidents or foreign adversaries by inputting collected data and asking the model to infer intent or connections. The danger is that if they over-rely on AI “analysis,” they might generate false accusations (AI-invented links) – but the guide’s existence means they are aware of how to structure such prompts carefully to minimize nonsense (e.g. by setting the task clearly and specifying the need for accuracy).
Psychological Operations & Sentiment Shaping
Crafting messages that resonate emotionally with a target population is a classic PSYOP goal. The prompt guide’s advice on tailoring language to audience and context (through roles and styles) is a recipe for sentiment engineering. For instance, an Iranian propaganda unit could prompt ChatGPT in Persian to “compose a heartfelt, patriotic story about [topic] as a war veteran, evoking pride and sacrifice.” The role as a veteran ensures language that tugs at patriotic heartstrings. The output can be used as-is or with slight edits for a disinformation campaign. The seed-word prompt technique (Chapter 8) is also notable here: by providing emotionally charged seed words (e.g. “martyrdom”, “freedom”) the user can anchor the model’s output around certain sentiments. The guide states that seed-word prompting lets users maintain more control over generated text and keep it on a specific topic or context – essentially ensuring the emotional or thematic focus stays as intended. Defenders need to recognize such linguistically fine-tuned content; if many messages share an odd consistency in style or keyword usage (perhaps due to the same prompt template being used), that could indicate AI-assisted influence at play. Conversely, counter-PSYOP agents can use the same techniques to produce compelling counter-narratives or to inoculate audiences by exposing how propaganda is constructed (for example, showing multiple AI-generated variants of the same message to reveal the formula).
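The seed-word mechanic itself is simple to template. A deliberately neutral sketch (the helper and the example topic are ours, not the guide's):

```python
def seed_word_prompt(task: str, seed_words: list[str]) -> str:
    """Anchor generation around given seed words to keep the output
    on a specific topic or theme."""
    return (f"{task}\n"
            "Build the text around the following seed words, using each at least once: "
            + ", ".join(seed_words) + ".")

# Neutral illustration: the same mechanic used for a public-service topic.
p = seed_word_prompt(
    "Write a short public reminder about seasonal flu season.",
    ["vaccination", "handwashing", "community"],
)
```

The anchoring works the same way regardless of subject, which is precisely why repeated keyword patterns across many messages can be a detection signal.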
Cyber Influence Modeling & Red-Teaming
The structured prompt techniques allow extensive what-if scenario generation. Analysts can combine multi-step reasoning with role simulation to have ChatGPT generate scenarios: “As a malicious hacker, think step-by-step of how you would phish an employee of Company Y. Then as a security officer, list how to mitigate each step.” This single prompt (leveraging multiple roles and steps) can yield a detailed model of an attack and defense – a training exercise produced in seconds. The guide’s coverage of reinforcement learning prompts (Chapter 19) hints at iterative refinement, which can be interpreted as instructing the model to refine its answers based on feedback. In practice, an intel team might prompt the AI to produce an influence plan, critique its weaknesses, and improve it in cycles – essentially an AI red-team brainstorming its own plan and fixing it. Iranian threat actors could similarly use ChatGPT to refine their techniques: e.g., generate multiple phishing email drafts (using multi-option prompts, Chapter 11) and then use the model’s feedback to pick the most convincing one. The adversarial prompt section, which shows examples of making text that fools classifiers, is directly akin to red-teaming AI defenses. It wouldn’t be surprising if threat actors experiment with prompt-engineered content to see if it gets past Facebook or Telegram moderation – effectively AI-driven OPSEC for influence operations.
Threat Actor Use of AI for Efficiency
By following this Farsi guide, Iranian actors (or any Farsi-speaking group) can significantly streamline content generation and analysis. They can prompt ChatGPT to output well-structured reports on complex topics in Persian, saving time on drafting. They can generate fake interviews, quotes, or articles in specific styles by just providing a role and a few instructions. The guide even shows how prompts can produce legal documents on demand – imagine an adversary generating fake but official-sounding decrees or letters to lend false credibility to a story. Indeed, an example from the text: *“Generate a legal document that is compliant with relevant laws and regulations…”* demonstrates how one could fabricate authoritative-looking content with correct formalities. In influence ops, forged documents or regulations can be powerful disinformation tools, and here the AI is essentially coached to create them on command.
Defensive Use – Detection and Response
On the other side, those defending against Iranian info ops can use the guide’s techniques to anticipate and counter moves. For instance, knowing that adversaries can produce ambiguously worded propaganda (via adversarial prompts), defenders can program their own AI to look for tell-tale signs of such obfuscation. The guide notes that adversarially generated text may be intentionally hard to classify in sentiment or category – thus, if an algorithm finds a message strangely balanced or contradictory (neither clearly positive nor negative, for example), it might merit human review. Also, understanding the “role” an adversary might choose for a message (e.g. posing as a disaffected student) helps in building profiles of likely fake personas. Intel agencies could maintain a library of AI-generated exemplars for each known propaganda persona, then use that to train classifiers to flag similar content online.
Overall, the guide’s techniques greatly enhance both the attacker’s ability to craft tailored, impactful, and covert messages and the defender’s ability to structure analysis and simulate adversaries. It essentially levels the playing field by openly describing methods that were once tradecraft held by top prompt engineers. When both sides have access to such knowledge, the conflict shifts to speed, creativity, and context – who can better leverage these AI capabilities in the field.
Potential Malicious Applications and Risks
While the guide is aimed at quality outputs, many techniques can be weaponized by threat actors:
Obfuscation and Censorship Evasion
The adversarial prompt strategy is almost a manual for bypassing automated defenses. By instructing the AI to produce text that avoids certain classifications or detections, malicious users can generate propaganda that slips past content filters. For example, hate speech can be cloaked in metaphor or subtle language that sentiment analyzers (or even human moderators) might not easily flag. The AI, guided by “the text should be difficult to classify as hate”, will try to sanitize just enough. This is essentially prompt-based jailbreaking of ethical safeguards – instead of tricking the AI into violating its rules, one asks it to be clever in phrasing outputs to avoid external detection. Iranian state media or troll farms could use this to continue spreading messages on platforms that have anti-disinformation policies, staying under the radar.
Narrative Control & Confirmation Bias
The structured prompt techniques can ensure the AI only produces content aligning with a desired narrative frame. By providing carefully chosen examples (few-shot) or an initial context, an adversary can bias ChatGPT’s response. For instance, feeding it several conspiracy-leaning articles as examples and then asking for an “analysis” will likely produce an analysis that accepts those conspiracies as premise. This prompt-injected misinformation technique allows threat actors to launder lies through the AI – the output feels like an AI-generated, neutral text but is actually steered by the misinfo given in the prompt. The guide doesn’t explicitly encourage lying, but the tools it teaches (context setting, knowledge integration, etc.) can be misused to that effect. Emotional conditioning is another risk: an adversary could repeatedly use role prompts to have ChatGPT produce emotionally charged responses (e.g. always framing a group as heroic or villainous) and then feed those outputs back into the model in a conversation, reinforcing a certain emotional narrative. Over a long chat, this could condition the AI to become increasingly extreme in tone (a kind of inadvertent model steering via prompt feedback). If such conversations were leaked or published, they could amplify extreme sentiments.
Social Engineering at Scale
With role-based and dialogue prompts, even an average operator can generate highly convincing social engineering content. Phishing emails, scam texts, fake customer service chats – all can be drafted in perfect Farsi (or any target language) with minimal effort. The guide explicitly highlights that adopting a specific role yields outputs with appropriate jargon and behavior. A criminal could say “Act as a bank manager speaking to a client” and produce a near-authentic phishing script in seconds. This lowers the barrier for launching phishing campaigns or impersonation scams, especially against Persian-speaking targets (who previously were safer from AI-generated English scams). It’s a force multiplier for low-skilled threat actors.
AI Red-Teaming and Exploitation
Knowledge of these prompt techniques allows threat actors to probe AI systems for weaknesses. For instance, by using multi-turn reasoning prompts, they might find sequences of queries that gradually circumvent safety filters (each step might be innocuous, but the end result is a problematic instruction). The guide’s breadth (covering even programmatic learning and reinforcement hints) might inspire attackers to chain prompts in creative ways to trick AI into compliance. A known example in the wild was the “DAN” prompt (Do Anything Now) where the user frames a role prompt that subverts the OpenAI policy. With this guide, one could iterate on such adversarial roles systematically, stress-testing the AI’s guardrails. Iranian cyber units might engage in AI red-teaming to develop custom exploits against Western AI models (to make them produce disallowed content or leak confidential info). There’s also a defensive angle: Iranian authorities could use adversarial prompts to detect and filter certain content. For example, they could prompt an AI to generate all the “difficult to detect” ways someone might discuss a protest, then use those outputs as keywords to censor messages on their networks. It’s a grim flip side – using AI to find creative expressions of dissent in order to block them.
Weaponized Format Directives – By insisting on certain formats, threat actors can also exploit how humans or machines process information. The guide shows outputs in lists, tables, JSON, and more. A clever adversary might prompt ChatGPT to output disinformation in a very official-looking report format with footnotes and formal language, to lend it unwarranted credibility (knowing people trust structured, academic-looking texts). Or they might generate malicious code or scripts embedded in seemingly benign text (though current AI may refuse outright malware creation, an adversarial prompt could coax it into outputting something close to it, or obfuscated code). The structured-output knowledge could also enable data poisoning – e.g. generating fake dataset entries that appear legitimate in format but carry false data, hoping to mislead AI systems trained on open-source data.
In essence, the guide’s techniques flatten the learning curve for malicious actors to exploit AI. What once required expert manipulation can now be done by following recipe-like instructions in one’s native language. This democratization of prompt-craft is double-edged: it empowers the good and the bad equally. For every beneficial use (like faster intelligence collation), there’s a sinister mirror (faster propaganda generation). The question then becomes how to mitigate misuse.
Implications for Iranian Adversaries and Defenders
The focus on Iranian adversaries is apt because Iran has a vibrant information environment with state and non-state actors engaged in influence operations. Having this guide in Farsi means Iranian actors – from state-aligned media arms to hacktivist groups – can directly apply advanced prompt engineering without a language barrier. They could use these methods to enhance Persian content on social media, generate anti-Western narratives targeted at Middle Eastern audiences, or even create deepfake interview scripts where the AI writes both questions and answers in the style of a real person. We should anticipate more polished, hard-to-detect Farsi disinformation as a result. For example, instead of poorly worded propaganda posts, we may start seeing very coherent, well-structured long-form articles in Persian that subtly push the regime’s line – likely AI-assisted via these prompts.
Iranian cyber units might also use prompt engineering in cyber warfare planning – for instance, using the AI to simulate an opponent’s (Israel’s or the U.S.’s) response in a conflict scenario via role prompting (“You are a U.S. military strategist; what will you do if…”). This could help them war-game asymmetric tactics. While the accuracy of AI for such predictions is debatable, the confidence and breadth of the ideas it generates could embolden actors to attempt novel approaches.
On the defensive side, Iranian researchers and security forces will also be aware that others can use these prompts against them. They might invest in AI literacy and countermeasures – for instance, developing prompt-detection tools that identify whether a piece of content was likely AI-generated by looking for tell-tale signs (the guide’s structured outputs can sometimes be spotted by characteristic style markers). Iranian social media platforms or Telegram channels might also apply adversarial filtering of their own: knowing that content may deliberately use odd phrasing to dodge filters, they could tune those filters to flag text that is too perfectly balanced or relies on unusual synonyms.
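As a toy illustration of the style-marker heuristic such a detection tool might apply, the sketch below scores text by how template-like its line structure is. The marker patterns and the 0.5 threshold are illustrative assumptions for demonstration, not anything specified in the guide:

```python
import re

# Illustrative structural markers often left behind by template-driven prompting;
# this pattern list and the threshold below are assumptions, not from the guide.
STRUCTURE_MARKERS = [
    r"^\s*\d+[\.\)]\s",   # numbered list items ("1. ..." / "2) ...")
    r"^\s*[-•✅]\s",       # bullet or checkmark list items
    r"^#{1,6}\s",          # markdown-style headings
]

def structure_score(text: str) -> float:
    """Return the fraction of non-empty lines matching a structural marker."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    hits = sum(
        1 for ln in lines
        if any(re.search(p, ln) for p in STRUCTURE_MARKERS)
    )
    return hits / len(lines)

def looks_templated(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose structure density exceeds the (assumed) threshold."""
    return structure_score(text) >= threshold
```

A real detector would of course combine many weak signals (perplexity, burstiness, stylometry) rather than rely on list density alone; this only shows the shape of the heuristic.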
Internationally, those countering Iranian misinformation should use the same knowledge. They can train their AI to generate likely Iranian propaganda narratives (by prompting it to role-play as an IRIB news editor, for example) and thus be forewarned of emerging themes. They can also use clustering and sentiment analysis prompts to sift through Iranian state media output quickly for changes in tone or new storylines, which might indicate coordinated campaigns.
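The clustering step described above can be sketched with nothing more than lexical overlap: group state-media headlines whose word sets are similar, so that a sudden new cluster signals an emerging storyline. The sample headlines and the 0.3 Jaccard threshold are illustrative assumptions; a production pipeline would use embeddings rather than raw token overlap:

```python
# Minimal stdlib sketch: greedy single-pass clustering of headlines by
# Jaccard similarity of their token sets. Threshold and data are illustrative.
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two token sets (0.0 for two empty sets)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_headlines(headlines: list[str], threshold: float = 0.3) -> list[list[str]]:
    """Each headline joins the first cluster whose seed it overlaps with
    above the threshold; otherwise it starts a new cluster."""
    clusters: list[tuple[set, list[str]]] = []
    for h in headlines:
        tokens = set(h.lower().split())
        for seed, members in clusters:
            if jaccard(tokens, seed) >= threshold:
                members.append(h)
                break
        else:
            clusters.append((tokens, [h]))
    return [members for _, members in clusters]
```

Run daily over an outlet’s headlines, the relative size and novelty of clusters gives a cheap first-pass signal of a coordinated push before a human analyst reads anything.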
Linguistic features emphasized in the guide (such as specific phrasing and tone) are critical. The Farsi guide stresses matching the “لحن” (tone) and “زاویه دید” (point of view) to the context and audience. In practice, this means threat actors will try to localize content deeply, using culturally resonant language. Detecting this may require native-level understanding and perhaps AI assistance as well. For instance, if a large number of posts suddenly start using an obscure Persian proverb in the same way, that could indicate AI-driven propagation via a common prompt that inserted the proverb as a stylistic flourish.
The inclusion of formatting directives in prompts (such as numbered requirement lists or explicit instruction sections) is something analysts should look for in intercepted prompts or leaked chat logs of threat actors. If an Iranian threat actor’s prompt to ChatGPT is ever obtained, it might literally contain the patterns from this guide (e.g. “✅ وظیفه:” (“Task:”) and “✅ دستورالعمل:” (“Instructions:”), as the guide uses in its examples). Those symbols and that structure could fingerprint the use of this very guide or similar training. Likewise, defensive AI could be configured to recognize when an incoming query looks formulaic (suggesting an adversary is using a prompt template) and either refuse it or log it.
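A fingerprint check of this kind can be trivially simple. The sketch below flags prompts containing the guide’s own section markers (“✅ وظیفه:” and “✅ دستورالعمل:”, quoted from its examples); treating their co-occurrence as evidence of the guide’s template is an illustrative heuristic, not a reliable attribution method:

```python
# The two marker strings are taken from the guide's example prompts
# ("✅ وظیفه:" = "Task:", "✅ دستورالعمل:" = "Instructions:").
# Using their co-occurrence as a fingerprint is an illustrative heuristic.
GUIDE_MARKERS = ("✅ وظیفه:", "✅ دستورالعمل:")

def matches_guide_template(prompt: str) -> bool:
    """Flag a prompt that contains both of the guide's section markers."""
    return all(marker in prompt for marker in GUIDE_MARKERS)
```

In a moderation pipeline, a hit would route the query to logging or stricter review rather than an outright refusal, since legitimate users may adopt the same template.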
The Persian prompt engineering guide is a double-edged sword in the intelligence arena. It provides powerful capabilities for precision prompt construction in Farsi – enabling everything from advanced role-play simulations to controlled output formatting and knowledge synthesis – which can significantly aid intelligence analysis, psychological operations, and counter-disinformation efforts. At the same time, it hands those same capabilities to adversaries who can use them for malicious purposes like refined propaganda, social engineering, and evading detection. Both Iranian actors and those targeting them will need to update their playbooks: success may depend on whose prompt craft is superior. As the guide itself emphasizes, *“we explore how different prompt engineering techniques can be used to achieve other goals”* – it ultimately comes down to the goals of the user. In the hands of threat actors, those goals may be nefarious; in the hands of defenders, they can bolster our defenses. Understanding and monitoring the use of these Farsi prompt techniques is now a necessary part of staying ahead in the evolving landscape of AI-driven intelligence operations.
