AI Security Practice

Your AI is already
deployed. Is it secure?

Illumant helps CISOs and engineering leaders secure the AI systems their businesses now depend on — from shadow Copilot deployments to customer-facing chatbots to autonomous agents with production access.

01 · The problem

AI adoption has outpaced security readiness — and adversaries know it.

The data is stark. Most organizations have rolled out AI before establishing access controls, governance, or adversarial testing. Attackers are taking advantage.

97%
of AI-related breaches involved systems lacking proper access controls.
IBM, 2025
1 in 5
organizations have already suffered a breach tied to shadow AI usage.
IBM, 2025
$670K
higher breach cost when shadow AI is involved versus standard incidents.
IBM, 2025
86%
of organizations lack visibility into how data flows to and from AI tools.
Reco, 2025
02 · This is not theoretical

Real AI systems are being actively exploited in production.

Prompt injection, system prompt leakage, and agent hijacking stopped being hypothetical a long time ago. Here's a small sample from the past year.

CVE-2025-32711 · Microsoft 365 Copilot

EchoLeak: zero-click prompt injection

A crafted email triggered data exfiltration from Microsoft 365 Copilot without any user interaction — bypassing traditional defenses by leveraging content the AI automatically processed.

June 2025 · Financial services

$250,000 lost via AI banking assistant

Attackers manipulated an AI banking assistant into bypassing transaction verification. The assistant approved fraudulent transfers because it interpreted attacker-supplied instructions as legitimate rules.

2025 · Autonomous coding agent

$500 was all it took to compromise Devin AI

An independent researcher demonstrated that an autonomous coding agent could be manipulated through prompts to expose ports, leak access tokens, and install command-and-control malware.

Feb 2025 · Multi-agent research

Working AI worm demonstrated

Researchers built a proof-of-concept that propagates between autonomous agents through prompt injection embedded in normal messages — the first credible "AI worm" capable of self-spreading across systems.

03 · Our services

Five practices. Full coverage of the modern AI attack surface.

Whether you're assessing AI risk for the first time or red-teaming a production agent, we have the depth to help — and engage only where it creates value.

01
AI Governance
& Risk Advisory

Top-down strategy work: building the policies, inventories, and risk frameworks your board, auditors, and regulators expect. Foundation work that unblocks the rest of your AI program.

AI Risk Assessment Policy Development EU AI Act / NIST AI RMF Vendor AI Review AI Inventory
02
AI Implementation
Security Assessment

Evaluate how securely AI is deployed inside your environment. Copilot oversharing, over-permissioned agents, integration risks, and the shadow AI tools you didn't know were there.

Oversharing Assessment Access & Privilege Review Shadow AI Discovery Integration Security Data Pipeline Review
03
AI Product
Security Testing

Adversarial testing of AI-powered products — chatbots, assistants, and RAG pipelines. Structured around the OWASP LLM Top 10 (2025) so you get a defensible, auditor-ready deliverable.

OWASP LLM Top 10 Pentest Prompt Injection System Prompt Extraction RAG Pipeline Testing LLM API Security
04
Agentic AI
Security Testing

Purpose-built testing for AI that takes action — tool-using agents, autonomous systems, and MCP integrations. Built on the new OWASP Agentic Top 10 (2026). When your AI can spend money, write to production, and coordinate with other agents, retrofitted chatbot testing isn't enough.

OWASP ASI Top 10 Goal Hijack Testing Tool Misuse & Privilege MCP Supply Chain Memory Poisoning Inter-Agent Comms
05
AI Red Team

Full-scope adversary simulation targeting AI systems, pipelines, and the humans who trust them. AI-assisted social engineering, multi-step attack chains, and supply chain abuse — modeled after the threats you actually face.

Adversarial Attack Simulation AI-Assisted Social Engineering Multi-Step Attack Chains Supply Chain Scenarios Human-in-the-Loop Bypass
04 · Framework

Built on OWASP — LLM Top 10 and Agentic Top 10.

Every engagement is structured against an industry-standard OWASP framework. LLM testing follows the LLM Top 10 (2025). Agentic testing follows the newly-released Agentic Applications Top 10 (2026), purpose-built for autonomous AI. Your auditors, boards, and regulators get deliverables they recognize — and your engineering teams get findings they can act on.

LLM01:2025
Prompt Injection
Direct and indirect instruction override via inputs, retrieved content, or tool output.
LLM02:2025
Sensitive Information Disclosure
Model leaks PII, credentials, or proprietary data through responses.
LLM03:2025
Supply Chain Vulnerabilities
Compromised models, datasets, or third-party AI components.
LLM04:2025
Data and Model Poisoning
Malicious training or fine-tuning data manipulating model behavior.
LLM05:2025
Improper Output Handling
Unvalidated output executed downstream — XSS, SSRF, SQL injection, RCE.
LLM06:2025
Excessive Agency
AI agents with too much functionality, permission, or autonomy.
LLM07:2025New 2025
System Prompt Leakage
Exposure of confidential system prompts containing business logic or credentials.
LLM08:2025New 2025
Vector & Embedding Weaknesses
RAG and vector database attacks: poisoning, unauthorized retrieval, embedding inversion.
LLM09:2025
Misinformation
Hallucinations, biased outputs, and overreliance on unverified model output.
LLM10:2025New 2025
Unbounded Consumption
Resource exhaustion, wallet attacks, model theft via inference abuse.
For agentic systems

OWASP Agentic Top 10 — released Dec 2025.

For AI that takes action — tool-using agents, autonomous systems, multi-agent orchestration. The ASI prefix stands for Agentic Security Issue. Each item below is mapped to documented 2025 production incidents.

ASI01:2026
Agent Goal Hijack
Redirecting agent objectives via direct or indirect instruction injection. EchoLeak was an ASI01.
ASI02:2026
Tool Misuse & Exploitation
Agents using legitimate tools in unsafe ways. Amazon Q's destructive AWS compromise was an ASI02.
ASI03:2026
Identity & Privilege Abuse
Exploiting inherited credentials, cached tokens, and delegated permissions.
ASI04:2026
Agentic Supply Chain
Compromised MCP servers, plugins, and runtime components. First malicious MCP was found in Sep 2025.
ASI05:2026
Unexpected Code Execution
Agent-generated or agent-invoked code causing unintended execution or compromise.
ASI06:2026
Memory & Context Poisoning
Corrupting stored context — memory, embeddings, RAG — to bias future reasoning.
ASI07:2026
Insecure Inter-Agent Comms
Spoofing, intercepting, or manipulating agent-to-agent messages via A2A or similar protocols.
ASI08:2026
Cascading Agent Failures
One agent's failure propagating across an orchestration chain into systemic compromise.
ASI09:2026
Human-Agent Trust Exploitation
Misleading the humans reviewing or approving agent outputs — control bypasses in plain sight.
ASI10:2026
Rogue Agents
Compromised or misaligned agents diverging from intended behavior. The total-loss-of-control state.
05 · Why Illumant

Boutique depth. Offensive instinct. Full-service coverage.

01
Operators, not academics.
Our AI security work is led by practitioners with deep offensive backgrounds — the same people who break your network also break your AI.
02
Standards-mapped.
Every engagement produces a deliverable your auditors, board, and regulators will recognize — OWASP LLM Top 10, OWASP Agentic Top 10, NIST AI RMF, EU AI Act, ISO 42001.
03
One firm, full coverage.
From strategy to red team, you don't need separate vendors for governance, assessment, testing, and adversary simulation. Same team, one consistent voice.
Regulatory clock

The EU AI Act hits full enforcement in August 2026.

Fines reach €35M or 7% of global annual turnover — higher than GDPR. Over half of organizations still lack a basic inventory of their production AI systems. The gap between readiness and reality is widening.

Maximum fine
€35M
Or 7% of global annual turnover — whichever is higher. Per violation.

Secure your AI before someone else does it for you.

A 30-minute scoping call with one of our AI security leads. No commitment, no generic sales deck — we'll discuss your specific AI footprint and where the meaningful risks live.