LLM Failure Mode Audit Pack — XELANTA Edition
Incident-ready decision framework (study once, use when it matters).
This pack is designed for incident-time use only. It is not intended for training or post-mortems.
When to use this
You have an active incident.
Multiple people have conflicting hypotheses about what's wrong. There's pressure to "just try something" — change the prompt, adjust the temperature, swap the model.
This pack forces you to classify the failure before making changes.
You can't "fix" what you haven't diagnosed.
What this pack does
Structures classification before changes
Before you change anything, you must identify whether this is a model problem, a context problem, or a system problem.
Flags unproductive actions before they happen
If you can't reproduce the failure or if the boundary is unclear, the pack stops you from proceeding.
Identifies system vs. model boundaries
Separates what the LLM can control from what the system must handle.
Requires an explicit final decision
Accept the failure, revert changes, escalate to architecture, or stop all changes. No more endless iteration.
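The flow above (classify first, gate on reproduction, then force one of four closure decisions) can be sketched in code. This is a minimal, illustrative encoding only: the `triage` function, the enum names, and the conservative defaults are assumptions for the sketch, not part of the pack's artifacts.

```python
from enum import Enum
from typing import Optional

class FailureClass(Enum):
    MODEL = "model"      # the LLM's own behavior
    CONTEXT = "context"  # prompt, retrieval, or input data
    SYSTEM = "system"    # routing, policies, integrations

class Decision(Enum):
    ACCEPT = "accept the failure"
    REVERT = "revert changes"
    ESCALATE = "escalate to architecture"
    STOP = "stop all changes"

def triage(reproducible: bool, failure_class: Optional[FailureClass]) -> Decision:
    """One possible encoding of the gates: no repro or fuzzy class means stop."""
    if not reproducible:
        return Decision.STOP      # reproduction gate: lock the repro first
    if failure_class is None:
        return Decision.STOP      # unclear boundary: no changes allowed
    if failure_class is FailureClass.SYSTEM:
        return Decision.ESCALATE  # outside what prompt or model changes can fix
    # Model or context failures still need an explicit human call;
    # reverting to the last known-good state is the conservative default here.
    return Decision.REVERT
```

The point of the sketch is the ordering: the reproduction and classification gates run before any change is considered, so "just try something" never becomes the default path.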
What this pack is not
✗ Not observability
This doesn't log or monitor. It diagnoses after the fact.
✗ Not red teaming
This doesn't prevent failures. It classifies them when they happen.
✗ Not evaluation tooling
This doesn't run tests or generate metrics.
✗ Not prevention
This won't harden your system against future failures. It addresses the one in front of you.
✗ Not optimization
This doesn't improve performance. It stops thrashing.
Included artifacts (files)
Six markdown artifacts you run in order during an incident. Each file has a single job: capture the decision boundary, then force closure.
failure_mode_protocol.md
Incident triage worksheet. Classify the failure as model vs. context vs. system before touching prompts or architecture.
reproduction_gate.md
Hard gate. If you can't reproduce the failure, you stop. No "try one more change" until the repro is locked.
system_boundary_check.md
Boundary checklist. Separates what the LLM can influence from what the system must own (routing, retrieval, policies, integrations).
escalation_decision.md
Closure template. Forces a final call: accept, revert, escalate to architecture, or stop all changes.
audit_mirror_prompt.md
Audit prompt for a second pass, used as a system prompt with the LLM of your choice. Flags missing evidence, fuzzy classification, or decisions made under undocumented uncertainty.
README.md
Minimal instructions + run order. One page. No theory. Just "do this, then this".
File format
ZIP with plain Markdown (.md). No proprietary tooling. Designed to be edited, versioned, and shared internally during incident response.
Who should buy this
✓ Buy this if you are in an active incident
And you need a diagnosis before making more changes.
✓ Buy this if you need to stop thrashing
Multiple people are proposing different fixes. You need structure that forces classification first.
✓ Buy this if you need to decide before changing anything
You can't afford another failed iteration. You need a forced decision point.
✗ Do not buy this if:
- You are exploring ideas
- You want metrics or dashboards
- You want to improve performance
This pack doesn't optimize. It diagnoses.
Buy this between incidents. Not during your first one.
"This pack is priced for incident-time use, where one wrong change costs more than the pack."
Pricing
One-time purchase. Instant download.
- Failure mode protocol + reproduction gate
- System boundary check + escalation decision
- Audit mirror prompt + README
- Secure checkout via Gumroad
- Instant download after payment
- No subscriptions. One payment.
Plain markdown files you can version control and share with your team.