What Is an Explainable AI Report?
An Explainable AI (XAI) report is a plain-language document that explains how an AI system works, what data it uses, how it reaches its outputs, what its performance characteristics are, and what its known limitations are. Unlike the full technical documentation required under Article 11 — which is written for technical reviewers and regulators — an XAI report is written for the people who use, oversee, or are affected by an AI system.
Explainable AI (sometimes abbreviated XAI) is a field of AI research focused on making AI decisions interpretable to humans. In the EU AI Act context, it is operationalised through the transparency requirements of Article 13, which mandates that high-risk AI systems be accompanied by instructions for use that enable deployers to understand the system's outputs and exercise meaningful oversight.
An XAI report is not a marketing document or a product brochure. It is a compliance document that must accurately reflect the AI system's actual design, capabilities, and limitations — including the limitations that the developer or vendor would prefer not to highlight.
Article 13: The Legal Requirement
Article 13 of the EU AI Act states that high-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately.
It then requires that every high-risk AI system be accompanied by instructions for use that include, at minimum:
- The identity and contact details of the provider
- The characteristics, capabilities, and limitations of the AI system
- The level of accuracy, robustness, and cybersecurity referred to in Article 15, and any known or foreseeable circumstances that may have an impact on that level
- The expected lifetime of the AI system and any necessary maintenance and care measures to ensure proper functioning
- A description of any pre-determined changes to the AI system and its performance
- Data requirements and technical measures to ensure the AI system can be used as intended
- Any known or foreseeable circumstances that may lead to risks to health, safety, or fundamental rights
- The human oversight measures required, including technical measures to facilitate interpretation of the AI system's outputs by the deployers
- Computational infrastructure requirements
This list forms the structural backbone of the XAI report.
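One way to make that backbone concrete is to model the required elements as a structured record with a simple completeness check. The sketch below is illustrative only: the field names are hypothetical labels for the Article 13 elements listed above, not an official taxonomy.

```python
from dataclasses import dataclass, fields


@dataclass
class Article13Report:
    """Minimal sketch of the Article 13 instructions-for-use elements.

    Field names are illustrative shorthand for the bullet list above,
    not a legally defined schema.
    """
    provider_identity: str              # identity and contact details of the provider
    characteristics_limitations: str    # characteristics, capabilities, limitations
    accuracy_robustness_security: str   # Article 15 levels and affecting circumstances
    expected_lifetime_maintenance: str  # expected lifetime and maintenance measures
    predetermined_changes: str          # pre-determined changes to system/performance
    data_requirements: str              # data and technical measures for intended use
    foreseeable_risks: str              # risks to health, safety, fundamental rights
    human_oversight_measures: str       # oversight and interpretation measures
    computational_requirements: str     # computational infrastructure requirements


def missing_sections(report: Article13Report) -> list[str]:
    """Return the names of any sections left empty, as a completeness check."""
    return [f.name for f in fields(report) if not getattr(report, f.name).strip()]
```

A drafting workflow could run `missing_sections` before sign-off so that no required element ships blank.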
What an XAI Report Must Contain
A well-structured Explainable AI report covers the following elements:
1. System Purpose and Intended Use Cases
A clear, jargon-free description of what the AI system is designed to do. This includes: the problem it is solving, the decision or output it produces, the context in which it is intended to be used, and the categories of people it is intended to affect. This section must also specify what the system is not designed to do — the intended use boundaries.
2. Data Inputs and Training Data Description
A non-technical description of what data the AI system uses: what types of data are fed in as inputs, the general nature of the data the system was trained on (without requiring full disclosure of proprietary datasets), any known biases or limitations in the training data, and how data quality is maintained in production.
3. Decision Logic — Non-Technical Explanation
An explanation of how the AI system reaches its outputs, written for a non-technical audience. This does not require disclosure of the model architecture or training algorithm, but it must explain the general logic: what factors the AI considers, how they are weighted (in general terms), and why the system produces the kind of outputs it does. For classification systems (e.g., high/medium/low risk), the explanation should cover what drives each classification.
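For a classification system, one lightweight way to support this kind of explanation is to have the system return a plain-language reason alongside each label. The sketch below is a hypothetical example, not a description of any particular system: the score, thresholds, and wording are all assumed for illustration.

```python
def classify_risk(score: float, thresholds: tuple[float, float] = (0.33, 0.66)):
    """Hypothetical high/medium/low classifier that returns its reasoning.

    Returning (label, explanation) lets a deployer see which threshold
    drove the classification, not just the label itself.
    """
    low, high = thresholds
    if score >= high:
        return "high", f"score {score:.2f} is at or above the high-risk threshold {high}"
    if score >= low:
        return "medium", f"score {score:.2f} falls between {low} and {high}"
    return "low", f"score {score:.2f} is below the medium-risk threshold {low}"
```

The XAI report would then explain, in the same plain language, what drives the underlying score and why those thresholds were chosen.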
4. Performance Metrics and Known Limitations
The accuracy, precision, recall, or other appropriate performance metrics for the system, measured on representative test datasets. Crucially, this section must include known failure modes: circumstances in which the system is less accurate, populations for which it performs worse, or edge cases that the system does not handle well. Transparency about limitations is not optional — it is required by Article 13 and is central to meaningful human oversight.
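A simple way to surface the "performs worse for some populations" failure mode is to break accuracy down by subgroup rather than reporting only an overall figure. The sketch below uses invented group names and toy data purely to illustrate the idea.

```python
from collections import defaultdict


def subgroup_accuracy(records):
    """Compute per-subgroup accuracy from (group, y_true, y_pred) triples.

    A gap between subgroups is exactly the kind of known limitation
    that Article 13 requires the XAI report to disclose.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}


# Toy evaluation data: the model is noticeably weaker on group_b.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]
print(subgroup_accuracy(records))  # {'group_a': 0.75, 'group_b': 0.5}
```

In a real report the same breakdown would be computed on a representative held-out test set, with the subgroup definitions and the observed gaps stated explicitly.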
5. Human Oversight Integration
An explanation of how human oversight works in the context of this AI system: who the designated human overseer(s) are, what they are expected to do, how they can interpret the AI's outputs, what the override mechanism is, and what escalation path exists when the AI produces an unexpected or questionable output.
6. Contact Details for Queries
The identity of the provider (company name, registered address, contact person) and a mechanism for deployers and affected individuals to request further information about the AI system's operation. This is a legally required element under Article 13.
Who Reads XAI Reports
The XAI report has multiple audiences, each with different needs:
- National competent authorities — During inspections, regulators will review XAI reports to assess whether Article 13 obligations have been met. The report is a primary inspection document.
- Board members and executives — Responsible for AI governance oversight but typically lacking technical expertise. The XAI report enables informed board-level decision-making about AI deployment.
- HR and compliance officers — Need to understand what the AI tool they're deploying actually does, what biases it may have, and how to exercise meaningful oversight.
- Employees and candidates — People subject to AI-informed decisions have a legitimate interest in understanding how those decisions are made. The XAI report provides the foundation for individual rights disclosures.
- Enterprise customers — B2B buyers of AI tools increasingly require XAI reports before procurement. As enterprise procurement teams build EU AI Act compliance checklists, having a current XAI report is becoming a competitive requirement.
XAI Report vs Technical Documentation: Different Audiences
It is important to distinguish between the full technical documentation required under Article 11 and the XAI report (Article 13). They serve different purposes and different audiences:
| Dimension | Technical Documentation (Art. 11) | XAI Report (Art. 13) |
|---|---|---|
| Primary audience | Regulators, technical auditors, notified bodies | Deployers, board members, employees, customers |
| Language | Technical — model architecture, training data statistics, performance benchmarks | Plain language — accessible to non-technical readers |
| Scope | Full system specification across all Annex IV areas | Focused on what deployers need to understand for oversight |
| Availability | Confidential — provided to authorities on request | Shared with deployers before deployment; summarised for affected individuals |
| Update frequency | Updated on any material system change | Updated when changes affect how deployers use or understand the system |
How Aurora Trust Generates Plain-Language XAI Reports
Aurora Trust connects to your AI system and, based on its architecture description, intended use, and performance data, automatically generates a complete, Article 13-compliant XAI report in plain English (or any EU language you require). The report is structured to address every element of the Article 13 requirement, written at the level of a compliance officer or board member rather than a machine learning engineer.
Reports can be exported as PDF or Word documents, white-labelled with your company branding, and attached to vendor contracts or provided to enterprise customers as part of your AI governance package. Aurora Trust also maintains a version history of all XAI reports, so you can demonstrate to regulators how your AI system and its transparency documentation have evolved over time.