xJuRAG: Explainable AI for legal RAG applications
We are excited to announce the start of xJuRAG, short for "Explainable AI for legal RAG applications", a new joint research initiative. As project coordinator, dida is partnering with the renowned Fraunhofer Heinrich-Hertz-Institut (HHI) to tackle one of the biggest challenges in artificial intelligence: the "black box" problem.
While LLMs show great potential, their tendency to "hallucinate" or invent facts makes them risky for high-stakes, sensitive fields like the legal sector. This lack of trust is a major barrier to the widespread adoption of AI in professional legal environments.
The goal: A fully transparent legal assistant
The primary goal of xJuRAG is to build a legal assistance system that is not only intelligent but also completely transparent.
We are developing an application based on Retrieval Augmented Generation (RAG), specifically designed for the legal field. The system will be able to:
Answer complex legal questions
Reliably cite its sources
Clearly explain how it arrived at its conclusions
The project will initially focus on decisions in traffic and tenancy law.
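To make the RAG idea behind these capabilities concrete, here is a minimal sketch of the retrieval step: a question is matched against a small document collection, and the best-matching passage is returned together with its citation, ready to be handed to an LLM as grounding context. The corpus, the citations, and the bag-of-words "embedding" are all deliberately toy-sized illustrations, not the project's actual retrieval stack.

```python
import math
import re
from collections import Counter

# Hypothetical corpus of legal snippets with made-up citations,
# standing in for real tenancy- and traffic-law decisions.
CORPUS = {
    "Decision A (tenancy law)": "A rent increase must be justified in writing by the landlord.",
    "Decision B (traffic law)": "Exceeding the speed limit inside built-up areas is an administrative offence.",
}

def embed(text: str) -> Counter:
    """Very crude 'embedding': a bag-of-words term-frequency vector."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str) -> tuple[str, str]:
    """Return (citation, passage) of the document most similar to the question."""
    q = embed(question)
    return max(CORPUS.items(), key=lambda kv: cosine(q, embed(kv[1])))

citation, passage = retrieve("How must a landlord justify a rent increase?")
# An LLM would then generate its answer grounded in `passage`
# and return `citation` alongside it as the source.
```

In a production system the term-frequency vectors would be replaced by dense embeddings and a vector index, but the contract is the same: every generated answer is tied back to a retrievable, citable source passage.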
The innovation: Opening the black box
xJuRAG's innovation lies in its deep integration of advanced Explainable AI (XAI). We will adapt a powerful method developed at Fraunhofer HHI, Attention-aware Layer-wise Relevance Propagation (AttnLRP).
This approach goes far beyond standard RAG systems. It allows us to create a detailed "audit trail" that quantifies exactly which words in a source document were most relevant to the final answer.
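The core idea behind such an audit trail can be illustrated with the basic Layer-wise Relevance Propagation (LRP) rule that AttnLRP builds on. The toy example below is not the AttnLRP algorithm itself (which additionally handles attention and normalisation layers inside transformers); it only shows, for a single linear layer with hypothetical values, how an output score is redistributed onto input "tokens" in proportion to their contribution, so that the per-token relevances account for the whole output.

```python
EPS = 1e-9  # stabiliser of the epsilon-LRP rule

def lrp_linear(x, w, relevance_out):
    """Redistribute one output neuron's relevance onto its inputs.

    Each input i receives a share proportional to its contribution x_i * w_i.
    """
    contributions = [xi * wi for xi, wi in zip(x, w)]
    z = sum(contributions) + EPS
    return [relevance_out * c / z for c in contributions]

# Three "token" activations and one weight per token (hypothetical values).
x = [0.8, 0.1, 0.5]
w = [1.0, 0.2, -0.4]
output = sum(xi * wi for xi, wi in zip(x, w))   # the model's score
relevances = lrp_linear(x, w, output)

# Conservation property: the token relevances sum back to the score,
# so the explanation accounts for the entire output.
assert abs(sum(relevances) - output) < 1e-6
```

Applied layer by layer through a full transformer, this kind of backward redistribution is what yields the word-level relevance scores over source documents that xJuRAG will surface to its users.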
The justice system is built on clarity and verifiability - principles that standard AI cannot guarantee. xJuRAG aims to meet these principles directly. By providing a reliable and transparent tool, we will help legal professionals automate time-consuming research and build confidence in AI-supported decisions. This will free up experts to focus on higher-level strategy and argumentation.
By proving the viability of explainable AI in a sensitive field, xJuRAG will also help promote public trust and accelerate responsible AI adoption in other critical sectors like medicine and public administration.
The project is structured in three phases and is set to run for 36 months.
Contact
If you represent a legal firm or legal-tech company and are interested in learning more or exploring potential applications, please reach out to us.