import streamlit as st
st.set_page_config(
    page_title="AI Explainability in the EU AI Act",
    page_icon="👋",
)
st.title('AI Explainability in the EU AI Act: A Case for an NLE Approach Towards Pragmatic Explanations')
st.markdown(
"""
## Welcome to the AI Explainability Demo
This application demonstrates the principles of AI explainability in the context of the EU AI Act. It focuses on how Natural Language Explanations (NLE) can be used to provide clear, user-specific, and context-specific explanations of AI systems.
### Abstract
This paper navigates the implications of the emerging EU AI Act for artificial intelligence (AI) explainability, revealing challenges and opportunities. It reframes explainability from mere regulatory compliance with the Act to an organizing principle that can drive user empowerment and compliance with broader EU regulations. The study’s unique contribution lies in attempting to tackle the ‘last mile’ of AI explainability: conveying explanations from AI systems to users. Utilizing explanatory pragmatism as the philosophical framework, it formulates pragmatic design principles for conveying ‘good explanations’ through dialogue systems using natural language explanations. AI-powered robo-advising is used as a case study to assess the design principles, showcasing their potential benefits and limitations. The study acknowledges persisting challenges in the implementation of explainability standards and user trust, urging future researchers to empirically test the proposed principles.

**Key words**: EU AI Act, Explainability, Explanatory Pragmatism, Natural Language Explanations, Robo-Advising
### Table of Contents
- Introduction
1. EU AI Act: Meanings of Explainability
2. Explanatory Pragmatism
3. NLE and Dialogue Systems
4. Robo-Advising Case Study
5. Limitations
6. Conclusion and Future Work
### Introduction
The introduction outlines the structure of the paper, which is divided into six sections:
1. The EU AI Act's take on AI explainability.
2. Theoretical foundations of explanatory pragmatism.
3. The concept and principles of Natural Language Explanations (NLE).
4. Application of NLE in a Robo-Advising Dialogue System (RADS).
5. Limitations of the proposed approach.
6. Future directions for research.
### 1. EU AI Act: Meanings of Explainability
The EU AI Act is part of the EU’s strategy to regulate AI, aiming to balance innovation with risk management. It categorizes AI systems by risk level, with high-risk systems subject to stricter requirements. Explainability, although not explicitly mandated, is implied in several articles, notably Articles 13 and 14, which address transparency and human oversight.

**Articles Overview**:
- **Article 13**: Emphasizes transparency, requiring high-risk AI systems to be understandable and interpretable by users.
- **Article 14**: Stresses human oversight to ensure AI systems are used safely and effectively.

The paper argues that transparency and explainability are crucial for user empowerment and regulatory compliance.
### 2. Explanatory Pragmatism
This section discusses different philosophical approaches to explanation, emphasizing explanatory pragmatism, which views explanations as communicative acts tailored to individual users' needs. The pragmatic framework consists of:
- **Communicative View**: Explanations as speech acts aimed at facilitating understanding.
- **Inferentialist View**: Understanding as the ability to draw relevant, context-dependent inferences.

**Design Principles for a Good Explanation**:
1. Factually Correct: Accurate and relevant information.
2. Useful: Provides actionable insights.
3. Context Specific: Tailored to the user's context.
4. User Specific: Adapted to the user's knowledge level.
5. Provides Pluralism: Allows for multiple perspectives.
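
As an illustration (not part of the paper itself), these five principles can be read as an evaluation rubric, which is roughly how the evaluation pages of this demo treat them. The names and helper below are a hypothetical sketch, not the demo's actual implementation:

```python
# Hypothetical sketch: the five pragmatic criteria as a rubric that an
# evaluator (human or model-based) could score an explanation against.
CRITERIA = {
    "factually_correct": "Is the information accurate and relevant?",
    "useful": "Does the explanation provide actionable insights?",
    "context_specific": "Is it tailored to the context of the question?",
    "user_specific": "Is it adapted to the user's knowledge level?",
    "provides_pluralism": "Does it allow for multiple perspectives?",
}

def build_evaluation_prompt(question: str, explanation: str) -> list[str]:
    # Return the evaluation request as a list of lines; join them before
    # handing them to whichever evaluator does the scoring.
    lines = [
        f"Question: {question}",
        f"Explanation: {explanation}",
        "Rate the explanation from 1 to 5 on each criterion:",
    ]
    lines += [f"- {name}: {check}" for name, check in CRITERIA.items()]
    return lines
```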
### 3. NLE and Dialogue Systems
NLE transforms complex model workings into human-comprehensible language. Dialogue systems, which facilitate interaction between users and AI, are proposed as effective means for delivering NLE. Key design principles for dialogue systems include:
1. Natural language prompts.
2. Context understanding.
3. Continuity in dialogue.
4. Admission of system limitations.
5. Confidence levels for explanations.
6. Near real-time interaction.
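
As a purely illustrative sketch (again, not from the paper), these six principles can be folded into the standing instructions of an NLE dialogue agent; the variable and function names below are hypothetical:

```python
# Hypothetical sketch: the dialogue-system principles as standing instructions.
SYSTEM_INSTRUCTIONS = [
    "Accept questions phrased in ordinary natural language.",
    "Use the conversation history so explanations stay context-specific.",
    "Refer back to earlier turns to keep the dialogue continuous.",
    "Say plainly when a question falls outside what the system can explain.",
    "Attach a confidence level (high / medium / low) to every explanation.",
    "Keep responses short enough for near real-time interaction.",
]

def make_system_prompt() -> str:
    # Number the rules so the agent (and a reviewer) can refer to them by index.
    return " ".join(f"{i}. {rule}" for i, rule in enumerate(SYSTEM_INSTRUCTIONS, start=1))
```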
### 4. Robo-Advising Case Study
Robo-advising, although not explicitly high-risk per the EU AI Act, benefits from explainability for user trust and regulatory adherence. The paper illustrates this through hypothetical dialogues between users and a Robo-Advising Dialogue System (RADS), showcasing the principles in action. Different user profiles—retail consumers, data scientists, and regulators—demonstrate varied needs for explanations, highlighting RADS' adaptability and limitations.
### 5. Limitations
The paper acknowledges technical and ethical challenges in implementing explainability:
- Complexity of queries.
- Coherence and relevance of explanations.
- Context retention and information accuracy.
- Risk of overreliance on AI.
### 6. Conclusion and Future Work
The paper concludes that explainability should extend beyond regulatory compliance to foster ethical AI and user empowerment. It calls for empirical testing of the proposed design principles in real-world applications, particularly focusing on the scalability and practicality of implementing NLE in dialogue systems.
"""
)
st.sidebar.title("Demo Instructions")
st.sidebar.markdown("## Single Evaluation")
st.sidebar.markdown("""
- **Description**: Try the single evaluation by inputting a question and an explanation. This part allows you to see how well an explanation meets the criteria of a good explanation.
- **How to Use**:
1. Enter your question and explanation in the provided fields.
2. Click "Evaluate" to see the evaluation results.
""")
st.sidebar.markdown("## Explanation Generation")
st.sidebar.markdown("""
- **Description**: Upload a CSV file containing questions and generate natural language explanations for each question using different AI models.
- **How to Use**:
1. Upload a CSV file with a column named `question`.
2. Select an explanation template (e.g., "Default", "Chain of Thought", or "Custom").
3. Adjust the model parameters such as temperature and max tokens using the sliders in the sidebar.
4. Click "Generate Explanations" to get the results.
""")
st.sidebar.markdown("## Batch Evaluation")
st.sidebar.markdown("""
- **Description**: Evaluate explanations in bulk by uploading a CSV file containing questions and explanations. This part is useful for assessing the quality of multiple explanations at once.
- **How to Use**:
1. Upload a CSV file with columns named `question` and `explanation`.
2. Click "Evaluate Batch" to see the evaluation results.
""")
st.sidebar.markdown("### Example CSV Format for Explanation Generation and Batch Evaluation")
st.sidebar.markdown("""
```csv
question,explanation
"What causes rainbows to appear in the sky?","Rainbows appear when sunlight is refracted, dispersed, and reflected inside water droplets in the atmosphere, resulting in a spectrum of light appearing in the sky."
"Why is the sky blue?","The sky is blue because molecules in the air scatter blue light from the sun more than they scatter red light."
```
""")