import os
import openai
import gradio as gr
from dotenv import load_dotenv
from utils import chat
from config import CFG_APP
# Load the OpenAI API key (optionally from a local .env file)
try:
    load_dotenv()
except Exception:
    pass
openai.api_key = os.environ["OPENAI_API_KEY"]
# SYS Template
system_template = {
    "role": "system",
    "content": CFG_APP.INIT_PROMPT,
}
# APP
theme = gr.themes.Monochrome(
    font=[gr.themes.GoogleFont("Kanit"), "sans-serif"],
)
with gr.Blocks(title=CFG_APP.BOT_NAME, css="assets/style.css", theme=theme) as demo:
    gr.Markdown(f"<h1><center>{CFG_APP.BOT_NAME} 🤖</center></h1>")

    with gr.Row():
        # Left column: chat history, question box and example questions
        with gr.Column(scale=2):
            chatbot = gr.Chatbot(
                elem_id="chatbot", label=f"{CFG_APP.BOT_NAME} chatbot", show_label=False
            )
            state = gr.State([system_template])

            with gr.Row():
                ask = gr.Textbox(
                    show_label=False,
                    placeholder="Ask your question here and press Enter",
                )
            ask_examples_hidden = gr.Textbox(elem_id="hidden-message")
            examples_questions = gr.Examples(
                [*CFG_APP.DEFAULT_QUESTIONS],
                [ask_examples_hidden],
                examples_per_page=15,
            )

        # Right column: panel listing the sources cited in the answer
        with gr.Column(scale=1, variant="panel"):
            sources_textbox = gr.Markdown(show_label=False)

    # Submitting a question runs the chat function, then clears the input box
    ask.submit(
        fn=chat,
        inputs=[ask, state],
        outputs=[chatbot, state, sources_textbox],
    )
    ask.submit(lambda: gr.update(value=""), [], [ask])

    # Clicking an example question routes it through the hidden textbox
    ask_examples_hidden.change(
        fn=chat,
        inputs=[ask_examples_hidden, state],
        outputs=[chatbot, state, sources_textbox],
    )
    demo.queue(concurrency_count=16)
    gr.Markdown(
"""
### 🎯 Understanding ESMA-GPT's Purpose
In a context where financial regulations are constantly evolving, direct access to accurate information requires significant effort in sorting through regulatory documents. Given the fluidity of these regulations and the sheer volume of legislative paperwork, pinpointing precise, actionable information can be a daunting task.
\n ESMA-GPT, a conversational chatbot, offers an effective solution to this challenge. It is specifically designed to address queries related to credit risk regulations and draws its insights solely from documents published by official European regulatory sources, which assures the reliability and pertinence of its responses. By strictly focusing on these documents, ESMA-GPT avoids referencing non-relevant sources and maintains a high standard of precision in its answers. This tool harnesses the power of conversational AI to help users navigate the complex world of credit risk regulations, simplifying the task and promoting compliance efficiency.
"""
)
    gr.Markdown(
"""
### 📃 Inputs and functionalities
**Query embedding mode**: You can choose between two embedding modes for your query:
- **Hypothetical Document Embedding (HyDE)**: This mode uses the LLM to generate a hypothetical answer to your question. That answer contains keywords and patterns that are then used to retrieve similar passages from the documents (see the sketch below).
- **Reformulation**: This mode uses the LLM to reformulate your query. This may include rephrasing the question, expanding acronyms, and other techniques that clarify or simplify the query in order to obtain more accurate and relevant answers.
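\n As a rough illustration, both modes boil down to transforming the query with the LLM before embedding it. The sketch below is an assumption about how this could look with the OpenAI chat API; the function name and instructions are illustrative and do not necessarily match the actual implementation in `utils.chat`:
```python
def transform_query(question: str, mode: str = "HyDE") -> str:
    # Illustrative sketch only; the real query transformation lives in the utils module
    if mode == "HyDE":
        # Generate a hypothetical answer whose wording is embedded instead of the raw question
        instruction = "Write a short passage that could answer the question below."
    else:  # Reformulation
        # Rewrite the question: expand acronyms, simplify phrasing, make it self-contained
        instruction = "Reformulate the question below so that it is explicit and self-contained."
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]
```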
\n
In its initial release, Version 0, ESMA-GPT uses the following 13 documents as the basis for its answers:
\n
|Document|Link|
|:----|:----|
|UCITS|https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02009L0065-20230101|
|AIFMD|https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02011L0061-20210802&qid=1692715565602|
|CRAR|https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02009R1060-20190101&qid=1634569841934|
|EMIR|https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02012R0648-20220812|
|Benchmarks Regulation|https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02016R1011-20220101|
|MIFIR|https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02014R0600-20220101|
|MIFID II|https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02014L0065-20230323|
|CSDR|https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02014R0909-20220622|
|SFTR|https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02015R2365-20220812|
|Prospectus Regulation|https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02017R1129-20211110|
|SSR|https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02012R0236-20220131|
|SECR|https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02017R2402-20210409|
|Transparency Directive|https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02017R1129-20211110|
"""
)
    gr.Markdown(
"""
ESMA-GPT lets users enter queries in a dedicated prompt area, much like the one used in OpenAI's ChatGPT. If you are unsure what to ask, examples of potential questions are displayed below the query bar; simply click on one of these and the tool will generate the corresponding response.
\n When a query is submitted to the model, 10 sources are extracted from the previously mentioned documents to provide a comprehensive answer. These sources are quoted within the generated answer to ensure accuracy and reliability. For easy reference, the exact passages can be located by clicking on the link icon 🔗 beneath each excerpt, which takes you directly to the relevant section of the document, as sketched below.
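\n As a rough illustration of how a source entry in the right-hand panel could be assembled (the field names below are assumptions; the actual formatting is done inside `utils.chat`):
```python
def format_source(index: int, excerpt: dict) -> str:
    # Illustrative sketch only: "title", "text", "url" and "page" are assumed field names
    header = f"**[doc {index}] {excerpt['title']} (page {excerpt['page']})**"
    body = excerpt["text"]
    link = f"[🔗]({excerpt['url']}#page={excerpt['page']})"
    return f"{header}<br>{body}<br>{link}"
```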
"""
)
    gr.Markdown(
"""
### 💬 Prompt Initialization
To limit the model's responses to only the 10 proposed sources, a set of prompts has been designed to serve as instructions to the GPT API. This design decision ensures that the model's output is reliably grounded in the selected documents, contributing to the overall accuracy and reliability of the tool. The structured guidance provided by these prompts enables the GPT API to navigate the wealth of information contained within the ten sources more effectively, delivering relevant and concise responses to users' queries. A sketch of how these instructions could be assembled into the system prompt follows the list below.
<u>Prompts used to initialize ESMA-GPT: </u>
- "You are ESMA-GPT, an expert in market regulations, an AI Assistant by Nexialog Consulting."
- "You are given a question and extracted parts of regulation reports."
- "Provide a clear and structured answer based only on the context provided."
- "When relevant, use bullet points and lists to structure your answers."
- "When relevant, use facts and numbers from the following documents in your answer."
- "Whenever you use information from a document, reference it at the end of the sentence (ex: [doc 2])."
- "You don't have to use all documents, only if it makes sense in the conversation."
- "Don't make up new sources and references that don't exist."
- "If no relevant information to answer the question is present in the documents, just say you don't have enough information to answer."
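\n Conceptually, these instructions are simply concatenated into the content of the system message (exposed in this app as `CFG_APP.INIT_PROMPT` and wrapped in `system_template`). A minimal sketch, assuming the prompt is built in `config.py`; the exact joining logic is an assumption:
```python
# Illustrative sketch only: how the instructions above could be assembled
INSTRUCTIONS = (
    "You are ESMA-GPT, an expert in market regulations, an AI Assistant by Nexialog Consulting. ",
    "You are given a question and extracted parts of regulation reports. ",
    "Provide a clear and structured answer based only on the context provided. ",
    # ... remaining instructions omitted for brevity ...
    "If no relevant information to answer the question is present in the documents, "
    "just say you don't have enough information to answer.",
)
INIT_PROMPT = "".join(INSTRUCTIONS)
system_template = {"role": "system", "content": INIT_PROMPT}
```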
"""
)
    gr.Markdown(
"""
### ⚙️ Technical features
ESMA-GPT operates through two core components: the GPT API from OpenAI and an embedding model. These components are integrated into a single workflow, summarized in the figure below and sketched in code after the component list:
<div style="display:flex; justify-content:center;">
<img src="file/Images/Reg-GPT.png" width="800" height="800" />
</div>
- OpenAI API model: gpt-3.5-turbo
- Embedding model: https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-dot-v1
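\n A minimal sketch of this retrieval-then-generation workflow, assuming the embedding model above and an in-memory list of passages (function and variable names are illustrative, not the actual `utils.chat` implementation):
```python
from sentence_transformers import SentenceTransformer, util

# Illustrative sketch only; in the real app the passage embeddings are precomputed
model = SentenceTransformer("sentence-transformers/multi-qa-mpnet-base-dot-v1")

def answer(question: str, passages: list) -> str:
    # Embed the (possibly HyDE-transformed or reformulated) query and all passages
    query_emb = model.encode(question)
    passage_embs = model.encode(passages)
    # Keep the 10 passages with the highest dot-product similarity
    scores = util.dot_score(query_emb, passage_embs)[0]
    top_k = scores.argsort(descending=True)[:10]
    context = " ".join(f"[doc {i + 1}] {passages[int(idx)]}" for i, idx in enumerate(top_k))
    # Ask the chat model to answer using only the selected passages
    messages = [
        {"role": "system", "content": CFG_APP.INIT_PROMPT},
        {"role": "user", "content": f"{context} Question: {question}"},
    ]
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return response["choices"][0]["message"]["content"]
```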
"""
)
    gr.Markdown(
"<h1><center>Disclaimer ⚠️</center></h1>\n"
+ """
- Please be aware that this is Version 0 of our application. You may encounter errors or glitches as we continue to refine and enhance its functionality, including occasional nonsensical answers similar to those produced by ChatGPT. If you encounter any issues, do not hesitate to reach out to us at [email protected].
- Our application relies on an external API provided by OpenAI. Errors may occur when demand on the API is high. If you run into such an issue, we recommend refreshing the page and retrying your query, or trying again a little later.
- When using our application, please ask clear and explicit questions that fall within the scope of financial market regulations. This will ensure that you receive the most accurate and relevant responses from the system.
"""
)
demo.launch()