Daniel Foley committed on
Commit
a9e136f
·
1 Parent(s): 2b516d5

Finishing touches and cleanup

EVALUATION.md ADDED
@@ -0,0 +1,98 @@
1
+ # Analysis of RAG Responses
2
+
3
+ This document analyzes the performance of a Retrieval-Augmented Generation (RAG) system tested on queries grouped into three typologies: specific queries, vague queries, and gibberish or unknown facts, plus a short test with a query in a different language (Italian). Below are detailed insights into each category, based on the test results and observations.
4
+
5
+ ## Specific Queries
6
+
7
+ ### Examples
8
+
9
+ - **Query:** Where is boston?
10
+
11
+ **Response:** Boston is located in the state of Massachusetts, in Suffolk County, United States, which is part of North and Central America.
12
+
13
+ **Sources:** While the response mentions Suffolk County, the sources only contained this as a keyword and did not explicitly state the connection.
14
+
15
+ - **Query:** Who was Hitler?
16
+
17
+ **Response:** Sorry, I was unable to find any documents relevant to your query.
18
+
19
+ **Sources:** The sources contained information about World War II but did not specifically address Hitler, even though the BPL does hold material about him.
20
+
21
+ - **Query:** Do you know Ken Follet?
22
+
23
+ **Response:** Based on the provided context, there is no information available about Ken Follet.
24
+
25
+ **Sources:** The sources retrieved were not related to Ken Follett. It appears that the Boston Public Library (BPL) does not have any material about him.
26
+
27
+ - **Query:** Do you know the book 1984?
28
+
29
+ **Response:** Sorry, I was unable to find any documents relevant to your query.
30
+
31
+ **Sources:** The retrieved sources were not related to 1984. The BPL database lacks content specific to this book.
32
+
33
+ ### Notes
34
+ The RAG system readily falls back to "Sorry, I was unable to find any documents relevant to your query" when it lacks specific enough information to generate a meaningful answer. However, when it does provide answers, they are typically well constructed and accurate. For the RAG to function effectively, the documents it accesses must also be highly specific: for example, while there may be numerous images of Hitler, the absence of descriptive text accompanying them makes it impossible for the RAG to provide a comprehensive answer. Additionally, since we are working with the Digital Commonwealth rather than a traditional library catalog, it is understandable that there might not be sufficient material on topics like Ken Follett or the book 1984.
35
+
36
+ ## Vague Queries
37
+
38
+ ### Examples
39
+ - **Query:** Tell me about neurology.
40
+
41
+ **Response:** Sorry, I was unable to find any documents relevant to your query.
42
+
43
+ **Sources:** While the response indicates no relevant documents, the sources reportedly contained meaningful information about neurology.
44
+
45
+ - **Query:** Boston.
46
+
47
+ **Response:** The context provides a comprehensive overview of Boston, highlighting its historical and geographical aspects. It includes a south-east view of the city, a view of the State Capitol, and a depiction of the regional open space system. It also provides historical maps of Boston, including one from the 18th century that shows the city's maritime economy and settlement patterns, and a re-issue of this map from 1835. The context also mentions various landmarks, such as the Ether Monument Public Garden, the site of Anne Hutchinson’s house, and the John Hancock house. The city's history is also touched upon, with references to its settlement by Europeans in 1630 and its previous habitation by Native Americans.
48
+
49
+ **Sources:** The sources are highly relevant and align well with the query, providing detailed historical and geographical information about Boston.
50
+
51
+ - **Query:** What happened in 1984?
52
+
53
+ **Response:** Sorry, I was unable to find any documents relevant to your query.
54
+
55
+ **Sources:** All retrieved sources were editions of the same journal from 1984. While temporally related, the content did not address the question directly.
56
+
57
+ ### Notes
58
+ Vague queries pose an interesting challenge. The RAG system often responds with, "Sorry, I was unable to find any documents relevant to your query," even when it retrieves some interesting and relevant sources; it appears that only sufficiently descriptive sources enable the system to generate proper answers. When it does provide an answer, it is typically accurate and on point. However, the database is structured in a way that allows multiple editions of the same journal to exist, leading the RAG to retrieve all editions as meaningful sources, which is somewhat redundant. To improve the user experience, it would be helpful to limit these cases by reporting only one edition of a journal; a sketch of one way to do this follows.
59
+
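Below is a minimal sketch of one way to do this. It assumes each retrieved document is a dict whose `metadata` contains a `title` field; both the field names and the normalization regex are illustrative, not the actual Digital Commonwealth schema.

```python
import re

def dedupe_by_title(documents):
    """Keep only the first retrieved edition per journal/title.

    Different editions of the same journal usually share a base title and
    differ only in trailing volume/issue/date text, so we normalize the
    title and keep one document per normalized key.
    """
    seen = set()
    unique = []
    for doc in documents:
        title = doc.get("metadata", {}).get("title", "")
        # Drop trailing volume/issue/year noise such as "vol. 12, 1984".
        key = re.sub(r"[\s,;:.\-]*(vol\.?|no\.?|v\.)?\s*\d[\d\s,./-]*$", "", title.lower()).strip()
        if key and key in seen:
            continue
        seen.add(key)
        unique.append(doc)
    return unique
```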
60
+ ## Gibberish and Unknown Facts
61
+
62
+ ### Examples
63
+ - **Query:** Do you know De Bruyne?
64
+
65
+ **Response:** Sorry, I was unable to find any documents relevant to your query.
66
+
67
+ **Sources:** As expected, the response indicates no knowledge of De Bruyne. The retrieved sources were about someone with a similar name.
68
+
69
+ - **Query:** Qwghkjhgf.
70
+
71
+ **Response:** Sorry, I was unable to find any documents relevant to your query.
72
+
73
+ **Sources:** The retrieved sources were random, as expected for a nonsensical query.
74
+
75
+ - **Query:** Hello, I really like dogs, can you give me some dogs please? Thank you!
76
+
77
+ **Response:** Sorry, I was unable to find any documents relevant to your query.
78
+
79
+ **Sources:** The retrieved sources were about dogs.
80
+
81
+ ### Notes
82
+ The system is behaving as expected: when no relevant answer can be generated, the sources retrieved are either random or associated based on similarity in names or words. For instance, a query about "de Bruyne" results in "Les femmes de Bruges," as these are the most similar terms in the database. (Since de Bruyne is a contemporary football player, it is entirely reasonable that the Digital Commonwealth does not contain any information about him.)
83
+
84
+ ## Query in Different Language (Italian)
85
+
86
+ ### Example
87
+
88
+ - **Query:** Ciao, dove si trova boston?
89
+
90
+ **Response:** Boston si trova negli Stati Uniti, nello stato del Massachusetts. / Sorry, I was unable to find any documents relevant to your query.
91
+
92
+ **Sources:** The sources are about Boston, but not the same ones retrieved for the equivalent English query / The sources are about Italy but are not related to Boston itself (e.g., Milan or Rome).
93
+
94
+ ### Notes
95
+ Working with another language makes it challenging to receive the same answer consistently. Sometimes, the system provides the correct response (identical to the English version but translated into Italian) and sometimes the default message: "Sorry, I was unable to find any documents relevant to your query." Additionally, the sources retrieved vary from case to case, and the accuracy of the answer seems to depend on the quality and relevance of these sources. It's interesting to see how an Italian query can correspond to sources about Italy and not about the query itself.
96
+
97
+ ## Final Disclaimer
98
+ This test was conducted on a partial database. The inability of the RAG system to find specific information may be due to the absence of relevant data in the current product configuration, even though such information might exist in the complete database.
RAG.py CHANGED
@@ -125,6 +125,9 @@ def RAG(llm: Any, query: str,vectorstore:PineconeVectorStore, top: int = 10, k:
125
  """Main RAG function with improved error handling and validation."""
126
  start = time.time()
127
  try:
 
 
 
128
  # Retrieve initial documents using rephrased query -- not working as intended currently, maybe would be better for data with more words.
129
  # query_template = PromptTemplate.from_template(
130
  # """
 
125
  """Main RAG function with improved error handling and validation."""
126
  start = time.time()
127
  try:
128
+
129
+ # Query alignment is commented out; however, I have decided to leave it in for potential future use.
130
+
131
  # Retrieve initial documents using rephrased query -- not working as intended currently, maybe would be better for data with more words.
132
  # query_template = PromptTemplate.from_template(
133
  # """
WRITEUP.md CHANGED
@@ -187,7 +187,7 @@ Our implementation uses GPT-4o-mini from OpenAI, however you could fit in your o
187
  Also, make sure to replace the variable INDEX_NAME in streamlit-app.py with the name of your index.
188
 
189
  ```
190
- OPENAI_API_KEY = "<you-api-key>"
191
  ```
192
 
193
  Once you do that, you're ready to run.
@@ -201,6 +201,13 @@ This will run the app on port 8501. When querying, please be patient, sometimes
201
  **On-going Challenges**
202
  - Vector Store: 1.3 million metadata objects and 147,000 full text docs resulted in a cumulative ~140GB of vectors when using all-MiniLM-L6-v2 and recursive character splitting on 1000 characters. This made locally hosted vectorstores cumbersome and implausible. Our solution was to migrate a portion of our metadata vectors to Pinecone and used that in our final implementation. Hosting on Pinecone can become expensive and adds another dimension of complexity to the project.
203
  - Speed: Currently, the app takes anywhere from 25-70sec to generate a response, we have found that the most time-consuming aspect of this is our calls to the Digital Commonwealth API to retrieve the rest of the metadata for each object retrieved within Pinecone. We were unable to associate an object's full metadata in Pinecone due to internal limits, so we are hitting the Digital Commonwealth API to do so. On average, responses take 1/4 of a sec, however across 100 responses that becomes cumbersome.
204
- - Query Alignment: The way queries are worded can have impact on the quality of retrieval. We attempted to implement a form of query alignment by using an llm to generate a sample response to the query, however we found it to be ineffective and detrimental. Further research should be done in this area to improve standardization of query alignment ot improve retrieval.
205
  - Usage Monitoring: Real-time usage monitoring through the console logs is implemented, however it would be beneficial to implement a form of persistent usage monitoring for generating insights into model performance and query wording for the purpose of ML/OPs.
206
-
 
 
 
 
 
 
 
 
187
  Also, make sure to replace the variable INDEX_NAME in streamlit-app.py with the name of your index.
188
 
189
  ```
190
+ OPENAI_API_KEY = "<your-api-key>"
191
  ```
192
 
193
  Once you do that, you're ready to run.
 
201
  **On-going Challenges**
202
  - Vector Store: 1.3 million metadata objects and 147,000 full-text docs resulted in a cumulative ~140GB of vectors when using all-MiniLM-L6-v2 and recursive character splitting on 1000 characters. This made locally hosted vectorstores cumbersome and implausible. Our solution was to migrate a portion of our metadata vectors to Pinecone and use that in our final implementation. Hosting on Pinecone can become expensive and adds another dimension of complexity to the project. (A sketch of the embedding setup follows this list.)
203
  - Speed: Currently, the app takes anywhere from 25-70 seconds to generate a response. We have found that the most time-consuming aspect of this is our calls to the Digital Commonwealth API to retrieve the rest of the metadata for each object retrieved from Pinecone. We were unable to associate an object's full metadata in Pinecone due to internal limits, so we hit the Digital Commonwealth API to do so. On average, each API response takes about a quarter of a second; however, across 100 calls per query that becomes cumbersome.
204
+ - Query Alignment: The way queries are worded can have an impact on the quality of retrieval. We attempted to implement a form of query alignment by using an LLM to generate a sample response to the query, however we found it to be ineffective and detrimental. One specific aspect is the efficacy of specific queries versus vague ones ("Who wrote letters to WEB Du Bois?" vs "What happened in Boston in 1919?"). Queries as a whole may benefit from segmentation into the likely metadata fields they contain in order to inform querying (set up separate vectorstores for each field and then retrieve the different parts of the query against each respectively; a sketch of this idea follows this list). Further research should be done in this area to improve standardization of query alignment and thereby improve retrieval.
205
  - Usage Monitoring: Real-time usage monitoring through the console logs is implemented; however, it would be beneficial to implement persistent usage monitoring to generate insights into model performance and query wording for MLOps purposes.
206
+
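As a point of reference for the Vector Store challenge above, here is a minimal sketch of the embedding setup it describes (all-MiniLM-L6-v2 with recursive character splitting at 1000 characters). The import path for the splitter depends on your LangChain version, and the chunk overlap value is an assumption:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from sentence_transformers import SentenceTransformer

# ~1000-character chunks, as used for the metadata and full-text entries.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings

def embed_entry(text: str):
    """Chunk one entry and return (chunk, vector) pairs ready for upsert."""
    chunks = splitter.split_text(text)
    vectors = model.encode(chunks)
    return list(zip(chunks, vectors))
```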
207
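And a rough sketch of the field-segmentation idea from the Query Alignment item. `segment_query` is a stub standing in for an LLM call, and `field_stores` is assumed to map field names to LangChain-style vectorstores exposing `similarity_search`:

```python
def segment_query(query: str) -> dict:
    """Stub: map a query onto likely metadata fields.

    In practice this would be an LLM call; here a trivial rule pulls out a
    four-digit year as a 'date' sub-query and reuses the full query elsewhere.
    """
    parts = {"title": query, "abstract": query}
    years = [tok for tok in query.split() if tok.isdigit() and len(tok) == 4]
    if years:
        parts["date"] = years[0]
    return parts

def retrieve_by_field(field_stores: dict, query: str, k: int = 20) -> list:
    """Query one vectorstore per metadata field and pool the results."""
    results = []
    for field, sub_query in segment_query(query).items():
        store = field_stores.get(field)
        if store is not None:
            results.extend(store.similarity_search(sub_query, k=k))
    return results
```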
+ **Ad Hoc Process/Recommendations**
208
+ - Our demo on Hugging Face (temporarily hooked up to a group member's personal Pinecone index, to be disconnected after submission) only includes retrieval over 600k entries from the Digital Commonwealth API. Each entry's title fields and abstract were embedded and inserted into the vectorstore. We first retrieve the top 100 vectors related to the query (with the intent of keeping the vectorstore small and retrieving on topical relevance only), then fetch the metadata fields deemed relevant to queries (abstract, title, format, etc.) for each retrieved vector's source id and rerank with BM25 over that text (with the intent of then prioritizing entries on metadata like format and date); a sketch of this reranking step follows this list. This was an effective way to put together a quick demo.
209
+ - The data is significant in size and grows quickly with vector count if you chunk each entry heavily. It is our formal recommendation that you host your vectorstore on Pinecone or another managed service for efficient retrieval and initialization, and in consideration of the storage limits of Hugging Face Spaces.
210
+ - As mentioned previously, a way to segment and analyze each query prior to retrieval could create a more reliable and accurate retriever. Also of note is our prompt engineering: we strongly suggest using XML tags and a parser for efficient Chain of Thought in order to minimize LLM calls.
211
+ - Currently we hit the Digital Commonwealth API for metadata sequentially once we retrieve the top 100 vectors, in order to perform reranking and add context to the prompt. This is very slow. We recommend that you either forgo this method for another or parallelize your calls (we tried parallelization, however found that rate limiting was too severe). A solution might be to create a local metadata store initialized only on startup for lookups, or to create proxies for API parallelization; a sketch of a simple metadata cache also follows this list.
212
+
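A minimal sketch of the rerank step described in the first bullet above. It assumes each candidate is a dict of metadata fields already fetched from the Digital Commonwealth API and uses the `rank_bm25` package; the exact field names are illustrative:

```python
from rank_bm25 import BM25Okapi

def bm25_rerank(query: str, candidates: list[dict], top_n: int = 10) -> list[dict]:
    """Rerank Pinecone candidates by BM25 over selected metadata fields."""
    # One text blob per candidate, built from the fields deemed query-relevant.
    corpus = [
        " ".join(str(c.get(field, "")) for field in ("title", "abstract", "format", "date"))
        for c in candidates
    ]
    bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
    scores = bm25.get_scores(query.lower().split())
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return [candidate for candidate, _ in ranked[:top_n]]
```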
213
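And a sketch of the startup-initialized metadata store suggested in the last bullet. `fetch_fn` stands in for whatever call currently retrieves one object's metadata from the API, and the cache file name is hypothetical:

```python
import json
import os

CACHE_PATH = "metadata_cache.json"  # hypothetical local cache file

def load_cache() -> dict:
    """Load the persisted object-id -> metadata map once at startup."""
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            return json.load(f)
    return {}

def get_metadata(object_id: str, cache: dict, fetch_fn) -> dict:
    """Return metadata for one object, hitting the API only on a cache miss."""
    if object_id not in cache:
        cache[object_id] = fetch_fn(object_id)  # the single slow API call
        with open(CACHE_PATH, "w") as f:        # persist for future runs
            json.dump(cache, f)
    return cache[object_id]
```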
+ Thank You and Best of Luck!
dataset-documentation/DATASETDOC-fa24.md ADDED
@@ -0,0 +1,59 @@
1
+ ***Project Information***
2
+
3
+ * The project name is LibRAG (Retrieval Augmented Generation)
4
+ * https://github.com/BU-Spark/ml-bpl-rag/tree/main
5
+ * [Google Drive](https://drive.google.com/drive/folders/12_tsVcUgwdfUdXalD67NOgUL3tGeI6ss?usp=sharing)
6
+ * This project involved implementing natural language querying into the Digital Commonwealth project.
7
+ * Client: Boston Public Library
8
+ * Contact: Eben English
9
+ * Class: DS549
10
+
11
+ ***Dataset Information***
12
+
13
+ * Our data is contained on the SCC at /projectnb/sparkgrp/ml-bpl-rag-data
14
+ * /vectorstore/final_embeddings/metadata_index - faiss index for the metadata
15
+ * /vectorstore/final_embeddings/fulltext_index - faiss index for the OCR text
16
+ * /full_data/bpl_data.json - metadata
17
+ * /full_data/clean_ft.json - fulltext
18
+ * We did not have formal datasets, instead we used the Digital Commonwealth API and created embeddings from it. There is no need for a data dictionary outside of [Digital Commonwealth API](https://github.com/boston-library/solr-core-conf/wiki/SolrDocument-field-reference:-public-API).
19
+ * What keywords or tags would you attach to the data set?
20
+ * Domain(s) of Application: Natural Language Processing, Library Science
21
+ * Civic tech
22
+
23
+ *The following questions pertain to the datasets you used in your project.*
24
+ *Motivation*
25
+
26
+ * We needed to create embeddings of the Digital Commonwealth's data in order to perform retrieval
27
+
28
+ *Composition*
29
+
30
+ * Each entry in the Digital Commonwealth API represents an object in their repo of varying format
31
+ * There were ~1.3 million total objects last we checked, about 147,000 of which contain full text from OCR'd documents.
32
+ * Our data was a comprehensive snapshot; the API itself continues to be updated.
33
+ * Each field from the API represented metadata classifications
34
+ * Data is publicly accessible and non-confidential
35
+
36
+ *Collection Process*
37
+
38
+ * We collected data from an API endpoint (a minimal example follows this list).
39
+ * No sampling was performed
40
+ * This data was collected in October 2024
41
+
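For reference, a minimal example of pulling one page from the endpoint used by load_script.py; only the `meta.pages.next_page` field shown here is one we rely on, so inspect the rest of the response for the record payload:

```python
import requests

BASE_URL = "https://www.digitalcommonwealth.org/search.json?search_field=all_fields&per_page=100&q="

resp = requests.get(f"{BASE_URL}&page=1", timeout=30)
resp.raise_for_status()
page = resp.json()

print(list(page.keys()))                   # inspect the response structure
print(page["meta"]["pages"]["next_page"])  # pagination field used by load_script.py
```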
42
+ *Preprocessing/cleaning/labeling*
43
+
44
+ * Very limited character correction was performed on the fulltext data.
45
+ * No transformations were applied outside of embedding.
46
+ * The raw data is saved in ml-bpl-rag-data/full_data/bpl_data.json (metadata) and clean_ft.json (fulltext)
47
+
48
+ *Uses*
49
+
50
+ * Embedding for retrieval
51
+
52
+ *Distribution*
53
+
54
+ * This data is free to use and access by subsequent students of our project.
55
+
56
+ *Maintenance*
57
+
58
+ There is currently no system in place for cleanly updating the data, though in our instructions within WRITEUP.md we include a way to ingest your own data from the API and embed it.
59
+
load_script.py CHANGED
@@ -1,140 +1,72 @@
1
-
2
-
3
  import json
4
-
5
  import time
6
-
7
  import os
8
-
9
  import sys
10
-
11
  import requests
12
 
13
-
14
  def fetch_digital_commonwealth():
15
-
16
  start = time.time()
17
-
18
  BASE_URL = "https://www.digitalcommonwealth.org/search.json?search_field=all_fields&per_page=100&q="
19
-
20
  PAGE = sys.argv[1]
21
-
22
  END_PAGE = sys.argv[2]
23
-
24
  file_name = f"out{PAGE}_{END_PAGE}.json"
25
-
26
- FINAL_PAGE = 13038
27
-
28
  output = []
29
-
30
  file_path = f"./{file_name}"
31
-
32
  # file_path = './output.json'
33
-
34
  if os.path.exists(file_path):
35
-
36
  with open(file_path,'r') as file:
37
-
38
  output = json.load(file)
39
-
40
  if int(PAGE) < (len(output) + 1):
41
-
42
  PAGE = len(output) + 1
43
-
44
 
45
-
46
  if int(PAGE) >= int(END_PAGE):
47
-
48
  return None
49
-
50
  print(f'Reading page {PAGE} up to page {END_PAGE}')
51
 
52
  retries = 0
53
 
54
  while True:
55
-
56
  try:
57
-
58
  response = requests.get(f"{BASE_URL}&page={PAGE}")
59
-
60
  response.raise_for_status()
61
-
62
  data = response.json()
63
-
64
-
65
-
66
  # Append current page data to the output list
67
-
68
  output.append(data)
69
-
70
 
71
-
72
  # Save the entire output to a JSON file after each iteration
73
-
74
  with open(file_path, 'w') as f:
75
-
76
  json.dump(output, f)
77
 
78
 
79
-
80
-
81
-
82
  # check if theres a next page
83
-
84
  # print(len(response))
85
-
86
  if data['meta']['pages']['next_page']:
87
-
88
  if data['meta']['pages']['next_page'] == int(END_PAGE):
89
-
90
  print(f"Processed and saved page {PAGE}. Total pages saved: {len(output)}")
91
-
92
  break
93
-
94
- elif data['meta']['pages']['next_page'] == FINAL_PAGE:
95
-
96
  print(f"finished page {PAGE}")
97
-
98
  PAGE = FINAL_PAGE
99
-
100
  else:
101
-
102
  print(f"finished page {PAGE}")
103
-
104
  PAGE = data['meta']['pages']['next_page']
105
-
106
  else:
107
-
108
  print(f"Processed and saved page {PAGE}. Total pages saved: {len(output)}")
109
-
110
  break
111
-
112
 
113
-
114
  retries = 0
115
-
116
- # Optional: Add a small delay to avoid overwhelming the API
117
-
118
- # time.sleep(0.5)
119
-
120
  except requests.exceptions.RequestException as e:
121
-
122
  print(f"An error occurred: {e}")
123
-
124
  print(f"Processed and saved page {PAGE}. Total pages saved: {len(output)}")
125
-
126
  retries += 1
127
-
128
  if retries >= 5:
129
-
130
  break
131
 
132
  end = time.time()
133
-
134
  print(f"Timer: {end - start}")
135
-
136
  print(f"Finished processing all pages. Total pages saved: {len(output)}")
137
-
138
  if __name__ == "__main__":
139
-
140
  fetch_digital_commonwealth()
 
 
 
1
  import json
 
2
  import time
 
3
  import os
 
4
  import sys
 
5
  import requests
6
 
 
7
  def fetch_digital_commonwealth():
 
8
  start = time.time()
 
9
  BASE_URL = "https://www.digitalcommonwealth.org/search.json?search_field=all_fields&per_page=100&q="
 
10
  PAGE = sys.argv[1]
 
11
  END_PAGE = sys.argv[2]
 
12
  file_name = f"out{PAGE}_{END_PAGE}.json"
13
+ FINAL_PAGE = 13038 # hardcoded from an old version; I suggest adding logic to determine the final page. This was used to keep us from going out of range.
 
 
14
  output = []
 
15
  file_path = f"./{file_name}"
 
16
  # file_path = './output.json'
 
17
  if os.path.exists(file_path):
 
18
  with open(file_path,'r') as file:
 
19
  output = json.load(file)
 
20
  if int(PAGE) < (len(output) + 1):
 
21
  PAGE = len(output) + 1
 
22
 
 
23
  if int(PAGE) >= int(END_PAGE):
 
24
  return None
 
25
  print(f'Reading page {PAGE} up to page {END_PAGE}')
26
 
27
  retries = 0
28
 
29
  while True:
 
30
  try:
 
31
  response = requests.get(f"{BASE_URL}&page={PAGE}")
 
32
  response.raise_for_status()
 
33
  data = response.json()
34
+
 
 
35
  # Append current page data to the output list
 
36
  output.append(data)
 
37
 
 
38
  # Save the entire output to a JSON file after each iteration
 
39
  with open(file_path, 'w') as f:
 
40
  json.dump(output, f)
41
 
42
 
 
 
 
43
  # check if theres a next page
 
44
  # print(len(response))
 
45
  if data['meta']['pages']['next_page']:
 
46
  if data['meta']['pages']['next_page'] == int(END_PAGE):
 
47
  print(f"Processed and saved page {PAGE}. Total pages saved: {len(output)}")
 
48
  break
49
+ elif data['meta']['pages']['next_page'] == FINAL_PAGE: # This is hardcoded from an old version
 
 
50
  print(f"finished page {PAGE}")
 
51
  PAGE = FINAL_PAGE
 
52
  else:
 
53
  print(f"finished page {PAGE}")
 
54
  PAGE = data['meta']['pages']['next_page']
 
55
  else:
 
56
  print(f"Processed and saved page {PAGE}. Total pages saved: {len(output)}")
 
57
  break
 
58
 
 
59
  retries = 0
60
+ # time.sleep(0.5)  # optional small delay; left commented out because we were concerned about rate limiting
 
 
 
 
61
  except requests.exceptions.RequestException as e:
 
62
  print(f"An error occurred: {e}")
 
63
  print(f"Processed and saved page {PAGE}. Total pages saved: {len(output)}")
 
64
  retries += 1
 
65
  if retries >= 5:
 
66
  break
67
 
68
  end = time.time()
 
69
  print(f"Timer: {end - start}")
 
70
  print(f"Finished processing all pages. Total pages saved: {len(output)}")
 
71
  if __name__ == "__main__":
 
72
  fetch_digital_commonwealth()