Cyrile committed · commit dc73419 · verified · 1 parent: 20fb799

Update README.md

Files changed (1): README.md (+12 -12)
README.md CHANGED
@@ -11,29 +11,29 @@ pipeline_tag: sentence-similarity
 
 # Bloomz-3b Reranking
 
- This reranking model is built from [cmarkea/bloomz-3b-dpo-chat](https://huggingface.co/cmarkea/bloomz-3b-dpo-chat) model and aims to measure the semantic correspondence between
- a question (query) and a context. With its normalized scoring, it helps to filter the query/context matchings outputted by a retriever in an ODQA (Open-Domain Question Answering)context.
- Moreover, it allows to reorder the results using a more efficient modeling approach than the retriever one. However, this modeling type is not conducive to direct
- database searching due to its high computational cost.
+ This reranking model is built from the [cmarkea/bloomz-3b-dpo-chat](https://huggingface.co/cmarkea/bloomz-3b-dpo-chat) model and aims to measure the semantic correspondence
+ between a question (query) and a context. With its normalized scoring, it helps filter the query/context matches returned by a retriever in an ODQA (Open-Domain
+ Question Answering) setting. Moreover, it makes it possible to reorder the results using a model that is more effective than the retriever's. However, this type of
+ model is not suited to searching a database directly, due to its high computational cost.
 
 Developed to be language-agnostic, this model supports both French and English. Consequently, it can effectively score in a cross-language context without being
 influenced by its behavior in a monolingual context (English or French).
 
 ## Dataset
- The training dataset is composed of the [mMARCO dataset](https://huggingface.co/datasets/unicamp-dl/mmarco), consisting of query/positive/hard negative triplets. Additionally,
- we have included [SQuAD](https://huggingface.co/datasets/rajpurkar/squad) data from the "train" split, forming query/positive/hard negative triplets. In order to generate hard
- negative data for SQuAD, we considered contexts from the same theme as the query but from a different set of queries. Hence, the negative observations belong to the same
- themes as the queries but presumably do not contain the answer to the question.
+ The training dataset is built from the [mMARCO dataset](https://huggingface.co/datasets/unicamp-dl/mmarco), which consists of query/positive/hard-negative triplets.
+ Additionally, we include [SQuAD](https://huggingface.co/datasets/rajpurkar/squad) data from the "train" split, also formed into query/positive/hard-negative triplets.
+ To generate the hard negatives for SQuAD, we take contexts from the same theme as the query but attached to a different set of queries. Hence, the negative
+ observations belong to the same themes as the queries but presumably do not contain the answer to the question.
 
 Finally, the triplets are flattened to obtain pairs of query/context sentences with a label 1 if query/positive and a label 0 if query/negative. In each element of the
 pair (query and context), the language, French or English, is randomly and uniformly chosen.
 
 ## Evaluation
 
- To assess the performance of the reranker, we will make use of the "validation" split of the [SQuAD](https://huggingface.co/datasets/rajpurkar/squad) dataset. We will select
- the first question from each paragraph, along with the paragraph constituting the context that should be ranked Top-1 for an Oracle modeling. What's intriguing is that
- the number of themes is limited, and each context from a corresponding theme that does not match the query is considered as a hard negative (other contexts outside the theme are
- simple negatives). Thus, we can construct the following table, with each theme showing the number of contexts and associated query:
+ To assess the performance of the reranker, we use the "validation" split of the [SQuAD](https://huggingface.co/datasets/rajpurkar/squad) dataset. We select the first
+ question from each paragraph, with the paragraph itself as the context that an oracle model should rank Top-1. Because the number of themes is limited, every context
+ from the query's theme that does not match it is treated as a hard negative (contexts from other themes are simple negatives). We can thus construct the following
+ table, showing the number of contexts for each theme:
 
 | Theme name | Context number |
 |---------------------------------------------:|:---------------|
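
Since the card describes a pairwise scoring model, a minimal usage sketch may help. It assumes the checkpoint is published as `cmarkea/bloomz-3b-reranking` and exposes a two-class sequence-classification head whose positive class sits at index 1; none of these details are confirmed by this diff.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repo id and head type; adjust to the actual checkpoint.
model_id = "cmarkea/bloomz-3b-reranking"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Cross-language example: French query, English contexts.
query = "Qui a écrit Les Misérables ?"
contexts = [
    "Victor Hugo wrote Les Misérables, published in 1862.",
    "The Eiffel Tower was completed in 1889.",
]

# Encode each query/context pair; the positive-class probability is the
# normalized score used to filter and reorder the retriever's candidates.
enc = tokenizer([query] * len(contexts), contexts,
                padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**enc).logits.softmax(dim=-1)[:, 1]

for score, ctx in sorted(zip(scores.tolist(), contexts), reverse=True):
    print(f"{score:.3f}  {ctx}")
```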
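To make the flattening step concrete, here is a small sketch of turning triplets into labeled pairs with a per-element language draw. The bilingual field layout (`{"fr": ..., "en": ...}`) is an assumed representation for illustration, not the actual preprocessing code.

```python
import random

rng = random.Random(0)

def flatten_triplets(triplets):
    """Flatten (query, positive, hard-negative) triplets into labeled pairs.

    Each field is assumed to exist in both languages; the language of every
    element of a pair is drawn uniformly, so the resulting pairs are a mix of
    monolingual and cross-language examples.
    """
    pairs = []
    for t in triplets:
        for context, label in ((t["positive"], 1), (t["negative"], 0)):
            pairs.append({
                "query": t["query"][rng.choice(("fr", "en"))],
                "context": context[rng.choice(("fr", "en"))],
                "label": label,  # 1 = query/positive, 0 = query/negative
            })
    return pairs

# Tiny illustrative triplet (not real mMARCO/SQuAD data):
demo = [{
    "query": {"en": "Who wrote Les Misérables?", "fr": "Qui a écrit Les Misérables ?"},
    "positive": {"en": "Victor Hugo wrote Les Misérables.", "fr": "Victor Hugo a écrit Les Misérables."},
    "negative": {"en": "Balzac wrote La Comédie humaine.", "fr": "Balzac a écrit La Comédie humaine."},
}]
print(flatten_triplets(demo))
```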
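A sketch of how such an evaluation set could be assembled with the `datasets` library. Treating SQuAD's `title` field as the theme and deduplicating on the raw context string to get "the first question from each paragraph" are interpretations made for this sketch.

```python
from collections import Counter
from datasets import load_dataset

squad_val = load_dataset("rajpurkar/squad", split="validation")

# Keep the first question attached to each distinct paragraph; that paragraph
# is the context an oracle ranker should place Top-1 for its query.
seen, eval_set = set(), []
for row in squad_val:
    if row["context"] not in seen:
        seen.add(row["context"])
        eval_set.append(
            {"theme": row["title"], "query": row["question"], "context": row["context"]}
        )

# Within a theme, every non-matching context is a hard negative; contexts from
# other themes are simple negatives. Count contexts per theme (the table's
# "Context number" column).
print(Counter(item["theme"] for item in eval_set).most_common(5))
```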