Datasets: ibm/
Modalities: Text
Formats: csv
Languages: English
Size: < 1K
Libraries: Datasets, pandas
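The card lists csv data usable with the Datasets and pandas libraries. A minimal loading sketch, assuming a placeholder repo id since the full dataset name is truncated in the header above:

```python
# Minimal sketch: load the card's csv split with the Hugging Face `datasets`
# library, then view it in pandas. "ibm/<dataset-name>" is a placeholder --
# substitute the actual repo id shown on the dataset page.
from datasets import load_dataset

ds = load_dataset("ibm/<dataset-name>", split="train")  # < 1K rows per the card
df = ds.to_pandas()                                      # pandas view of the same rows
print(df.head())
```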
Commit 327a968 by javiccano (1 parent: 5518aa1)

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -20,7 +20,7 @@ p{color:Black !important;}
 
 
  <p align="center">
- <img src="https://ibm.box.com/shared/static/7vq37r4ny12whwcw7les0txr3l2ltdys.png" width=70%/>
+ <img src="https://github.com/javiccano/Wikipedia_contradict_benchmark/blob/main/figs/Example.png?raw=true" width=70%/>
  <!-- <img src="./figs/Example.png" width=70%/> -->
  </p>
 
@@ -67,14 +67,14 @@ The dataset has been used in the paper to assess LLMs performance when augmented
  The following figure illustrates the evaluation process:
 
  <p align="center">
- <img src="https://ibm.box.com/s/xstzgegdlvqejvd2q12pld1c82vcbhs9" width=70%/>
+ <img src="https://github.com/javiccano/Wikipedia_contradict_benchmark/blob/main/figs/Evaluation.png?raw=true" width=70%/>
  <!-- <img src="./figs/Evaluation.png" width=70%/> -->
  </p>
 
  And the following table shows the performance of five LLMs (Mistral-7b-inst, Mixtral-8x7b-inst, Llama-2-70b-chat, Llama-3-70b-inst, and GPT-4) on the Wikipedia Contradict Benchmark based on rigorous human evaluations on a subset of answers for 55 instances, which corresponds to 1,375 LLM responses in total.
 
  <p align="center">
- <img src="https://ibm.box.com/shared/static/brqfm2g2q5g63cxirsaa5jf0apqmbe5d.png" width=70%/>
+ <img src="https://github.com/javiccano/Wikipedia_contradict_benchmark/blob/main/figs/table2.png?raw=true" width=70%/>
  <!-- <img src="./figs/table2.png" width=70%/> -->
  </p>
 
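The change in this commit swaps Box-hosted figures for copies committed to the GitHub repo; appending `?raw=true` to a blob URL makes GitHub serve the raw file so it renders inside an `<img>` tag. A small sketch of that URL pattern (the helper is illustrative; the user, repo, and figure paths mirror the diff above):

```python
# Illustrative helper: build a raw-rendering GitHub image URL of the kind used
# in this commit. Appending ?raw=true to a blob URL redirects to the raw file.
def raw_github_url(user: str, repo: str, path: str, branch: str = "main") -> str:
    return f"https://github.com/{user}/{repo}/blob/{branch}/{path}?raw=true"

print(raw_github_url("javiccano", "Wikipedia_contradict_benchmark", "figs/Example.png"))
# -> https://github.com/javiccano/Wikipedia_contradict_benchmark/blob/main/figs/Example.png?raw=true
```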