PEFT
Safetensors
orionweller committed (verified)
Commit 28b66d8 · 1 Parent(s): ad5c1d0

Update README.md

Files changed (1)
  1. README.md +25 -1
README.md CHANGED
@@ -7,6 +7,19 @@ library_name: peft
 
 This is a reproduced version of the RepLLaMA model. See [this thread](https://github.com/texttron/tevatron/issues/129) for details of the reproduction process, which differs from their original version.
 
+# Other Links
+| Resource | Description |
+|:-------|:------------|
+| [samaya-ai/promptriever-llama2-7b-v1](https://huggingface.co/samaya-ai/promptriever-llama2-7b-v1) | A Promptriever bi-encoder model based on LLaMA 2 (7B parameters). |
+| [samaya-ai/promptriever-llama3.1-8b-instruct-v1](https://huggingface.co/samaya-ai/promptriever-llama3.1-8b-instruct-v1) | A Promptriever bi-encoder model based on LLaMA 3.1 Instruct (8B parameters). |
+| [samaya-ai/promptriever-llama3.1-8b-v1](https://huggingface.co/samaya-ai/promptriever-llama3.1-8b-v1) | A Promptriever bi-encoder model based on LLaMA 3.1 (8B parameters). |
+| [samaya-ai/promptriever-mistral-v0.1-7b-v1](https://huggingface.co/samaya-ai/promptriever-mistral-v0.1-7b-v1) | A Promptriever bi-encoder model based on Mistral v0.1 (7B parameters). |
+| [samaya-ai/RepLLaMA-reproduced](https://huggingface.co/samaya-ai/RepLLaMA-reproduced) | A reproduction of the RepLLaMA model (no instructions). A bi-encoder based on LLaMA 2, trained on the [tevatron/msmarco-passage-aug](https://huggingface.co/datasets/Tevatron/msmarco-passage-aug) dataset. |
+| [samaya-ai/msmarco-w-instructions](https://huggingface.co/samaya-ai/msmarco-w-instructions) | A dataset of MS MARCO with added instructions and instruction negatives, used for training the above models. |
+
+
+# Usage
+
 You can use this with the RepLLaMA example code in [tevatron](https://github.com/texttron/tevatron) or with mteb:
 
 ```python
@@ -50,4 +63,15 @@ deepspeed --include localhost:0,1,2,3 --master_port 60000 --module tevatron.retr
 --gradient_accumulation_steps 4
 ```
 
-For citation, please also see the [original RepLLaMA paper](https://arxiv.org/abs/2310.08319).
+For citation, please also see the [original RepLLaMA paper](https://arxiv.org/abs/2310.08319) and feel free to cite Promptriever as well:
+
+# Citation
+
+```bibtex
+@article{weller2024promptriever,
+  title={Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models},
+  author={Weller, Orion and Van Durme, Benjamin and Lawrie, Dawn and Paranjape, Ashwin and Zhang, Yuhao and Hessel, Jack},
+  journal={arXiv preprint TODO},
+  year={2024}
+}
+```
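
The README's Python usage block is elided in this diff view. For orientation only, here is a rough, hypothetical sketch of loading the adapter, not the card's own snippet: it assumes the `meta-llama/Llama-2-7b-hf` base model, the usual RepLLaMA `query: `/`passage: ` prefixes, and EOS last-token pooling with L2 normalization; the tevatron example code is the authoritative reference.

```python
# Hypothetical sketch, not the card's own snippet: load the LoRA adapter on top of the
# LLaMA-2 base model and embed a query/passage pair. The "query: "/"passage: " prefixes
# and EOS last-token pooling follow the usual RepLLaMA recipe; verify against the
# tevatron example code before relying on it.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base = AutoModel.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "samaya-ai/RepLLaMA-reproduced")
model = model.merge_and_unload().eval()  # fold the LoRA weights into the base model

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

def encode(text: str) -> torch.Tensor:
    # Append EOS, take the hidden state of that final token, and L2-normalize it.
    inputs = tokenizer(text + tokenizer.eos_token, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return F.normalize(hidden[0, -1].float(), dim=-1)

query_emb = encode("query: what is a bi-encoder?")
passage_emb = encode("passage: A bi-encoder embeds queries and passages independently.")
print(torch.dot(query_emb, passage_emb).item())  # cosine similarity of normalized embeddings
```

Calling `merge_and_unload()` folds the LoRA weights into the base weights, so inference runs on a plain LLaMA model without the PEFT wrapper.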