---
license: other
datasets:
- Open-Orca/OpenOrca
- ehartford/wizard_vicuna_70k_unfiltered
tags:
- code
- prompt
- reverse prompt
widget:
- text: "[RESPONSE]The given phrase is incorrect because it contains a grammatical error. The correct phrase should be 'You were late.' The word 'you' is a pronoun that refers to a person or people, and 'were' is the past tense form of the verb 'to be.' In this sentence, 'were' is used to indicate that the person being referred to was late in the past. The word 'your' is a possessive adjective that is used to show ownership or possession of something. It is not a substitute for the subject pronoun 'you.' Therefore, the phrase 'Your was late' is grammatically incorrect.\n[REVERSED-PROMPT]"
  example_title: "reverse prompt"

---

# core-prompt-reverser-opt-1.3b

This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on a mix of OpenOrca, Wizard-Vicuna, and custom data (see the Training and evaluation data section below).
It achieves the following results on the evaluation set:
- Loss: 1.4784
- Accuracy: 0.6753

## Model description

The model supports two prompt formats. Forward, for ordinary instruction following:

[INSTRUCTION] {your question}  
[RESPONSE] {model response}  

Reverse, for reconstructing the prompt that produced a given response:

[RESPONSE] {response}  
[REVERSED-PROMPT] {model prompt reversed}  
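
Below is a minimal usage sketch with the Transformers library. The hub repository id is an assumption; replace it with the actual path where this model is hosted:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "core-prompt-reverser-opt-1.3b"  # assumed hub id, adjust as needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Reverse mode: supply a response and let the model reconstruct the prompt.
prompt = (
    "[RESPONSE]The given phrase is incorrect because it contains a "
    "grammatical error. The correct phrase should be 'You were late.'\n"
    "[REVERSED-PROMPT]"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```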

 

## Intended uses & limitations

More information needed

## Training and evaluation data

Wizard-Vicuna ([ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)), OpenOrca ([Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)), and custom data.



### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
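
These map directly onto `TrainingArguments` in `transformers`; a sketch of the assumed configuration follows (the output directory is illustrative, not taken from the original run):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="core-prompt-reverser-opt-1.3b",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    # The Adam settings below are the library defaults, matching the list above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```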

### Training results
This model is still training: so far it has seen only about 5% of the total training data. Training is expected to finish on September 4.


### Framework versions

- Transformers 4.33.0.dev0
- Pytorch 2.1.0.dev20230605+cu121
- Datasets 2.14.4
- Tokenizers 0.13.3