edmundmills committed · Commit e6bae1e · 1 Parent(s): cbc33f8

Update README.md

README.md CHANGED
@@ -13,9 +13,26 @@ should probably proofread and complete it, then remove this comment. -->

This model is intended to detect whether a sentence describes a present-moment experience that a human or animal is having.

+## Usage
+
+Given a sentence, the model returns a logit indicating whether that sentence describes a present-moment experience; higher values mean the sentence is more likely to contain one.
+
+```
+import torch
+import transformers
+
+# Load the fine-tuned classifier and its tokenizer.
+model = transformers.AutoModelForSequenceClassification.from_pretrained('edmundmills/experience-model-v1')  # type: ignore
+tokenizer = transformers.AutoTokenizer.from_pretrained('edmundmills/experience-model-v1', use_fast=False)  # type: ignore
+
+sentence = "I am eating food."
+tokenized = tokenizer([sentence], return_tensors='pt', return_attention_mask=True)
+input_ids, masks = tokenized['input_ids'], tokenized['attention_mask']
+with torch.inference_mode():
+    out = model(input_ids, attention_mask=masks)
+# The single logit is squashed with a sigmoid to get a probability.
+probs = out.logits.sigmoid().squeeze().item()
+print(probs)  # 0.92
+```
+
## Model description

-
+This model was fine-tuned from 'microsoft/deberta-v3-large'.

## Intended uses & limitations

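In practice, the single sigmoid-squashed logit from the usage snippet above is reduced to a yes/no decision by thresholding. Below is a minimal sketch of such a wrapper; the helper name and the 0.5 default threshold are illustrative assumptions, not part of the model card or this commit.

```
import torch
import transformers

# Illustrative helper only: wraps the usage snippet above behind a boolean check.
# The function name and the 0.5 default threshold are assumptions, not from the model card.
def contains_present_moment_experience(sentence: str,
                                        model: transformers.PreTrainedModel,
                                        tokenizer: transformers.PreTrainedTokenizerBase,
                                        threshold: float = 0.5) -> bool:
    tokenized = tokenizer([sentence], return_tensors='pt', return_attention_mask=True)
    with torch.inference_mode():
        out = model(tokenized['input_ids'], attention_mask=tokenized['attention_mask'])
    # Single logit -> probability via sigmoid, then threshold it.
    prob = out.logits.sigmoid().squeeze().item()
    return prob >= threshold
```

Given the roughly 10% positive rate in the training data noted in the following hunk, calibrating the threshold on held-out data is likely preferable to a fixed 0.5.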
@@ -27,7 +44,7 @@ This model was trained on 745 training samples, with ~10% of them containing pre

## Training procedure

-
+The model was fine-tuned using https://github.com/AlignmentResearch/experience-model, with a binary cross-entropy (BCE) loss.

### Training hyperparameters

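The training-procedure line above points to https://github.com/AlignmentResearch/experience-model and a BCE loss. As a rough sketch only, and not the repository's actual training code, one fine-tuning step with a single-logit head and `BCEWithLogitsLoss` could look like the following; the base checkpoint is taken from the model description, while the optimizer, learning rate, and toy batch are assumptions.

```
import torch
import transformers

# Rough sketch of one BCE fine-tuning step; the real training code lives in
# https://github.com/AlignmentResearch/experience-model. The optimizer, learning
# rate, and toy batch below are assumptions.
model = transformers.AutoModelForSequenceClassification.from_pretrained(
    'microsoft/deberta-v3-large', num_labels=1)  # single logit, as in the usage snippet
tokenizer = transformers.AutoTokenizer.from_pretrained('microsoft/deberta-v3-large', use_fast=False)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss_fn = torch.nn.BCEWithLogitsLoss()
model.train()

sentences = ["I am eating food.", "The report is due next quarter."]
labels = torch.tensor([1.0, 0.0])  # 1.0 = sentence contains a present-moment experience

batch = tokenizer(sentences, return_tensors='pt', padding=True, return_attention_mask=True)
out = model(batch['input_ids'], attention_mask=batch['attention_mask'])
loss = loss_fn(out.logits.squeeze(-1), labels)  # BCE on the raw logit per sentence
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

This mirrors the inference-time sigmoid: `BCEWithLogitsLoss` applies the sigmoid internally, so the model's raw logits can be used directly at training time.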