---
language:
- en
license: apache-2.0
library_name: Transformers
tags:
- nlp
- text-classification
- argilla
- transformers
dataset_name: argilla/emotion
---

<!-- This model card has been generated automatically according to the information the `ArgillaTrainer` had access to. You
should probably proofread and complete it, then remove this comment. -->

# Model Card for *Model ID*

This model was created with [Argilla](https://docs.argilla.io) and trained with *Transformers*.

<!-- Provide a quick summary of what the model is/does. -->

This is a sample model finetuned from prajjwal1/bert-tiny.

## Model training

Training the model using the `ArgillaTrainer`:

```python
from argilla.feedback import ArgillaTrainer, FeedbackDataset, TrainingTask

# Load the dataset:
dataset = FeedbackDataset.from_huggingface("argilla/emotion")

# Create the training task:
task = TrainingTask.for_text_classification(
    text=dataset.field_by_name("text"),
    label=dataset.question_by_name("label"),
)

# Create the ArgillaTrainer:
trainer = ArgillaTrainer(
    dataset=dataset,
    task=task,
    framework="transformers",
    model="prajjwal1/bert-tiny",
)

# Tweak the training arguments before training:
trainer.update_config(
    logging_steps=1,
    num_train_epochs=1,
    output_dir="tmp",
)

trainer.train(output_dir="tmp")
```
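
If you want to keep or share the fine-tuned model afterwards, the trainer can also persist it. A minimal sketch, assuming your Argilla version ships these helpers (the local path and the Hub repository id are placeholders):

```python
# Save the fine-tuned model to a local directory (path is a placeholder).
trainer.save("./bert-tiny-emotion")

# Optionally push it to the Hugging Face Hub (repository id is a placeholder).
trainer.push_to_huggingface("my-user/bert-tiny-emotion")
```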

You can test the model's predictions like so:

```python
trainer.predict("This is awesome!")
```
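
Because the underlying model is a standard *Transformers* text-classification model, it can also be loaded without Argilla. A minimal sketch, assuming the checkpoint was saved to the `tmp` output directory used above (swap in your own path or Hub repo id):

```python
from transformers import pipeline

# Load the saved checkpoint with the plain Transformers pipeline API.
classifier = pipeline("text-classification", model="tmp")
print(classifier("This is awesome!"))
```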

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

Model trained with `ArgillaTrainer` for demo purposes.

- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Finetuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) for demo purposes
- **Language(s) (NLP):** en
- **License:** apache-2.0
- **Finetuned from model [optional]:** prajjwal1/bert-tiny

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** N/A


<!--
## Uses

*Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model.*
-->

<!--
### Direct Use

*This section is for the model use without fine-tuning or plugging into a larger ecosystem/app.*
-->

<!--
### Downstream Use [optional]

*This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app*
-->

<!--
### Out-of-Scope Use

*This section addresses misuse, malicious use, and uses that the model will not work well for.*
-->

<!--
## Bias, Risks, and Limitations

*This section is meant to convey both technical and sociotechnical limitations.*
-->

<!--
### Recommendations

*This section is meant to convey recommendations with respect to the bias, risk, and technical limitations.*
-->

<!--
## Training Details

### Training Metrics

*Metrics related to the model training.*
-->

<!--
### Training Hyperparameters

- **Training regime:** (fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision)
-->

<!--
## Environmental Impact

*Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly*

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
-->

## Technical Specifications [optional]

### Framework Versions

- Python: 3.10.7
- Argilla: 1.19.0-dev
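
To sanity-check a local environment against the versions above, something like the following sketch can be used (it assumes the installed packages expose `__version__`, which Argilla and Transformers do):

```python
import sys

import argilla
import transformers

# Compare the local interpreter and library versions with the card above.
print("Python:", sys.version.split()[0])
print("Argilla:", argilla.__version__)
print("Transformers:", transformers.__version__)
```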

<!--
## Citation [optional]

*If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section.*

### BibTeX
-->

<!--
## Glossary [optional]

*If relevant, include terms and calculations in this section that can help readers understand the model or model card.*
-->

<!--
## Model Card Authors [optional]

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->