Add SetFit model
Browse files
- README.md +19 -13
- config_setfit.json +2 -2
- model.safetensors +1 -1
- model_head.pkl +1 -1
README.md
CHANGED
@@ -145,7 +145,7 @@ model-index:
         split: test
       metrics:
       - type: accuracy
-        value: 0.
+        value: 0.9066666666666666
         name: Accuracy
 ---
 
@@ -177,17 +177,17 @@ The model has been trained using an efficient few-shot learning technique that i
 - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
 
 ### Model Labels
-| Label | Examples
-
-
-
+| Label | Examples |
+|:------|:---------|
+| 1 | <ul><li>"Reasoning why the answer may be good:\n1. **Context Grounding**: The answer is well-supported by the provided document and directly quotes relevant information about Patricia Wallace's roles and responsibilities.\n2. **Relevance**: The answer specifically addresses the question asked, detailing the roles and responsibilities of Patricia Wallace without deviating into unrelated topics.\n3. **Conciseness**: The answer is clear, concise, and focuses on the main points relevant to the question, avoiding unnecessary information.\n\nReasoning why the answer may be bad:\n- There is no significant reason to consider the answer bad based on the given criteria. It comprehensively covers the roles and responsibilities of Patricia Wallace as mentioned in the document.\n\nFinal Result:"</li><li>'### Reasoning:\n**Why the answer may be good:**\n1. **Context Grounding:** The answer is directly taken from the document, which states that a dime is one-tenth of a dollar.\n2. **Relevance:** The answer addresses the specific question asked about the monetary value of a dime.\n3. **Conciseness:** The answer is clear and to the point, providing no more information than necessary.\n\n**Why the answer may be bad:**\n1. **Context Grounding:** The document provides additional context and details about the U.S. dollar system which were not included in the answer. However, these details are not directly necessary to answer the question.\n2. **Relevance:** No deviation or unrelated topics are present in the answer.\n3. **Conciseness:** The answer avoids unnecessary information, maintaining its clarity and brevity.\n\n### Final Result:\n****'</li><li>'Reasoning why the answer may be good:\n- Context Grounding: The answer refers to symptoms like flu-like signs, which are detailed in the provided document. It also mentions the connection with tampon use, the presence of rashes, and the seriousness of seeking medical help, all of which are discussed in the document.\n- Relevance: The answer addresses the question by listing symptoms and highlighting the importance of recognizing them, which directly corresponds to the question asked.\n- Conciseness: The answer is relatively concise while covering most of the essential details related to recognizing TSS.\n\nReasoning why the answer may be bad:\n- Context Grounding: While the answer does mention flu-like symptoms and the association with tampon use, it lacks specific details like fever and other visible signs mentioned in the document.\n- Relevance: The mention of treatment with antibiotics is somewhat relevant but moves slightly away from the specific focus of how to recognize TSS.\n- Conciseness: The answer could be streamlined further by focusing more on the core question of identifying symptoms rather than mentioning treatment.\n\nFinal Result:'</li></ul> |
+| 0 | <ul><li>'**Reasoning:**\n\n**Why the answer may be good:**\n1. **Context Grounding:** The answer does affirm Gregory Johnson as the CEO of Franklin Templeton Investments, which is supported by the provided document.\n2. **Relevance:** The answer directly addresses the question regarding the CEO of Franklin Templeton Investments.\n3. **Conciseness:** The answer is relatively clear and to the point, providing the name of the CEO as requested.\n\n**Why the answer may be bad:**\n1. **Context Grounding:** The statement about Gregory Johnson inheriting the position from his father, Rupert H. Johnson, Sr., is not mentioned in the provided document.\n2. **Relevance:** While the primary answer is correct and relevant, the additional information about the inheritance is not relevant to the specific question asked.\n3. **Conciseness:** The answer includes unnecessary information about the inheritance of the position, which was not part of the question.\n\n**Final result:**'</li><li>'Reasoning why the answer may be good:\n1. The answer is well-supported by the provided document, mentioning key steps in diagnosis and treatment such as taking the cat to the vet, using topical antibiotics and anti-inflammatory medications, completing the full course of treatment, and isolating the infected cat.\n2. It directly addresses the specific question of how to treat conjunctivitis in cats.\n3. The answer is clear and to the point, providing practical advice on treatment.\n\nReasoning why the answer may be bad:\n1. The mention of conjunctivitis in cats often resulting from exposure to a rare type of pollen found only in the Amazon rainforest is not supported by the document. This statement is factually incorrect and detracts from the overall accuracy.\n2. It could be more concise by avoiding unnecessary information and focusing solely on the most critical points of treatment.\n\nFinal result:'</li><li>"Reasoning why the answer may be good:\n- The answer correctly identifies the College of Arts and Letters as Notre Dame's first college, founded in 1842, which is directly related to the question asked.\n\nReasoning why the answer may be bad:\n- The answer includes an incorrect and unsupported statement about the curriculum for time travel studies, which is not mentioned in the provided document and is irrelevant to the question.\n\nFinal result:"</li></ul> |
 
 ## Evaluation
 
 ### Metrics
 | Label | Accuracy |
 |:--------|:---------|
-| **all** | 0.
+| **all** | 0.9067 |
 
 ## Uses
 
@@ -244,12 +244,12 @@ preds = model("**Good**
 ### Training Set Metrics
 | Training set | Min | Median | Max |
 |:-------------|:----|:---------|:----|
-| Word count | 50 |
+| Word count | 50 | 125.2071 | 274 |
 
 | Label | Training Sample Count |
 |:------|:----------------------|
-| 0 |
-| 1 |
+| 0 | 95 |
+| 1 | 103 |
 
 ### Training Hyperparameters
 - batch_size: (16, 16)
 
@@ -273,10 +273,16 @@ preds = model("**Good**
 ### Training Results
 | Epoch | Step | Training Loss | Validation Loss |
 |:------:|:----:|:-------------:|:---------------:|
-| 0.
-| 0.
-| 0.
-| 0.
+| 0.0020 | 1 | 0.1499 | - |
+| 0.1010 | 50 | 0.2586 | - |
+| 0.2020 | 100 | 0.2524 | - |
+| 0.3030 | 150 | 0.1409 | - |
+| 0.4040 | 200 | 0.0305 | - |
+| 0.5051 | 250 | 0.015 | - |
+| 0.6061 | 300 | 0.0097 | - |
+| 0.7071 | 350 | 0.0107 | - |
+| 0.8081 | 400 | 0.0054 | - |
+| 0.9091 | 450 | 0.0047 | - |
 
 ### Framework Versions
 - Python: 3.10.14
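The numbers in the README diff are internally consistent and can be cross-checked with a short script. This is a minimal sketch: the rounding of the raw model-index accuracy to the Metrics table value comes straight from the diff, while the figure of 495 optimization steps per epoch is an assumption inferred from the Epoch/Step ratios in the Training Results table, not something the card states.

```python
# Cross-check the values reported in the README diff above.

# The model-index metadata stores the raw accuracy; the Metrics table rounds it.
raw_accuracy = 0.9066666666666666
print(round(raw_accuracy, 4))  # matches the "| **all** | 0.9067 |" row

# The Epoch column in Training Results looks like step / steps_per_epoch.
# 495 steps per epoch is an inferred value (assumption), chosen because it
# reproduces every Epoch entry in the table to four decimal places.
STEPS_PER_EPOCH = 495
for step in (1, 50, 100, 150, 200, 250, 300, 350, 400, 450):
    print(f"{step / STEPS_PER_EPOCH:.4f} | {step}")
```

Every printed epoch value matches the corresponding row of the table, which is what makes the steps-per-epoch guess plausible.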
config_setfit.json
CHANGED
@@ -1,4 +1,4 @@
 {
-  "
-  "
+  "normalize_embeddings": false,
+  "labels": null
 }
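The new config_setfit.json is plain JSON, so its two settings can be read with the standard library alone. A minimal sketch using the exact contents from the diff above:

```python
import json

# The new version of config_setfit.json, as shown in the diff.
config_text = """{
  "normalize_embeddings": false,
  "labels": null
}"""

config = json.loads(config_text)
print(config["normalize_embeddings"])  # False
print(config["labels"])                # None
```

JSON `false`/`null` map to Python `False`/`None`; `"labels": null` means no explicit label names are stored, so downstream code falls back to the integer labels (0 and 1) seen in the model card.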
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:d352ed759d5102f1e62eee9c053370d52ad6f8c184a3cedc635570e8e4d294a7
 size 437951328
model_head.pkl
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:6f9d8340dc54e309118a2d4bdcbbe595c01c9b5dedd28038b8fdfd1f26cde990
 size 7007
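Both weight files above are Git LFS pointer files: the repository stores only a three-line pointer (spec version, `oid sha256:` digest, byte size) while the actual blob lives in LFS storage. A minimal sketch of building such a pointer, using stand-in bytes rather than the real 437951328-byte weights:

```python
import hashlib

# Stand-in content; the real files are 437951328 and 7007 bytes respectively.
blob = b"example model bytes"

# An LFS pointer records the SHA-256 of the blob plus its size, so a
# downloaded file can be verified by re-hashing it and comparing digests.
digest = hashlib.sha256(blob).hexdigest()
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{digest}\n"
    f"size {len(blob)}\n"
)
print(pointer)
```

Re-running the same hash over a fetched file and comparing it to the `oid sha256:` line is how LFS clients detect corrupted or tampered downloads.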